License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and the license had to be
inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- The file already had some variant of a license header in it (even if <5
  lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL-family license was found in the file, or if it had no licensing
  in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research, to be revisited later.
In total, Kate, Philippe and Thomas logged over 70 hours of manual review
of the spreadsheet to determine the SPDX license identifiers to apply to
the source files, with confirmation by lawyers working with the Linux
Foundation in some cases.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15,000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full ScanCode scan run, collecting the matched texts, detected
  license IDs and scores;
- reviewing anything where there was a license detected (about 500+
  files) to ensure that the applied SPDX license was correct;
- reviewing anything where there was no detection but the patch license
  was not GPL-2.0 WITH Linux-syscall-note, to ensure that the applied
  SPDX license was correct.
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
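
As an illustration only, here is a minimal sketch (in Python) of what such
a tagging script could look like. It is not the actual script that Thomas
and Greg used, and the two-column .csv layout (file path, SPDX identifier)
is an assumption made for the example:

#!/usr/bin/env python3
# Hypothetical sketch only -- not the script used for this series.
# Assumes each .csv row is "<path>,<SPDX identifier>".
import csv
import sys

def spdx_line(path, ident):
    # .c source files take a C++-style comment on the first line;
    # headers and other files take a C-style block comment.
    if path.endswith(".c"):
        return "// SPDX-License-Identifier: %s\n" % ident
    return "/* SPDX-License-Identifier: %s */\n" % ident

def tag_file(path, ident):
    with open(path, "r", encoding="utf-8", errors="surrogateescape") as f:
        content = f.read()
    first = content.splitlines()[0] if content else ""
    if "SPDX-License-Identifier" in first:
        return  # already tagged, leave the file alone
    with open(path, "w", encoding="utf-8", errors="surrogateescape") as f:
        f.write(spdx_line(path, ident) + content)

def main(csv_path):
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                tag_file(row[0].strip(), row[1].strip())

if __name__ == "__main__":
    main(sys.argv[1])
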
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0

#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/bio.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/page-flags.h>
#include <linux/spinlock.h>
#include <linux/blkdev.h>
#include <linux/swap.h>
#include <linux/writeback.h>
#include <linux/pagevec.h>
#include <linux/prefetch.h>
#include <linux/cleancache.h>
#include "extent_io.h"
#include "extent_map.h"
#include "ctree.h"
#include "btrfs_inode.h"
#include "volumes.h"
#include "check-integrity.h"
#include "locking.h"
#include "rcu-string.h"
#include "backref.h"
#include "disk-io.h"

static struct kmem_cache *extent_state_cache;
static struct kmem_cache *extent_buffer_cache;
static struct bio_set btrfs_bioset;

static inline bool extent_state_in_tree(const struct extent_state *state)
{
	return !RB_EMPTY_NODE(&state->rb_node);
}

#ifdef CONFIG_BTRFS_DEBUG
static LIST_HEAD(buffers);
static LIST_HEAD(states);

static DEFINE_SPINLOCK(leak_lock);

static inline
void btrfs_leak_debug_add(struct list_head *new, struct list_head *head)
{
	unsigned long flags;

	spin_lock_irqsave(&leak_lock, flags);
	list_add(new, head);
	spin_unlock_irqrestore(&leak_lock, flags);
}

static inline
void btrfs_leak_debug_del(struct list_head *entry)
{
	unsigned long flags;

	spin_lock_irqsave(&leak_lock, flags);
	list_del(entry);
	spin_unlock_irqrestore(&leak_lock, flags);
}

static inline
void btrfs_leak_debug_check(void)
{
	struct extent_state *state;
	struct extent_buffer *eb;

	while (!list_empty(&states)) {
		state = list_entry(states.next, struct extent_state, leak_list);
		pr_err("BTRFS: state leak: start %llu end %llu state %u in tree %d refs %d\n",
		       state->start, state->end, state->state,
		       extent_state_in_tree(state),
		       refcount_read(&state->refs));
		list_del(&state->leak_list);
		kmem_cache_free(extent_state_cache, state);
	}

	while (!list_empty(&buffers)) {
		eb = list_entry(buffers.next, struct extent_buffer, leak_list);
		pr_err("BTRFS: buffer leak start %llu len %lu refs %d bflags %lu\n",
		       eb->start, eb->len, atomic_read(&eb->refs), eb->bflags);
		list_del(&eb->leak_list);
		kmem_cache_free(extent_buffer_cache, eb);
	}
}

#define btrfs_debug_check_extent_io_range(tree, start, end)		\
	__btrfs_debug_check_extent_io_range(__func__, (tree), (start), (end))
static inline void __btrfs_debug_check_extent_io_range(const char *caller,
		struct extent_io_tree *tree, u64 start, u64 end)
{
	struct inode *inode = tree->private_data;
	u64 isize;

	if (!inode || !is_data_inode(inode))
		return;

	isize = i_size_read(inode);
	if (end >= PAGE_SIZE && (end % 2) == 0 && end != isize - 1) {
		btrfs_debug_rl(BTRFS_I(inode)->root->fs_info,
		    "%s: ino %llu isize %llu odd range [%llu,%llu]",
			caller, btrfs_ino(BTRFS_I(inode)), isize, start, end);
	}
}
#else
#define btrfs_leak_debug_add(new, head)	do {} while (0)
#define btrfs_leak_debug_del(entry)	do {} while (0)
#define btrfs_leak_debug_check()	do {} while (0)
#define btrfs_debug_check_extent_io_range(c, s, e)	do {} while (0)
#endif

struct tree_entry {
	u64 start;
	u64 end;
	struct rb_node rb_node;
};

struct extent_page_data {
	struct bio *bio;
	struct extent_io_tree *tree;
	/* tells writepage not to lock the state bits for this range
	 * it still does the unlocking
	 */
	unsigned int extent_locked:1;

	/* tells the submit_bio code to use REQ_SYNC */
	unsigned int sync_io:1;
};

static int add_extent_changeset(struct extent_state *state, unsigned bits,
				 struct extent_changeset *changeset,
				 int set)
{
	int ret;

	if (!changeset)
		return 0;
	if (set && (state->state & bits) == bits)
		return 0;
	if (!set && (state->state & bits) == 0)
		return 0;
	changeset->bytes_changed += state->end - state->start + 1;
	ret = ulist_add(&changeset->range_changed, state->start, state->end,
			GFP_ATOMIC);
	return ret;
}

static int __must_check submit_one_bio(struct bio *bio, int mirror_num,
				       unsigned long bio_flags)
{
	blk_status_t ret = 0;
	struct extent_io_tree *tree = bio->bi_private;

	bio->bi_private = NULL;

	if (tree->ops)
		ret = tree->ops->submit_bio_hook(tree->private_data, bio,
						 mirror_num, bio_flags);
	else
		btrfsic_submit_bio(bio);

	return blk_status_to_errno(ret);
}

/* Cleanup unsubmitted bios */
static void end_write_bio(struct extent_page_data *epd, int ret)
{
	if (epd->bio) {
		epd->bio->bi_status = errno_to_blk_status(ret);
		bio_endio(epd->bio);
		epd->bio = NULL;
	}
}

/*
 * Submit bio from extent page data via submit_one_bio
 *
 * Return 0 if everything is OK.
 * Return <0 for error.
 */
static int __must_check flush_write_bio(struct extent_page_data *epd)
{
	int ret = 0;

	if (epd->bio) {
		ret = submit_one_bio(epd->bio, 0, 0);
		/*
		 * Clean up of epd->bio is handled by its endio function.
		 * And endio is either triggered by successful bio execution
		 * or the error handler of submit bio hook.
		 * So at this point, no matter what happened, we don't need
		 * to clean up epd->bio.
		 */
		epd->bio = NULL;
	}
	return ret;
}

2008-01-25 05:13:08 +08:00
|
|
|
int __init extent_io_init(void)
|
|
|
|
{
|
2012-09-07 17:00:48 +08:00
|
|
|
extent_state_cache = kmem_cache_create("btrfs_extent_state",
|
2009-04-13 21:33:09 +08:00
|
|
|
sizeof(struct extent_state), 0,
|
2016-06-24 02:17:08 +08:00
|
|
|
SLAB_MEM_SPREAD, NULL);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (!extent_state_cache)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2012-09-07 17:00:48 +08:00
|
|
|
extent_buffer_cache = kmem_cache_create("btrfs_extent_buffer",
|
2009-04-13 21:33:09 +08:00
|
|
|
sizeof(struct extent_buffer), 0,
|
2016-06-24 02:17:08 +08:00
|
|
|
SLAB_MEM_SPREAD, NULL);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (!extent_buffer_cache)
|
|
|
|
goto free_state_cache;
|
2013-05-18 06:30:14 +08:00
|
|
|
|
2018-05-21 06:25:56 +08:00
|
|
|
if (bioset_init(&btrfs_bioset, BIO_POOL_SIZE,
|
|
|
|
offsetof(struct btrfs_io_bio, bio),
|
|
|
|
BIOSET_NEED_BVECS))
|
2013-05-18 06:30:14 +08:00
|
|
|
goto free_buffer_cache;
|
2013-09-20 11:37:07 +08:00
|
|
|
|
2018-05-21 06:25:56 +08:00
|
|
|
if (bioset_integrity_create(&btrfs_bioset, BIO_POOL_SIZE))
|
2013-09-20 11:37:07 +08:00
|
|
|
goto free_bioset;
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
return 0;
|
|
|
|
|
2013-09-20 11:37:07 +08:00
|
|
|
free_bioset:
|
2018-05-21 06:25:56 +08:00
|
|
|
bioset_exit(&btrfs_bioset);
|
2013-09-20 11:37:07 +08:00
|
|
|
|
2013-05-18 06:30:14 +08:00
|
|
|
free_buffer_cache:
|
|
|
|
kmem_cache_destroy(extent_buffer_cache);
|
|
|
|
extent_buffer_cache = NULL;
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
free_state_cache:
|
|
|
|
kmem_cache_destroy(extent_state_cache);
|
2013-05-18 06:30:14 +08:00
|
|
|
extent_state_cache = NULL;
|
2008-01-25 05:13:08 +08:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
2018-02-20 00:24:18 +08:00
|
|
|
void __cold extent_io_exit(void)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2013-04-23 00:12:31 +08:00
|
|
|
btrfs_leak_debug_check();
|
2012-09-26 09:33:07 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Make sure all delayed rcu free are flushed before we
|
|
|
|
* destroy caches.
|
|
|
|
*/
|
|
|
|
rcu_barrier();
|
2016-01-29 21:36:35 +08:00
|
|
|
kmem_cache_destroy(extent_state_cache);
|
|
|
|
kmem_cache_destroy(extent_buffer_cache);
|
2018-05-21 06:25:56 +08:00
|
|
|
bioset_exit(&btrfs_bioset);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2019-03-01 10:47:58 +08:00
|
|
|
void extent_io_tree_init(struct btrfs_fs_info *fs_info,
|
2019-03-01 10:47:59 +08:00
|
|
|
struct extent_io_tree *tree, unsigned int owner,
|
|
|
|
void *private_data)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2019-03-01 10:47:58 +08:00
|
|
|
tree->fs_info = fs_info;
|
2010-02-24 03:43:04 +08:00
|
|
|
tree->state = RB_ROOT;
|
2008-01-25 05:13:08 +08:00
|
|
|
tree->ops = NULL;
|
|
|
|
tree->dirty_bytes = 0;
|
2008-01-29 22:59:12 +08:00
|
|
|
spin_lock_init(&tree->lock);
|
2017-05-05 23:57:13 +08:00
|
|
|
tree->private_data = private_data;
|
2019-03-01 10:47:59 +08:00
|
|
|
tree->owner = owner;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2019-03-25 20:31:24 +08:00
|
|
|
void extent_io_tree_release(struct extent_io_tree *tree)
|
|
|
|
{
|
|
|
|
spin_lock(&tree->lock);
|
|
|
|
/*
|
|
|
|
* Do a single barrier for the waitqueue_active check here, the state
|
|
|
|
* of the waitqueue should not change once extent_io_tree_release is
|
|
|
|
* called.
|
|
|
|
*/
|
|
|
|
smp_mb();
|
|
|
|
while (!RB_EMPTY_ROOT(&tree->state)) {
|
|
|
|
struct rb_node *node;
|
|
|
|
struct extent_state *state;
|
|
|
|
|
|
|
|
node = rb_first(&tree->state);
|
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
|
|
|
rb_erase(&state->rb_node, &tree->state);
|
|
|
|
RB_CLEAR_NODE(&state->rb_node);
|
|
|
|
/*
|
|
|
|
* btree io trees aren't supposed to have tasks waiting for
|
|
|
|
* changes in the flags of extent states ever.
|
|
|
|
*/
|
|
|
|
ASSERT(!waitqueue_active(&state->wq));
|
|
|
|
free_extent_state(state);
|
|
|
|
|
|
|
|
cond_resched_lock(&tree->lock);
|
|
|
|
}
|
|
|
|
spin_unlock(&tree->lock);
|
|
|
|
}
|
|
|
|
|
2008-12-02 22:54:17 +08:00
|
|
|
static struct extent_state *alloc_extent_state(gfp_t mask)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_state *state;
|
|
|
|
|
2017-01-09 22:39:02 +08:00
|
|
|
/*
|
|
|
|
* The given mask might be not appropriate for the slab allocator,
|
|
|
|
* drop the unsupported bits
|
|
|
|
*/
|
|
|
|
mask &= ~(__GFP_DMA32|__GFP_HIGHMEM);
|
2008-01-25 05:13:08 +08:00
|
|
|
state = kmem_cache_alloc(extent_state_cache, mask);
|
2008-04-01 23:21:40 +08:00
|
|
|
if (!state)
|
2008-01-25 05:13:08 +08:00
|
|
|
return state;
|
|
|
|
state->state = 0;
|
2016-02-11 20:24:13 +08:00
|
|
|
state->failrec = NULL;
|
2014-07-07 03:09:59 +08:00
|
|
|
RB_CLEAR_NODE(&state->rb_node);
|
2013-04-23 00:12:31 +08:00
|
|
|
btrfs_leak_debug_add(&state->leak_list, &states);
|
2017-03-03 16:55:19 +08:00
|
|
|
refcount_set(&state->refs, 1);
|
2008-01-25 05:13:08 +08:00
|
|
|
init_waitqueue_head(&state->wq);
|
2012-03-01 21:56:26 +08:00
|
|
|
trace_alloc_extent_state(state, mask, _RET_IP_);
|
2008-01-25 05:13:08 +08:00
|
|
|
return state;
|
|
|
|
}
|
|
|
|
|
2010-05-26 08:56:50 +08:00
|
|
|
void free_extent_state(struct extent_state *state)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
if (!state)
|
|
|
|
return;
|
2017-03-03 16:55:19 +08:00
|
|
|
if (refcount_dec_and_test(&state->refs)) {
|
2014-07-07 03:09:59 +08:00
|
|
|
WARN_ON(extent_state_in_tree(state));
|
2013-04-23 00:12:31 +08:00
|
|
|
btrfs_leak_debug_del(&state->leak_list);
|
2012-03-01 21:56:26 +08:00
|
|
|
trace_free_extent_state(state, _RET_IP_);
|
2008-01-25 05:13:08 +08:00
|
|
|
kmem_cache_free(extent_state_cache, state);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-02-12 23:05:53 +08:00
|
|
|
static struct rb_node *tree_insert(struct rb_root *root,
|
|
|
|
struct rb_node *search_start,
|
|
|
|
u64 offset,
|
2013-11-26 23:41:47 +08:00
|
|
|
struct rb_node *node,
|
|
|
|
struct rb_node ***p_in,
|
|
|
|
struct rb_node **parent_in)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2014-02-12 23:05:53 +08:00
|
|
|
struct rb_node **p;
|
2009-01-06 10:25:51 +08:00
|
|
|
struct rb_node *parent = NULL;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct tree_entry *entry;
|
|
|
|
|
2013-11-26 23:41:47 +08:00
|
|
|
if (p_in && parent_in) {
|
|
|
|
p = *p_in;
|
|
|
|
parent = *parent_in;
|
|
|
|
goto do_insert;
|
|
|
|
}
|
|
|
|
|
2014-02-12 23:05:53 +08:00
|
|
|
p = search_start ? &search_start : &root->rb_node;
|
2009-01-06 10:25:51 +08:00
|
|
|
while (*p) {
|
2008-01-25 05:13:08 +08:00
|
|
|
parent = *p;
|
|
|
|
entry = rb_entry(parent, struct tree_entry, rb_node);
|
|
|
|
|
|
|
|
if (offset < entry->start)
|
|
|
|
p = &(*p)->rb_left;
|
|
|
|
else if (offset > entry->end)
|
|
|
|
p = &(*p)->rb_right;
|
|
|
|
else
|
|
|
|
return parent;
|
|
|
|
}
|
|
|
|
|
2013-11-26 23:41:47 +08:00
|
|
|
do_insert:
|
2008-01-25 05:13:08 +08:00
|
|
|
rb_link_node(node, parent, p);
|
|
|
|
rb_insert_color(node, root);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2019-06-05 19:50:04 +08:00
|
|
|
/**
|
|
|
|
* __etree_search - searche @tree for an entry that contains @offset. Such
|
|
|
|
* entry would have entry->start <= offset && entry->end >= offset.
|
|
|
|
*
|
|
|
|
* @tree - the tree to search
|
|
|
|
* @offset - offset that should fall within an entry in @tree
|
|
|
|
* @next_ret - pointer to the first entry whose range ends after @offset
|
|
|
|
* @prev - pointer to the first entry whose range begins before @offset
|
|
|
|
* @p_ret - pointer where new node should be anchored (used when inserting an
|
|
|
|
* entry in the tree)
|
|
|
|
* @parent_ret - points to entry which would have been the parent of the entry,
|
|
|
|
* containing @offset
|
|
|
|
*
|
|
|
|
* This function returns a pointer to the entry that contains @offset byte
|
|
|
|
* address. If no such entry exists, then NULL is returned and the other
|
|
|
|
* pointer arguments to the function are filled, otherwise the found entry is
|
|
|
|
* returned and other pointers are left untouched.
|
|
|
|
*/
|
2008-02-02 03:51:59 +08:00
|
|
|
static struct rb_node *__etree_search(struct extent_io_tree *tree, u64 offset,
|
2013-11-26 23:41:47 +08:00
|
|
|
struct rb_node **next_ret,
|
2019-01-30 22:51:00 +08:00
|
|
|
struct rb_node **prev_ret,
|
2013-11-26 23:41:47 +08:00
|
|
|
struct rb_node ***p_ret,
|
|
|
|
struct rb_node **parent_ret)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2008-02-02 03:51:59 +08:00
|
|
|
struct rb_root *root = &tree->state;
|
2013-11-26 23:41:47 +08:00
|
|
|
struct rb_node **n = &root->rb_node;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct rb_node *prev = NULL;
|
|
|
|
struct rb_node *orig_prev = NULL;
|
|
|
|
struct tree_entry *entry;
|
|
|
|
struct tree_entry *prev_entry = NULL;
|
|
|
|
|
2013-11-26 23:41:47 +08:00
|
|
|
while (*n) {
|
|
|
|
prev = *n;
|
|
|
|
entry = rb_entry(prev, struct tree_entry, rb_node);
|
2008-01-25 05:13:08 +08:00
|
|
|
prev_entry = entry;
|
|
|
|
|
|
|
|
if (offset < entry->start)
|
2013-11-26 23:41:47 +08:00
|
|
|
n = &(*n)->rb_left;
|
2008-01-25 05:13:08 +08:00
|
|
|
else if (offset > entry->end)
|
2013-11-26 23:41:47 +08:00
|
|
|
n = &(*n)->rb_right;
|
2009-01-06 10:25:51 +08:00
|
|
|
else
|
2013-11-26 23:41:47 +08:00
|
|
|
return *n;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2013-11-26 23:41:47 +08:00
|
|
|
if (p_ret)
|
|
|
|
*p_ret = n;
|
|
|
|
if (parent_ret)
|
|
|
|
*parent_ret = prev;
|
|
|
|
|
2019-01-30 22:51:00 +08:00
|
|
|
if (next_ret) {
|
2008-01-25 05:13:08 +08:00
|
|
|
orig_prev = prev;
|
2009-01-06 10:25:51 +08:00
|
|
|
while (prev && offset > prev_entry->end) {
|
2008-01-25 05:13:08 +08:00
|
|
|
prev = rb_next(prev);
|
|
|
|
prev_entry = rb_entry(prev, struct tree_entry, rb_node);
|
|
|
|
}
|
2019-01-30 22:51:00 +08:00
|
|
|
*next_ret = prev;
|
2008-01-25 05:13:08 +08:00
|
|
|
prev = orig_prev;
|
|
|
|
}
|
|
|
|
|
2019-01-30 22:51:00 +08:00
|
|
|
if (prev_ret) {
|
2008-01-25 05:13:08 +08:00
|
|
|
prev_entry = rb_entry(prev, struct tree_entry, rb_node);
|
2009-01-06 10:25:51 +08:00
|
|
|
while (prev && offset < prev_entry->start) {
|
2008-01-25 05:13:08 +08:00
|
|
|
prev = rb_prev(prev);
|
|
|
|
prev_entry = rb_entry(prev, struct tree_entry, rb_node);
|
|
|
|
}
|
2019-01-30 22:51:00 +08:00
|
|
|
*prev_ret = prev;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2013-11-26 23:41:47 +08:00
|
|
|
static inline struct rb_node *
|
|
|
|
tree_search_for_insert(struct extent_io_tree *tree,
|
|
|
|
u64 offset,
|
|
|
|
struct rb_node ***p_ret,
|
|
|
|
struct rb_node **parent_ret)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2019-01-30 22:51:00 +08:00
|
|
|
struct rb_node *next= NULL;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct rb_node *ret;
|
2008-01-29 22:59:12 +08:00
|
|
|
|
2019-01-30 22:51:00 +08:00
|
|
|
ret = __etree_search(tree, offset, &next, NULL, p_ret, parent_ret);
|
2009-01-06 10:25:51 +08:00
|
|
|
if (!ret)
|
2019-01-30 22:51:00 +08:00
|
|
|
return next;
|
2008-01-25 05:13:08 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2013-11-26 23:41:47 +08:00
|
|
|
static inline struct rb_node *tree_search(struct extent_io_tree *tree,
|
|
|
|
u64 offset)
|
|
|
|
{
|
|
|
|
return tree_search_for_insert(tree, offset, NULL, NULL);
|
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* utility function to look for merge candidates inside a given range.
|
|
|
|
* Any extents with matching state are merged together into a single
|
|
|
|
* extent in the tree. Extents with EXTENT_IO in their state field
|
|
|
|
* are not merged because the end_io handlers need to be able to do
|
|
|
|
* operations on them without sleeping (or doing allocations/splits).
|
|
|
|
*
|
|
|
|
* This should be called with the tree lock held.
|
|
|
|
*/
|
2011-07-22 00:56:09 +08:00
|
|
|
static void merge_state(struct extent_io_tree *tree,
|
|
|
|
struct extent_state *state)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_state *other;
|
|
|
|
struct rb_node *other_node;
|
|
|
|
|
2019-03-14 21:28:31 +08:00
|
|
|
if (state->state & (EXTENT_LOCKED | EXTENT_BOUNDARY))
|
2011-07-22 00:56:09 +08:00
|
|
|
return;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
|
|
|
other_node = rb_prev(&state->rb_node);
|
|
|
|
if (other_node) {
|
|
|
|
other = rb_entry(other_node, struct extent_state, rb_node);
|
|
|
|
if (other->end == state->start - 1 &&
|
|
|
|
other->state == state->state) {
|
2018-11-01 20:09:52 +08:00
|
|
|
if (tree->private_data &&
|
|
|
|
is_data_inode(tree->private_data))
|
|
|
|
btrfs_merge_delalloc_extent(tree->private_data,
|
|
|
|
state, other);
|
2008-01-25 05:13:08 +08:00
|
|
|
state->start = other->start;
|
|
|
|
rb_erase(&other->rb_node, &tree->state);
|
2014-07-07 03:09:59 +08:00
|
|
|
RB_CLEAR_NODE(&other->rb_node);
|
2008-01-25 05:13:08 +08:00
|
|
|
free_extent_state(other);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
other_node = rb_next(&state->rb_node);
|
|
|
|
if (other_node) {
|
|
|
|
other = rb_entry(other_node, struct extent_state, rb_node);
|
|
|
|
if (other->start == state->end + 1 &&
|
|
|
|
other->state == state->state) {
|
2018-11-01 20:09:52 +08:00
|
|
|
if (tree->private_data &&
|
|
|
|
is_data_inode(tree->private_data))
|
|
|
|
btrfs_merge_delalloc_extent(tree->private_data,
|
|
|
|
state, other);
|
2011-06-21 02:53:48 +08:00
|
|
|
state->end = other->end;
|
|
|
|
rb_erase(&other->rb_node, &tree->state);
|
2014-07-07 03:09:59 +08:00
|
|
|
RB_CLEAR_NODE(&other->rb_node);
|
2011-06-21 02:53:48 +08:00
|
|
|
free_extent_state(other);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-07-14 11:19:08 +08:00
|
|
|
static void set_state_bits(struct extent_io_tree *tree,
|
2015-10-12 14:53:37 +08:00
|
|
|
struct extent_state *state, unsigned *bits,
|
|
|
|
struct extent_changeset *changeset);
|
2011-07-14 11:19:08 +08:00
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* insert an extent_state struct into the tree. 'bits' are set on the
|
|
|
|
* struct before it is inserted.
|
|
|
|
*
|
|
|
|
* This may return -EEXIST if the extent is already there, in which case the
|
|
|
|
* state struct is freed.
|
|
|
|
*
|
|
|
|
* The tree lock is not taken internally. This is a utility function and
|
|
|
|
* probably isn't what you want to call (see set/clear_extent_bit).
|
|
|
|
*/
|
|
|
|
static int insert_state(struct extent_io_tree *tree,
|
|
|
|
struct extent_state *state, u64 start, u64 end,
|
2013-11-26 23:41:47 +08:00
|
|
|
struct rb_node ***p,
|
|
|
|
struct rb_node **parent,
|
2015-10-12 14:53:37 +08:00
|
|
|
unsigned *bits, struct extent_changeset *changeset)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct rb_node *node;
|
|
|
|
|
2019-06-19 02:00:05 +08:00
|
|
|
if (end < start) {
|
|
|
|
btrfs_err(tree->fs_info,
|
|
|
|
"insert state: end < start %llu %llu", end, start);
|
|
|
|
WARN_ON(1);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
state->start = start;
|
|
|
|
state->end = end;
|
2009-09-12 04:12:44 +08:00
|
|
|
|
2015-10-12 14:53:37 +08:00
|
|
|
set_state_bits(tree, state, bits, changeset);
|
2011-07-14 11:19:08 +08:00
|
|
|
|
2014-02-12 23:05:53 +08:00
|
|
|
node = tree_insert(&tree->state, NULL, end, &state->rb_node, p, parent);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (node) {
|
|
|
|
struct extent_state *found;
|
|
|
|
found = rb_entry(node, struct extent_state, rb_node);
|
2019-06-19 02:00:05 +08:00
|
|
|
btrfs_err(tree->fs_info,
|
|
|
|
"found node %llu %llu on insert of %llu %llu",
|
2013-08-20 19:20:07 +08:00
|
|
|
found->start, found->end, start, end);
|
2008-01-25 05:13:08 +08:00
|
|
|
return -EEXIST;
|
|
|
|
}
|
|
|
|
merge_state(tree, state);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* split a given extent state struct in two, inserting the preallocated
|
|
|
|
* struct 'prealloc' as the newly created second half. 'split' indicates an
|
|
|
|
* offset inside 'orig' where it should be split.
|
|
|
|
*
|
|
|
|
* Before calling,
|
|
|
|
* the tree has 'orig' at [orig->start, orig->end]. After calling, there
|
|
|
|
* are two extent state structs in the tree:
|
|
|
|
* prealloc: [orig->start, split - 1]
|
|
|
|
* orig: [ split, orig->end ]
|
|
|
|
*
|
|
|
|
* The tree locks are not taken by this function. They need to be held
|
|
|
|
* by the caller.
|
|
|
|
*/
|
|
|
|
static int split_state(struct extent_io_tree *tree, struct extent_state *orig,
|
|
|
|
struct extent_state *prealloc, u64 split)
|
|
|
|
{
|
|
|
|
struct rb_node *node;
|
2009-09-12 04:12:44 +08:00
|
|
|
|
2018-11-01 20:09:53 +08:00
|
|
|
if (tree->private_data && is_data_inode(tree->private_data))
|
|
|
|
btrfs_split_delalloc_extent(tree->private_data, orig, split);
|
2009-09-12 04:12:44 +08:00
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
prealloc->start = orig->start;
|
|
|
|
prealloc->end = split - 1;
|
|
|
|
prealloc->state = orig->state;
|
|
|
|
orig->start = split;
|
|
|
|
|
2014-02-12 23:05:53 +08:00
|
|
|
node = tree_insert(&tree->state, &orig->rb_node, prealloc->end,
|
|
|
|
&prealloc->rb_node, NULL, NULL);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (node) {
|
|
|
|
free_extent_state(prealloc);
|
|
|
|
return -EEXIST;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-03-12 16:39:48 +08:00
|
|
|
static struct extent_state *next_state(struct extent_state *state)
|
|
|
|
{
|
|
|
|
struct rb_node *next = rb_next(&state->rb_node);
|
|
|
|
if (next)
|
|
|
|
return rb_entry(next, struct extent_state, rb_node);
|
|
|
|
else
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* utility function to clear some bits in an extent state struct.
|
2018-11-28 19:05:13 +08:00
|
|
|
* it will optionally wake up anyone waiting on this state (wake == 1).
|
2008-01-25 05:13:08 +08:00
|
|
|
*
|
|
|
|
* If no bits are set on the state struct after clearing things, the
|
|
|
|
* struct is freed and removed from the tree
|
|
|
|
*/
|
2012-03-12 16:39:48 +08:00
|
|
|
static struct extent_state *clear_state_bit(struct extent_io_tree *tree,
|
|
|
|
struct extent_state *state,
|
2015-10-12 15:35:38 +08:00
|
|
|
unsigned *bits, int wake,
|
|
|
|
struct extent_changeset *changeset)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2012-03-12 16:39:48 +08:00
|
|
|
struct extent_state *next;
|
2015-01-15 02:52:13 +08:00
|
|
|
unsigned bits_to_clear = *bits & ~EXTENT_CTLBITS;
|
2018-03-02 00:56:34 +08:00
|
|
|
int ret;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2010-05-16 22:48:47 +08:00
|
|
|
if ((bits_to_clear & EXTENT_DIRTY) && (state->state & EXTENT_DIRTY)) {
|
2008-01-25 05:13:08 +08:00
|
|
|
u64 range = state->end - state->start + 1;
|
|
|
|
WARN_ON(range > tree->dirty_bytes);
|
|
|
|
tree->dirty_bytes -= range;
|
|
|
|
}
|
2018-11-01 20:09:51 +08:00
|
|
|
|
|
|
|
if (tree->private_data && is_data_inode(tree->private_data))
|
|
|
|
btrfs_clear_delalloc_extent(tree->private_data, state, bits);
|
|
|
|
|
2018-03-02 00:56:34 +08:00
|
|
|
ret = add_extent_changeset(state, bits_to_clear, changeset, 0);
|
|
|
|
BUG_ON(ret < 0);
|
2009-10-09 01:34:05 +08:00
|
|
|
state->state &= ~bits_to_clear;
|
2008-01-25 05:13:08 +08:00
|
|
|
if (wake)
|
|
|
|
wake_up(&state->wq);
|
2010-05-16 22:48:47 +08:00
|
|
|
if (state->state == 0) {
|
2012-03-12 16:39:48 +08:00
|
|
|
next = next_state(state);
|
2014-07-07 03:09:59 +08:00
|
|
|
if (extent_state_in_tree(state)) {
|
2008-01-25 05:13:08 +08:00
|
|
|
rb_erase(&state->rb_node, &tree->state);
|
2014-07-07 03:09:59 +08:00
|
|
|
RB_CLEAR_NODE(&state->rb_node);
|
2008-01-25 05:13:08 +08:00
|
|
|
free_extent_state(state);
|
|
|
|
} else {
|
|
|
|
WARN_ON(1);
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
merge_state(tree, state);
|
2012-03-12 16:39:48 +08:00
|
|
|
next = next_state(state);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
2012-03-12 16:39:48 +08:00
|
|
|
return next;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2011-04-20 14:44:57 +08:00
|
|
|
static struct extent_state *
|
|
|
|
alloc_extent_state_atomic(struct extent_state *prealloc)
|
|
|
|
{
|
|
|
|
if (!prealloc)
|
|
|
|
prealloc = alloc_extent_state(GFP_ATOMIC);
|
|
|
|
|
|
|
|
return prealloc;
|
|
|
|
}
|
|
|
|
|
2013-04-26 04:41:01 +08:00
|
|
|
static void extent_io_tree_panic(struct extent_io_tree *tree, int err)
|
2011-10-04 11:22:32 +08:00
|
|
|
{
|
2018-07-19 01:23:45 +08:00
|
|
|
struct inode *inode = tree->private_data;
|
|
|
|
|
|
|
|
btrfs_panic(btrfs_sb(inode->i_sb), err,
|
|
|
|
"locking error: extent tree was modified by another thread while locked");
|
2011-10-04 11:22:32 +08:00
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* clear some bits on a range in the tree. This may require splitting
|
|
|
|
* or inserting elements in the tree, so the gfp mask is used to
|
|
|
|
* indicate which allocations or sleeping are allowed.
|
|
|
|
*
|
|
|
|
* pass 'wake' == 1 to kick any sleepers, and 'delete' == 1 to remove
|
|
|
|
* the given range from the tree regardless of state (ie for truncate).
|
|
|
|
*
|
|
|
|
* the range [start, end] is inclusive.
|
|
|
|
*
|
2012-03-01 21:56:29 +08:00
|
|
|
* This takes the tree lock, and returns 0 on success and < 0 on error.
|
2008-01-25 05:13:08 +08:00
|
|
|
*/
|
2017-10-31 23:30:47 +08:00
|
|
|
int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
|
2015-10-12 15:35:38 +08:00
|
|
|
unsigned bits, int wake, int delete,
|
|
|
|
struct extent_state **cached_state,
|
|
|
|
gfp_t mask, struct extent_changeset *changeset)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_state *state;
|
2009-09-03 03:04:12 +08:00
|
|
|
struct extent_state *cached;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct extent_state *prealloc = NULL;
|
|
|
|
struct rb_node *node;
|
2009-05-27 21:16:03 +08:00
|
|
|
u64 last_end;
|
2008-01-25 05:13:08 +08:00
|
|
|
int err;
|
2010-02-04 03:33:23 +08:00
|
|
|
int clear = 0;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2013-12-13 23:02:44 +08:00
|
|
|
btrfs_debug_check_extent_io_range(tree, start, end);
|
2019-03-01 10:48:00 +08:00
|
|
|
trace_btrfs_clear_extent_bit(tree, start, end - start + 1, bits);
|
2013-04-30 23:22:23 +08:00
|
|
|
|
2013-06-22 04:37:03 +08:00
|
|
|
if (bits & EXTENT_DELALLOC)
|
|
|
|
bits |= EXTENT_NORESERVE;
|
|
|
|
|
2010-05-16 22:48:47 +08:00
|
|
|
if (delete)
|
|
|
|
bits |= ~EXTENT_CTLBITS;
|
|
|
|
|
2019-03-14 21:28:31 +08:00
|
|
|
if (bits & (EXTENT_LOCKED | EXTENT_BOUNDARY))
|
2010-02-04 03:33:23 +08:00
|
|
|
clear = 1;
|
2008-01-25 05:13:08 +08:00
|
|
|
again:
|
2015-11-07 08:28:21 +08:00
|
|
|
if (!prealloc && gfpflags_allow_blocking(mask)) {
|
2014-11-03 22:12:57 +08:00
|
|
|
/*
|
|
|
|
* Don't care for allocation failure here because we might end
|
|
|
|
* up not needing the pre-allocated extent state at all, which
|
|
|
|
* is the case if we only have in the tree extent states that
|
|
|
|
* cover our input range and don't cover too any other range.
|
|
|
|
* If we end up needing a new extent state we allocate it later.
|
|
|
|
*/
|
2008-01-25 05:13:08 +08:00
|
|
|
prealloc = alloc_extent_state(mask);
|
|
|
|
}
|
|
|
|
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2009-09-03 03:04:12 +08:00
|
|
|
if (cached_state) {
|
|
|
|
cached = *cached_state;
|
2010-02-04 03:33:23 +08:00
|
|
|
|
|
|
|
if (clear) {
|
|
|
|
*cached_state = NULL;
|
|
|
|
cached_state = NULL;
|
|
|
|
}
|
|
|
|
|
2014-07-07 03:09:59 +08:00
|
|
|
if (cached && extent_state_in_tree(cached) &&
|
|
|
|
cached->start <= start && cached->end > start) {
|
2010-02-04 03:33:23 +08:00
|
|
|
if (clear)
|
2017-03-03 16:55:19 +08:00
|
|
|
refcount_dec(&cached->refs);
|
2009-09-03 03:04:12 +08:00
|
|
|
state = cached;
|
2009-09-24 07:51:09 +08:00
|
|
|
goto hit_next;
|
2009-09-03 03:04:12 +08:00
|
|
|
}
|
2010-02-04 03:33:23 +08:00
|
|
|
if (clear)
|
|
|
|
free_extent_state(cached);
|
2009-09-03 03:04:12 +08:00
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* this search will find the extents that end after
|
|
|
|
* our range starts
|
|
|
|
*/
|
2008-02-02 03:51:59 +08:00
|
|
|
node = tree_search(tree, start);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (!node)
|
|
|
|
goto out;
|
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
2009-09-03 03:04:12 +08:00
|
|
|
hit_next:
|
2008-01-25 05:13:08 +08:00
|
|
|
if (state->start > end)
|
|
|
|
goto out;
|
|
|
|
WARN_ON(state->end < start);
|
2009-05-27 21:16:03 +08:00
|
|
|
last_end = state->end;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2012-02-16 18:34:37 +08:00
|
|
|
/* the state doesn't have the wanted bits, go ahead */
|
2012-03-12 16:39:48 +08:00
|
|
|
if (!(state->state & bits)) {
|
|
|
|
state = next_state(state);
|
2012-02-16 18:34:37 +08:00
|
|
|
goto next;
|
2012-03-12 16:39:48 +08:00
|
|
|
}
|
2012-02-16 18:34:37 +08:00
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* | ---- desired range ---- |
|
|
|
|
* | state | or
|
|
|
|
* | ------------- state -------------- |
|
|
|
|
*
|
|
|
|
* We need to split the extent we found, and may flip
|
|
|
|
* bits on second half.
|
|
|
|
*
|
|
|
|
* If the extent we found extends past our range, we
|
|
|
|
* just split and search again. It'll get split again
|
|
|
|
* the next time though.
|
|
|
|
*
|
|
|
|
* If the extent we found is inside our range, we clear
|
|
|
|
* the desired bit on it.
|
|
|
|
*/
|
|
|
|
|
|
|
|
if (state->start < start) {
|
2011-04-20 14:44:57 +08:00
|
|
|
prealloc = alloc_extent_state_atomic(prealloc);
|
|
|
|
BUG_ON(!prealloc);
|
2008-01-25 05:13:08 +08:00
|
|
|
err = split_state(tree, state, prealloc, start);
|
2011-10-04 11:22:32 +08:00
|
|
|
if (err)
|
|
|
|
extent_io_tree_panic(tree, err);
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
prealloc = NULL;
|
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
if (state->end <= end) {
|
2015-10-12 15:35:38 +08:00
|
|
|
state = clear_state_bit(tree, state, &bits, wake,
|
|
|
|
changeset);
|
2012-05-10 18:10:39 +08:00
|
|
|
goto next;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
goto search_again;
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* | ---- desired range ---- |
|
|
|
|
* | state |
|
|
|
|
* We need to split the extent, and clear the bit
|
|
|
|
* on the first half
|
|
|
|
*/
|
|
|
|
if (state->start <= end && state->end > end) {
|
2011-04-20 14:44:57 +08:00
|
|
|
prealloc = alloc_extent_state_atomic(prealloc);
|
|
|
|
BUG_ON(!prealloc);
|
2008-01-25 05:13:08 +08:00
|
|
|
err = split_state(tree, state, prealloc, end + 1);
|
2011-10-04 11:22:32 +08:00
|
|
|
if (err)
|
|
|
|
extent_io_tree_panic(tree, err);
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
if (wake)
|
|
|
|
wake_up(&state->wq);
|
2009-09-24 07:51:09 +08:00
|
|
|
|
2015-10-12 15:35:38 +08:00
|
|
|
clear_state_bit(tree, prealloc, &bits, wake, changeset);
|
2009-09-12 04:12:44 +08:00
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
prealloc = NULL;
|
|
|
|
goto out;
|
|
|
|
}
|
2009-09-24 07:51:09 +08:00
|
|
|
|
2015-10-12 15:35:38 +08:00
|
|
|
state = clear_state_bit(tree, state, &bits, wake, changeset);
|
2012-02-16 18:34:37 +08:00
|
|
|
next:
|
2009-05-27 21:16:03 +08:00
|
|
|
if (last_end == (u64)-1)
|
|
|
|
goto out;
|
|
|
|
start = last_end + 1;
|
2012-03-12 16:39:48 +08:00
|
|
|
if (start <= end && state && !need_resched())
|
2012-02-16 18:34:36 +08:00
|
|
|
goto hit_next;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
|
|
|
search_again:
|
|
|
|
if (start > end)
|
|
|
|
goto out;
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2015-11-07 08:28:21 +08:00
|
|
|
if (gfpflags_allow_blocking(mask))
|
2008-01-25 05:13:08 +08:00
|
|
|
cond_resched();
|
|
|
|
goto again;
|
2016-04-27 07:02:15 +08:00
|
|
|
|
|
|
|
out:
|
|
|
|
spin_unlock(&tree->lock);
|
|
|
|
if (prealloc)
|
|
|
|
free_extent_state(prealloc);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2012-03-01 21:56:26 +08:00
|
|
|
static void wait_on_state(struct extent_io_tree *tree,
|
|
|
|
struct extent_state *state)
|
2008-12-02 19:36:10 +08:00
|
|
|
__releases(tree->lock)
|
|
|
|
__acquires(tree->lock)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
DEFINE_WAIT(wait);
|
|
|
|
prepare_to_wait(&state->wq, &wait, TASK_UNINTERRUPTIBLE);
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
schedule();
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
finish_wait(&state->wq, &wait);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* waits for one or more bits to clear on a range in the state tree.
|
|
|
|
* The range [start, end] is inclusive.
|
|
|
|
* The tree lock is taken by this function
|
|
|
|
*/
|
2013-04-29 21:38:46 +08:00
|
|
|
static void wait_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
|
|
|
|
unsigned long bits)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_state *state;
|
|
|
|
struct rb_node *node;
|
|
|
|
|
2013-12-13 23:02:44 +08:00
|
|
|
btrfs_debug_check_extent_io_range(tree, start, end);
|
2013-04-30 23:22:23 +08:00
|
|
|
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
again:
|
|
|
|
while (1) {
|
|
|
|
/*
|
|
|
|
* this search will find all the extents that end after
|
|
|
|
* our range starts
|
|
|
|
*/
|
2008-02-02 03:51:59 +08:00
|
|
|
node = tree_search(tree, start);
|
2014-03-31 21:53:25 +08:00
|
|
|
process_node:
|
2008-01-25 05:13:08 +08:00
|
|
|
if (!node)
|
|
|
|
break;
|
|
|
|
|
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
|
|
|
|
|
|
|
if (state->start > end)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (state->state & bits) {
|
|
|
|
start = state->start;
|
2017-03-03 16:55:19 +08:00
|
|
|
refcount_inc(&state->refs);
|
2008-01-25 05:13:08 +08:00
|
|
|
wait_on_state(tree, state);
|
|
|
|
free_extent_state(state);
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
start = state->end + 1;
|
|
|
|
|
|
|
|
if (start > end)
|
|
|
|
break;
|
|
|
|
|
2014-03-31 21:53:25 +08:00
|
|
|
if (!cond_resched_lock(&tree->lock)) {
|
|
|
|
node = rb_next(node);
|
|
|
|
goto process_node;
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
out:
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2011-07-22 00:56:09 +08:00
|
|
|
static void set_state_bits(struct extent_io_tree *tree,
|
2008-01-25 05:13:08 +08:00
|
|
|
struct extent_state *state,
|
2015-10-12 14:53:37 +08:00
|
|
|
unsigned *bits, struct extent_changeset *changeset)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2015-01-15 02:52:13 +08:00
|
|
|
unsigned bits_to_set = *bits & ~EXTENT_CTLBITS;
|
2018-03-02 00:56:34 +08:00
|
|
|
int ret;
|
2009-09-12 04:12:44 +08:00
|
|
|
|
2018-11-01 20:09:50 +08:00
|
|
|
if (tree->private_data && is_data_inode(tree->private_data))
|
|
|
|
btrfs_set_delalloc_extent(tree->private_data, state, bits);
|
|
|
|
|
2010-05-16 22:48:47 +08:00
|
|
|
if ((bits_to_set & EXTENT_DIRTY) && !(state->state & EXTENT_DIRTY)) {
|
2008-01-25 05:13:08 +08:00
|
|
|
u64 range = state->end - state->start + 1;
|
|
|
|
tree->dirty_bytes += range;
|
|
|
|
}
|
2018-03-02 00:56:34 +08:00
|
|
|
ret = add_extent_changeset(state, bits_to_set, changeset, 1);
|
|
|
|
BUG_ON(ret < 0);
|
2010-05-16 22:48:47 +08:00
|
|
|
state->state |= bits_to_set;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2014-10-13 19:28:38 +08:00
|
|
|
static void cache_state_if_flags(struct extent_state *state,
|
|
|
|
struct extent_state **cached_ptr,
|
2015-01-15 02:52:13 +08:00
|
|
|
unsigned flags)
|
2009-09-03 03:04:12 +08:00
|
|
|
{
|
|
|
|
if (cached_ptr && !(*cached_ptr)) {
|
2014-10-13 19:28:38 +08:00
|
|
|
if (!flags || (state->state & flags)) {
|
2009-09-03 03:04:12 +08:00
|
|
|
*cached_ptr = state;
|
2017-03-03 16:55:19 +08:00
|
|
|
refcount_inc(&state->refs);
|
2009-09-03 03:04:12 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-10-13 19:28:38 +08:00
|
|
|
static void cache_state(struct extent_state *state,
|
|
|
|
struct extent_state **cached_ptr)
|
|
|
|
{
|
|
|
|
return cache_state_if_flags(state, cached_ptr,
|
2019-03-14 21:28:31 +08:00
|
|
|
EXTENT_LOCKED | EXTENT_BOUNDARY);
|
2014-10-13 19:28:38 +08:00
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
2009-09-03 01:24:36 +08:00
|
|
|
* set some bits on a range in the tree. This may require allocations or
|
|
|
|
* sleeping, so the gfp mask is used to indicate what is allowed.
|
2008-01-25 05:13:08 +08:00
|
|
|
*
|
2009-09-03 01:24:36 +08:00
|
|
|
* If any of the exclusive bits are set, this will fail with -EEXIST if some
|
|
|
|
* part of the range already has the desired bits set. The start of the
|
|
|
|
* existing range is returned in failed_start in this case.
|
2008-01-25 05:13:08 +08:00
|
|
|
*
|
2009-09-03 01:24:36 +08:00
|
|
|
* [start, end] is inclusive This takes the tree lock.
|
2008-01-25 05:13:08 +08:00
|
|
|
*/
|
2009-09-03 01:24:36 +08:00
|
|
|
|
2012-03-01 21:57:19 +08:00
|
|
|
static int __must_check
|
|
|
|
__set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
|
2015-01-15 02:52:13 +08:00
|
|
|
unsigned bits, unsigned exclusive_bits,
|
2013-04-29 21:38:46 +08:00
|
|
|
u64 *failed_start, struct extent_state **cached_state,
|
2015-10-12 14:53:37 +08:00
|
|
|
gfp_t mask, struct extent_changeset *changeset)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_state *state;
|
|
|
|
struct extent_state *prealloc = NULL;
|
|
|
|
struct rb_node *node;
|
2013-11-26 23:41:47 +08:00
|
|
|
struct rb_node **p;
|
|
|
|
struct rb_node *parent;
|
2008-01-25 05:13:08 +08:00
|
|
|
int err = 0;
|
|
|
|
u64 last_start;
|
|
|
|
u64 last_end;
|
2009-09-24 07:51:09 +08:00
|
|
|
|
2013-12-13 23:02:44 +08:00
|
|
|
btrfs_debug_check_extent_io_range(tree, start, end);
|
2019-03-01 10:48:00 +08:00
|
|
|
trace_btrfs_set_extent_bit(tree, start, end - start + 1, bits);
|
2013-04-30 23:22:23 +08:00
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
again:
|
2015-11-07 08:28:21 +08:00
|
|
|
if (!prealloc && gfpflags_allow_blocking(mask)) {
|
2016-04-27 07:03:45 +08:00
|
|
|
/*
|
|
|
|
* Don't care for allocation failure here because we might end
|
|
|
|
* up not needing the pre-allocated extent state at all, which
|
|
|
|
* is the case if we only have in the tree extent states that
|
|
|
|
* cover our input range and don't cover too any other range.
|
|
|
|
* If we end up needing a new extent state we allocate it later.
|
|
|
|
*/
|
2008-01-25 05:13:08 +08:00
|
|
|
prealloc = alloc_extent_state(mask);
|
|
|
|
}
|
|
|
|
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2009-09-03 03:22:30 +08:00
|
|
|
if (cached_state && *cached_state) {
|
|
|
|
state = *cached_state;
|
2011-06-21 02:53:48 +08:00
|
|
|
if (state->start <= start && state->end > start &&
|
2014-07-07 03:09:59 +08:00
|
|
|
extent_state_in_tree(state)) {
|
2009-09-03 03:22:30 +08:00
|
|
|
node = &state->rb_node;
|
|
|
|
goto hit_next;
|
|
|
|
}
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* this search will find all the extents that end after
|
|
|
|
* our range starts.
|
|
|
|
*/
|
2013-11-26 23:41:47 +08:00
|
|
|
node = tree_search_for_insert(tree, start, &p, &parent);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (!node) {
|
2011-04-20 14:44:57 +08:00
|
|
|
prealloc = alloc_extent_state_atomic(prealloc);
|
|
|
|
BUG_ON(!prealloc);
|
2013-11-26 23:41:47 +08:00
|
|
|
err = insert_state(tree, prealloc, start, end,
|
2015-10-12 14:53:37 +08:00
|
|
|
&p, &parent, &bits, changeset);
|
2011-10-04 11:22:32 +08:00
|
|
|
if (err)
|
|
|
|
extent_io_tree_panic(tree, err);
|
|
|
|
|
2013-11-26 23:01:34 +08:00
|
|
|
cache_state(prealloc, cached_state);
|
2008-01-25 05:13:08 +08:00
|
|
|
prealloc = NULL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
2009-08-06 00:57:59 +08:00
|
|
|
hit_next:
|
2008-01-25 05:13:08 +08:00
|
|
|
last_start = state->start;
|
|
|
|
last_end = state->end;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* | ---- desired range ---- |
|
|
|
|
* | state |
|
|
|
|
*
|
|
|
|
* Just lock what we found and keep going
|
|
|
|
*/
|
|
|
|
if (state->start == start && state->end <= end) {
|
2009-09-03 01:24:36 +08:00
|
|
|
if (state->state & exclusive_bits) {
|
2008-01-25 05:13:08 +08:00
|
|
|
*failed_start = state->start;
|
|
|
|
err = -EEXIST;
|
|
|
|
goto out;
|
|
|
|
}
|
2009-09-24 07:51:09 +08:00
|
|
|
|
2015-10-12 14:53:37 +08:00
|
|
|
set_state_bits(tree, state, &bits, changeset);
|
2009-09-03 03:04:12 +08:00
|
|
|
cache_state(state, cached_state);
|
2008-01-25 05:13:08 +08:00
|
|
|
merge_state(tree, state);
|
2009-05-27 21:16:03 +08:00
|
|
|
if (last_end == (u64)-1)
|
|
|
|
goto out;
|
|
|
|
start = last_end + 1;
|
2012-05-10 18:10:39 +08:00
|
|
|
state = next_state(state);
|
|
|
|
if (start < end && state && state->start == start &&
|
|
|
|
!need_resched())
|
|
|
|
goto hit_next;
|
2008-01-25 05:13:08 +08:00
|
|
|
goto search_again;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* | ---- desired range ---- |
|
|
|
|
* | state |
|
|
|
|
* or
|
|
|
|
* | ------------- state -------------- |
|
|
|
|
*
|
|
|
|
* We need to split the extent we found, and may flip bits on
|
|
|
|
* second half.
|
|
|
|
*
|
|
|
|
* If the extent we found extends past our
|
|
|
|
* range, we just split and search again. It'll get split
|
|
|
|
* again the next time though.
|
|
|
|
*
|
|
|
|
* If the extent we found is inside our range, we set the
|
|
|
|
* desired bit on it.
|
|
|
|
*/
|
|
|
|
if (state->start < start) {
|
2009-09-03 01:24:36 +08:00
|
|
|
if (state->state & exclusive_bits) {
|
2008-01-25 05:13:08 +08:00
|
|
|
*failed_start = start;
|
|
|
|
err = -EEXIST;
|
|
|
|
goto out;
|
|
|
|
}
|
2011-04-20 14:44:57 +08:00
|
|
|
|
|
|
|
prealloc = alloc_extent_state_atomic(prealloc);
|
|
|
|
BUG_ON(!prealloc);
|
2008-01-25 05:13:08 +08:00
|
|
|
err = split_state(tree, state, prealloc, start);
|
2011-10-04 11:22:32 +08:00
|
|
|
if (err)
|
|
|
|
extent_io_tree_panic(tree, err);
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
prealloc = NULL;
|
|
|
|
if (err)
|
|
|
|
goto out;
|
|
|
|
if (state->end <= end) {
|
2015-10-12 14:53:37 +08:00
|
|
|
set_state_bits(tree, state, &bits, changeset);
|
2009-09-03 03:04:12 +08:00
|
|
|
cache_state(state, cached_state);
|
2008-01-25 05:13:08 +08:00
|
|
|
merge_state(tree, state);
|
2009-05-27 21:16:03 +08:00
|
|
|
if (last_end == (u64)-1)
|
|
|
|
goto out;
|
|
|
|
start = last_end + 1;
|
2012-05-10 18:10:39 +08:00
|
|
|
state = next_state(state);
|
|
|
|
if (start < end && state && state->start == start &&
|
|
|
|
!need_resched())
|
|
|
|
goto hit_next;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
goto search_again;
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* | ---- desired range ---- |
|
|
|
|
* | state | or | state |
|
|
|
|
*
|
|
|
|
* There's a hole, we need to insert something in it and
|
|
|
|
* ignore the extent we found.
|
|
|
|
*/
|
|
|
|
if (state->start > start) {
|
|
|
|
u64 this_end;
|
|
|
|
if (end < last_start)
|
|
|
|
this_end = end;
|
|
|
|
else
|
2009-01-06 10:25:51 +08:00
|
|
|
this_end = last_start - 1;
|
2011-04-20 14:44:57 +08:00
|
|
|
|
|
|
|
prealloc = alloc_extent_state_atomic(prealloc);
|
|
|
|
BUG_ON(!prealloc);
|
2011-04-20 14:45:49 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Avoid to free 'prealloc' if it can be merged with
|
|
|
|
* the later extent.
|
|
|
|
*/
|
2008-01-25 05:13:08 +08:00
|
|
|
err = insert_state(tree, prealloc, start, this_end,
|
2015-10-12 14:53:37 +08:00
|
|
|
NULL, NULL, &bits, changeset);
|
2011-10-04 11:22:32 +08:00
|
|
|
if (err)
|
|
|
|
extent_io_tree_panic(tree, err);
|
|
|
|
|
2009-09-12 04:12:44 +08:00
|
|
|
cache_state(prealloc, cached_state);
|
|
|
|
prealloc = NULL;
|
2008-01-25 05:13:08 +08:00
|
|
|
start = this_end + 1;
|
|
|
|
goto search_again;
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* | ---- desired range ---- |
|
|
|
|
* | state |
|
|
|
|
* We need to split the extent, and set the bit
|
|
|
|
* on the first half
|
|
|
|
*/
|
|
|
|
if (state->start <= end && state->end > end) {
|
2009-09-03 01:24:36 +08:00
|
|
|
if (state->state & exclusive_bits) {
|
2008-01-25 05:13:08 +08:00
|
|
|
*failed_start = start;
|
|
|
|
err = -EEXIST;
|
|
|
|
goto out;
|
|
|
|
}
|
2011-04-20 14:44:57 +08:00
|
|
|
|
|
|
|
prealloc = alloc_extent_state_atomic(prealloc);
|
|
|
|
BUG_ON(!prealloc);
|
2008-01-25 05:13:08 +08:00
|
|
|
err = split_state(tree, state, prealloc, end + 1);
|
2011-10-04 11:22:32 +08:00
|
|
|
if (err)
|
|
|
|
extent_io_tree_panic(tree, err);
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2015-10-12 14:53:37 +08:00
|
|
|
set_state_bits(tree, prealloc, &bits, changeset);
|
2009-09-03 03:04:12 +08:00
|
|
|
cache_state(prealloc, cached_state);
|
2008-01-25 05:13:08 +08:00
|
|
|
merge_state(tree, prealloc);
|
|
|
|
prealloc = NULL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2016-04-27 07:02:15 +08:00
|
|
|
search_again:
|
|
|
|
if (start > end)
|
|
|
|
goto out;
|
|
|
|
spin_unlock(&tree->lock);
|
|
|
|
if (gfpflags_allow_blocking(mask))
|
|
|
|
cond_resched();
|
|
|
|
goto again;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
|
|
|
out:
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (prealloc)
|
|
|
|
free_extent_state(prealloc);
|
|
|
|
|
|
|
|
return err;
|
|
|
|
|
|
|
|
}
|
|
|
|
|
2013-04-29 21:38:46 +08:00
|
|
|
int set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
|
2015-01-15 02:52:13 +08:00
|
|
|
unsigned bits, u64 * failed_start,
|
2013-04-29 21:38:46 +08:00
|
|
|
struct extent_state **cached_state, gfp_t mask)
|
2012-03-01 21:57:19 +08:00
|
|
|
{
|
|
|
|
return __set_extent_bit(tree, start, end, bits, 0, failed_start,
|
2015-10-12 14:53:37 +08:00
|
|
|
cached_state, mask, NULL);
|
2012-03-01 21:57:19 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
|

/**
 * convert_extent_bit - convert all bits in a given range from one bit to
 *			another
 * @tree:	the io tree to search
 * @start:	the start offset in bytes
 * @end:	the end offset in bytes (inclusive)
 * @bits:	the bits to set in this range
 * @clear_bits:	the bits to clear in this range
 * @cached_state:	state that we're going to cache
 *
 * This will go through and set bits for the given range.  If any states exist
 * already in this range they are set with the given bit and cleared of the
 * clear_bits.  This is only meant to be used by things that are mergeable, ie
 * converting from say DELALLOC to DIRTY.  This is not meant to be used with
 * boundary bits like LOCK.
 *
 * All allocations are done with GFP_NOFS.
 */
int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
		       unsigned bits, unsigned clear_bits,
		       struct extent_state **cached_state)
{
	struct extent_state *state;
	struct extent_state *prealloc = NULL;
	struct rb_node *node;
	struct rb_node **p;
	struct rb_node *parent;
	int err = 0;
	u64 last_start;
	u64 last_end;
	bool first_iteration = true;

	btrfs_debug_check_extent_io_range(tree, start, end);
	trace_btrfs_convert_extent_bit(tree, start, end - start + 1, bits,
				       clear_bits);

again:
	if (!prealloc) {
		/*
		 * Best effort, don't worry if extent state allocation fails
		 * here for the first iteration. We might have a cached state
		 * that matches exactly the target range, in which case no
		 * extent state allocations are needed. We'll only know this
		 * after locking the tree.
		 */
		prealloc = alloc_extent_state(GFP_NOFS);
		if (!prealloc && !first_iteration)
			return -ENOMEM;
	}

	spin_lock(&tree->lock);
	if (cached_state && *cached_state) {
		state = *cached_state;
		if (state->start <= start && state->end > start &&
		    extent_state_in_tree(state)) {
			node = &state->rb_node;
			goto hit_next;
		}
	}

	/*
	 * this search will find all the extents that end after
	 * our range starts.
	 */
	node = tree_search_for_insert(tree, start, &p, &parent);
	if (!node) {
		prealloc = alloc_extent_state_atomic(prealloc);
		if (!prealloc) {
			err = -ENOMEM;
			goto out;
		}
		err = insert_state(tree, prealloc, start, end,
				   &p, &parent, &bits, NULL);
		if (err)
			extent_io_tree_panic(tree, err);
		cache_state(prealloc, cached_state);
		prealloc = NULL;
		goto out;
	}
	state = rb_entry(node, struct extent_state, rb_node);
hit_next:
	last_start = state->start;
	last_end = state->end;

	/*
	 * | ---- desired range ---- |
	 * | state |
	 *
	 * Just lock what we found and keep going
	 */
	if (state->start == start && state->end <= end) {
		set_state_bits(tree, state, &bits, NULL);
		cache_state(state, cached_state);
		state = clear_state_bit(tree, state, &clear_bits, 0, NULL);
		if (last_end == (u64)-1)
			goto out;
		start = last_end + 1;
		if (start < end && state && state->start == start &&
		    !need_resched())
			goto hit_next;
		goto search_again;
	}

	/*
	 *     | ---- desired range ---- |
	 * | state |
	 *   or
	 * | ------------- state -------------- |
	 *
	 * We need to split the extent we found, and may flip bits on
	 * second half.
	 *
	 * If the extent we found extends past our
	 * range, we just split and search again.  It'll get split
	 * again the next time though.
	 *
	 * If the extent we found is inside our range, we set the
	 * desired bit on it.
	 */
	if (state->start < start) {
		prealloc = alloc_extent_state_atomic(prealloc);
		if (!prealloc) {
			err = -ENOMEM;
			goto out;
		}
		err = split_state(tree, state, prealloc, start);
		if (err)
			extent_io_tree_panic(tree, err);
		prealloc = NULL;
		if (err)
			goto out;
		if (state->end <= end) {
			set_state_bits(tree, state, &bits, NULL);
			cache_state(state, cached_state);
			state = clear_state_bit(tree, state, &clear_bits, 0,
						NULL);
			if (last_end == (u64)-1)
				goto out;
			start = last_end + 1;
			if (start < end && state && state->start == start &&
			    !need_resched())
				goto hit_next;
		}
		goto search_again;
	}
	/*
	 * | ---- desired range ---- |
	 *     | state | or               | state |
	 *
	 * There's a hole, we need to insert something in it and
	 * ignore the extent we found.
	 */
	if (state->start > start) {
		u64 this_end;
		if (end < last_start)
			this_end = end;
		else
			this_end = last_start - 1;

		prealloc = alloc_extent_state_atomic(prealloc);
		if (!prealloc) {
			err = -ENOMEM;
			goto out;
		}

		/*
		 * Avoid freeing 'prealloc' if it can be merged with
		 * the later extent.
		 */
		err = insert_state(tree, prealloc, start, this_end,
				   NULL, NULL, &bits, NULL);
		if (err)
			extent_io_tree_panic(tree, err);
		cache_state(prealloc, cached_state);
		prealloc = NULL;
		start = this_end + 1;
		goto search_again;
	}
	/*
	 * | ---- desired range ---- |
	 *                        | state |
	 * We need to split the extent, and set the bit
	 * on the first half
	 */
	if (state->start <= end && state->end > end) {
		prealloc = alloc_extent_state_atomic(prealloc);
		if (!prealloc) {
			err = -ENOMEM;
			goto out;
		}

		err = split_state(tree, state, prealloc, end + 1);
		if (err)
			extent_io_tree_panic(tree, err);

		set_state_bits(tree, prealloc, &bits, NULL);
		cache_state(prealloc, cached_state);
		clear_state_bit(tree, prealloc, &clear_bits, 0, NULL);
		prealloc = NULL;
		goto out;
	}

search_again:
	if (start > end)
		goto out;
	spin_unlock(&tree->lock);
	cond_resched();
	first_iteration = false;
	goto again;

out:
	spin_unlock(&tree->lock);
	if (prealloc)
		free_extent_state(prealloc);

	return err;
}
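
/*
 * Usage sketch for convert_extent_bit(). Illustrative only: the helper name
 * example_convert_dirty_to_need_wait is an assumption, and this pair of bits
 * is just one mergeable conversion of the kind the comment above describes.
 * A caller-provided cached state can avoid a tree search on the next call
 * for an adjacent range.
 */
static inline int example_convert_dirty_to_need_wait(struct extent_io_tree *tree,
						     u64 start, u64 end,
						     struct extent_state **cached)
{
	/* Set EXTENT_NEED_WAIT and clear EXTENT_DIRTY over [start, end]. */
	return convert_extent_bit(tree, start, end, EXTENT_NEED_WAIT,
				  EXTENT_DIRTY, cached);
}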

/* wrappers around set/clear extent bit */
int set_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end,
			   unsigned bits, struct extent_changeset *changeset)
{
	/*
	 * We don't support EXTENT_LOCKED yet, as current changeset will
	 * record any bits changed, so for EXTENT_LOCKED case, it will
	 * either fail with -EEXIST or changeset will record the whole
	 * range.
	 */
	BUG_ON(bits & EXTENT_LOCKED);

	return __set_extent_bit(tree, start, end, bits, 0, NULL, NULL, GFP_NOFS,
				changeset);
}
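
/*
 * Usage sketch for set_record_extent_bits(). Illustrative only: the helper
 * name example_record_set is an assumption. The changeset records exactly
 * which sub-ranges actually changed, which is what lets callers (for example
 * the qgroup reservation code) account only the newly set bytes;
 * EXTENT_LOCKED must not be part of the requested bits.
 */
static inline int example_record_set(struct extent_io_tree *tree, u64 start,
				     u64 end, unsigned bits,
				     struct extent_changeset *changeset)
{
	if (WARN_ON(bits & EXTENT_LOCKED))
		return -EINVAL;
	return set_record_extent_bits(tree, start, end, bits, changeset);
}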

int set_extent_bits_nowait(struct extent_io_tree *tree, u64 start, u64 end,
			   unsigned bits)
{
	return __set_extent_bit(tree, start, end, bits, 0, NULL, NULL,
				GFP_NOWAIT, NULL);
}

int clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
		     unsigned bits, int wake, int delete,
		     struct extent_state **cached)
{
	return __clear_extent_bit(tree, start, end, bits, wake, delete,
				  cached, GFP_NOFS, NULL);
}

int clear_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end,
			     unsigned bits, struct extent_changeset *changeset)
{
	/*
	 * Don't support EXTENT_LOCKED case, same reason as
	 * set_record_extent_bits().
	 */
	BUG_ON(bits & EXTENT_LOCKED);

	return __clear_extent_bit(tree, start, end, bits, 0, 0, NULL, GFP_NOFS,
				  changeset);
}

/*
 * Either insert or lock the state struct between start and end.  If the
 * range is already locked, wait for it to become unlocked and try again.
 */
int lock_extent_bits(struct extent_io_tree *tree, u64 start, u64 end,
		     struct extent_state **cached_state)
{
	int err;
	u64 failed_start;

	while (1) {
		err = __set_extent_bit(tree, start, end, EXTENT_LOCKED,
				       EXTENT_LOCKED, &failed_start,
				       cached_state, GFP_NOFS, NULL);
		if (err == -EEXIST) {
			wait_extent_bit(tree, failed_start, end, EXTENT_LOCKED);
			start = failed_start;
		} else
			break;
		WARN_ON(start > end);
	}
	return err;
}
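
/*
 * Usage sketch for lock_extent_bits(). Illustrative only: the helper name
 * example_locked_region is an assumption. The matching unlock is a clear of
 * EXTENT_LOCKED with wake == 1 so that waiters blocked in lock_extent_bits()
 * are woken; passing the cached state back avoids a second tree search.
 */
static inline void example_locked_region(struct extent_io_tree *tree,
					 u64 start, u64 end)
{
	struct extent_state *cached = NULL;

	lock_extent_bits(tree, start, end, &cached);
	/* ... operate on the locked byte range ... */
	clear_extent_bit(tree, start, end, EXTENT_LOCKED, 1, 0, &cached);
}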

int try_lock_extent(struct extent_io_tree *tree, u64 start, u64 end)
{
	int err;
	u64 failed_start;

	err = __set_extent_bit(tree, start, end, EXTENT_LOCKED, EXTENT_LOCKED,
			       &failed_start, NULL, GFP_NOFS, NULL);
	if (err == -EEXIST) {
		if (failed_start > start)
			clear_extent_bit(tree, start, failed_start - 1,
					 EXTENT_LOCKED, 1, 0, NULL);
		return 0;
	}
	return 1;
}
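
/*
 * Usage sketch for try_lock_extent(). Illustrative only: the helper name
 * example_try_locked_region is an assumption. try_lock_extent() returns 1
 * when the whole range was locked and 0 when part of it is already locked by
 * someone else, in which case a caller typically skips the range rather than
 * blocking on it.
 */
static inline bool example_try_locked_region(struct extent_io_tree *tree,
					     u64 start, u64 end)
{
	if (!try_lock_extent(tree, start, end))
		return false;	/* contended, come back to this range later */
	/* ... operate on the locked byte range ... */
	clear_extent_bit(tree, start, end, EXTENT_LOCKED, 1, 0, NULL);
	return true;
}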

void extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end)
{
	unsigned long index = start >> PAGE_SHIFT;
	unsigned long end_index = end >> PAGE_SHIFT;
	struct page *page;

	while (index <= end_index) {
		page = find_get_page(inode->i_mapping, index);
		BUG_ON(!page); /* Pages should be in the extent_io_tree */
		clear_page_dirty_for_io(page);
		put_page(page);
		index++;
	}
}

void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
{
	unsigned long index = start >> PAGE_SHIFT;
	unsigned long end_index = end >> PAGE_SHIFT;
	struct page *page;

	while (index <= end_index) {
		page = find_get_page(inode->i_mapping, index);
		BUG_ON(!page); /* Pages should be in the extent_io_tree */
		__set_page_dirty_nobuffers(page);
		account_page_redirty(page);
		put_page(page);
		index++;
	}
}
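
/*
 * Usage sketch for the two page helpers above. Illustrative only: the helper
 * name example_abort_range is an assumption. A caller such as the compression
 * path clears the dirty bits before handing a range to a worker and, if that
 * work has to be aborted, redirties the same range so the data is still
 * picked up by writeback.
 */
static inline void example_abort_range(struct inode *inode, u64 start, u64 end)
{
	extent_range_clear_dirty_for_io(inode, start, end);
	/* ... attempt to process the range; if it fails: ... */
	extent_range_redirty_for_io(inode, start, end);
}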

/* find the first state struct with 'bits' set after 'start', and
 * return it.  tree->lock must be held.  NULL will be returned if
 * nothing was found after 'start'
 */
static struct extent_state *
find_first_extent_bit_state(struct extent_io_tree *tree,
			    u64 start, unsigned bits)
{
	struct rb_node *node;
	struct extent_state *state;

	/*
	 * this search will find all the extents that end after
	 * our range starts.
	 */
	node = tree_search(tree, start);
	if (!node)
		goto out;

	while (1) {
		state = rb_entry(node, struct extent_state, rb_node);
		if (state->end >= start && (state->state & bits))
			return state;

		node = rb_next(node);
		if (!node)
			break;
	}
out:
	return NULL;
}

/*
 * find the first offset in the io tree with 'bits' set. zero is
 * returned if we find something, and *start_ret and *end_ret are
 * set to reflect the state struct that was found.
 *
 * If nothing was found, 1 is returned. If found something, return 0.
 */
int find_first_extent_bit(struct extent_io_tree *tree, u64 start,
			  u64 *start_ret, u64 *end_ret, unsigned bits,
			  struct extent_state **cached_state)
{
	struct extent_state *state;
	int ret = 1;

	spin_lock(&tree->lock);
	if (cached_state && *cached_state) {
		state = *cached_state;
		if (state->end == start - 1 && extent_state_in_tree(state)) {
			while ((state = next_state(state)) != NULL) {
				if (state->state & bits)
					goto got_it;
			}
			free_extent_state(*cached_state);
			*cached_state = NULL;
			goto out;
		}
		free_extent_state(*cached_state);
		*cached_state = NULL;
	}

	state = find_first_extent_bit_state(tree, start, bits);
got_it:
	if (state) {
		cache_state_if_flags(state, cached_state, 0);
		*start_ret = state->start;
		*end_ret = state->end;
		ret = 0;
	}
out:
	spin_unlock(&tree->lock);
	return ret;
}
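
/*
 * Usage sketch for find_first_extent_bit(). Illustrative only: the helper
 * name example_count_bytes_with_bits is an assumption. Each successful call
 * returns one range with any of the given bits set; restarting the search
 * just past that range walks the whole tree. A cached_state pointer could be
 * passed instead of NULL to short-circuit the search for adjacent hits.
 */
static inline u64 example_count_bytes_with_bits(struct extent_io_tree *tree,
						unsigned bits)
{
	u64 start = 0, found_start, found_end;
	u64 bytes = 0;

	while (!find_first_extent_bit(tree, start, &found_start, &found_end,
				      bits, NULL)) {
		bytes += found_end - found_start + 1;
		if (found_end == (u64)-1)
			break;
		start = found_end + 1;
	}
	return bytes;
}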

/**
 * find_first_clear_extent_bit - find the first range that has @bits not set.
 * This range could start before @start.
 *
 * @tree - the tree to search
 * @start - the offset at/after which the found extent should start
 * @start_ret - records the beginning of the range
 * @end_ret - records the end of the range (inclusive)
 * @bits - the set of bits which must be unset
 *
 * Since unallocated range is also considered one which doesn't have the bits
 * set, it's possible that @end_ret contains -1, this happens in case the range
 * spans (last_range_end, end of device]. In this case it's up to the caller to
 * trim @end_ret to the appropriate size.
 */
void find_first_clear_extent_bit(struct extent_io_tree *tree, u64 start,
				 u64 *start_ret, u64 *end_ret, unsigned bits)
{
	struct extent_state *state;
	struct rb_node *node, *prev = NULL, *next;

	spin_lock(&tree->lock);

	/* Find first extent with bits cleared */
	while (1) {
		node = __etree_search(tree, start, &next, &prev, NULL, NULL);
		if (!node) {
			node = next;
			if (!node) {
				/*
				 * We are past the last allocated chunk,
				 * set start at the end of the last extent. The
				 * device alloc tree should never be empty so
				 * prev is always set.
				 */
				ASSERT(prev);
				state = rb_entry(prev, struct extent_state, rb_node);
				*start_ret = state->end + 1;
				*end_ret = -1;
				goto out;
			}
		}
		/*
		 * At this point 'node' either contains 'start' or start is
		 * before 'node'
		 */
		state = rb_entry(node, struct extent_state, rb_node);

		if (in_range(start, state->start, state->end - state->start + 1)) {
			if (state->state & bits) {
				/*
				 * |--range with bits set--|
				 *    |
				 *    start
				 */
				start = state->end + 1;
			} else {
				/*
				 * 'start' falls within a range that doesn't
				 * have the bits set, so take its start as
				 * the beginning of the desired range
				 *
				 * |--range with bits cleared----|
				 *      |
				 *      start
				 */
				*start_ret = state->start;
				break;
			}
		} else {
			/*
			 * |---prev range---|---hole/unset---|---node range---|
			 *                          |
			 *                        start
			 *
			 * or
			 *
			 * |---hole/unset--||--first node--|
			 * 0   |
			 *     start
			 */
			if (prev) {
				state = rb_entry(prev, struct extent_state,
						 rb_node);
				*start_ret = state->end + 1;
			} else {
				*start_ret = 0;
			}
			break;
		}
	}

	/*
	 * Find the longest stretch from start until an entry which has the
	 * bits set
	 */
	while (1) {
		state = rb_entry(node, struct extent_state, rb_node);
		if (state->end >= start && !(state->state & bits)) {
			*end_ret = state->end;
		} else {
			*end_ret = state->start - 1;
			break;
		}

		node = rb_next(node);
		if (!node)
			break;
	}
out:
	spin_unlock(&tree->lock);
}
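
/*
 * Usage sketch for find_first_clear_extent_bit(). Illustrative only: the
 * helper name example_find_hole is an assumption, and CHUNK_ALLOCATED is
 * assumed to be the bit tracked in the tree being searched. Finds a hole of
 * at least @len bytes at or after @start; as the comment above notes,
 * @end_ret can come back as -1 past the last tracked range, so it is clamped
 * to @search_end here.
 */
static inline bool example_find_hole(struct extent_io_tree *tree, u64 start,
				     u64 len, u64 search_end, u64 *hole_start)
{
	u64 range_start, range_end;

	find_first_clear_extent_bit(tree, start, &range_start, &range_end,
				    CHUNK_ALLOCATED);
	if (range_end == (u64)-1 || range_end > search_end)
		range_end = search_end;
	*hole_start = max(range_start, start);
	return *hole_start <= range_end && range_end - *hole_start + 1 >= len;
}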

/*
 * find a contiguous range of bytes in the file marked as delalloc, not
 * more than 'max_bytes'.  start and end are used to return the range.
 *
 * true is returned if we find something, false if nothing was in the tree
 */
static noinline bool find_delalloc_range(struct extent_io_tree *tree,
					 u64 *start, u64 *end, u64 max_bytes,
					 struct extent_state **cached_state)
{
	struct rb_node *node;
	struct extent_state *state;
	u64 cur_start = *start;
	bool found = false;
	u64 total_bytes = 0;

	spin_lock(&tree->lock);

	/*
	 * this search will find all the extents that end after
	 * our range starts.
	 */
	node = tree_search(tree, cur_start);
	if (!node) {
		*end = (u64)-1;
		goto out;
	}

	while (1) {
		state = rb_entry(node, struct extent_state, rb_node);
		if (found && (state->start != cur_start ||
			      (state->state & EXTENT_BOUNDARY))) {
			goto out;
		}
		if (!(state->state & EXTENT_DELALLOC)) {
			if (!found)
				*end = state->end;
			goto out;
		}
		if (!found) {
			*start = state->start;
			*cached_state = state;
			refcount_inc(&state->refs);
		}
		found = true;
		*end = state->end;
		cur_start = state->end + 1;
		node = rb_next(node);
		total_bytes += state->end - state->start + 1;
		if (total_bytes >= max_bytes)
			break;
		if (!node)
			break;
	}
out:
	spin_unlock(&tree->lock);
	return found;
}

static int __process_pages_contig(struct address_space *mapping,
				  struct page *locked_page,
				  pgoff_t start_index, pgoff_t end_index,
				  unsigned long page_ops, pgoff_t *index_ret);

static noinline void __unlock_for_delalloc(struct inode *inode,
					   struct page *locked_page,
					   u64 start, u64 end)
{
	unsigned long index = start >> PAGE_SHIFT;
	unsigned long end_index = end >> PAGE_SHIFT;

	ASSERT(locked_page);
	if (index == locked_page->index && end_index == index)
		return;

	__process_pages_contig(inode->i_mapping, locked_page, index, end_index,
			       PAGE_UNLOCK, NULL);
}
static noinline int lock_delalloc_pages(struct inode *inode,
|
|
|
|
struct page *locked_page,
|
|
|
|
u64 delalloc_start,
|
|
|
|
u64 delalloc_end)
|
|
|
|
{
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CAHCE_ALIGN definition: we are going to drop it later.
There are few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
unsigned long index = delalloc_start >> PAGE_SHIFT;
2017-02-10 23:42:14 +08:00
unsigned long index_ret = index;
2016-04-01 20:29:47 +08:00
unsigned long end_index = delalloc_end >> PAGE_SHIFT;
Btrfs: Add zlib compression support
This is a large change that adds compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c, etc.)
are changed to record both an in-memory size and an on-disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on-disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as for encryption and a generic 'other' field. Neither the encryption nor the
'other' field is currently used.
To limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software-only limit; the disk format supports u64-sized compressed extents.
To limit the RAM consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software-only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically single threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the CPUs on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
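As a rough picture of the "in-memory size vs. on-disk size plus compression flag" bookkeeping described above, here is a simplified C sketch. The struct, field, and macro names (demo_file_extent, DEMO_MAX_*) are illustrative stand-ins, not the actual btrfs on-disk format; the 128k/256k numbers are the software limits quoted in the commit message.

#include <linux/types.h>

/*
 * Simplified, illustrative extent record; NOT the real btrfs disk format,
 * only the shape described in the commit message above.
 */
struct demo_file_extent {
	u64 disk_num_bytes;	/* bytes the (possibly compressed) extent takes on disk */
	u64 ram_bytes;		/* uncompressed length of the data it describes */
	u8  compression;	/* 0 = none, otherwise the compression encoding (e.g. zlib) */
	u8  encryption;		/* reserved, currently unused */
	u16 other_encoding;	/* reserved, currently unused */
};

/* Illustrative software limits matching the 128k/256k numbers quoted above. */
#define DEMO_MAX_COMPRESSED_EXTENT	(128 * 1024)
#define DEMO_MAX_UNCOMPRESSED_EXTENT	(256 * 1024)

static bool demo_extent_within_limits(const struct demo_file_extent *fe)
{
	if (!fe->compression)
		return true;
	return fe->disk_num_bytes <= DEMO_MAX_COMPRESSED_EXTENT &&
	       fe->ram_bytes <= DEMO_MAX_UNCOMPRESSED_EXTENT;
}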
int ret;
2017-02-10 23:42:14 +08:00
ASSERT(locked_page);
2008-10-30 02:49:59 +08:00
if (index == locked_page->index && index == end_index)
return 0;
2017-02-10 23:42:14 +08:00
ret = __process_pages_contig(inode->i_mapping, locked_page, index,
end_index, PAGE_LOCK, &index_ret);
if (ret == -EAGAIN)
__unlock_for_delalloc(inode, locked_page, delalloc_start,
(u64)index_ret << PAGE_SHIFT);
2008-10-30 02:49:59 +08:00
return ret;
}
/*
2018-11-29 11:33:38 +08:00
* Find and lock a contiguous range of bytes in the file marked as delalloc, no
* more than @max_bytes. @Start and @end are used to return the range,
2008-10-30 02:49:59 +08:00
*
2018-11-29 11:33:38 +08:00
* Return: true if we find something
* false if nothing was in the tree
2008-10-30 02:49:59 +08:00
*/
2018-11-19 17:38:17 +08:00
EXPORT_FOR_TESTS
2018-11-29 11:33:38 +08:00
noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
2013-10-10 00:00:56 +08:00
struct page *locked_page, u64 *start,
2018-10-26 19:43:20 +08:00
u64 *end)
2008-10-30 02:49:59 +08:00
{
2019-06-21 23:02:54 +08:00
struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
2018-10-26 19:43:20 +08:00
u64 max_bytes = BTRFS_MAX_EXTENT_SIZE;
2008-10-30 02:49:59 +08:00
u64 delalloc_start;
u64 delalloc_end;
2018-11-29 11:33:38 +08:00
bool found;
2009-09-03 03:22:30 +08:00
struct extent_state *cached_state = NULL;
2008-10-30 02:49:59 +08:00
int ret;
int loops = 0;
again:
/* step one, find a bunch of delalloc bytes starting at start */
delalloc_start = *start;
delalloc_end = 0;
found = find_delalloc_range(tree, &delalloc_start, &delalloc_end,
2010-02-03 05:19:11 +08:00
max_bytes, &cached_state);
2008-11-01 00:46:39 +08:00
if (!found || delalloc_end <= *start) {
2008-10-30 02:49:59 +08:00
*start = delalloc_start;
*end = delalloc_end;
2010-02-03 05:19:11 +08:00
free_extent_state(cached_state);
2018-11-29 11:33:38 +08:00
return false;
2008-10-30 02:49:59 +08:00
}
2008-11-01 00:46:39 +08:00
/*
* start comes from the offset of locked_page. We have to lock
* pages in order, so we can't process delalloc bytes before
* locked_page
*/
2009-01-06 10:25:51 +08:00
if (delalloc_start < *start)
2008-11-01 00:46:39 +08:00
delalloc_start = *start;
2008-10-30 02:49:59 +08:00
/*
* make sure to limit the number of pages we try to lock down
*/
2013-10-08 10:11:09 +08:00
if (delalloc_end + 1 - delalloc_start > max_bytes)
delalloc_end = delalloc_start + max_bytes - 1;
2008-10-30 02:49:59 +08:00
/* step two, lock all the pages after the page that has start */
ret = lock_delalloc_pages(inode, locked_page,
delalloc_start, delalloc_end);
2018-10-26 19:43:21 +08:00
ASSERT(!ret || ret == -EAGAIN);
2008-10-30 02:49:59 +08:00
if (ret == -EAGAIN) {
/* some of the pages are gone, lets avoid looping by
* shortening the size of the delalloc range we're searching
*/
2009-09-03 03:22:30 +08:00
free_extent_state(cached_state);
2014-05-21 20:49:54 +08:00
cached_state = NULL;
2008-10-30 02:49:59 +08:00
if (!loops) {
2016-04-01 20:29:47 +08:00
max_bytes = PAGE_SIZE;
2008-10-30 02:49:59 +08:00
loops = 1;
goto again;
} else {
2018-11-29 11:33:38 +08:00
found = false;
2008-10-30 02:49:59 +08:00
goto out_failed;
}
}
/* step three, lock the state bits for the whole range */
2015-12-03 21:30:40 +08:00
lock_extent_bits(tree, delalloc_start, delalloc_end, &cached_state);
2008-10-30 02:49:59 +08:00
/* then test to make sure it is all still delalloc */
ret = test_range_bit(tree, delalloc_start, delalloc_end,
2009-09-03 03:22:30 +08:00
EXTENT_DELALLOC, 1, cached_state);
2008-10-30 02:49:59 +08:00
if (!ret) {
2009-09-03 03:22:30 +08:00
unlock_extent_cached(tree, delalloc_start, delalloc_end,
2017-12-13 04:43:52 +08:00
&cached_state);
2008-10-30 02:49:59 +08:00
__unlock_for_delalloc(inode, locked_page,
delalloc_start, delalloc_end);
cond_resched();
goto again;
}
2009-09-03 03:22:30 +08:00
free_extent_state(cached_state);
2008-10-30 02:49:59 +08:00
*start = delalloc_start;
*end = delalloc_end;
out_failed:
return found;
}
|
|
|
|
|
2017-02-10 23:41:05 +08:00
|
|
|
static int __process_pages_contig(struct address_space *mapping,
|
|
|
|
struct page *locked_page,
|
|
|
|
pgoff_t start_index, pgoff_t end_index,
|
|
|
|
unsigned long page_ops, pgoff_t *index_ret)
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
{
|
2017-02-03 09:49:22 +08:00
|
|
|
unsigned long nr_pages = end_index - start_index + 1;
|
2017-02-10 23:41:05 +08:00
|
|
|
unsigned long pages_locked = 0;
|
2017-02-03 09:49:22 +08:00
|
|
|
pgoff_t index = start_index;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
struct page *pages[16];
|
2017-02-03 09:49:22 +08:00
|
|
|
unsigned ret;
|
2017-02-10 23:41:05 +08:00
|
|
|
int err = 0;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
int i;
|
2008-11-07 11:02:51 +08:00
|
|
|
|
2017-02-10 23:41:05 +08:00
|
|
|
if (page_ops & PAGE_LOCK) {
|
|
|
|
ASSERT(page_ops == PAGE_LOCK);
|
|
|
|
ASSERT(index_ret && *index_ret == start_index);
|
|
|
|
}
|
|
|
|
|
2014-10-07 05:14:22 +08:00
|
|
|
if ((page_ops & PAGE_SET_ERROR) && nr_pages > 0)
|
2017-02-03 09:49:22 +08:00
|
|
|
mapping_set_error(mapping, -EIO);
|
2014-10-07 05:14:22 +08:00
|
|
|
|
2009-01-06 10:25:51 +08:00
|
|
|
while (nr_pages > 0) {
|
2017-02-03 09:49:22 +08:00
|
|
|
ret = find_get_pages_contig(mapping, index,
|
2008-11-11 22:34:41 +08:00
|
|
|
min_t(unsigned long,
|
|
|
|
nr_pages, ARRAY_SIZE(pages)), pages);
|
2017-02-10 23:41:05 +08:00
|
|
|
if (ret == 0) {
|
|
|
|
/*
|
|
|
|
* Only if we're going to lock these pages,
|
|
|
|
* can we find nothing at @index.
|
|
|
|
*/
|
|
|
|
ASSERT(page_ops & PAGE_LOCK);
|
2017-03-07 10:20:56 +08:00
|
|
|
err = -EAGAIN;
|
|
|
|
goto out;
|
2017-02-10 23:41:05 +08:00
|
|
|
}
|
2009-09-03 04:53:46 +08:00
|
|
|
|
2017-02-10 23:41:05 +08:00
|
|
|
for (i = 0; i < ret; i++) {
|
2013-07-29 23:20:47 +08:00
|
|
|
if (page_ops & PAGE_SET_PRIVATE2)
|
2009-09-03 04:53:46 +08:00
|
|
|
SetPagePrivate2(pages[i]);
|
|
|
|
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
if (pages[i] == locked_page) {
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CAHCE_ALIGN definition: we are going to drop it later.
There are few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
put_page(pages[i]);
|
2017-02-10 23:41:05 +08:00
|
|
|
pages_locked++;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
continue;
|
|
|
|
}
|
2013-07-29 23:20:47 +08:00
|
|
|
if (page_ops & PAGE_CLEAR_DIRTY)
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
clear_page_dirty_for_io(pages[i]);
|
2013-07-29 23:20:47 +08:00
|
|
|
if (page_ops & PAGE_SET_WRITEBACK)
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
set_page_writeback(pages[i]);
|
2014-10-07 05:14:22 +08:00
|
|
|
if (page_ops & PAGE_SET_ERROR)
|
|
|
|
SetPageError(pages[i]);
|
2013-07-29 23:20:47 +08:00
|
|
|
if (page_ops & PAGE_END_WRITEBACK)
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
end_page_writeback(pages[i]);
|
2013-07-29 23:20:47 +08:00
|
|
|
if (page_ops & PAGE_UNLOCK)
|
2008-11-07 11:02:51 +08:00
|
|
|
unlock_page(pages[i]);
|
2017-02-10 23:41:05 +08:00
|
|
|
if (page_ops & PAGE_LOCK) {
|
|
|
|
lock_page(pages[i]);
|
|
|
|
if (!PageDirty(pages[i]) ||
|
|
|
|
pages[i]->mapping != mapping) {
|
|
|
|
unlock_page(pages[i]);
|
|
|
|
put_page(pages[i]);
|
|
|
|
err = -EAGAIN;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CAHCE_ALIGN definition: we are going to drop it later.
There are few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
put_page(pages[i]);
|
2017-02-10 23:41:05 +08:00
|
|
|
pages_locked++;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
}
|
|
|
|
nr_pages -= ret;
|
|
|
|
index += ret;
|
|
|
|
cond_resched();
|
|
|
|
}
|
2017-02-10 23:41:05 +08:00
|
|
|
out:
|
|
|
|
if (err && index_ret)
|
|
|
|
*index_ret = start_index + pages_locked - 1;
|
|
|
|
return err;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
}
|
|
|
|
|
2017-02-03 09:49:22 +08:00
|
|
|
void extent_clear_unlock_delalloc(struct inode *inode, u64 start, u64 end,
|
|
|
|
u64 delalloc_end, struct page *locked_page,
|
|
|
|
unsigned clear_bits,
|
|
|
|
unsigned long page_ops)
|
|
|
|
{
|
|
|
|
clear_extent_bit(&BTRFS_I(inode)->io_tree, start, end, clear_bits, 1, 0,
|
2017-10-31 23:37:52 +08:00
|
|
|
NULL);
|
2017-02-03 09:49:22 +08:00
|
|
|
|
|
|
|
__process_pages_contig(inode->i_mapping, locked_page,
|
|
|
|
start >> PAGE_SHIFT, end >> PAGE_SHIFT,
|
2017-02-10 23:41:05 +08:00
|
|
|
page_ops, NULL);
|
2017-02-03 09:49:22 +08:00
|
|
|
}
|
|
|
|
|
2008-09-30 03:18:18 +08:00
|
|
|
/*
|
|
|
|
* count the number of bytes in the tree that have a given bit(s)
|
|
|
|
* set. This can be fairly slow, except for EXTENT_DIRTY which is
|
|
|
|
* cached. The total number found is returned.
|
|
|
|
*/
|
2008-01-25 05:13:08 +08:00
|
|
|
u64 count_range_bits(struct extent_io_tree *tree,
|
|
|
|
u64 *start, u64 search_end, u64 max_bytes,
|
2015-01-15 02:52:13 +08:00
|
|
|
unsigned bits, int contig)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct rb_node *node;
|
|
|
|
struct extent_state *state;
|
|
|
|
u64 cur_start = *start;
|
|
|
|
u64 total_bytes = 0;
|
2011-02-24 05:23:20 +08:00
|
|
|
u64 last = 0;
|
2008-01-25 05:13:08 +08:00
|
|
|
int found = 0;
|
|
|
|
|
2013-10-31 13:00:08 +08:00
|
|
|
if (WARN_ON(search_end <= cur_start))
|
2008-01-25 05:13:08 +08:00
|
|
|
return 0;
|
|
|
|
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (cur_start == 0 && bits == EXTENT_DIRTY) {
|
|
|
|
total_bytes = tree->dirty_bytes;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* this search will find all the extents that end after
|
|
|
|
* our range starts.
|
|
|
|
*/
|
2008-02-02 03:51:59 +08:00
|
|
|
node = tree_search(tree, cur_start);
|
2009-01-06 10:25:51 +08:00
|
|
|
if (!node)
|
2008-01-25 05:13:08 +08:00
|
|
|
goto out;
|
|
|
|
|
2009-01-06 10:25:51 +08:00
|
|
|
while (1) {
|
2008-01-25 05:13:08 +08:00
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
|
|
|
if (state->start > search_end)
|
|
|
|
break;
|
2011-02-24 05:23:20 +08:00
|
|
|
if (contig && found && state->start > last + 1)
|
|
|
|
break;
|
|
|
|
if (state->end >= cur_start && (state->state & bits) == bits) {
|
2008-01-25 05:13:08 +08:00
|
|
|
total_bytes += min(search_end, state->end) + 1 -
|
|
|
|
max(cur_start, state->start);
|
|
|
|
if (total_bytes >= max_bytes)
|
|
|
|
break;
|
|
|
|
if (!found) {
|
2011-05-04 23:11:17 +08:00
|
|
|
*start = max(cur_start, state->start);
|
2008-01-25 05:13:08 +08:00
|
|
|
found = 1;
|
|
|
|
}
|
2011-02-24 05:23:20 +08:00
|
|
|
last = state->end;
|
|
|
|
} else if (contig && found) {
|
|
|
|
break;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
node = rb_next(node);
|
|
|
|
if (!node)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
out:
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
return total_bytes;
|
|
|
|
}
|
2008-12-02 22:54:17 +08:00
|
|
|
|
2008-09-30 03:18:18 +08:00
|
|
|
/*
|
|
|
|
* set the private field for a given byte offset in the tree. If there isn't
|
|
|
|
* an extent_state there already, this does nothing.
|
|
|
|
*/
|
2016-02-23 05:53:20 +08:00
|
|
|
static noinline int set_state_failrec(struct extent_io_tree *tree, u64 start,
|
2016-02-11 20:24:13 +08:00
|
|
|
struct io_failure_record *failrec)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct rb_node *node;
|
|
|
|
struct extent_state *state;
|
|
|
|
int ret = 0;
|
|
|
|
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* this search will find all the extents that end after
|
|
|
|
* our range starts.
|
|
|
|
*/
|
2008-02-02 03:51:59 +08:00
|
|
|
node = tree_search(tree, start);
|
2008-04-01 23:21:40 +08:00
|
|
|
if (!node) {
|
2008-01-25 05:13:08 +08:00
|
|
|
ret = -ENOENT;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
|
|
|
if (state->start != start) {
|
|
|
|
ret = -ENOENT;
|
|
|
|
goto out;
|
|
|
|
}
|
2016-02-11 20:24:13 +08:00
|
|
|
state->failrec = failrec;
|
2008-01-25 05:13:08 +08:00
|
|
|
out:
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-02-23 05:53:20 +08:00
|
|
|
static noinline int get_state_failrec(struct extent_io_tree *tree, u64 start,
|
2016-02-11 20:24:13 +08:00
|
|
|
struct io_failure_record **failrec)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct rb_node *node;
|
|
|
|
struct extent_state *state;
|
|
|
|
int ret = 0;
|
|
|
|
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* this search will find all the extents that end after
|
|
|
|
* our range starts.
|
|
|
|
*/
|
2008-02-02 03:51:59 +08:00
|
|
|
node = tree_search(tree, start);
|
2008-04-01 23:21:40 +08:00
|
|
|
if (!node) {
|
2008-01-25 05:13:08 +08:00
|
|
|
ret = -ENOENT;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
|
|
|
if (state->start != start) {
|
|
|
|
ret = -ENOENT;
|
|
|
|
goto out;
|
|
|
|
}
|
2016-02-11 20:24:13 +08:00
|
|
|
*failrec = state->failrec;
|
2008-01-25 05:13:08 +08:00
|
|
|
out:
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* searches a range in the state tree for a given mask.
|
2008-01-29 22:59:12 +08:00
|
|
|
* If 'filled' == 1, this returns 1 only if every extent in the tree
|
2008-01-25 05:13:08 +08:00
|
|
|
* has the bits set. Otherwise, 1 is returned if any bit in the
|
|
|
|
* range is found set.
|
|
|
|
*/
|
|
|
|
int test_range_bit(struct extent_io_tree *tree, u64 start, u64 end,
|
2015-01-15 02:52:13 +08:00
|
|
|
unsigned bits, int filled, struct extent_state *cached)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_state *state = NULL;
|
|
|
|
struct rb_node *node;
|
|
|
|
int bitset = 0;
|
|
|
|
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_lock(&tree->lock);
|
2014-07-07 03:09:59 +08:00
|
|
|
if (cached && extent_state_in_tree(cached) && cached->start <= start &&
|
2011-06-21 02:53:48 +08:00
|
|
|
cached->end > start)
|
2009-09-03 03:22:30 +08:00
|
|
|
node = &cached->rb_node;
|
|
|
|
else
|
|
|
|
node = tree_search(tree, start);
|
2008-01-25 05:13:08 +08:00
|
|
|
while (node && start <= end) {
|
|
|
|
state = rb_entry(node, struct extent_state, rb_node);
|
|
|
|
|
|
|
|
if (filled && state->start > start) {
|
|
|
|
bitset = 0;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (state->start > end)
|
|
|
|
break;
|
|
|
|
|
|
|
|
if (state->state & bits) {
|
|
|
|
bitset = 1;
|
|
|
|
if (!filled)
|
|
|
|
break;
|
|
|
|
} else if (filled) {
|
|
|
|
bitset = 0;
|
|
|
|
break;
|
|
|
|
}
|
2009-09-24 08:23:16 +08:00
|
|
|
|
|
|
|
if (state->end == (u64)-1)
|
|
|
|
break;
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
start = state->end + 1;
|
|
|
|
if (start > end)
|
|
|
|
break;
|
|
|
|
node = rb_next(node);
|
|
|
|
if (!node) {
|
|
|
|
if (filled)
|
|
|
|
bitset = 0;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2008-12-18 03:51:42 +08:00
|
|
|
spin_unlock(&tree->lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
return bitset;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* helper function to set a given page up to date if all the
|
|
|
|
* extents in the tree for that page are up to date
|
|
|
|
*/
|
2012-03-01 21:56:26 +08:00
|
|
|
static void check_page_uptodate(struct extent_io_tree *tree, struct page *page)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2012-12-21 17:17:45 +08:00
|
|
|
u64 start = page_offset(page);
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CAHCE_ALIGN definition: we are going to drop it later.
There are few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
u64 end = start + PAGE_SIZE - 1;
|
2009-09-03 03:22:30 +08:00
|
|
|
if (test_range_bit(tree, start, end, EXTENT_UPTODATE, 1, NULL))
|
2008-01-25 05:13:08 +08:00
|
|
|
SetPageUptodate(page);
|
|
|
|
}
|
|
|
|
|
2017-05-05 23:57:15 +08:00
|
|
|
int free_io_failure(struct extent_io_tree *failure_tree,
|
|
|
|
struct extent_io_tree *io_tree,
|
|
|
|
struct io_failure_record *rec)
|
2011-07-22 21:41:52 +08:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
int err = 0;
|
|
|
|
|
2016-02-11 20:24:13 +08:00
|
|
|
set_state_failrec(failure_tree, rec->start, NULL);
|
2011-07-22 21:41:52 +08:00
|
|
|
ret = clear_extent_bits(failure_tree, rec->start,
|
|
|
|
rec->start + rec->len - 1,
|
2016-04-27 05:54:39 +08:00
|
|
|
EXTENT_LOCKED | EXTENT_DIRTY);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (ret)
|
|
|
|
err = ret;
|
|
|
|
|
2017-05-05 23:57:15 +08:00
|
|
|
ret = clear_extent_bits(io_tree, rec->start,
|
2013-01-30 07:40:14 +08:00
|
|
|
rec->start + rec->len - 1,
|
2016-04-27 05:54:39 +08:00
|
|
|
EXTENT_DAMAGED);
|
2013-01-30 07:40:14 +08:00
|
|
|
if (ret && !err)
|
|
|
|
err = ret;
|
2011-07-22 21:41:52 +08:00
|
|
|
|
|
|
|
kfree(rec);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* this bypasses the standard btrfs submit functions deliberately, as
|
|
|
|
* the standard behavior is to write all copies in a raid setup. here we only
|
|
|
|
* want to write the one bad copy. so we do the mapping for ourselves and issue
|
|
|
|
* submit_bio directly.
|
2012-11-05 22:46:42 +08:00
|
|
|
* to avoid any synchronization issues, wait for the data after writing, which
|
2011-07-22 21:41:52 +08:00
|
|
|
* actually prevents the read that triggered the error from finishing.
|
|
|
|
* currently, there can be no more than two copies of every data bit. thus,
|
|
|
|
* exactly one rewrite is required.
|
|
|
|
*/
|
2017-05-05 23:57:14 +08:00
|
|
|
int repair_io_failure(struct btrfs_fs_info *fs_info, u64 ino, u64 start,
|
|
|
|
u64 length, u64 logical, struct page *page,
|
|
|
|
unsigned int pg_offset, int mirror_num)
|
2011-07-22 21:41:52 +08:00
|
|
|
{
|
|
|
|
struct bio *bio;
|
|
|
|
struct btrfs_device *dev;
|
|
|
|
u64 map_length = 0;
|
|
|
|
u64 sector;
|
|
|
|
struct btrfs_bio *bbio = NULL;
|
|
|
|
int ret;
|
|
|
|
|
2017-11-28 05:05:09 +08:00
|
|
|
ASSERT(!(fs_info->sb->s_flags & SB_RDONLY));
|
2011-07-22 21:41:52 +08:00
|
|
|
BUG_ON(!mirror_num);
|
|
|
|
|
2017-06-12 23:29:41 +08:00
|
|
|
bio = btrfs_io_bio_alloc(1);
|
2013-10-12 06:44:27 +08:00
|
|
|
bio->bi_iter.bi_size = 0;
|
2011-07-22 21:41:52 +08:00
|
|
|
map_length = length;
|
|
|
|
|
2016-05-28 05:21:27 +08:00
|
|
|
/*
|
|
|
|
* Avoid races with device replace and make sure our bbio has devices
|
|
|
|
* associated to its stripes that don't go away while we are doing the
|
|
|
|
* read repair operation.
|
|
|
|
*/
|
|
|
|
btrfs_bio_counter_inc_blocked(fs_info);
|
2017-07-19 15:48:42 +08:00
|
|
|
if (btrfs_is_parity_mirror(fs_info, logical, length)) {
|
2017-03-30 01:53:58 +08:00
|
|
|
/*
|
|
|
|
* Note that we don't use BTRFS_MAP_WRITE because it's supposed
|
|
|
|
* to update all raid stripes, but here we just want to correct
|
|
|
|
* bad stripe, thus BTRFS_MAP_READ is abused to only get the bad
|
|
|
|
* stripe's dev and sector.
|
|
|
|
*/
|
|
|
|
ret = btrfs_map_block(fs_info, BTRFS_MAP_READ, logical,
|
|
|
|
&map_length, &bbio, 0);
|
|
|
|
if (ret) {
|
|
|
|
btrfs_bio_counter_dec(fs_info);
|
|
|
|
bio_put(bio);
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
ASSERT(bbio->mirror_num == 1);
|
|
|
|
} else {
|
|
|
|
ret = btrfs_map_block(fs_info, BTRFS_MAP_WRITE, logical,
|
|
|
|
&map_length, &bbio, mirror_num);
|
|
|
|
if (ret) {
|
|
|
|
btrfs_bio_counter_dec(fs_info);
|
|
|
|
bio_put(bio);
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
BUG_ON(mirror_num != bbio->mirror_num);
|
2011-07-22 21:41:52 +08:00
|
|
|
}
|
2017-03-30 01:53:58 +08:00
|
|
|
|
|
|
|
sector = bbio->stripes[bbio->mirror_num - 1].physical >> 9;
|
2013-10-12 06:44:27 +08:00
|
|
|
bio->bi_iter.bi_sector = sector;
|
2017-03-30 01:53:58 +08:00
|
|
|
dev = bbio->stripes[bbio->mirror_num - 1].dev;
|
2015-01-20 15:11:34 +08:00
|
|
|
btrfs_put_bbio(bbio);
|
2017-12-04 12:54:52 +08:00
|
|
|
if (!dev || !dev->bdev ||
|
|
|
|
!test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state)) {
|
2016-05-28 05:21:27 +08:00
|
|
|
btrfs_bio_counter_dec(fs_info);
|
2011-07-22 21:41:52 +08:00
|
|
|
bio_put(bio);
|
|
|
|
return -EIO;
|
|
|
|
}
|
2017-08-24 01:10:32 +08:00
|
|
|
bio_set_dev(bio, dev->bdev);
|
2016-11-01 21:40:10 +08:00
|
|
|
bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;
|
2014-09-12 18:44:00 +08:00
|
|
|
bio_add_page(bio, page, length, pg_offset);
|
2011-07-22 21:41:52 +08:00
|
|
|
|
2016-06-06 03:31:41 +08:00
|
|
|
if (btrfsic_submit_bio_wait(bio)) {
|
2011-07-22 21:41:52 +08:00
|
|
|
/* try to remap that extent elsewhere? */
|
2016-05-28 05:21:27 +08:00
|
|
|
btrfs_bio_counter_dec(fs_info);
|
2011-07-22 21:41:52 +08:00
|
|
|
bio_put(bio);
|
2012-05-25 22:06:08 +08:00
|
|
|
btrfs_dev_stat_inc_and_print(dev, BTRFS_DEV_STAT_WRITE_ERRS);
|
2011-07-22 21:41:52 +08:00
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
|
2015-10-08 16:43:10 +08:00
|
|
|
btrfs_info_rl_in_rcu(fs_info,
|
|
|
|
"read error corrected: ino %llu off %llu (dev %s sector %llu)",
|
2017-05-05 23:57:14 +08:00
|
|
|
ino, start,
|
2014-09-12 18:44:01 +08:00
|
|
|
rcu_str_deref(dev->name), sector);
|
2016-05-28 05:21:27 +08:00
|
|
|
btrfs_bio_counter_dec(fs_info);
|
2011-07-22 21:41:52 +08:00
|
|
|
bio_put(bio);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-03-20 18:23:44 +08:00
|
|
|
int btrfs_repair_eb_io_failure(struct extent_buffer *eb, int mirror_num)
|
2012-03-27 09:57:36 +08:00
|
|
|
{
|
2019-03-20 18:23:44 +08:00
|
|
|
struct btrfs_fs_info *fs_info = eb->fs_info;
|
2012-03-27 09:57:36 +08:00
|
|
|
u64 start = eb->start;
|
2018-03-02 01:20:27 +08:00
|
|
|
int i, num_pages = num_extent_pages(eb);
|
2012-04-13 03:55:15 +08:00
|
|
|
int ret = 0;
|
2012-03-27 09:57:36 +08:00
|
|
|
|
2017-07-17 15:45:34 +08:00
|
|
|
if (sb_rdonly(fs_info->sb))
|
2013-11-04 01:06:39 +08:00
|
|
|
return -EROFS;
|
|
|
|
|
2012-03-27 09:57:36 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
struct page *p = eb->pages[i];
|
2014-09-12 18:44:01 +08:00
|
|
|
|
2017-05-05 23:57:14 +08:00
|
|
|
ret = repair_io_failure(fs_info, 0, start, PAGE_SIZE, start, p,
|
2014-09-12 18:44:01 +08:00
|
|
|
start - page_offset(p), mirror_num);
|
2012-03-27 09:57:36 +08:00
|
|
|
if (ret)
|
|
|
|
break;
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CAHCE_ALIGN definition: we are going to drop it later.
There are few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
start += PAGE_SIZE;
|
2012-03-27 09:57:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2011-07-22 21:41:52 +08:00
|
|
|
/*
|
|
|
|
* each time an IO finishes, we do a fast check in the IO failure tree
|
|
|
|
* to see if we need to process or clean up an io_failure_record
|
|
|
|
*/
|
2017-05-05 23:57:15 +08:00
|
|
|
int clean_io_failure(struct btrfs_fs_info *fs_info,
|
|
|
|
struct extent_io_tree *failure_tree,
|
|
|
|
struct extent_io_tree *io_tree, u64 start,
|
|
|
|
struct page *page, u64 ino, unsigned int pg_offset)
|
2011-07-22 21:41:52 +08:00
|
|
|
{
|
|
|
|
u64 private;
|
|
|
|
struct io_failure_record *failrec;
|
|
|
|
struct extent_state *state;
|
|
|
|
int num_copies;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
private = 0;
|
2017-05-05 23:57:15 +08:00
|
|
|
ret = count_range_bits(failure_tree, &private, (u64)-1, 1,
|
|
|
|
EXTENT_DIRTY, 0);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (!ret)
|
|
|
|
return 0;
|
|
|
|
|
2017-05-05 23:57:15 +08:00
|
|
|
ret = get_state_failrec(failure_tree, start, &failrec);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (ret)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
BUG_ON(!failrec->this_mirror);
|
|
|
|
|
|
|
|
if (failrec->in_validation) {
|
|
|
|
/* there was no real error, just free the record */
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_debug(fs_info,
|
|
|
|
"clean_io_failure: freeing dummy error at %llu",
|
|
|
|
failrec->start);
|
2011-07-22 21:41:52 +08:00
|
|
|
goto out;
|
|
|
|
}
|
2017-07-17 15:45:34 +08:00
|
|
|
if (sb_rdonly(fs_info->sb))
|
2013-11-04 01:06:39 +08:00
|
|
|
goto out;
|
2011-07-22 21:41:52 +08:00
|
|
|
|
2017-05-05 23:57:15 +08:00
|
|
|
spin_lock(&io_tree->lock);
|
|
|
|
state = find_first_extent_bit_state(io_tree,
|
2011-07-22 21:41:52 +08:00
|
|
|
failrec->start,
|
|
|
|
EXTENT_LOCKED);
|
2017-05-05 23:57:15 +08:00
|
|
|
spin_unlock(&io_tree->lock);
|
2011-07-22 21:41:52 +08:00
|
|
|
|
2013-07-25 19:22:35 +08:00
|
|
|
if (state && state->start <= failrec->start &&
|
|
|
|
state->end >= failrec->start + failrec->len - 1) {
|
2012-11-05 22:46:42 +08:00
|
|
|
num_copies = btrfs_num_copies(fs_info, failrec->logical,
|
|
|
|
failrec->len);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (num_copies > 1) {
|
2017-05-05 23:57:15 +08:00
|
|
|
repair_io_failure(fs_info, ino, start, failrec->len,
|
|
|
|
failrec->logical, page, pg_offset,
|
|
|
|
failrec->failed_mirror);
|
2011-07-22 21:41:52 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
2017-05-05 23:57:15 +08:00
|
|
|
free_io_failure(failure_tree, io_tree, failrec);
|
2011-07-22 21:41:52 +08:00
|
|
|
|
2014-09-12 18:43:58 +08:00
|
|
|
return 0;
|
2011-07-22 21:41:52 +08:00
|
|
|
}
|
|
|
|
|
Btrfs: cleanup the read failure record after write or when the inode is freeing
After the data is written successfully, we should cleanup the read failure record
in that range because
- If we set data COW for the file, the range that the failure record pointed to is
mapped to a new place, so it is invalid.
- If we set no data COW for the file, and if there is no error during writting,
the corrupted data is corrected, so the failure record can be removed. And if
some errors happen on the mirrors, we also needn't worry about it because the
failure record will be recreated if we read the same place again.
Sometimes, we may fail to correct the data, so the failure records will be left
in the tree, we need free them when we free the inode or the memory leak happens.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-09-12 18:44:04 +08:00
|
|
|
/*
|
|
|
|
* Can be called when
|
|
|
|
* - hold extent lock
|
|
|
|
* - under ordered extent
|
|
|
|
* - the inode is freeing
|
|
|
|
*/
|
2017-02-20 19:50:57 +08:00
|
|
|
void btrfs_free_io_failure_record(struct btrfs_inode *inode, u64 start, u64 end)
|
Btrfs: cleanup the read failure record after write or when the inode is freeing
After the data is written successfully, we should cleanup the read failure record
in that range because
- If we set data COW for the file, the range that the failure record pointed to is
mapped to a new place, so it is invalid.
- If we set no data COW for the file, and if there is no error during writting,
the corrupted data is corrected, so the failure record can be removed. And if
some errors happen on the mirrors, we also needn't worry about it because the
failure record will be recreated if we read the same place again.
Sometimes, we may fail to correct the data, so the failure records will be left
in the tree, we need free them when we free the inode or the memory leak happens.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-09-12 18:44:04 +08:00
|
|
|
{
|
2017-02-20 19:50:57 +08:00
|
|
|
struct extent_io_tree *failure_tree = &inode->io_failure_tree;
|
Btrfs: cleanup the read failure record after write or when the inode is freeing
After the data is written successfully, we should cleanup the read failure record
in that range because
- If we set data COW for the file, the range that the failure record pointed to is
mapped to a new place, so it is invalid.
- If we set no data COW for the file, and if there is no error during writting,
the corrupted data is corrected, so the failure record can be removed. And if
some errors happen on the mirrors, we also needn't worry about it because the
failure record will be recreated if we read the same place again.
Sometimes, we may fail to correct the data, so the failure records will be left
in the tree, we need free them when we free the inode or the memory leak happens.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-09-12 18:44:04 +08:00
|
|
|
struct io_failure_record *failrec;
|
|
|
|
struct extent_state *state, *next;
|
|
|
|
|
|
|
|
if (RB_EMPTY_ROOT(&failure_tree->state))
|
|
|
|
return;
|
|
|
|
|
|
|
|
spin_lock(&failure_tree->lock);
|
|
|
|
state = find_first_extent_bit_state(failure_tree, start, EXTENT_DIRTY);
|
|
|
|
while (state) {
|
|
|
|
if (state->start > end)
|
|
|
|
break;
|
|
|
|
|
|
|
|
ASSERT(state->end <= end);
|
|
|
|
|
|
|
|
next = next_state(state);
|
|
|
|
|
2016-02-11 20:24:13 +08:00
|
|
|
failrec = state->failrec;
|
2014-09-12 18:44:04 +08:00
|
|
|
free_extent_state(state);
|
|
|
|
kfree(failrec);
|
|
|
|
|
|
|
|
state = next;
|
|
|
|
}
|
|
|
|
spin_unlock(&failure_tree->lock);
|
|
|
|
}
|
|
|
|
|
2014-09-12 18:43:59 +08:00
|
|
|
int btrfs_get_io_failure_record(struct inode *inode, u64 start, u64 end,
|
2016-02-11 20:24:13 +08:00
|
|
|
struct io_failure_record **failrec_ret)
|
2011-07-22 21:41:52 +08:00
|
|
|
{
|
2016-09-20 22:05:02 +08:00
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
|
2014-09-12 18:43:59 +08:00
|
|
|
struct io_failure_record *failrec;
|
2011-07-22 21:41:52 +08:00
|
|
|
struct extent_map *em;
|
|
|
|
struct extent_io_tree *failure_tree = &BTRFS_I(inode)->io_failure_tree;
|
|
|
|
struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
|
|
|
|
struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
|
|
|
|
int ret;
|
|
|
|
u64 logical;
|
|
|
|
|
2016-02-11 20:24:13 +08:00
|
|
|
ret = get_state_failrec(failure_tree, start, &failrec);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (ret) {
|
|
|
|
failrec = kzalloc(sizeof(*failrec), GFP_NOFS);
|
|
|
|
if (!failrec)
|
|
|
|
return -ENOMEM;
|
2014-09-12 18:43:59 +08:00
|
|
|
|
2011-07-22 21:41:52 +08:00
|
|
|
failrec->start = start;
|
|
|
|
failrec->len = end - start + 1;
|
|
|
|
failrec->this_mirror = 0;
|
|
|
|
failrec->bio_flags = 0;
|
|
|
|
failrec->in_validation = 0;
|
|
|
|
|
|
|
|
read_lock(&em_tree->lock);
|
|
|
|
em = lookup_extent_mapping(em_tree, start, failrec->len);
|
|
|
|
if (!em) {
|
|
|
|
read_unlock(&em_tree->lock);
|
|
|
|
kfree(failrec);
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
|
2013-11-25 11:22:07 +08:00
|
|
|
if (em->start > start || em->start + em->len <= start) {
|
2011-07-22 21:41:52 +08:00
|
|
|
free_extent_map(em);
|
|
|
|
em = NULL;
|
|
|
|
}
|
|
|
|
read_unlock(&em_tree->lock);
|
2012-10-01 17:07:15 +08:00
|
|
|
if (!em) {
|
2011-07-22 21:41:52 +08:00
|
|
|
kfree(failrec);
|
|
|
|
return -EIO;
|
|
|
|
}
|
2014-09-12 18:43:59 +08:00
|
|
|
|
2011-07-22 21:41:52 +08:00
|
|
|
logical = start - em->start;
|
|
|
|
logical = em->block_start + logical;
|
|
|
|
if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) {
|
|
|
|
logical = em->block_start;
|
|
|
|
failrec->bio_flags = EXTENT_BIO_COMPRESSED;
|
|
|
|
extent_set_compress_type(&failrec->bio_flags,
|
|
|
|
em->compress_type);
|
|
|
|
}
|
2014-09-12 18:43:59 +08:00
|
|
|
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_debug(fs_info,
|
|
|
|
"Get IO Failure Record: (new) logical=%llu, start=%llu, len=%llu",
|
|
|
|
logical, start, failrec->len);
|
2014-09-12 18:43:59 +08:00
|
|
|
|
2011-07-22 21:41:52 +08:00
|
|
|
failrec->logical = logical;
|
|
|
|
free_extent_map(em);
|
|
|
|
|
|
|
|
/* set the bits in the private failure tree */
|
|
|
|
ret = set_extent_bits(failure_tree, start, end,
|
2016-04-27 05:54:39 +08:00
|
|
|
EXTENT_LOCKED | EXTENT_DIRTY);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (ret >= 0)
|
2016-02-11 20:24:13 +08:00
|
|
|
ret = set_state_failrec(failure_tree, start, failrec);
|
2011-07-22 21:41:52 +08:00
|
|
|
/* set the bits in the inode's tree */
|
|
|
|
if (ret >= 0)
|
2016-04-27 05:54:39 +08:00
|
|
|
ret = set_extent_bits(tree, start, end, EXTENT_DAMAGED);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (ret < 0) {
|
|
|
|
kfree(failrec);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
} else {
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_debug(fs_info,
|
|
|
|
"Get IO Failure Record: (found) logical=%llu, start=%llu, len=%llu, validation=%d",
|
|
|
|
failrec->logical, failrec->start, failrec->len,
|
|
|
|
failrec->in_validation);
|
2011-07-22 21:41:52 +08:00
|
|
|
/*
|
|
|
|
* when data can be on disk more than twice, add to failrec here
|
|
|
|
* (e.g. with a list for failed_mirror) to make
|
|
|
|
* clean_io_failure() clean all those errors at once.
|
|
|
|
*/
|
|
|
|
}
|
2014-09-12 18:43:59 +08:00
|
|
|
|
|
|
|
*failrec_ret = failrec;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-12-18 20:22:11 +08:00
|
|
|
bool btrfs_check_repairable(struct inode *inode, unsigned failed_bio_pages,
|
2014-09-12 18:43:59 +08:00
|
|
|
struct io_failure_record *failrec, int failed_mirror)
|
|
|
|
{
|
2016-09-20 22:05:02 +08:00
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
|
2014-09-12 18:43:59 +08:00
|
|
|
int num_copies;
|
|
|
|
|
2016-09-20 22:05:02 +08:00
|
|
|
num_copies = btrfs_num_copies(fs_info, failrec->logical, failrec->len);
|
2011-07-22 21:41:52 +08:00
|
|
|
if (num_copies == 1) {
|
|
|
|
/*
|
|
|
|
* we only have a single copy of the data, so don't bother with
|
|
|
|
* all the retry and error correction code that follows. no
|
|
|
|
* matter what the error is, it is very likely to persist.
|
|
|
|
*/
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_debug(fs_info,
|
|
|
|
"Check Repairable: cannot repair, num_copies=%d, next_mirror %d, failed_mirror %d",
|
|
|
|
num_copies, failrec->this_mirror, failed_mirror);
|
2017-07-14 06:00:50 +08:00
|
|
|
return false;
|
2011-07-22 21:41:52 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* there are two premises:
|
|
|
|
* a) deliver good data to the caller
|
|
|
|
* b) correct the bad sectors on disk
|
|
|
|
*/
|
2017-12-18 20:22:11 +08:00
|
|
|
if (failed_bio_pages > 1) {
|
2011-07-22 21:41:52 +08:00
|
|
|
/*
|
|
|
|
* to fulfill b), we need to know the exact failing sectors, as
|
|
|
|
* we don't want to rewrite any more than the failed ones. thus,
|
|
|
|
* we need separate read requests for the failed bio
|
|
|
|
*
|
|
|
|
* if the following BUG_ON triggers, our validation request got
|
|
|
|
* merged. we need separate requests for our algorithm to work.
|
|
|
|
*/
|
|
|
|
BUG_ON(failrec->in_validation);
|
|
|
|
failrec->in_validation = 1;
|
|
|
|
failrec->this_mirror = failed_mirror;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* we're ready to fulfill a) and b) alongside. get a good copy
|
|
|
|
* of the failed sector and if we succeed, we have setup
|
|
|
|
* everything for repair_io_failure to do the rest for us.
|
|
|
|
*/
|
|
|
|
if (failrec->in_validation) {
|
|
|
|
BUG_ON(failrec->this_mirror != failed_mirror);
|
|
|
|
failrec->in_validation = 0;
|
|
|
|
failrec->this_mirror = 0;
|
|
|
|
}
|
|
|
|
failrec->failed_mirror = failed_mirror;
|
|
|
|
failrec->this_mirror++;
|
|
|
|
if (failrec->this_mirror == failed_mirror)
|
|
|
|
failrec->this_mirror++;
|
|
|
|
}
|
|
|
|
|
2013-07-25 19:22:34 +08:00
|
|
|
if (failrec->this_mirror > num_copies) {
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_debug(fs_info,
|
|
|
|
"Check Repairable: (fail) num_copies=%d, next_mirror %d, failed_mirror %d",
|
|
|
|
num_copies, failrec->this_mirror, failed_mirror);
|
2017-07-14 06:00:50 +08:00
|
|
|
return false;
|
2011-07-22 21:41:52 +08:00
|
|
|
}
|
|
|
|
|
2017-07-14 06:00:50 +08:00
|
|
|
return true;
|
2014-09-12 18:43:59 +08:00
|
|
|
}
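The mirror-rotation rule above (in the single-page branch) is easy to miss: step to the next copy, skip over the copy that already failed, and give up once the counter walks past num_copies. A minimal userspace sketch of just that arithmetic, with made-up values and a hypothetical helper name (not kernel code):

#include <stdio.h>
#include <stdbool.h>

/* Illustrative only: mimics the this_mirror/failed_mirror stepping above. */
static bool pick_next_mirror(int *this_mirror, int failed_mirror, int num_copies)
{
        (*this_mirror)++;                       /* try the next copy */
        if (*this_mirror == failed_mirror)      /* never re-read the failed copy */
                (*this_mirror)++;
        return *this_mirror <= num_copies;      /* false once every copy was tried */
}

int main(void)
{
        int this_mirror = 0;

        /* num_copies = 2 and mirror 1 failed: the only retry candidate is mirror 2 */
        while (pick_next_mirror(&this_mirror, 1, 2))
                printf("retry read from mirror %d\n", this_mirror);
        return 0;
}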
|
|
|
|
|
|
|
|
|
|
|
|
struct bio *btrfs_create_repair_bio(struct inode *inode, struct bio *failed_bio,
|
|
|
|
struct io_failure_record *failrec,
|
|
|
|
struct page *page, int pg_offset, int icsum,
|
2014-09-12 18:44:03 +08:00
|
|
|
bio_end_io_t *endio_func, void *data)
|
2014-09-12 18:43:59 +08:00
|
|
|
{
|
2016-06-23 06:54:23 +08:00
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
|
2014-09-12 18:43:59 +08:00
|
|
|
struct bio *bio;
|
|
|
|
struct btrfs_io_bio *btrfs_failed_bio;
|
|
|
|
struct btrfs_io_bio *btrfs_bio;
|
|
|
|
|
2017-06-12 23:29:41 +08:00
|
|
|
bio = btrfs_io_bio_alloc(1);
|
2014-09-12 18:43:59 +08:00
|
|
|
bio->bi_end_io = endio_func;
|
2013-10-12 06:44:27 +08:00
|
|
|
bio->bi_iter.bi_sector = failrec->logical >> 9;
|
2017-08-24 01:10:32 +08:00
|
|
|
bio_set_dev(bio, fs_info->fs_devices->latest_bdev);
|
2013-10-12 06:44:27 +08:00
|
|
|
bio->bi_iter.bi_size = 0;
|
2014-09-12 18:44:03 +08:00
|
|
|
bio->bi_private = data;
|
2011-07-22 21:41:52 +08:00
|
|
|
|
2013-07-25 19:22:34 +08:00
|
|
|
btrfs_failed_bio = btrfs_io_bio(failed_bio);
|
|
|
|
if (btrfs_failed_bio->csum) {
|
|
|
|
u16 csum_size = btrfs_super_csum_size(fs_info->super_copy);
|
|
|
|
|
|
|
|
btrfs_bio = btrfs_io_bio(bio);
|
|
|
|
btrfs_bio->csum = btrfs_bio->csum_inline;
|
2014-09-12 18:43:59 +08:00
|
|
|
icsum *= csum_size;
|
|
|
|
memcpy(btrfs_bio->csum, btrfs_failed_bio->csum + icsum,
|
2013-07-25 19:22:34 +08:00
|
|
|
csum_size);
|
|
|
|
}
|
|
|
|
|
2014-09-12 18:43:59 +08:00
|
|
|
bio_add_page(bio, page, failrec->len, pg_offset);
|
|
|
|
|
|
|
|
return bio;
|
|
|
|
}
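The icsum handling above only slices one per-block checksum out of the failed bio's inline checksum array: the block index is scaled by the checksum size to get a byte offset. A hedged userspace illustration of that offset math, assuming a 4-byte crc32c checksum and invented array contents:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        /* Invented checksums for the 4 blocks of a failed bio, 4 bytes each. */
        uint8_t failed_csums[16] = {
                0xaa, 0xaa, 0xaa, 0xaa,   /* block 0 */
                0xbb, 0xbb, 0xbb, 0xbb,   /* block 1 */
                0xcc, 0xcc, 0xcc, 0xcc,   /* block 2 */
                0xdd, 0xdd, 0xdd, 0xdd,   /* block 3 */
        };
        uint8_t repair_csum[4];
        int icsum = 3;          /* index of the block being repaired */
        int csum_size = 4;      /* e.g. crc32c */

        /* Same idea as above: scale the block index to a byte offset. */
        memcpy(repair_csum, failed_csums + icsum * csum_size, csum_size);
        printf("repair checksum starts with 0x%02x\n", repair_csum[0]); /* 0xdd */
        return 0;
}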
|
|
|
|
|
|
|
|
/*
|
2018-11-22 16:17:49 +08:00
|
|
|
* This is a generic handler for readpage errors. If other copies exist, read
|
|
|
|
* those and write back good data to the failed position. It does not attempt
|
|
|
|
* to remap the failed extent elsewhere, hoping the device will be smart
|
|
|
|
* enough to do this as needed
|
2014-09-12 18:43:59 +08:00
|
|
|
*/
|
|
|
|
static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
|
|
|
|
struct page *page, u64 start, u64 end,
|
|
|
|
int failed_mirror)
|
|
|
|
{
|
|
|
|
struct io_failure_record *failrec;
|
|
|
|
struct inode *inode = page->mapping->host;
|
|
|
|
struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
|
2017-05-05 23:57:15 +08:00
|
|
|
struct extent_io_tree *failure_tree = &BTRFS_I(inode)->io_failure_tree;
|
2014-09-12 18:43:59 +08:00
|
|
|
struct bio *bio;
|
2016-11-01 21:40:10 +08:00
|
|
|
int read_mode = 0;
|
2017-06-03 15:38:06 +08:00
|
|
|
blk_status_t status;
|
2014-09-12 18:43:59 +08:00
|
|
|
int ret;
|
2019-02-15 19:13:07 +08:00
|
|
|
unsigned failed_bio_pages = failed_bio->bi_iter.bi_size >> PAGE_SHIFT;
|
2014-09-12 18:43:59 +08:00
|
|
|
|
2016-06-06 03:31:51 +08:00
|
|
|
BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE);
|
2014-09-12 18:43:59 +08:00
|
|
|
|
|
|
|
ret = btrfs_get_io_failure_record(inode, start, end, &failrec);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
2017-12-18 20:22:11 +08:00
|
|
|
if (!btrfs_check_repairable(inode, failed_bio_pages, failrec,
|
2017-07-14 06:00:50 +08:00
|
|
|
failed_mirror)) {
|
2017-05-05 23:57:15 +08:00
|
|
|
free_io_failure(failure_tree, tree, failrec);
|
2014-09-12 18:43:59 +08:00
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
|
2017-12-18 20:22:11 +08:00
|
|
|
if (failed_bio_pages > 1)
|
2016-11-01 21:40:10 +08:00
|
|
|
read_mode |= REQ_FAILFAST_DEV;
|
2014-09-12 18:43:59 +08:00
|
|
|
|
|
|
|
phy_offset >>= inode->i_sb->s_blocksize_bits;
|
|
|
|
bio = btrfs_create_repair_bio(inode, failed_bio, failrec, page,
|
|
|
|
start - page_offset(page),
|
2014-09-12 18:44:03 +08:00
|
|
|
(int)phy_offset, failed_bio->bi_end_io,
|
|
|
|
NULL);
|
2018-06-29 16:56:53 +08:00
|
|
|
bio->bi_opf = REQ_OP_READ | read_mode;
|
2011-07-22 21:41:52 +08:00
|
|
|
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_debug(btrfs_sb(inode->i_sb),
|
|
|
|
"Repair Read Error: submitting new read[%#x] to this_mirror=%d, in_validation=%d",
|
|
|
|
read_mode, failrec->this_mirror, failrec->in_validation);
|
2011-07-22 21:41:52 +08:00
|
|
|
|
2017-07-06 07:41:23 +08:00
|
|
|
status = tree->ops->submit_bio_hook(tree->private_data, bio, failrec->this_mirror,
|
2019-04-11 00:46:04 +08:00
|
|
|
failrec->bio_flags);
|
2017-06-03 15:38:06 +08:00
|
|
|
if (status) {
|
2017-05-05 23:57:15 +08:00
|
|
|
free_io_failure(failure_tree, tree, failrec);
|
2014-09-12 18:43:57 +08:00
|
|
|
bio_put(bio);
|
2017-06-03 15:38:06 +08:00
|
|
|
ret = blk_status_to_errno(status);
|
2014-09-12 18:43:57 +08:00
|
|
|
}
|
|
|
|
|
2012-02-16 09:11:40 +08:00
|
|
|
return ret;
|
2011-07-22 21:41:52 +08:00
|
|
|
}
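The REQ_FAILFAST_DEV decision above hinges on failed_bio_pages, which is just the failed bio's byte size shifted down by PAGE_SHIFT. A tiny illustrative calculation (userspace, 4 KiB pages assumed):

#include <stdio.h>

#define PAGE_SHIFT 12   /* assume 4 KiB pages for the illustration */

int main(void)
{
        unsigned int bi_size = 16384;   /* a failed 16 KiB read bio */
        unsigned int failed_bio_pages = bi_size >> PAGE_SHIFT;

        /* Mirrors the decision above: multi-page failures get FAILFAST retries. */
        printf("failed_bio_pages=%u -> %s\n", failed_bio_pages,
               failed_bio_pages > 1 ? "REQ_FAILFAST_DEV set" : "plain retry");
        return 0;
}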
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/* lots and lots of room for performance fixes in the end_bio funcs */
|
|
|
|
|
2015-12-03 20:08:59 +08:00
|
|
|
void end_extent_writepage(struct page *page, int err, u64 start, u64 end)
|
2012-02-15 23:23:57 +08:00
|
|
|
{
|
|
|
|
int uptodate = (err == 0);
|
2014-06-12 13:39:58 +08:00
|
|
|
int ret = 0;
|
2012-02-15 23:23:57 +08:00
|
|
|
|
2018-11-08 16:18:08 +08:00
|
|
|
btrfs_writepage_endio_finish_ordered(page, start, end, uptodate);
|
2012-02-15 23:23:57 +08:00
|
|
|
|
|
|
|
if (!uptodate) {
|
|
|
|
ClearPageUptodate(page);
|
|
|
|
SetPageError(page);
|
2017-05-10 01:14:01 +08:00
|
|
|
ret = err < 0 ? err : -EIO;
|
2014-05-12 12:47:36 +08:00
|
|
|
mapping_set_error(page->mapping, ret);
|
2012-02-15 23:23:57 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* after a writepage IO is done, we need to:
|
|
|
|
* clear the uptodate bits on error
|
|
|
|
* clear the writeback bits in the extent tree for this IO
|
|
|
|
* end_page_writeback if the page has no more pending IO
|
|
|
|
*
|
|
|
|
* Scheduling is not allowed, so the extent state tree is expected
|
|
|
|
* to have one and only one object corresponding to this IO.
|
|
|
|
*/
|
2015-07-20 21:29:37 +08:00
|
|
|
static void end_bio_extent_writepage(struct bio *bio)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2017-06-03 15:38:06 +08:00
|
|
|
int error = blk_status_to_errno(bio->bi_status);
|
2013-11-08 04:20:26 +08:00
|
|
|
struct bio_vec *bvec;
|
2008-01-25 05:13:08 +08:00
|
|
|
u64 start;
|
|
|
|
u64 end;
|
2019-02-15 19:13:19 +08:00
|
|
|
struct bvec_iter_all iter_all;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2017-07-14 00:10:07 +08:00
|
|
|
ASSERT(!bio_flagged(bio, BIO_CLONED));
|
2019-04-25 15:03:00 +08:00
|
|
|
bio_for_each_segment_all(bvec, bio, iter_all) {
|
2008-01-25 05:13:08 +08:00
|
|
|
struct page *page = bvec->bv_page;
|
2016-06-23 06:54:23 +08:00
|
|
|
struct inode *inode = page->mapping->host;
|
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
|
2008-08-20 20:51:49 +08:00
|
|
|
|
2013-05-15 23:38:55 +08:00
|
|
|
/* We always issue full-page writes, but if some block
|
|
|
|
* in a page fails to write, blk_update_request() will
|
|
|
|
* advance bv_offset and adjust bv_len to compensate.
|
|
|
|
* Print a warning for nonzero offsets, and an error
|
|
|
|
* if they don't add up to a full page. */
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
ago with the promise that one day it would be possible to implement the
page cache with bigger chunks than PAGE_SIZE.
This promise never materialized, and likely never will.
We have many places where PAGE_CACHE_SIZE is assumed to be equal to
PAGE_SIZE, and it's a constant source of confusion about whether a
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in the page cache are special. They are
not.
The changes are pretty straightforward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using the
script below. For some reason, coccinelle doesn't patch header files;
I've called spatch for them manually.
The only adjustment after coccinelle is a revert of the changes to the
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code that coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation will
also be addressed in a separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
if (bvec->bv_offset || bvec->bv_len != PAGE_SIZE) {
|
|
|
|
if (bvec->bv_offset + bvec->bv_len != PAGE_SIZE)
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_err(fs_info,
|
2013-12-21 00:37:06 +08:00
|
|
|
"partial page write in btrfs with offset %u and length %u",
|
|
|
|
bvec->bv_offset, bvec->bv_len);
|
|
|
|
else
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_info(fs_info,
|
2016-09-20 22:05:00 +08:00
|
|
|
"incomplete page write in btrfs with offset %u and length %u",
|
2013-12-21 00:37:06 +08:00
|
|
|
bvec->bv_offset, bvec->bv_len);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2013-05-15 23:38:55 +08:00
|
|
|
start = page_offset(page);
|
|
|
|
end = start + bvec->bv_offset + bvec->bv_len - 1;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2017-06-03 15:38:06 +08:00
|
|
|
end_extent_writepage(page, error, start, end);
|
2013-05-15 23:38:55 +08:00
|
|
|
end_page_writeback(page);
|
2013-11-08 04:20:26 +08:00
|
|
|
}
|
2008-09-24 23:48:04 +08:00
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
bio_put(bio);
|
|
|
|
}
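The start/end computed per segment above come straight from the page's file offset plus the bvec offset and length. A small userspace sketch of that arithmetic with invented numbers (page_offset() is modelled as index * PAGE_SIZE here, pages assumed 4 KiB):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL   /* assumed page size for the illustration */

int main(void)
{
        uint64_t page_index = 5;                        /* sixth page of the file */
        uint64_t page_off = page_index * PAGE_SIZE;     /* stand-in for page_offset() */
        unsigned int bv_offset = 0, bv_len = 4096;

        /* Same arithmetic as the write end_io handler above. */
        uint64_t start = page_off;
        uint64_t end = start + bv_offset + bv_len - 1;
        printf("ordered range [%llu, %llu]\n",          /* 20480..24575 */
               (unsigned long long)start, (unsigned long long)end);
        return 0;
}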
|
|
|
|
|
2013-07-25 19:22:35 +08:00
|
|
|
static void
|
|
|
|
endio_readpage_release_extent(struct extent_io_tree *tree, u64 start, u64 len,
|
|
|
|
int uptodate)
|
|
|
|
{
|
|
|
|
struct extent_state *cached = NULL;
|
|
|
|
u64 end = start + len - 1;
|
|
|
|
|
|
|
|
if (uptodate && tree->track_uptodate)
|
|
|
|
set_extent_uptodate(tree, start, end, &cached, GFP_ATOMIC);
|
2017-12-08 01:52:54 +08:00
|
|
|
unlock_extent_cached_atomic(tree, start, end, &cached);
|
2013-07-25 19:22:35 +08:00
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* after a readpage IO is done, we need to:
|
|
|
|
* clear the uptodate bits on error
|
|
|
|
* set the uptodate bits if things worked
|
|
|
|
* set the page up to date if all extents in the tree are uptodate
|
|
|
|
* clear the lock bit in the extent tree
|
|
|
|
* unlock the page if there are no other extents locked for it
|
|
|
|
*
|
|
|
|
* Scheduling is not allowed, so the extent state tree is expected
|
|
|
|
* to have one and only one object corresponding to this IO.
|
|
|
|
*/
|
2015-07-20 21:29:37 +08:00
|
|
|
static void end_bio_extent_readpage(struct bio *bio)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2013-11-08 04:20:26 +08:00
|
|
|
struct bio_vec *bvec;
|
2017-06-03 15:38:06 +08:00
|
|
|
int uptodate = !bio->bi_status;
|
2013-07-25 19:22:34 +08:00
|
|
|
struct btrfs_io_bio *io_bio = btrfs_io_bio(bio);
|
2017-05-05 23:57:15 +08:00
|
|
|
struct extent_io_tree *tree, *failure_tree;
|
2013-07-25 19:22:34 +08:00
|
|
|
u64 offset = 0;
|
2008-01-25 05:13:08 +08:00
|
|
|
u64 start;
|
|
|
|
u64 end;
|
2013-07-25 19:22:34 +08:00
|
|
|
u64 len;
|
2013-07-25 19:22:35 +08:00
|
|
|
u64 extent_start = 0;
|
|
|
|
u64 extent_len = 0;
|
2012-04-16 21:42:26 +08:00
|
|
|
int mirror;
|
2008-01-25 05:13:08 +08:00
|
|
|
int ret;
|
2019-02-15 19:13:19 +08:00
|
|
|
struct bvec_iter_all iter_all;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2017-07-14 00:10:07 +08:00
|
|
|
ASSERT(!bio_flagged(bio, BIO_CLONED));
|
2019-04-25 15:03:00 +08:00
|
|
|
bio_for_each_segment_all(bvec, bio, iter_all) {
|
2008-01-25 05:13:08 +08:00
|
|
|
struct page *page = bvec->bv_page;
|
2013-06-18 05:14:39 +08:00
|
|
|
struct inode *inode = page->mapping->host;
|
2016-09-20 22:05:02 +08:00
|
|
|
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
|
2018-11-22 16:17:49 +08:00
|
|
|
bool data_inode = btrfs_ino(BTRFS_I(inode))
|
|
|
|
!= BTRFS_BTREE_INODE_OBJECTID;
|
2011-04-06 18:02:20 +08:00
|
|
|
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_debug(fs_info,
|
|
|
|
"end_bio_extent_readpage: bi_sector=%llu, err=%d, mirror=%u",
|
2017-06-03 15:38:06 +08:00
|
|
|
(u64)bio->bi_iter.bi_sector, bio->bi_status,
|
2016-09-20 22:05:02 +08:00
|
|
|
io_bio->mirror_num);
|
2013-06-18 05:14:39 +08:00
|
|
|
tree = &BTRFS_I(inode)->io_tree;
|
2017-05-05 23:57:15 +08:00
|
|
|
failure_tree = &BTRFS_I(inode)->io_failure_tree;
|
2008-08-20 20:51:49 +08:00
|
|
|
|
2013-05-15 23:38:55 +08:00
|
|
|
/* We always issue full-page reads, but if some block
|
|
|
|
* in a page fails to read, blk_update_request() will
|
|
|
|
* advance bv_offset and adjust bv_len to compensate.
|
|
|
|
* Print a warning for nonzero offsets, and an error
|
|
|
|
* if they don't add up to a full page. */
|
2016-04-01 20:29:47 +08:00
|
|
|
if (bvec->bv_offset || bvec->bv_len != PAGE_SIZE) {
|
|
|
|
if (bvec->bv_offset + bvec->bv_len != PAGE_SIZE)
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_err(fs_info,
|
|
|
|
"partial page read in btrfs with offset %u and length %u",
|
2013-12-21 00:37:06 +08:00
|
|
|
bvec->bv_offset, bvec->bv_len);
|
|
|
|
else
|
2016-09-20 22:05:02 +08:00
|
|
|
btrfs_info(fs_info,
|
|
|
|
"incomplete page read in btrfs with offset %u and length %u",
|
2013-12-21 00:37:06 +08:00
|
|
|
bvec->bv_offset, bvec->bv_len);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2013-05-15 23:38:55 +08:00
|
|
|
start = page_offset(page);
|
|
|
|
end = start + bvec->bv_offset + bvec->bv_len - 1;
|
2013-07-25 19:22:34 +08:00
|
|
|
len = bvec->bv_len;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2013-05-18 06:30:14 +08:00
|
|
|
mirror = io_bio->mirror_num;
|
2018-11-22 16:17:49 +08:00
|
|
|
if (likely(uptodate)) {
|
2013-07-25 19:22:34 +08:00
|
|
|
ret = tree->ops->readpage_end_io_hook(io_bio, offset,
|
|
|
|
page, start, end,
|
|
|
|
mirror);
|
2012-08-27 22:30:03 +08:00
|
|
|
if (ret)
|
2008-01-25 05:13:08 +08:00
|
|
|
uptodate = 0;
|
2012-08-27 22:30:03 +08:00
|
|
|
else
|
2017-05-05 23:57:15 +08:00
|
|
|
clean_io_failure(BTRFS_I(inode)->root->fs_info,
|
|
|
|
failure_tree, tree, start,
|
|
|
|
page,
|
|
|
|
btrfs_ino(BTRFS_I(inode)), 0);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
2012-03-27 09:57:36 +08:00
|
|
|
|
2013-07-25 19:22:33 +08:00
|
|
|
if (likely(uptodate))
|
|
|
|
goto readpage_ok;
|
|
|
|
|
2018-11-22 16:17:49 +08:00
|
|
|
if (data_inode) {
|
2017-03-25 06:04:50 +08:00
|
|
|
|
2011-12-01 22:30:36 +08:00
|
|
|
/*
|
2018-11-22 16:17:49 +08:00
|
|
|
* The generic bio_readpage_error handles errors the
|
|
|
|
* following way: If possible, new read requests are
|
|
|
|
* created and submitted and will end up in
|
|
|
|
* end_bio_extent_readpage as well (if we're lucky,
|
|
|
|
* not in the !uptodate case). In that case it returns
|
|
|
|
* 0 and we just go on with the next page in our bio.
|
|
|
|
* If it can't handle the error it will return -EIO and
|
|
|
|
* we remain responsible for that page.
|
2011-12-01 22:30:36 +08:00
|
|
|
*/
|
2018-11-22 16:17:49 +08:00
|
|
|
ret = bio_readpage_error(bio, offset, page, start, end,
|
|
|
|
mirror);
|
|
|
|
if (ret == 0) {
|
|
|
|
uptodate = !bio->bi_status;
|
|
|
|
offset += len;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
struct extent_buffer *eb;
|
|
|
|
|
|
|
|
eb = (struct extent_buffer *)page->private;
|
|
|
|
set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
|
|
|
|
eb->read_mirror = mirror;
|
|
|
|
atomic_dec(&eb->io_pages);
|
|
|
|
if (test_and_clear_bit(EXTENT_BUFFER_READAHEAD,
|
|
|
|
&eb->bflags))
|
|
|
|
btree_readahead_hook(eb, -EIO);
|
2008-04-10 04:28:12 +08:00
|
|
|
}
|
2013-07-25 19:22:33 +08:00
|
|
|
readpage_ok:
|
2013-07-25 19:22:35 +08:00
|
|
|
if (likely(uptodate)) {
|
2013-06-18 05:14:39 +08:00
|
|
|
loff_t i_size = i_size_read(inode);
|
2016-04-01 20:29:47 +08:00
|
|
|
pgoff_t end_index = i_size >> PAGE_SHIFT;
|
2014-08-19 23:32:22 +08:00
|
|
|
unsigned off;
|
2013-06-18 05:14:39 +08:00
|
|
|
|
|
|
|
/* Zero out the end if this page straddles i_size */
|
2018-12-05 22:23:03 +08:00
|
|
|
off = offset_in_page(i_size);
|
2014-08-19 23:32:22 +08:00
|
|
|
if (page->index == end_index && off)
|
2016-04-01 20:29:47 +08:00
|
|
|
zero_user_segment(page, off, PAGE_SIZE);
|
2013-05-15 23:38:55 +08:00
|
|
|
SetPageUptodate(page);
|
2008-01-29 22:59:12 +08:00
|
|
|
} else {
|
2013-05-15 23:38:55 +08:00
|
|
|
ClearPageUptodate(page);
|
|
|
|
SetPageError(page);
|
2008-01-29 22:59:12 +08:00
|
|
|
}
|
2013-05-15 23:38:55 +08:00
|
|
|
unlock_page(page);
|
2013-07-25 19:22:34 +08:00
|
|
|
offset += len;
|
2013-07-25 19:22:35 +08:00
|
|
|
|
|
|
|
if (unlikely(!uptodate)) {
|
|
|
|
if (extent_len) {
|
|
|
|
endio_readpage_release_extent(tree,
|
|
|
|
extent_start,
|
|
|
|
extent_len, 1);
|
|
|
|
extent_start = 0;
|
|
|
|
extent_len = 0;
|
|
|
|
}
|
|
|
|
endio_readpage_release_extent(tree, start,
|
|
|
|
end - start + 1, 0);
|
|
|
|
} else if (!extent_len) {
|
|
|
|
extent_start = start;
|
|
|
|
extent_len = end + 1 - start;
|
|
|
|
} else if (extent_start + extent_len == start) {
|
|
|
|
extent_len += end + 1 - start;
|
|
|
|
} else {
|
|
|
|
endio_readpage_release_extent(tree, extent_start,
|
|
|
|
extent_len, uptodate);
|
|
|
|
extent_start = start;
|
|
|
|
extent_len = end + 1 - start;
|
|
|
|
}
|
2013-11-08 04:20:26 +08:00
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2013-07-25 19:22:35 +08:00
|
|
|
if (extent_len)
|
|
|
|
endio_readpage_release_extent(tree, extent_start, extent_len,
|
|
|
|
uptodate);
|
2018-11-23 00:16:49 +08:00
|
|
|
btrfs_io_bio_free_csum(io_bio);
|
2008-01-25 05:13:08 +08:00
|
|
|
bio_put(bio);
|
|
|
|
}
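The extent_start/extent_len bookkeeping above batches contiguous up-to-date ranges so the io tree is unlocked once per contiguous run rather than once per page; a gap (or the end of the bio) flushes the accumulated range. A minimal userspace sketch of just that coalescing, modelling only the uptodate path with made-up ranges and a hypothetical release_range() stand-in:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for endio_readpage_release_extent(): just report the range. */
static void release_range(uint64_t start, uint64_t len)
{
        printf("unlock [%llu, %llu)\n",
               (unsigned long long)start, (unsigned long long)(start + len));
}

int main(void)
{
        /* Per-page (start, end) byte ranges; 4 KiB pages with one gap. */
        uint64_t ranges[3][2] = { {0, 4095}, {4096, 8191}, {16384, 20479} };
        uint64_t extent_start = 0, extent_len = 0;

        for (int i = 0; i < 3; i++) {
                uint64_t start = ranges[i][0], end = ranges[i][1];

                if (!extent_len) {                               /* first range */
                        extent_start = start;
                        extent_len = end + 1 - start;
                } else if (extent_start + extent_len == start) { /* contiguous */
                        extent_len += end + 1 - start;
                } else {                                         /* gap: flush */
                        release_range(extent_start, extent_len);
                        extent_start = start;
                        extent_len = end + 1 - start;
                }
        }
        if (extent_len)                                          /* final flush */
                release_range(extent_start, extent_len);
        return 0;
}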
|
|
|
|
|
2013-05-18 06:30:14 +08:00
|
|
|
/*
|
2017-06-12 23:29:39 +08:00
|
|
|
* Initialize the members up to but not including 'bio'. Use after allocating a
|
|
|
|
* new bio by bio_alloc_bioset as it does not initialize the bytes outside of
|
|
|
|
* 'bio' because use of __GFP_ZERO is not supported.
|
2013-05-18 06:30:14 +08:00
|
|
|
*/
|
2017-06-12 23:29:39 +08:00
|
|
|
static inline void btrfs_io_bio_init(struct btrfs_io_bio *btrfs_bio)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2017-06-12 23:29:39 +08:00
|
|
|
memset(btrfs_bio, 0, offsetof(struct btrfs_io_bio, bio));
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2013-05-18 06:30:14 +08:00
|
|
|
/*
|
2017-06-02 23:26:26 +08:00
|
|
|
* The following helpers allocate a bio. As it's backed by a bioset, it'll
|
|
|
|
* never fail. We're returning a bio right now but you can call btrfs_io_bio
|
|
|
|
* for the appropriate container_of magic
|
2013-05-18 06:30:14 +08:00
|
|
|
*/
|
2019-06-19 02:00:16 +08:00
|
|
|
struct bio *btrfs_bio_alloc(u64 first_byte)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct bio *bio;
|
|
|
|
|
2018-05-21 06:25:56 +08:00
|
|
|
bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &btrfs_bioset);
|
2017-06-03 00:35:36 +08:00
|
|
|
bio->bi_iter.bi_sector = first_byte >> 9;
|
2017-06-12 23:29:39 +08:00
|
|
|
btrfs_io_bio_init(btrfs_io_bio(bio));
|
2008-01-25 05:13:08 +08:00
|
|
|
return bio;
|
|
|
|
}
|
|
|
|
|
2017-06-02 23:48:13 +08:00
|
|
|
struct bio *btrfs_bio_clone(struct bio *bio)
|
2013-05-18 06:30:14 +08:00
|
|
|
{
|
2014-09-12 18:43:54 +08:00
|
|
|
struct btrfs_io_bio *btrfs_bio;
|
|
|
|
struct bio *new;
|
2013-05-18 06:30:14 +08:00
|
|
|
|
2017-06-02 23:26:26 +08:00
|
|
|
/* Bio allocation backed by a bioset does not fail */
|
2018-05-21 06:25:56 +08:00
|
|
|
new = bio_clone_fast(bio, GFP_NOFS, &btrfs_bioset);
|
2017-06-02 23:26:26 +08:00
|
|
|
btrfs_bio = btrfs_io_bio(new);
|
2017-06-12 23:29:39 +08:00
|
|
|
btrfs_io_bio_init(btrfs_bio);
|
2017-06-02 23:26:26 +08:00
|
|
|
btrfs_bio->iter = bio->bi_iter;
|
2014-09-12 18:43:54 +08:00
|
|
|
return new;
|
|
|
|
}
|
2013-05-18 06:30:14 +08:00
|
|
|
|
2017-06-12 23:29:41 +08:00
|
|
|
struct bio *btrfs_io_bio_alloc(unsigned int nr_iovecs)
|
2013-05-18 06:30:14 +08:00
|
|
|
{
|
2013-07-25 19:22:34 +08:00
|
|
|
struct bio *bio;
|
|
|
|
|
2017-06-02 23:26:26 +08:00
|
|
|
/* Bio allocation backed by a bioset does not fail */
|
2018-05-21 06:25:56 +08:00
|
|
|
bio = bio_alloc_bioset(GFP_NOFS, nr_iovecs, &btrfs_bioset);
|
2017-06-12 23:29:39 +08:00
|
|
|
btrfs_io_bio_init(btrfs_io_bio(bio));
|
2013-07-25 19:22:34 +08:00
|
|
|
return bio;
|
2013-05-18 06:30:14 +08:00
|
|
|
}
|
|
|
|
|
2017-05-17 01:57:14 +08:00
|
|
|
struct bio *btrfs_bio_clone_partial(struct bio *orig, int offset, int size)
|
2017-05-16 08:43:31 +08:00
|
|
|
{
|
|
|
|
struct bio *bio;
|
|
|
|
struct btrfs_io_bio *btrfs_bio;
|
|
|
|
|
|
|
|
/* this will never fail when it's backed by a bioset */
|
2018-05-21 06:25:56 +08:00
|
|
|
bio = bio_clone_fast(orig, GFP_NOFS, &btrfs_bioset);
|
2017-05-16 08:43:31 +08:00
|
|
|
ASSERT(bio);
|
|
|
|
|
|
|
|
btrfs_bio = btrfs_io_bio(bio);
|
2017-06-12 23:29:39 +08:00
|
|
|
btrfs_io_bio_init(btrfs_bio);
|
2017-05-16 08:43:31 +08:00
|
|
|
|
|
|
|
bio_trim(bio, offset >> 9, size >> 9);
|
2017-05-16 06:33:27 +08:00
|
|
|
btrfs_bio->iter = bio->bi_iter;
|
2017-05-16 08:43:31 +08:00
|
|
|
return bio;
|
|
|
|
}
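bio_trim() works in 512-byte sectors, which is why the offset and size above are shifted down by 9 before the call. A trivial illustration of the conversion (values invented):

#include <stdio.h>

int main(void)
{
        int offset = 8192;      /* byte offset into the original bio */
        int size = 4096;        /* bytes the partial clone should keep */

        /* Same conversion as the bio_trim() call above: bytes -> 512B sectors. */
        printf("bio_trim(bio, %d, %d)\n", offset >> 9, size >> 9);     /* 16, 8 */
        return 0;
}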
|
2013-05-18 06:30:14 +08:00
|
|
|
|
2017-06-07 01:14:26 +08:00
|
|
|
/*
|
|
|
|
* @opf: bio REQ_OP_* and REQ_* flags as one value
|
2017-06-13 01:50:41 +08:00
|
|
|
* @tree: tree so we can call our merge_bio hook
|
|
|
|
* @wbc: optional writeback control for io accounting
|
|
|
|
* @page: page to add to the bio
|
|
|
|
* @pg_offset: offset of the new bio or to check whether we are adding
|
|
|
|
* a contiguous page to the previous one
|
|
|
|
* @size: portion of page that we want to write
|
|
|
|
* @offset: starting offset in the page
|
|
|
|
* @bdev: attach newly created bios to this bdev
|
2017-06-07 01:22:55 +08:00
|
|
|
* @bio_ret: must be valid pointer, newly allocated bio will be stored there
|
2017-06-13 01:50:41 +08:00
|
|
|
* @end_io_func: end_io callback for new bio
|
|
|
|
* @mirror_num: desired mirror to read/write
|
|
|
|
* @prev_bio_flags: flags of previous bio to see if we can merge the current one
|
|
|
|
* @bio_flags: flags of the current bio to see if we can merge them
|
2017-06-07 01:14:26 +08:00
|
|
|
*/
|
|
|
|
static int submit_extent_page(unsigned int opf, struct extent_io_tree *tree,
|
2015-07-03 04:57:22 +08:00
|
|
|
struct writeback_control *wbc,
|
2017-10-04 23:30:11 +08:00
|
|
|
struct page *page, u64 offset,
|
2017-10-04 23:10:34 +08:00
|
|
|
size_t size, unsigned long pg_offset,
|
2008-01-25 05:13:08 +08:00
|
|
|
struct block_device *bdev,
|
|
|
|
struct bio **bio_ret,
|
2008-04-10 04:28:12 +08:00
|
|
|
bio_end_io_t end_io_func,
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically single-threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
int mirror_num,
|
|
|
|
unsigned long prev_bio_flags,
|
Btrfs: fix read corruption of compressed and shared extents
If a file has a range pointing to a compressed extent, followed by
another range that points to the same compressed extent and a read
operation attempts to read both ranges (either completely or part of
them), the pages that correspond to the second range are incorrectly
filled with zeroes.
Consider the following example:
File layout
[0 - 8K]                    [8K - 24K]
    |                           |
    |                           |
 points to extent X,        points to extent X,
 offset 4K, length of 8K    offset 0, length 16K

[extent X, compressed length = 4K, uncompressed length = 16K]
If a readpages() call spans the 2 ranges, a single bio to read the extent
is submitted - extent_io.c:submit_extent_page() would only create a new
bio to cover the second range pointing to the extent if the extent it
points to had a different logical address than the extent associated with
the first range. As a consequence, the compressed read end io
handler (compression.c:end_compressed_bio_read()) finishes once the extent
is decompressed into the pages covering the first range, leaving the
remaining pages (belonging to the second range) filled with zeroes (done
by compression.c:btrfs_clear_biovec_end()).
So fix this by submitting the current bio whenever we find a range
pointing to a compressed extent that was preceded by a range with a
different extent map. This is the simplest solution for this corner
case. Making the end io callback populate both ranges (or more, if we
have multiple pointing to the same extent) is a much more complex
solution since each bio is tightly coupled with a single extent map and
the extent maps associated to the ranges pointing to the shared extent
can have different offsets and lengths.
The following test case for fstests triggers the issue:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
# real QA test starts here
_need_to_be_root
_supported_fs btrfs
_supported_os Linux
_require_scratch
_require_cloner
rm -f $seqres.full
test_clone_and_read_compressed_extent()
{
local mount_opts=$1
_scratch_mkfs >>$seqres.full 2>&1
_scratch_mount $mount_opts
# Create a test file with a single extent that is compressed (the
# data we write into it is highly compressible no matter which
# compression algorithm is used, zlib or lzo).
$XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 4K" \
-c "pwrite -S 0xbb 4K 8K" \
-c "pwrite -S 0xcc 12K 4K" \
$SCRATCH_MNT/foo | _filter_xfs_io
# Now clone our extent into an adjacent offset.
$CLONER_PROG -s $((4 * 1024)) -d $((16 * 1024)) -l $((8 * 1024)) \
$SCRATCH_MNT/foo $SCRATCH_MNT/foo
# Same as before but for this file we clone the extent into a lower
# file offset.
$XFS_IO_PROG -f -c "pwrite -S 0xaa 8K 4K" \
-c "pwrite -S 0xbb 12K 8K" \
-c "pwrite -S 0xcc 20K 4K" \
$SCRATCH_MNT/bar | _filter_xfs_io
$CLONER_PROG -s $((12 * 1024)) -d 0 -l $((8 * 1024)) \
$SCRATCH_MNT/bar $SCRATCH_MNT/bar
echo "File digests before unmounting filesystem:"
md5sum $SCRATCH_MNT/foo | _filter_scratch
md5sum $SCRATCH_MNT/bar | _filter_scratch
# Evicting the inode or clearing the page cache before reading
# again the file would also trigger the bug - reads were returning
# all bytes in the range corresponding to the second reference to
# the extent with a value of 0, but the correct data was persisted
# (it was a bug exclusively in the read path). The issue happened
# only if the same readpages() call targeted pages belonging to the
# first and second ranges that point to the same compressed extent.
_scratch_remount
echo "File digests after mounting filesystem again:"
# Must match the same digests we got before.
md5sum $SCRATCH_MNT/foo | _filter_scratch
md5sum $SCRATCH_MNT/bar | _filter_scratch
}
echo -e "\nTesting with zlib compression..."
test_clone_and_read_compressed_extent "-o compress=zlib"
_scratch_unmount
echo -e "\nTesting with lzo compression..."
test_clone_and_read_compressed_extent "-o compress=lzo"
status=0
exit
Cc: stable@vger.kernel.org
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Qu Wenruo<quwenruo@cn.fujitsu.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
2015-09-14 16:09:31 +08:00
|
|
|
unsigned long bio_flags,
|
|
|
|
bool force_bio_submit)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
struct bio *bio;
|
2016-04-01 20:29:47 +08:00
|
|
|
size_t page_size = min_t(size_t, size, PAGE_SIZE);
|
2017-10-04 23:30:11 +08:00
|
|
|
sector_t sector = offset >> 9;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2017-06-07 01:22:55 +08:00
|
|
|
ASSERT(bio_ret);
|
|
|
|
|
|
|
|
if (*bio_ret) {
|
2017-06-13 02:00:43 +08:00
|
|
|
bool contig;
|
|
|
|
bool can_merge = true;
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
bio = *bio_ret;
|
2017-06-13 02:00:43 +08:00
|
|
|
if (prev_bio_flags & EXTENT_BIO_COMPRESSED)
|
2013-10-12 06:44:27 +08:00
|
|
|
contig = bio->bi_iter.bi_sector == sector;
|
2008-10-30 02:49:59 +08:00
|
|
|
else
|
2012-09-26 06:05:12 +08:00
|
|
|
contig = bio_end_sector(bio) == sector;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically singled threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
|
2018-11-28 02:57:58 +08:00
|
|
|
ASSERT(tree->ops);
|
|
|
|
if (btrfs_bio_fits_in_stripe(page, page_size, bio, bio_flags))
|
2017-06-13 02:00:43 +08:00
|
|
|
can_merge = false;
|
|
|
|
|
|
|
|
if (prev_bio_flags != bio_flags || !contig || !can_merge ||
|
2015-09-14 16:09:31 +08:00
		    force_bio_submit ||
		    bio_add_page(bio, page, page_size, pg_offset) < page_size) {
			ret = submit_one_bio(bio, mirror_num, prev_bio_flags);
			if (ret < 0) {
				*bio_ret = NULL;
				return ret;
			}
			bio = NULL;
		} else {
			if (wbc)
				wbc_account_cgroup_owner(wbc, page, page_size);
			return 0;
		}
	}
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption nor the
'other' field is currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit; the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically single threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
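The "flag the file to avoid future compression attempts" behaviour described above comes down to recording a per-inode hint once a compression pass fails to save space. The fragment below is a minimal sketch of that idea; the counter names and the BTRFS_INODE_NOCOMPRESS flag are assumptions used for illustration, not taken from this patch.

/* Illustrative sketch only: names are assumed, not lifted from this change. */
static void maybe_flag_nocompress(struct inode *inode,
				  unsigned long total_in,
				  unsigned long total_compressed)
{
	/*
	 * Compression did not shrink the data: remember that on the inode so
	 * later writeback skips the compression attempt and writes as-is.
	 */
	if (total_compressed >= total_in)
		BTRFS_I(inode)->flags |= BTRFS_INODE_NOCOMPRESS;
}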
	bio = btrfs_bio_alloc(offset);
	bio_set_dev(bio, bdev);
	bio_add_page(bio, page, page_size, pg_offset);
	bio->bi_end_io = end_io_func;
	bio->bi_private = tree;
	bio->bi_write_hint = page->mapping->host->i_write_hint;
	bio->bi_opf = opf;
	if (wbc) {
		wbc_init_bio(wbc, bio);
		wbc_account_cgroup_owner(wbc, page, page_size);
	}

	*bio_ret = bio;

	return ret;
}

static void attach_extent_buffer_page(struct extent_buffer *eb,
				      struct page *page)
{
	if (!PagePrivate(page)) {
		SetPagePrivate(page);
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
ago with the promise that one day it would be possible to implement the page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized, and it is unlikely it ever will.
We have many places where PAGE_CACHE_SIZE is assumed to be equal to
PAGE_SIZE, and it is a constant source of confusion about whether the
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in the page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using the
script below. For some reason, coccinelle doesn't patch header files, so
I've called spatch for them manually.
The only adjustment after coccinelle is a revert of the changes to the
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code that coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation will
also be addressed in a separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
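To make the mechanical effect of the conversion concrete, here is a hypothetical call site before and after the semantic patch runs; the variables are invented for illustration and only the macro and helper renames come from the patch itself:

/* Before: page-cache-specific macros and refcount helpers. */
index  = pos >> PAGE_CACHE_SHIFT;
offset = pos & (PAGE_CACHE_SIZE - 1);
page_cache_get(page);
/* ... use the page ... */
page_cache_release(page);

/* After: plain PAGE_* macros and get_page()/put_page(). */
index  = pos >> PAGE_SHIFT;
offset = pos & (PAGE_SIZE - 1);
get_page(page);
/* ... use the page ... */
put_page(page);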
		get_page(page);
		set_page_private(page, (unsigned long)eb);
	} else {
		WARN_ON(page->private != (unsigned long)eb);
	}
}

void set_page_extent_mapped(struct page *page)
{
	if (!PagePrivate(page)) {
		SetPagePrivate(page);
		get_page(page);
		set_page_private(page, EXTENT_PAGE_PRIVATE);
	}
}

static struct extent_map *
__get_extent_map(struct inode *inode, struct page *page, size_t pg_offset,
		 u64 start, u64 len, get_extent_t *get_extent,
		 struct extent_map **em_cached)
{
	struct extent_map *em;

	if (em_cached && *em_cached) {
		em = *em_cached;
		if (extent_map_in_tree(em) && start >= em->start &&
		    start < extent_map_end(em)) {
			refcount_inc(&em->refs);
			return em;
		}

		free_extent_map(em);
		*em_cached = NULL;
	}

	em = get_extent(BTRFS_I(inode), page, pg_offset, start, len, 0);
	if (em_cached && !IS_ERR_OR_NULL(em)) {
		BUG_ON(*em_cached);
		refcount_inc(&em->refs);
		*em_cached = em;
	}
	return em;
}

/*
 * basic readpage implementation. Locked extent state structs are inserted
 * into the tree that are removed when the IO is done (by the end_io
 * handlers)
 * XXX JDM: This needs looking at to ensure proper page locking
 * return 0 on success, otherwise return error
 */
static int __do_readpage(struct extent_io_tree *tree,
			 struct page *page,
			 get_extent_t *get_extent,
			 struct extent_map **em_cached,
			 struct bio **bio, int mirror_num,
			 unsigned long *bio_flags, unsigned int read_flags,
			 u64 *prev_em_start)
{
	struct inode *inode = page->mapping->host;
	u64 start = page_offset(page);
	const u64 end = start + PAGE_SIZE - 1;
	u64 cur = start;
	u64 extent_offset;
	u64 last_byte = i_size_read(inode);
	u64 block_start;
	u64 cur_end;
	struct extent_map *em;
	struct block_device *bdev;
	int ret = 0;
	int nr = 0;
	size_t pg_offset = 0;
	size_t iosize;
	size_t disk_io_size;
	size_t blocksize = inode->i_sb->s_blocksize;
	unsigned long this_bio_flag = 0;

	set_page_extent_mapped(page);

	if (!PageUptodate(page)) {
		if (cleancache_get_page(page) == 0) {
			BUG_ON(blocksize != PAGE_SIZE);
			unlock_extent(tree, start, end);
			goto out;
		}
	}

	if (page->index == last_byte >> PAGE_SHIFT) {
		char *userpage;
		size_t zero_offset = offset_in_page(last_byte);

		if (zero_offset) {
			iosize = PAGE_SIZE - zero_offset;
			userpage = kmap_atomic(page);
			memset(userpage + zero_offset, 0, iosize);
			flush_dcache_page(page);
			kunmap_atomic(userpage);
		}
	}
	while (cur <= end) {
		bool force_bio_submit = false;
		u64 offset;

		if (cur >= last_byte) {
			char *userpage;
			struct extent_state *cached = NULL;

			iosize = PAGE_SIZE - pg_offset;
			userpage = kmap_atomic(page);
			memset(userpage + pg_offset, 0, iosize);
			flush_dcache_page(page);
			kunmap_atomic(userpage);
			set_extent_uptodate(tree, cur, cur + iosize - 1,
					    &cached, GFP_NOFS);
			unlock_extent_cached(tree, cur,
					     cur + iosize - 1, &cached);
			break;
		}
		em = __get_extent_map(inode, page, pg_offset, cur,
				      end - cur + 1, get_extent, em_cached);
		if (IS_ERR_OR_NULL(em)) {
			SetPageError(page);
			unlock_extent(tree, cur, end);
			break;
		}
		extent_offset = cur - em->start;
		BUG_ON(extent_map_end(em) <= cur);
		BUG_ON(end < cur);

		if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) {
			this_bio_flag |= EXTENT_BIO_COMPRESSED;
			extent_set_compress_type(&this_bio_flag,
						 em->compress_type);
		}

		iosize = min(extent_map_end(em) - cur, end - cur + 1);
		cur_end = min(extent_map_end(em) - 1, end);
		iosize = ALIGN(iosize, blocksize);
		if (this_bio_flag & EXTENT_BIO_COMPRESSED) {
			disk_io_size = em->block_len;
			offset = em->block_start;
		} else {
			offset = em->block_start + extent_offset;
			disk_io_size = iosize;
		}
		bdev = em->bdev;
		block_start = em->block_start;
		if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
			block_start = EXTENT_MAP_HOLE;

		/*
		 * If we have a file range that points to a compressed extent
		 * and it's followed by a consecutive file range that points
		 * to the same compressed extent (possibly with a different
		 * offset and/or length, so it either points to the whole
		 * extent or only part of it), we must make sure we do not
		 * submit a single bio to populate the pages for the 2 ranges
		 * because this makes the compressed extent read zero out the
		 * pages belonging to the 2nd range. Imagine the following
		 * scenario:
		 *
		 *  File layout
		 *  [0 - 8K]                     [8K - 24K]
		 *    |                               |
		 *    |                               |
		 * points to extent X,         points to extent X,
		 * offset 4K, length of 8K     offset 0, length 16K
		 *
		 * [extent X, compressed length = 4K uncompressed length = 16K]
		 *
		 * If the bio to read the compressed extent covers both ranges,
		 * it will decompress extent X into the pages belonging to the
		 * first range and then it will stop, zeroing out the remaining
		 * pages that belong to the other range that points to extent X.
		 * So here we make sure we submit 2 bios, one for the first
		 * range and another one for the second range. Both will target
		 * the same physical extent from disk, but we can't currently
		 * make the compressed bio endio callback populate the pages
		 * for both ranges because each compressed bio is tightly
		 * coupled with a single extent map, and each range can have
		 * an extent map with a different offset value relative to the
		 * uncompressed data of our extent and different lengths. This
		 * is a corner case so we prioritize correctness over
		 * non-optimal behavior (submitting 2 bios for the same extent).
		 */
		if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
		    prev_em_start && *prev_em_start != (u64)-1 &&
Btrfs: fix corruption reading shared and compressed extents after hole punching
In the past we had data corruption when reading compressed extents that
are shared within the same file and they are consecutive, this got fixed
by commit 005efedf2c7d0 ("Btrfs: fix read corruption of compressed and
shared extents") and by commit 808f80b46790f ("Btrfs: update fix for read
corruption of compressed and shared extents"). However there was a case
that was missing in those fixes, which is when the shared and compressed
extents are referenced with a non-zero offset. The following shell script
creates a reproducer for this issue:
#!/bin/bash
mkfs.btrfs -f /dev/sdc &> /dev/null
mount -o compress /dev/sdc /mnt/sdc
# Create a file with 3 consecutive compressed extents, each has an
# uncompressed size of 128Kb and a compressed size of 4Kb.
for ((i = 1; i <= 3; i++)); do
head -c 4096 /dev/zero
for ((j = 1; j <= 31; j++)); do
head -c 4096 /dev/zero | tr '\0' "\377"
done
done > /mnt/sdc/foobar
sync
echo "Digest after file creation: $(md5sum /mnt/sdc/foobar)"
# Clone the first extent into offsets 128K and 256K.
xfs_io -c "reflink /mnt/sdc/foobar 0 128K 128K" /mnt/sdc/foobar
xfs_io -c "reflink /mnt/sdc/foobar 0 256K 128K" /mnt/sdc/foobar
sync
echo "Digest after cloning: $(md5sum /mnt/sdc/foobar)"
# Punch holes into the regions that are already full of zeroes.
xfs_io -c "fpunch 0 4K" /mnt/sdc/foobar
xfs_io -c "fpunch 128K 4K" /mnt/sdc/foobar
xfs_io -c "fpunch 256K 4K" /mnt/sdc/foobar
sync
echo "Digest after hole punching: $(md5sum /mnt/sdc/foobar)"
echo "Dropping page cache..."
sysctl -q vm.drop_caches=1
echo "Digest after hole punching: $(md5sum /mnt/sdc/foobar)"
umount /dev/sdc
When running the script we get the following output:
Digest after file creation: 5a0888d80d7ab1fd31c229f83a3bbcc8 /mnt/sdc/foobar
linked 131072/131072 bytes at offset 131072
128 KiB, 1 ops; 0.0033 sec (36.960 MiB/sec and 295.6830 ops/sec)
linked 131072/131072 bytes at offset 262144
128 KiB, 1 ops; 0.0015 sec (78.567 MiB/sec and 628.5355 ops/sec)
Digest after cloning: 5a0888d80d7ab1fd31c229f83a3bbcc8 /mnt/sdc/foobar
Digest after hole punching: 5a0888d80d7ab1fd31c229f83a3bbcc8 /mnt/sdc/foobar
Dropping page cache...
Digest after hole punching: fba694ae8664ed0c2e9ff8937e7f1484 /mnt/sdc/foobar
This happens because after reading all the pages of the extent in the
range from 128K to 256K for example, we read the hole at offset 256K
and then when reading the page at offset 260K we don't submit the
existing bio, which is responsible for filling all the pages in the
range 128K to 256K only, therefore adding the pages from range 260K
to 384K to the existing bio and submitting it after iterating over the
entire range. Once the bio completes, the uncompressed data fills only
the pages in the range 128K to 256K because there's no more data read
from disk, leaving the pages in the range 260K to 384K unfilled. It is
just a slightly different variant of what was solved by commit
005efedf2c7d0 ("Btrfs: fix read corruption of compressed and shared
extents").
Fix this by forcing a bio submit, during readpages(), whenever we find a
compressed extent map for a page that is different from the extent map
for the previous page or has a different starting offset (in case it's
the same compressed extent), instead of the extent map's original start
offset.
A test case for fstests follows soon.
Reported-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Fixes: 808f80b46790f ("Btrfs: update fix for read corruption of compressed and shared extents")
Fixes: 005efedf2c7d0 ("Btrfs: fix read corruption of compressed and shared extents")
Cc: stable@vger.kernel.org # 4.3+
Tested-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2019-02-14 23:17:20 +08:00
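Read together, the two commits above converge on one small check in the readpages path. The following is consolidated from the annotated fragments below purely for readability; the fragments themselves remain the authoritative text:

	if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
	    prev_em_start && *prev_em_start != (u64)-1 &&
	    *prev_em_start != em->start)
		force_bio_submit = true;

	if (prev_em_start)
		*prev_em_start = em->start;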
    *prev_em_start != em->start)
	force_bio_submit = true;

if (prev_em_start)
	*prev_em_start = em->start;
free_extent_map(em);
em = NULL;

/* we've found a hole, just zero and go on */
if (block_start == EXTENT_MAP_HOLE) {
	char *userpage;
	struct extent_state *cached = NULL;

	userpage = kmap_atomic(page);
	memset(userpage + pg_offset, 0, iosize);
	flush_dcache_page(page);
	kunmap_atomic(userpage);

	set_extent_uptodate(tree, cur, cur + iosize - 1,
			    &cached, GFP_NOFS);
	unlock_extent_cached(tree, cur,
			     cur + iosize - 1, &cached);
	cur = cur + iosize;
	pg_offset += iosize;
	continue;
}

/* the get_extent function already copied into the page */
if (test_range_bit(tree, cur, cur_end,
		   EXTENT_UPTODATE, 1, NULL)) {
	check_page_uptodate(tree, page);
	unlock_extent(tree, cur, cur + iosize - 1);
	cur = cur + iosize;
	pg_offset += iosize;
	continue;
}

/*
 * we have an inline extent but it didn't get marked up
 * to date. Error out
 */
if (block_start == EXTENT_MAP_INLINE) {
	SetPageError(page);
	unlock_extent(tree, cur, cur + iosize - 1);
	cur = cur + iosize;
	pg_offset += iosize;
	continue;
}

ret = submit_extent_page(REQ_OP_READ | read_flags, tree, NULL,
			 page, offset, disk_io_size,
			 pg_offset, bdev, bio,
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically single threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
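The two size limits described in the commit message above are easiest to remember as a simple admission check. The snippet below is only an illustrative, self-contained sketch of those limits; the constant and function names are invented for the example and are not btrfs identifiers.

	/*
	 * Toy illustration of the software limits described above.
	 * Names and structure are made up for the sketch.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define DEMO_MAX_COMPRESSED_EXTENT	(128 * 1024)	/* on-disk, per extent */
	#define DEMO_MAX_UNCOMPRESSED_EXTENT	(256 * 1024)	/* ram while processing */

	static bool demo_extent_within_limits(unsigned long compressed_len,
					      unsigned long uncompressed_len)
	{
		return compressed_len <= DEMO_MAX_COMPRESSED_EXTENT &&
		       uncompressed_len <= DEMO_MAX_UNCOMPRESSED_EXTENT;
	}

	int main(void)
	{
		/* a 4K compressed / 16K uncompressed extent, as in the examples above */
		printf("%d\n", demo_extent_within_limits(4096, 16 * 1024));	/* prints 1 */
		/* uncompressed size above the stated 256K ram limit */
		printf("%d\n", demo_extent_within_limits(4096, 512 * 1024));	/* prints 0 */
		return 0;
	}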
			 end_bio_extent_readpage, mirror_num,
			 *bio_flags,
			 this_bio_flag,
			 force_bio_submit);
if (!ret) {
	nr++;
	*bio_flags = this_bio_flag;
} else {
	SetPageError(page);
	unlock_extent(tree, cur, cur + iosize - 1);
	goto out;
}
cur = cur + iosize;
pg_offset += iosize;
}

out:
if (!nr) {
	if (!PageError(page))
		SetPageUptodate(page);
	unlock_page(page);
}
return ret;
}

static inline void contiguous_readpages(struct extent_io_tree *tree,
					struct page *pages[], int nr_pages,
					u64 start, u64 end,
					struct extent_map **em_cached,
					struct bio **bio,
					unsigned long *bio_flags,
					u64 *prev_em_start)
{
	struct btrfs_inode *inode = BTRFS_I(pages[0]->mapping->host);
	int index;

	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, NULL);

	for (index = 0; index < nr_pages; index++) {
		__do_readpage(tree, pages[index], btrfs_get_extent, em_cached,
			      bio, 0, bio_flags, REQ_RAHEAD, prev_em_start);
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
ago with the promise that one day it would be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized, and it is unlikely it ever will.
We have many places where PAGE_CACHE_SIZE is assumed to be equal to
PAGE_SIZE. And it's a constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constants should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
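As a concrete illustration of what the semantic patch above does to a call site, here is a hypothetical before/after pair; the helper function is invented for the example, only the converted names come from the patch rules. The put_page() call annotated just below is one of the call sites this conversion produced.

	/* before the conversion (hypothetical call site) */
	static unsigned long demo_release_and_count(struct page *page, loff_t len)
	{
		unsigned long npages = len >> PAGE_CACHE_SHIFT;

		page_cache_release(page);
		return npages;
	}

	/* after running the semantic patch */
	static unsigned long demo_release_and_count(struct page *page, loff_t len)
	{
		unsigned long npages = len >> PAGE_SHIFT;

		put_page(page);
		return npages;
	}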
		put_page(pages[index]);
	}
}

static int __extent_read_full_page(struct extent_io_tree *tree,
				   struct page *page,
				   get_extent_t *get_extent,
				   struct bio **bio, int mirror_num,
				   unsigned long *bio_flags,
				   unsigned int read_flags)
{
	struct btrfs_inode *inode = BTRFS_I(page->mapping->host);
	u64 start = page_offset(page);
	u64 end = start + PAGE_SIZE - 1;
	int ret;

	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, NULL);

	ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num,
			    bio_flags, read_flags, NULL);
	return ret;
}

int extent_read_full_page(struct extent_io_tree *tree, struct page *page,
			  get_extent_t *get_extent, int mirror_num)
{
	struct bio *bio = NULL;
	unsigned long bio_flags = 0;
	int ret;

	ret = __extent_read_full_page(tree, page, get_extent, &bio, mirror_num,
				      &bio_flags, 0);
	if (bio)
		ret = submit_one_bio(bio, mirror_num, bio_flags);
	return ret;
}

static void update_nr_written(struct writeback_control *wbc,
			      unsigned long nr_written)
{
	wbc->nr_to_write -= nr_written;
}

/*
 * helper for __extent_writepage, doing all of the delayed allocation setup.
 *
 * This returns 1 if btrfs_run_delalloc_range function did all the work required
 * to write the page (copy into inline extent). In this case the IO has
 * been started and the page is already unlocked.
 *
 * This returns 0 if all went well (page still locked)
 * This returns < 0 if there were errors (page still locked)
 */
static noinline_for_stack int writepage_delalloc(struct inode *inode,
		struct page *page, struct writeback_control *wbc,
		u64 delalloc_start, unsigned long *nr_written)
{
	u64 page_end = delalloc_start + PAGE_SIZE - 1;
	bool found;
	u64 delalloc_to_write = 0;
	u64 delalloc_end = 0;
	int ret;
	int page_started = 0;

	while (delalloc_end < page_end) {
		found = find_lock_delalloc_range(inode, page,
						 &delalloc_start,
						 &delalloc_end);
		if (!found) {
			delalloc_start = delalloc_end + 1;
			continue;
		}
		ret = btrfs_run_delalloc_range(inode, page, delalloc_start,
				delalloc_end, &page_started, nr_written, wbc);
		if (ret) {
			SetPageError(page);
			/*
			 * btrfs_run_delalloc_range should return < 0 for error
			 * but just in case, we use > 0 here meaning the IO is
			 * started, so we don't want to return > 0 unless
			 * things are going well.
			 */
			ret = ret < 0 ? ret : -EIO;
			goto done;
		}
		/*
		 * delalloc_end is already one less than the total length, so
		 * we don't subtract one from PAGE_SIZE
		 */
		delalloc_to_write += (delalloc_end - delalloc_start +
				      PAGE_SIZE) >> PAGE_SHIFT;
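		/*
		 * Worked example (illustrative, not from the original source):
		 * with 4 KiB pages, a delalloc range covering file offsets
		 * 0..16383 has delalloc_start == 0 and delalloc_end == 16383,
		 * so (16383 - 0 + 4096) >> 12 == 4 pages; adding a full
		 * PAGE_SIZE stands in for the usual "+ 1" on the length.
		 */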
		delalloc_start = delalloc_end + 1;
	}
	if (wbc->nr_to_write < delalloc_to_write) {
		int thresh = 8192;

		if (delalloc_to_write < thresh * 2)
			thresh = delalloc_to_write;
		wbc->nr_to_write = min_t(u64, delalloc_to_write,
					 thresh);
	}

	/* did the fill delalloc function already unlock and start
	 * the IO?
	 */
	if (page_started) {
		/*
		 * we've unlocked the page, so we can't update
		 * the mapping's writeback index, just update
		 * nr_to_write.
		 */
		wbc->nr_to_write -= *nr_written;
		return 1;
	}

	ret = 0;

done:
	return ret;
}
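The 1 / 0 / negative return convention documented in the function's header comment is easiest to see from the caller's side. The following is a hypothetical sketch of how a caller such as __extent_writepage would consume it; it is not the actual kernel code.

	ret = writepage_delalloc(inode, page, wbc, start, &nr_written);
	if (ret == 1)
		return 0;	/* IO already started, page already unlocked */
	if (ret < 0)
		goto done;	/* error; the page is still locked */
	/* ret == 0: page still locked, keep building bios for it */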

/*
 * helper for __extent_writepage. This calls the writepage start hooks,
 * and does the loop to map the page into extents and bios.
 *
 * We return 1 if the IO is started and the page is unlocked,
 * 0 if all went well (page still locked)
 * < 0 if there were errors (page still locked)
 */
static noinline_for_stack int __extent_writepage_io(struct inode *inode,
				 struct page *page,
				 struct writeback_control *wbc,
				 struct extent_page_data *epd,
				 loff_t i_size,
				 unsigned long nr_written,
				 unsigned int write_flags, int *nr_ret)
{
	struct extent_io_tree *tree = epd->tree;
	u64 start = page_offset(page);
	u64 page_end = start + PAGE_SIZE - 1;
	u64 end;
	u64 cur = start;
	u64 extent_offset;
	u64 block_start;
	u64 iosize;
	struct extent_map *em;
	struct block_device *bdev;
	size_t pg_offset = 0;
	size_t blocksize;
	int ret = 0;
	int nr = 0;
	bool compressed;
	ret = btrfs_writepage_cow_fixup(page, start, page_end);
	if (ret) {
		/* Fixup worker will requeue */
		if (ret == -EBUSY)
			wbc->pages_skipped++;
		else
			redirty_page_for_writepage(wbc, page);

		update_nr_written(wbc, nr_written);
		unlock_page(page);
		return 1;
	}

	/*
	 * we don't want to touch the inode after unlocking the page,
	 * so we update the mapping writeback index now
	 */
	update_nr_written(wbc, nr_written + 1);

	end = page_end;
	if (i_size <= start) {
		btrfs_writepage_endio_finish_ordered(page, start, page_end, 1);
		goto done;
	}

	blocksize = inode->i_sb->s_blocksize;

	while (cur <= end) {
		u64 em_end;
		u64 offset;

		if (cur >= i_size) {
			btrfs_writepage_endio_finish_ordered(page, cur,
							     page_end, 1);
			break;
		}
		em = btrfs_get_extent(BTRFS_I(inode), page, pg_offset, cur,
				      end - cur + 1, 1);
		if (IS_ERR_OR_NULL(em)) {
			SetPageError(page);
			ret = PTR_ERR_OR_ZERO(em);
			break;
		}

		extent_offset = cur - em->start;
		em_end = extent_map_end(em);
		BUG_ON(em_end <= cur);
		BUG_ON(end < cur);
		iosize = min(em_end - cur, end - cur + 1);
		iosize = ALIGN(iosize, blocksize);
		offset = em->block_start + extent_offset;
		bdev = em->bdev;
		block_start = em->block_start;
		compressed = test_bit(EXTENT_FLAG_COMPRESSED, &em->flags);
		free_extent_map(em);
		em = NULL;
		/*
		 * compressed and inline extents are written through other
		 * paths in the FS
		 */
		if (compressed || block_start == EXTENT_MAP_HOLE ||
		    block_start == EXTENT_MAP_INLINE) {
/*
|
|
|
|
* end_io notification does not happen here for
|
|
|
|
* compressed extents
|
|
|
|
*/
|
2018-11-01 20:09:48 +08:00
|
|
|
if (!compressed)
|
|
|
|
btrfs_writepage_endio_finish_ordered(page, cur,
|
|
|
|
cur + iosize - 1,
|
2018-11-08 16:18:08 +08:00
|
|
|
1);
|
2008-10-30 02:49:59 +08:00
|
|
|
else if (compressed) {
|
|
|
|
/* we don't want to end_page_writeback on
|
|
|
|
* a compressed extent. this happens
|
|
|
|
* elsewhere
|
|
|
|
*/
|
|
|
|
nr++;
|
|
|
|
}
|
|
|
|
|
|
|
|
cur += iosize;
|
2008-07-19 00:01:11 +08:00
|
|
|
pg_offset += iosize;
|
2008-01-25 05:13:08 +08:00
|
|
|
continue;
|
|
|
|
}
|
2008-10-30 02:49:59 +08:00
|
|
|
|
2018-07-19 02:32:52 +08:00
|
|
|
btrfs_set_range_writeback(tree, cur, cur + iosize - 1);
|
2016-05-04 17:46:10 +08:00
|
|
|
if (!PageWriteback(page)) {
|
|
|
|
btrfs_err(BTRFS_I(inode)->root->fs_info,
|
|
|
|
"page %lu not writeback, cur %llu end %llu",
|
|
|
|
page->index, cur, end);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
2008-07-19 00:01:11 +08:00
|
|
|
|
2017-06-07 01:14:26 +08:00
|
|
|
ret = submit_extent_page(REQ_OP_WRITE | write_flags, tree, wbc,
|
2017-10-04 23:30:11 +08:00
|
|
|
page, offset, iosize, pg_offset,
|
2017-02-11 02:29:38 +08:00
|
|
|
bdev, &epd->bio,
|
2016-05-04 17:46:10 +08:00
|
|
|
end_bio_extent_writepage,
|
|
|
|
0, 0, 0, false);
|
Btrfs: add another missing end_page_writeback on submit_extent_page failure
If btrfs_bio_alloc fails in submit_extent_page, submit_extent_page returns
without clearing the writeback bit of the failed page.
__extent_writepage_io, which is a caller of submit_extent_page,
does not clear the remaining writeback bit anywhere.
As a result, this will cause a hang at filemap_fdatawait_range,
because it waits for the writeback bit to be cleared from the failed page.
So, we have to call end_page_writeback to clear the writeback bit.
To reproduce the hang, we inject a fault like
if (should_failtest()) { // I define should_failtest()
bio = NULL;
}
else {
bio = btrfs_bio_alloc(...);
}
in submit_extent_page.
We should also check whether the page has the bit set before calling
end_page_writeback, to avoid a conflict with the other end_page_writeback
in bio_endio. Thus, we add PageWriteback checks not only in
__extent_writepage_io but also in write_one_eb, because it was missing the
check.
Signed-off-by: Takafumi Kubota <takafumi.kubota1012@sslab.ics.keio.ac.jp>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Cc: David Sterba <dsterba@suse.cz>
Signed-off-by: David Sterba <dsterba@suse.com>
2017-02-09 16:24:33 +08:00
|
|
|
if (ret) {
|
2016-05-04 17:46:10 +08:00
|
|
|
SetPageError(page);
|
2017-02-09 16:24:33 +08:00
|
|
|
if (PageWriteback(page))
|
|
|
|
end_page_writeback(page);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
|
|
|
cur = cur + iosize;
|
2008-07-19 00:01:11 +08:00
|
|
|
pg_offset += iosize;
|
2008-01-25 05:13:08 +08:00
|
|
|
nr++;
|
|
|
|
}
|
2014-05-22 04:35:51 +08:00
|
|
|
done:
|
|
|
|
*nr_ret = nr;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* the writepage semantics are similar to regular writepage. extent
|
|
|
|
* records are inserted to lock ranges in the tree, and as dirty areas
|
|
|
|
* are found, they are marked writeback. Then the lock bits are removed
|
|
|
|
* and the end_io handler clears the writeback ranges
|
2019-03-20 14:27:42 +08:00
|
|
|
*
|
|
|
|
* Return 0 if everything goes well.
|
|
|
|
* Return <0 for error.
|
2014-05-22 04:35:51 +08:00
|
|
|
*/
|
|
|
|
static int __extent_writepage(struct page *page, struct writeback_control *wbc,
|
2017-12-01 01:00:02 +08:00
|
|
|
struct extent_page_data *epd)
|
2014-05-22 04:35:51 +08:00
|
|
|
{
|
|
|
|
struct inode *inode = page->mapping->host;
|
|
|
|
u64 start = page_offset(page);
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized, and it is unlikely it ever will.
We have many places where PAGE_CACHE_SIZE is assumed to be equal to
PAGE_SIZE. And it's a constant source of confusion on whether a
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is a revert of the changes to the
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code that coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation will
also be addressed in a separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
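/*
 * Illustrative before/after of the mechanical conversion described in the
 * commit message above (a hypothetical snippet, not code from this file):
 *
 *	before:	index  = pos >> PAGE_CACHE_SHIFT;
 *		offset = pos & ~PAGE_CACHE_MASK;
 *		page_cache_get(page);
 *		...
 *		page_cache_release(page);
 *
 *	after:	index  = pos >> PAGE_SHIFT;
 *		offset = pos & ~PAGE_MASK;
 *		get_page(page);
 *		...
 *		put_page(page);
 */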
|
|
|
u64 page_end = start + PAGE_SIZE - 1;
|
2014-05-22 04:35:51 +08:00
|
|
|
int ret;
|
|
|
|
int nr = 0;
|
|
|
|
size_t pg_offset = 0;
|
|
|
|
loff_t i_size = i_size_read(inode);
|
2016-04-01 20:29:47 +08:00
|
|
|
unsigned long end_index = i_size >> PAGE_SHIFT;
|
2017-06-07 01:03:49 +08:00
|
|
|
unsigned int write_flags = 0;
|
2014-05-22 04:35:51 +08:00
|
|
|
unsigned long nr_written = 0;
|
|
|
|
|
2017-08-25 08:19:48 +08:00
|
|
|
write_flags = wbc_to_write_flags(wbc);
|
2014-05-22 04:35:51 +08:00
|
|
|
|
|
|
|
trace___extent_writepage(page, inode, wbc);
|
|
|
|
|
|
|
|
WARN_ON(!PageLocked(page));
|
|
|
|
|
|
|
|
ClearPageError(page);
|
|
|
|
|
2018-12-05 22:23:03 +08:00
|
|
|
pg_offset = offset_in_page(i_size);
|
2014-05-22 04:35:51 +08:00
|
|
|
if (page->index > end_index ||
|
|
|
|
(page->index == end_index && !pg_offset)) {
|
2016-04-01 20:29:47 +08:00
|
|
|
page->mapping->a_ops->invalidatepage(page, 0, PAGE_SIZE);
|
2014-05-22 04:35:51 +08:00
|
|
|
unlock_page(page);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (page->index == end_index) {
|
|
|
|
char *userpage;
|
|
|
|
|
|
|
|
userpage = kmap_atomic(page);
|
|
|
|
memset(userpage + pg_offset, 0,
|
2016-04-01 20:29:47 +08:00
|
|
|
PAGE_SIZE - pg_offset);
|
2014-05-22 04:35:51 +08:00
|
|
|
kunmap_atomic(userpage);
|
|
|
|
flush_dcache_page(page);
|
|
|
|
}
|
|
|
|
|
|
|
|
pg_offset = 0;
|
|
|
|
|
|
|
|
set_page_extent_mapped(page);
|
|
|
|
|
2018-11-08 16:18:06 +08:00
|
|
|
if (!epd->extent_locked) {
|
2018-11-08 16:18:07 +08:00
|
|
|
ret = writepage_delalloc(inode, page, wbc, start, &nr_written);
|
2018-11-08 16:18:06 +08:00
|
|
|
if (ret == 1)
|
|
|
|
goto done_unlocked;
|
|
|
|
if (ret)
|
|
|
|
goto done;
|
|
|
|
}
|
2014-05-22 04:35:51 +08:00
|
|
|
|
|
|
|
ret = __extent_writepage_io(inode, page, wbc, epd,
|
|
|
|
i_size, nr_written, write_flags, &nr);
|
|
|
|
if (ret == 1)
|
|
|
|
goto done_unlocked;
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
done:
|
|
|
|
if (nr == 0) {
|
|
|
|
/* make sure the mapping tag for page dirty gets cleared */
|
|
|
|
set_page_writeback(page);
|
|
|
|
end_page_writeback(page);
|
|
|
|
}
|
2014-05-10 00:17:40 +08:00
|
|
|
if (PageError(page)) {
|
|
|
|
ret = ret < 0 ? ret : -EIO;
|
|
|
|
end_extent_writepage(page, ret, start, page_end);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
unlock_page(page);
|
2019-03-20 14:27:42 +08:00
|
|
|
ASSERT(ret <= 0);
|
2014-05-22 04:35:51 +08:00
|
|
|
return ret;
|
2008-11-07 11:02:51 +08:00
|
|
|
|
2009-04-21 03:50:09 +08:00
|
|
|
done_unlocked:
|
2008-01-25 05:13:08 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-04-25 04:41:19 +08:00
|
|
|
void wait_on_extent_buffer_writeback(struct extent_buffer *eb)
|
2012-03-13 21:38:00 +08:00
|
|
|
{
|
sched: Remove proliferation of wait_on_bit() action functions
The current "wait_on_bit" interface requires an 'action'
function to be provided which does the actual waiting.
There are over 20 such functions, many of them identical.
Most cases can be satisfied by one of just two functions, one
which uses io_schedule() and one which just uses schedule().
So:
Rename wait_on_bit and wait_on_bit_lock to
wait_on_bit_action and wait_on_bit_lock_action
to make it explicit that they need an action function.
Introduce new wait_on_bit{,_lock} and wait_on_bit{,_lock}_io
which are *not* given an action function but implicitly use
a standard one.
The decision to error-out if a signal is pending is now made
based on the 'mode' argument rather than being encoded in the action
function.
All instances of the old wait_on_bit and wait_on_bit_lock which
can use the new version have been changed accordingly and their
action functions have been discarded.
wait_on_bit{_lock} does not return any specific error code in the
event of a signal so the caller must check for non-zero and
interpolate their own error code as appropriate.
The wait_on_bit() call in __fscache_wait_on_invalidate() was
ambiguous as it specified TASK_UNINTERRUPTIBLE but used
fscache_wait_bit_interruptible as an action function.
David Howells confirms this should be uniformly
"uninterruptible"
The main remaining user of wait_on_bit{,_lock}_action is NFS
which needs to use a freezer-aware schedule() call.
A comment in fs/gfs2/glock.c notes that having multiple 'action'
functions is useful as they display differently in the 'wchan'
field of 'ps'. (and /proc/$PID/wchan).
As the new bit_wait{,_io} functions are tagged "__sched", they
will not show up at all, but something higher in the stack. So
the distinction will still be visible, only with different
function names (gfs2_glock_wait versus gfs2_glock_dq_wait in the
gfs2/glock.c case).
Since the first version of this patch (against 3.15), two new action
functions appeared, one in NFS and one in CIFS. CIFS also now
uses an action function that makes the same freezer-aware
schedule call as NFS.
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: David Howells <dhowells@redhat.com> (fscache, keys)
Acked-by: Steven Whitehouse <swhiteho@redhat.com> (gfs2)
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steve French <sfrench@samba.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140707051603.28027.72349.stgit@notabene.brown
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-07 13:16:04 +08:00
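/*
 * Hypothetical before/after of the API change described in the commit
 * message above (not code from this file).  A caller that used to supply
 * its own action function:
 *
 *	wait_on_bit(&word, SOME_BIT, my_wait_action, TASK_UNINTERRUPTIBLE);
 *
 * now simply calls:
 *
 *	wait_on_bit_io(&word, SOME_BIT, TASK_UNINTERRUPTIBLE);
 *
 * and the sleep is performed by the standard io_schedule()-based helper,
 * as in the call below.
 */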
|
|
|
wait_on_bit_io(&eb->bflags, EXTENT_BUFFER_WRITEBACK,
|
|
|
|
TASK_UNINTERRUPTIBLE);
|
2012-03-13 21:38:00 +08:00
|
|
|
}
|
|
|
|
|
2019-03-20 14:27:46 +08:00
|
|
|
/*
|
|
|
|
* Lock eb pages and flush the bio if we can't get the locks
|
|
|
|
*
|
|
|
|
* Return 0 if nothing went wrong
|
|
|
|
* Return >0 is the same as 0, except the bio is not submitted
|
|
|
|
* Return <0 if something went wrong, no page is locked
|
|
|
|
*/
|
2019-03-20 18:21:41 +08:00
|
|
|
static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb,
|
2014-05-20 11:55:27 +08:00
|
|
|
struct extent_page_data *epd)
|
2012-03-13 21:38:00 +08:00
|
|
|
{
|
2019-03-20 18:21:41 +08:00
|
|
|
struct btrfs_fs_info *fs_info = eb->fs_info;
|
2019-03-20 14:27:46 +08:00
|
|
|
int i, num_pages, failed_page_nr;
|
2012-03-13 21:38:00 +08:00
|
|
|
int flush = 0;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (!btrfs_try_tree_write_lock(eb)) {
|
2019-03-20 14:27:41 +08:00
|
|
|
ret = flush_write_bio(epd);
|
2019-03-20 14:27:46 +08:00
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
flush = 1;
|
2012-03-13 21:38:00 +08:00
|
|
|
btrfs_tree_lock(eb);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags)) {
|
|
|
|
btrfs_tree_unlock(eb);
|
|
|
|
if (!epd->sync_io)
|
|
|
|
return 0;
|
|
|
|
if (!flush) {
|
2019-03-20 14:27:41 +08:00
|
|
|
ret = flush_write_bio(epd);
|
2019-03-20 14:27:46 +08:00
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
2012-03-13 21:38:00 +08:00
|
|
|
flush = 1;
|
|
|
|
}
|
2012-03-22 00:09:56 +08:00
|
|
|
while (1) {
|
|
|
|
wait_on_extent_buffer_writeback(eb);
|
|
|
|
btrfs_tree_lock(eb);
|
|
|
|
if (!test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags))
|
|
|
|
break;
|
2012-03-13 21:38:00 +08:00
|
|
|
btrfs_tree_unlock(eb);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-07-21 04:25:24 +08:00
|
|
|
/*
|
|
|
|
* We need to do this to prevent races with anyone who checks if the eb is
|
|
|
|
* under IO since we can end up having no IO bits set for a short period
|
|
|
|
* of time.
|
|
|
|
*/
|
|
|
|
spin_lock(&eb->refs_lock);
|
2012-03-13 21:38:00 +08:00
|
|
|
if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)) {
|
|
|
|
set_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
|
2012-07-21 04:25:24 +08:00
|
|
|
spin_unlock(&eb->refs_lock);
|
2012-03-13 21:38:00 +08:00
|
|
|
btrfs_set_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN);
|
2017-06-21 02:01:20 +08:00
|
|
|
percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
|
|
|
|
-eb->len,
|
|
|
|
fs_info->dirty_metadata_batch);
|
2012-03-13 21:38:00 +08:00
|
|
|
ret = 1;
|
2012-07-21 04:25:24 +08:00
|
|
|
} else {
|
|
|
|
spin_unlock(&eb->refs_lock);
|
2012-03-13 21:38:00 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
btrfs_tree_unlock(eb);
|
|
|
|
|
|
|
|
if (!ret)
|
|
|
|
return ret;
|
|
|
|
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2012-03-13 21:38:00 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
struct page *p = eb->pages[i];
|
2012-03-13 21:38:00 +08:00
|
|
|
|
|
|
|
if (!trylock_page(p)) {
|
|
|
|
if (!flush) {
|
2019-03-20 14:27:41 +08:00
|
|
|
ret = flush_write_bio(epd);
|
2019-03-20 14:27:46 +08:00
|
|
|
if (ret < 0) {
|
|
|
|
failed_page_nr = i;
|
|
|
|
goto err_unlock;
|
|
|
|
}
|
2012-03-13 21:38:00 +08:00
|
|
|
flush = 1;
|
|
|
|
}
|
|
|
|
lock_page(p);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
2019-03-20 14:27:46 +08:00
|
|
|
err_unlock:
|
|
|
|
/* Unlock already locked pages */
|
|
|
|
for (i = 0; i < failed_page_nr; i++)
|
|
|
|
unlock_page(eb->pages[i]);
|
|
|
|
return ret;
|
2012-03-13 21:38:00 +08:00
|
|
|
}
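A hedged caller sketch, loosely modeled on btree_write_cache_pages() further
below rather than copied from it, showing how the return value of
lock_extent_buffer_for_io() is meant to be consumed (the surrounding loop and
variables are assumed):

	ret = lock_extent_buffer_for_io(eb, &epd);
	if (!ret) {
		/* the eb was not dirty, nothing to write */
		free_extent_buffer(eb);
		continue;
	} else if (ret < 0) {
		/* flushing the pending bio failed, stop writeback */
		done = 1;
		free_extent_buffer(eb);
		break;
	}
	/* ret > 0: the eb was dirty and is now locked and marked writeback */
	ret = write_one_eb(eb, wbc, &epd);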
|
|
|
|
|
|
|
|
static void end_extent_buffer_writeback(struct extent_buffer *eb)
|
|
|
|
{
|
|
|
|
clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
|
2014-03-18 01:06:10 +08:00
|
|
|
smp_mb__after_atomic();
|
2012-03-13 21:38:00 +08:00
|
|
|
wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK);
|
|
|
|
}
|
|
|
|
|
Btrfs: be aware of btree inode write errors to avoid fs corruption
While we have a transaction ongoing, the VM might decide at any time
to call btree_inode->i_mapping->a_ops->writepages(), which will start
writeback of dirty pages belonging to btree nodes/leafs. This call
might return an error or the writeback might finish with an error
before we attempt to commit the running transaction. If this happens,
we might have no way of knowing that such error happened when we are
committing the transaction - because the pages might no longer be
marked dirty nor tagged for writeback (if a subsequent modification
to the extent buffer didn't happen before the transaction commit) which
makes filemap_fdata[write|wait]_range unable to find such pages (even
if they're marked with SetPageError).
So if this happens we must abort the transaction, otherwise we commit
a super block with btree roots that point to btree nodes/leafs whose
content on disk is invalid - either garbage or the content of some
node/leaf from a past generation that got cowed or deleted and is no
longer valid (for this latter case we end up getting error messages like
"parent transid verify failed on 10826481664 wanted 25748 found 29562"
when reading btree nodes/leafs from disk).
Note that setting and checking AS_EIO/AS_ENOSPC in the btree inode's
i_mapping would not be enough because we need to distinguish between
log tree extents (not fatal) vs non-log tree extents (fatal) and
because the next call to filemap_fdatawait_range() will catch and clear
such errors in the mapping - and that call might be from a log sync and
not from a transaction commit, which means we would not know about the
error at transaction commit time. Also, checking for the eb flag
EXTENT_BUFFER_IOERR at transaction commit time isn't done and would
not be completely reliable, as the eb might be removed from memory and
read back when trying to get it, which clears that flag right before
reading the eb's pages from disk, making us not know about the previous
write error.
Using the new 3 flags for the btree inode also makes us achieve the
goal of AS_EIO/AS_ENOSPC when writepages() returns success, started
writeback for all dirty pages and before filemap_fdatawait_range() is
called, the writeback for all dirty pages had already finished with
errors - because we were not using AS_EIO/AS_ENOSPC,
filemap_fdatawait_range() would return success, as it could not know
that writeback errors happened (the pages were no longer tagged for
writeback).
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-09-26 19:25:56 +08:00
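A minimal sketch, assuming a hypothetical helper name
(demo_check_btree_write_errors is not part of this file), of how the
per-filesystem error bits set by set_btree_ioerr() below could be consumed at
transaction commit or log sync time, as the commit message above describes;
the real consumers live elsewhere in the tree and may differ:

	static int demo_check_btree_write_errors(struct btrfs_fs_info *fs_info,
						 int log_index)
	{
		/* an error on a non-log btree block is always fatal */
		if (test_and_clear_bit(BTRFS_FS_BTREE_ERR, &fs_info->flags))
			return -EIO;
		/* a log tree error only matters when syncing that log tree */
		if (log_index == 0 &&
		    test_and_clear_bit(BTRFS_FS_LOG1_ERR, &fs_info->flags))
			return -EIO;
		if (log_index == 1 &&
		    test_and_clear_bit(BTRFS_FS_LOG2_ERR, &fs_info->flags))
			return -EIO;
		return 0;
	}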
|
|
|
static void set_btree_ioerr(struct page *page)
|
|
|
|
{
|
|
|
|
struct extent_buffer *eb = (struct extent_buffer *)page->private;
|
|
|
|
|
|
|
|
SetPageError(page);
|
|
|
|
if (test_and_set_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags))
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If writeback for a btree extent that doesn't belong to a log tree
|
|
|
|
* failed, increment the counter transaction->eb_write_errors.
|
|
|
|
* We do this because while the transaction is running and before it's
|
|
|
|
* committing (when we call filemap_fdata[write|wait]_range against
|
|
|
|
* the btree inode), we might have
|
|
|
|
* btree_inode->i_mapping->a_ops->writepages() called by the VM - if it
|
|
|
|
* returns an error or an error happens during writeback, when we're
|
|
|
|
* committing the transaction we wouldn't know about it, since the pages
|
|
|
|
* can be no longer dirty nor marked anymore for writeback (if a
|
|
|
|
* subsequent modification to the extent buffer didn't happen before the
|
|
|
|
* transaction commit), which makes filemap_fdata[write|wait]_range not
|
|
|
|
* able to find the pages tagged with SetPageError at transaction
|
|
|
|
* commit time. So if this happens we must abort the transaction,
|
|
|
|
* otherwise we commit a super block with btree roots that point to
|
|
|
|
* btree nodes/leafs whose content on disk is invalid - either garbage
|
|
|
|
* or the content of some node/leaf from a past generation that got
|
|
|
|
* cowed or deleted and is no longer valid.
|
|
|
|
*
|
|
|
|
* Note: setting AS_EIO/AS_ENOSPC in the btree inode's i_mapping would
|
|
|
|
* not be enough - we need to distinguish between log tree extents vs
|
|
|
|
* non-log tree extents, and the next filemap_fdatawait_range() call
|
|
|
|
* will catch and clear such errors in the mapping - and that call might
|
|
|
|
* be from a log sync and not from a transaction commit. Also, checking
|
|
|
|
* for the eb flag EXTENT_BUFFER_WRITE_ERR at transaction commit time is
|
|
|
|
* not done and would not be reliable - the eb might have been released
|
|
|
|
* from memory and reading it back again means that flag would not be
|
|
|
|
* set (since it's a runtime flag, not persisted on disk).
|
|
|
|
*
|
|
|
|
* Using the flags below in the btree inode also makes us achieve the
|
|
|
|
* goal of AS_EIO/AS_ENOSPC when writepages() returns success, started
|
|
|
|
* writeback for all dirty pages and before filemap_fdatawait_range()
|
|
|
|
* is called, the writeback for all dirty pages had already finished
|
|
|
|
* with errors - because we were not using AS_EIO/AS_ENOSPC,
|
|
|
|
* filemap_fdatawait_range() would return success, as it could not know
|
|
|
|
* that writeback errors happened (the pages were no longer tagged for
|
|
|
|
* writeback).
|
|
|
|
*/
|
|
|
|
switch (eb->log_index) {
|
|
|
|
case -1:
|
2016-09-03 03:40:02 +08:00
|
|
|
set_bit(BTRFS_FS_BTREE_ERR, &eb->fs_info->flags);
|
2014-09-26 19:25:56 +08:00
|
|
|
break;
|
|
|
|
case 0:
|
2016-09-03 03:40:02 +08:00
|
|
|
set_bit(BTRFS_FS_LOG1_ERR, &eb->fs_info->flags);
|
2014-09-26 19:25:56 +08:00
|
|
|
break;
|
|
|
|
case 1:
|
2016-09-03 03:40:02 +08:00
|
|
|
set_bit(BTRFS_FS_LOG2_ERR, &eb->fs_info->flags);
|
2014-09-26 19:25:56 +08:00
|
|
|
break;
|
|
|
|
default:
|
|
|
|
BUG(); /* unexpected, logic error */
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-07-20 21:29:37 +08:00
|
|
|
static void end_bio_extent_buffer_writepage(struct bio *bio)
|
2012-03-13 21:38:00 +08:00
|
|
|
{
|
2013-11-08 04:20:26 +08:00
|
|
|
struct bio_vec *bvec;
|
2012-03-13 21:38:00 +08:00
|
|
|
struct extent_buffer *eb;
|
2019-04-25 15:03:00 +08:00
|
|
|
int done;
|
2019-02-15 19:13:19 +08:00
|
|
|
struct bvec_iter_all iter_all;
|
2012-03-13 21:38:00 +08:00
|
|
|
|
2017-07-14 00:10:07 +08:00
|
|
|
ASSERT(!bio_flagged(bio, BIO_CLONED));
|
2019-04-25 15:03:00 +08:00
|
|
|
bio_for_each_segment_all(bvec, bio, iter_all) {
|
2012-03-13 21:38:00 +08:00
|
|
|
struct page *page = bvec->bv_page;
|
|
|
|
|
|
|
|
eb = (struct extent_buffer *)page->private;
|
|
|
|
BUG_ON(!eb);
|
|
|
|
done = atomic_dec_and_test(&eb->io_pages);
|
|
|
|
|
2017-06-03 15:38:06 +08:00
|
|
|
if (bio->bi_status ||
|
2015-07-20 21:29:37 +08:00
|
|
|
test_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags)) {
|
2012-03-13 21:38:00 +08:00
|
|
|
ClearPageUptodate(page);
|
2014-09-26 19:25:56 +08:00
|
|
|
set_btree_ioerr(page);
|
2012-03-13 21:38:00 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
end_page_writeback(page);
|
|
|
|
|
|
|
|
if (!done)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
end_extent_buffer_writeback(eb);
|
2013-11-08 04:20:26 +08:00
|
|
|
}
|
2012-03-13 21:38:00 +08:00
|
|
|
|
|
|
|
bio_put(bio);
|
|
|
|
}
|
|
|
|
|
2014-05-20 11:55:27 +08:00
|
|
|
static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
|
2012-03-13 21:38:00 +08:00
|
|
|
struct writeback_control *wbc,
|
|
|
|
struct extent_page_data *epd)
|
|
|
|
{
|
2019-03-20 18:27:57 +08:00
|
|
|
struct btrfs_fs_info *fs_info = eb->fs_info;
|
2012-03-13 21:38:00 +08:00
|
|
|
struct block_device *bdev = fs_info->fs_devices->latest_bdev;
|
2013-12-17 02:24:27 +08:00
|
|
|
struct extent_io_tree *tree = &BTRFS_I(fs_info->btree_inode)->io_tree;
|
2012-03-13 21:38:00 +08:00
|
|
|
u64 offset = eb->start;
|
2016-09-24 04:44:44 +08:00
|
|
|
u32 nritems;
|
2018-03-02 01:20:27 +08:00
|
|
|
int i, num_pages;
|
2016-09-24 04:44:44 +08:00
|
|
|
unsigned long start, end;
|
2017-08-25 08:19:48 +08:00
|
|
|
unsigned int write_flags = wbc_to_write_flags(wbc) | REQ_META;
|
2012-04-24 02:00:51 +08:00
|
|
|
int ret = 0;
|
2012-03-13 21:38:00 +08:00
|
|
|
|
2014-09-26 19:25:56 +08:00
|
|
|
clear_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags);
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2012-03-13 21:38:00 +08:00
|
|
|
atomic_set(&eb->io_pages, num_pages);
|
2012-09-26 02:25:58 +08:00
|
|
|
|
2016-09-24 04:44:44 +08:00
|
|
|
/* set btree blocks beyond nritems with 0 to avoid stale content. */
|
|
|
|
nritems = btrfs_header_nritems(eb);
|
2016-09-15 08:22:57 +08:00
|
|
|
if (btrfs_header_level(eb) > 0) {
|
|
|
|
end = btrfs_node_key_ptr_offset(nritems);
|
|
|
|
|
2016-11-09 01:09:03 +08:00
|
|
|
memzero_extent_buffer(eb, end, eb->len - end);
|
2016-09-24 04:44:44 +08:00
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* leaf:
|
|
|
|
* header 0 1 2 .. N ... data_N .. data_2 data_1 data_0
|
|
|
|
*/
|
|
|
|
start = btrfs_item_nr_offset(nritems);
|
2019-03-20 18:33:10 +08:00
|
|
|
end = BTRFS_LEAF_DATA_OFFSET + leaf_data_end(eb);
|
2016-11-09 01:09:03 +08:00
|
|
|
memzero_extent_buffer(eb, start, end - start);
|
2016-09-15 08:22:57 +08:00
|
|
|
}
|
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
struct page *p = eb->pages[i];
|
2012-03-13 21:38:00 +08:00
|
|
|
|
|
|
|
clear_page_dirty_for_io(p);
|
|
|
|
set_page_writeback(p);
|
2017-06-07 01:14:26 +08:00
|
|
|
ret = submit_extent_page(REQ_OP_WRITE | write_flags, tree, wbc,
|
2017-10-04 23:30:11 +08:00
|
|
|
p, offset, PAGE_SIZE, 0, bdev,
|
2017-02-11 02:29:38 +08:00
|
|
|
&epd->bio,
|
2016-06-06 03:31:51 +08:00
|
|
|
end_bio_extent_buffer_writepage,
|
Btrfs: remove bio_flags which indicates a meta block of log-tree
Since both committing transaction and writing log-tree are doing
plugging on metadata IO, we can unify to use %sync_writers to benefit
both cases, instead of checking bio_flags while writing meta blocks of
log-tree.
We can remove this bio_flags because in order to write dirty blocks,
log tree also uses btrfs_write_marked_extents(), inside which we
have enabled %sync_writers, therefore, every write goes in a
synchronous way, and so does checksumming.
Please also note that, bio_flags is applied per-context while
%sync_writers is applied per-inode, so this might incur some overhead, ie.
1) while log tree is flushing its dirty blocks via
btrfs_write_marked_extents(), in which %sync_writers is increased
by one.
2) in the meantime, some writeback operations may happen upon btrfs's
metadata inode, so these writes go synchronously, too.
However, AFAICS, the overhead is not a big one while the win is that
we unify the two places that needs synchronous way and remove a
special hack/flag.
This removes the bio_flags related stuff for writing log-tree.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2017-09-14 02:18:22 +08:00
|
|
|
0, 0, 0, false);
|
2012-03-13 21:38:00 +08:00
|
|
|
if (ret) {
|
2014-09-26 19:25:56 +08:00
|
|
|
set_btree_ioerr(p);
|
2017-02-09 16:24:33 +08:00
|
|
|
if (PageWriteback(p))
|
|
|
|
end_page_writeback(p);
|
2012-03-13 21:38:00 +08:00
|
|
|
if (atomic_sub_and_test(num_pages - i, &eb->io_pages))
|
|
|
|
end_extent_buffer_writeback(eb);
|
|
|
|
ret = -EIO;
|
|
|
|
break;
|
|
|
|
}
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
ago with the promise that one day it would be possible to implement the
page cache with bigger chunks than PAGE_SIZE.
This promise never materialized, and it is unlikely it ever will.
We have many places where PAGE_CACHE_SIZE is assumed to be equal to
PAGE_SIZE, and it's a constant source of confusion on whether the
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in the page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using the
script below. For some reason, coccinelle doesn't patch header files;
I've called spatch for them manually.
The only adjustment after coccinelle is a revert of the changes to the
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code that coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation will
also be addressed in a separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
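As a small illustration of why the first two rules of the semantic patch
above are safe, the snippet below - plain userspace C with stand-in values
for the kernel macros, not kernel code - shows that shifting by
(PAGE_CACHE_SHIFT - PAGE_SHIFT) is a no-op once the two constants are equal,
which is exactly the assumption the conversion relies on.

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_CACHE_SHIFT	PAGE_SHIFT	/* the equality the patch relies on */

int main(void)
{
	unsigned long nr = 1234;

	/* Both rewritten expressions reduce to plain 'nr'. */
	assert((nr << (PAGE_CACHE_SHIFT - PAGE_SHIFT)) == nr);
	assert((nr >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)) == nr);
	printf("shifting by (PAGE_CACHE_SHIFT - PAGE_SHIFT) is a no-op\n");
	return 0;
}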
|
|
|
offset += PAGE_SIZE;
|
2017-02-11 02:33:41 +08:00
|
|
|
update_nr_written(wbc, 1);
|
2012-03-13 21:38:00 +08:00
|
|
|
unlock_page(p);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (unlikely(ret)) {
|
|
|
|
for (; i < num_pages; i++) {
|
2014-10-05 00:56:45 +08:00
|
|
|
struct page *p = eb->pages[i];
|
2014-09-23 22:22:33 +08:00
|
|
|
clear_page_dirty_for_io(p);
|
2012-03-13 21:38:00 +08:00
|
|
|
unlock_page(p);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
int btree_write_cache_pages(struct address_space *mapping,
|
|
|
|
struct writeback_control *wbc)
|
|
|
|
{
|
|
|
|
struct extent_io_tree *tree = &BTRFS_I(mapping->host)->io_tree;
|
|
|
|
struct extent_buffer *eb, *prev_eb = NULL;
|
|
|
|
struct extent_page_data epd = {
|
|
|
|
.bio = NULL,
|
|
|
|
.tree = tree,
|
|
|
|
.extent_locked = 0,
|
|
|
|
.sync_io = wbc->sync_mode == WB_SYNC_ALL,
|
|
|
|
};
|
|
|
|
int ret = 0;
|
|
|
|
int done = 0;
|
|
|
|
int nr_to_write_done = 0;
|
|
|
|
struct pagevec pvec;
|
|
|
|
int nr_pages;
|
|
|
|
pgoff_t index;
|
|
|
|
pgoff_t end; /* Inclusive */
|
|
|
|
int scanned = 0;
|
2017-12-06 06:30:38 +08:00
|
|
|
xa_mark_t tag;
|
2012-03-13 21:38:00 +08:00
|
|
|
|
2017-11-16 09:37:52 +08:00
|
|
|
pagevec_init(&pvec);
|
2012-03-13 21:38:00 +08:00
|
|
|
if (wbc->range_cyclic) {
|
|
|
|
index = mapping->writeback_index; /* Start from prev offset */
|
|
|
|
end = -1;
|
|
|
|
} else {
|
2016-04-01 20:29:47 +08:00
|
|
|
index = wbc->range_start >> PAGE_SHIFT;
|
|
|
|
end = wbc->range_end >> PAGE_SHIFT;
|
2012-03-13 21:38:00 +08:00
|
|
|
scanned = 1;
|
|
|
|
}
|
|
|
|
if (wbc->sync_mode == WB_SYNC_ALL)
|
|
|
|
tag = PAGECACHE_TAG_TOWRITE;
|
|
|
|
else
|
|
|
|
tag = PAGECACHE_TAG_DIRTY;
|
|
|
|
retry:
|
|
|
|
if (wbc->sync_mode == WB_SYNC_ALL)
|
|
|
|
tag_pages_for_writeback(mapping, index, end);
|
|
|
|
while (!done && !nr_to_write_done && (index <= end) &&
|
2017-11-16 09:34:37 +08:00
|
|
|
(nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
|
2017-11-16 09:35:19 +08:00
|
|
|
tag))) {
|
2012-03-13 21:38:00 +08:00
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
scanned = 1;
|
|
|
|
for (i = 0; i < nr_pages; i++) {
|
|
|
|
struct page *page = pvec.pages[i];
|
|
|
|
|
|
|
|
if (!PagePrivate(page))
|
|
|
|
continue;
|
|
|
|
|
2012-09-15 01:43:01 +08:00
|
|
|
spin_lock(&mapping->private_lock);
|
|
|
|
if (!PagePrivate(page)) {
|
|
|
|
spin_unlock(&mapping->private_lock);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
eb = (struct extent_buffer *)page->private;
|
2012-09-15 01:43:01 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Shouldn't happen and normally this would be a BUG_ON
|
|
|
|
* but no sense in crashing the users box for something
|
|
|
|
* we can survive anyway.
|
|
|
|
*/
|
2013-10-31 13:00:08 +08:00
|
|
|
if (WARN_ON(!eb)) {
|
2012-09-15 01:43:01 +08:00
|
|
|
spin_unlock(&mapping->private_lock);
|
2012-03-13 21:38:00 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2012-09-15 01:43:01 +08:00
|
|
|
if (eb == prev_eb) {
|
|
|
|
spin_unlock(&mapping->private_lock);
|
2012-03-13 21:38:00 +08:00
|
|
|
continue;
|
2012-09-15 01:43:01 +08:00
|
|
|
}
|
2012-03-13 21:38:00 +08:00
|
|
|
|
2012-09-15 01:43:01 +08:00
|
|
|
ret = atomic_inc_not_zero(&eb->refs);
|
|
|
|
spin_unlock(&mapping->private_lock);
|
|
|
|
if (!ret)
|
2012-03-13 21:38:00 +08:00
|
|
|
continue;
|
|
|
|
|
|
|
|
prev_eb = eb;
|
2019-03-20 18:21:41 +08:00
|
|
|
ret = lock_extent_buffer_for_io(eb, &epd);
|
2012-03-13 21:38:00 +08:00
|
|
|
if (!ret) {
|
|
|
|
free_extent_buffer(eb);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2019-03-20 18:27:57 +08:00
|
|
|
ret = write_one_eb(eb, wbc, &epd);
|
2012-03-13 21:38:00 +08:00
|
|
|
if (ret) {
|
|
|
|
done = 1;
|
|
|
|
free_extent_buffer(eb);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
free_extent_buffer(eb);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* the filesystem may choose to bump up nr_to_write.
|
|
|
|
* We have to make sure to honor the new nr_to_write
|
|
|
|
* at any time
|
|
|
|
*/
|
|
|
|
nr_to_write_done = wbc->nr_to_write <= 0;
|
|
|
|
}
|
|
|
|
pagevec_release(&pvec);
|
|
|
|
cond_resched();
|
|
|
|
}
|
|
|
|
if (!scanned && !done) {
|
|
|
|
/*
|
|
|
|
* We hit the last page and there is more work to be done: wrap
|
|
|
|
* back to the start of the file
|
|
|
|
*/
|
|
|
|
scanned = 1;
|
|
|
|
index = 0;
|
|
|
|
goto retry;
|
|
|
|
}
|
2019-03-20 14:27:43 +08:00
|
|
|
ASSERT(ret <= 0);
|
|
|
|
if (ret < 0) {
|
|
|
|
end_write_bio(&epd, ret);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
ret = flush_write_bio(&epd);
|
2012-03-13 21:38:00 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/**
|
2008-09-08 23:18:08 +08:00
|
|
|
* write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
|
2008-01-25 05:13:08 +08:00
|
|
|
* @mapping: address space structure to write
|
|
|
|
* @wbc: subtract the number of written pages from *@wbc->nr_to_write
|
2017-06-23 10:30:28 +08:00
|
|
|
* @data: data passed to __extent_writepage function
|
2008-01-25 05:13:08 +08:00
|
|
|
*
|
|
|
|
* If a page is already under I/O, write_cache_pages() skips it, even
|
|
|
|
* if it's dirty. This is desirable behaviour for memory-cleaning writeback,
|
|
|
|
* but it is INCORRECT for data-integrity system calls such as fsync(). fsync()
|
|
|
|
* and msync() need to guarantee that all the data which was dirty at the time
|
|
|
|
* the call was made get new I/O started against them. If wbc->sync_mode is
|
|
|
|
* WB_SYNC_ALL then we were called for data integrity and we must wait for
|
|
|
|
* existing IO to complete.
|
|
|
|
*/
|
2017-02-11 02:38:24 +08:00
|
|
|
static int extent_write_cache_pages(struct address_space *mapping,
|
2008-09-08 23:18:08 +08:00
|
|
|
struct writeback_control *wbc,
|
2017-12-01 01:00:02 +08:00
|
|
|
struct extent_page_data *epd)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2012-06-28 05:18:41 +08:00
|
|
|
struct inode *inode = mapping->host;
|
2008-01-25 05:13:08 +08:00
|
|
|
int ret = 0;
|
|
|
|
int done = 0;
|
2009-09-19 04:03:16 +08:00
|
|
|
int nr_to_write_done = 0;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct pagevec pvec;
|
|
|
|
int nr_pages;
|
|
|
|
pgoff_t index;
|
|
|
|
pgoff_t end; /* Inclusive */
|
2016-03-08 08:56:21 +08:00
|
|
|
pgoff_t done_index;
|
|
|
|
int range_whole = 0;
|
2008-01-25 05:13:08 +08:00
|
|
|
int scanned = 0;
|
2017-12-06 06:30:38 +08:00
|
|
|
xa_mark_t tag;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2012-06-28 05:18:41 +08:00
|
|
|
/*
|
|
|
|
* We have to hold onto the inode so that ordered extents can do their
|
|
|
|
* work when the IO finishes. The alternative to this is failing to add
|
|
|
|
* an ordered extent if the igrab() fails there and that is a huge pain
|
|
|
|
* to deal with, so instead just hold onto the inode throughout the
|
|
|
|
* writepages operation. If it fails here we are freeing up the inode
|
|
|
|
* anyway and we'd rather not waste our time writing out stuff that is
|
|
|
|
* going to be truncated anyway.
|
|
|
|
*/
|
|
|
|
if (!igrab(inode))
|
|
|
|
return 0;
|
|
|
|
|
2017-11-16 09:37:52 +08:00
|
|
|
pagevec_init(&pvec);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (wbc->range_cyclic) {
|
|
|
|
index = mapping->writeback_index; /* Start from prev offset */
|
|
|
|
end = -1;
|
|
|
|
} else {
|
2016-04-01 20:29:47 +08:00
|
|
|
index = wbc->range_start >> PAGE_SHIFT;
|
|
|
|
end = wbc->range_end >> PAGE_SHIFT;
|
2016-03-08 08:56:21 +08:00
|
|
|
if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
|
|
|
|
range_whole = 1;
|
2008-01-25 05:13:08 +08:00
|
|
|
scanned = 1;
|
|
|
|
}
|
2018-11-01 14:49:03 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We do the tagged writepage as long as the snapshot flush bit is set
|
|
|
|
* and we are the first one who do the filemap_flush() on this inode.
|
|
|
|
*
|
|
|
|
* The nr_to_write == LONG_MAX is needed to make sure other flushers do
|
|
|
|
* not race in and drop the bit.
|
|
|
|
*/
|
|
|
|
if (range_whole && wbc->nr_to_write == LONG_MAX &&
|
|
|
|
test_and_clear_bit(BTRFS_INODE_SNAPSHOT_FLUSH,
|
|
|
|
&BTRFS_I(inode)->runtime_flags))
|
|
|
|
wbc->tagged_writepages = 1;
|
|
|
|
|
|
|
|
if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
|
2011-07-16 05:26:38 +08:00
|
|
|
tag = PAGECACHE_TAG_TOWRITE;
|
|
|
|
else
|
|
|
|
tag = PAGECACHE_TAG_DIRTY;
|
2008-01-25 05:13:08 +08:00
|
|
|
retry:
|
2018-11-01 14:49:03 +08:00
|
|
|
if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
|
2011-07-16 05:26:38 +08:00
|
|
|
tag_pages_for_writeback(mapping, index, end);
|
2016-03-08 08:56:21 +08:00
|
|
|
done_index = index;
|
2009-09-19 04:03:16 +08:00
|
|
|
while (!done && !nr_to_write_done && (index <= end) &&
|
2017-11-16 09:35:19 +08:00
|
|
|
(nr_pages = pagevec_lookup_range_tag(&pvec, mapping,
|
|
|
|
&index, end, tag))) {
|
2008-01-25 05:13:08 +08:00
|
|
|
unsigned i;
|
|
|
|
|
|
|
|
scanned = 1;
|
|
|
|
for (i = 0; i < nr_pages; i++) {
|
|
|
|
struct page *page = pvec.pages[i];
|
|
|
|
|
2016-03-08 08:56:21 +08:00
|
|
|
done_index = page->index;
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
2018-04-11 07:36:56 +08:00
|
|
|
* At this point we hold neither the i_pages lock nor
|
|
|
|
* the page lock: the page may be truncated or
|
|
|
|
* invalidated (changing page->mapping to NULL),
|
|
|
|
* or even swizzled back from swapper_space to
|
|
|
|
* tmpfs file mapping
|
2008-01-25 05:13:08 +08:00
|
|
|
*/
|
2013-02-12 00:33:00 +08:00
|
|
|
if (!trylock_page(page)) {
|
2019-03-20 14:27:41 +08:00
|
|
|
ret = flush_write_bio(epd);
|
|
|
|
BUG_ON(ret < 0);
|
2013-02-12 00:33:00 +08:00
|
|
|
lock_page(page);
|
2011-11-01 22:08:06 +08:00
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
|
|
|
if (unlikely(page->mapping != mapping)) {
|
|
|
|
unlock_page(page);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2008-11-20 01:44:22 +08:00
|
|
|
if (wbc->sync_mode != WB_SYNC_NONE) {
|
2019-03-20 14:27:41 +08:00
|
|
|
if (PageWriteback(page)) {
|
|
|
|
ret = flush_write_bio(epd);
|
|
|
|
BUG_ON(ret < 0);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
wait_on_page_writeback(page);
|
2008-11-20 01:44:22 +08:00
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
|
|
|
if (PageWriteback(page) ||
|
|
|
|
!clear_page_dirty_for_io(page)) {
|
|
|
|
unlock_page(page);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2017-12-01 01:00:02 +08:00
|
|
|
ret = __extent_writepage(page, wbc, epd);
|
2016-03-08 08:56:21 +08:00
|
|
|
if (ret < 0) {
|
|
|
|
/*
|
|
|
|
* done_index is set past this page,
|
|
|
|
* so media errors will not choke
|
|
|
|
* background writeout for the entire
|
|
|
|
* file. This has consequences for
|
|
|
|
* range_cyclic semantics (ie. it may
|
|
|
|
* not be suitable for data integrity
|
|
|
|
* writeout).
|
|
|
|
*/
|
|
|
|
done_index = page->index + 1;
|
|
|
|
done = 1;
|
|
|
|
break;
|
|
|
|
}
|
2009-09-19 04:03:16 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* the filesystem may choose to bump up nr_to_write.
|
|
|
|
* We have to make sure to honor the new nr_to_write
|
|
|
|
* at any time
|
|
|
|
*/
|
|
|
|
nr_to_write_done = wbc->nr_to_write <= 0;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
pagevec_release(&pvec);
|
|
|
|
cond_resched();
|
|
|
|
}
|
2016-03-08 08:56:22 +08:00
|
|
|
if (!scanned && !done) {
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* We hit the last page and there is more work to be done: wrap
|
|
|
|
* back to the start of the file
|
|
|
|
*/
|
|
|
|
scanned = 1;
|
|
|
|
index = 0;
|
|
|
|
goto retry;
|
|
|
|
}
|
2016-03-08 08:56:21 +08:00
|
|
|
|
|
|
|
if (wbc->range_cyclic || (wbc->nr_to_write > 0 && range_whole))
|
|
|
|
mapping->writeback_index = done_index;
|
|
|
|
|
2012-06-28 05:18:41 +08:00
|
|
|
btrfs_add_delayed_iput(inode);
|
2016-03-08 08:56:22 +08:00
|
|
|
return ret;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2017-12-08 21:55:59 +08:00
|
|
|
int extent_write_full_page(struct page *page, struct writeback_control *wbc)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
struct extent_page_data epd = {
|
|
|
|
.bio = NULL,
|
2017-12-08 21:55:59 +08:00
|
|
|
.tree = &BTRFS_I(page->mapping->host)->io_tree,
|
2008-11-07 11:02:51 +08:00
|
|
|
.extent_locked = 0,
|
2009-04-21 03:50:09 +08:00
|
|
|
.sync_io = wbc->sync_mode == WB_SYNC_ALL,
|
2008-01-25 05:13:08 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
ret = __extent_writepage(page, wbc, &epd);
|
2019-03-20 14:27:42 +08:00
|
|
|
ASSERT(ret <= 0);
|
|
|
|
if (ret < 0) {
|
|
|
|
end_write_bio(&epd, ret);
|
|
|
|
return ret;
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2019-03-20 14:27:42 +08:00
|
|
|
ret = flush_write_bio(&epd);
|
|
|
|
ASSERT(ret <= 0);
|
2008-01-25 05:13:08 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2017-12-08 21:55:58 +08:00
|
|
|
int extent_write_locked_range(struct inode *inode, u64 start, u64 end,
|
2008-11-07 11:02:51 +08:00
|
|
|
int mode)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
struct address_space *mapping = inode->i_mapping;
|
2017-12-08 21:55:58 +08:00
|
|
|
struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
|
2008-11-07 11:02:51 +08:00
|
|
|
struct page *page;
|
2016-04-01 20:29:47 +08:00
|
|
|
unsigned long nr_pages = (end - start + PAGE_SIZE) >>
|
|
|
|
PAGE_SHIFT;
|
2008-11-07 11:02:51 +08:00
|
|
|
|
|
|
|
struct extent_page_data epd = {
|
|
|
|
.bio = NULL,
|
|
|
|
.tree = tree,
|
|
|
|
.extent_locked = 1,
|
2009-04-21 03:50:09 +08:00
|
|
|
.sync_io = mode == WB_SYNC_ALL,
|
2008-11-07 11:02:51 +08:00
|
|
|
};
|
|
|
|
struct writeback_control wbc_writepages = {
|
|
|
|
.sync_mode = mode,
|
|
|
|
.nr_to_write = nr_pages * 2,
|
|
|
|
.range_start = start,
|
|
|
|
.range_end = end + 1,
|
|
|
|
};
|
|
|
|
|
2009-01-06 10:25:51 +08:00
|
|
|
while (start <= end) {
|
2016-04-01 20:29:47 +08:00
|
|
|
page = find_get_page(mapping, start >> PAGE_SHIFT);
|
2008-11-07 11:02:51 +08:00
|
|
|
if (clear_page_dirty_for_io(page))
|
|
|
|
ret = __extent_writepage(page, &wbc_writepages, &epd);
|
|
|
|
else {
|
2018-11-01 20:09:48 +08:00
|
|
|
btrfs_writepage_endio_finish_ordered(page, start,
|
2018-11-08 16:18:08 +08:00
|
|
|
start + PAGE_SIZE - 1, 1);
|
2008-11-07 11:02:51 +08:00
|
|
|
unlock_page(page);
|
|
|
|
}
|
2016-04-01 20:29:47 +08:00
|
|
|
put_page(page);
|
|
|
|
start += PAGE_SIZE;
|
2008-11-07 11:02:51 +08:00
|
|
|
}
|
|
|
|
|
2019-03-20 14:27:45 +08:00
|
|
|
ASSERT(ret <= 0);
|
|
|
|
if (ret < 0) {
|
|
|
|
end_write_bio(&epd, ret);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
ret = flush_write_bio(&epd);
|
2008-11-07 11:02:51 +08:00
|
|
|
return ret;
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2018-04-19 15:46:38 +08:00
|
|
|
int extent_writepages(struct address_space *mapping,
|
2008-01-25 05:13:08 +08:00
|
|
|
struct writeback_control *wbc)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
struct extent_page_data epd = {
|
|
|
|
.bio = NULL,
|
2018-04-19 15:46:38 +08:00
|
|
|
.tree = &BTRFS_I(mapping->host)->io_tree,
|
2008-11-07 11:02:51 +08:00
|
|
|
.extent_locked = 0,
|
2009-04-21 03:50:09 +08:00
|
|
|
.sync_io = wbc->sync_mode == WB_SYNC_ALL,
|
2008-01-25 05:13:08 +08:00
|
|
|
};
|
|
|
|
|
2017-06-23 10:30:28 +08:00
|
|
|
ret = extent_write_cache_pages(mapping, wbc, &epd);
|
2019-03-20 14:27:48 +08:00
|
|
|
ASSERT(ret <= 0);
|
|
|
|
if (ret < 0) {
|
|
|
|
end_write_bio(&epd, ret);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
ret = flush_write_bio(&epd);
|
2008-01-25 05:13:08 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-04-19 15:46:36 +08:00
|
|
|
int extent_readpages(struct address_space *mapping, struct list_head *pages,
|
|
|
|
unsigned nr_pages)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct bio *bio = NULL;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically single threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
unsigned long bio_flags = 0;
|
Btrfs: improve multi-thread buffer read
While testing with my buffer read fio jobs[1], I found that btrfs did not
perform well enough.
Here is a scenario in fio jobs:
We have 4 threads, "t1 t2 t3 t4", starting to buffer read a same file,
and all of them will race on add_to_page_cache_lru(), and if one thread
successfully puts its page into the page cache, it takes the responsibility
to read the page's data.
And what's more, reading a page needs a period of time to finish, in which
other threads can slide in and process the rest of the pages:
t1 t2 t3 t4
add Page1
read Page1 add Page2
| read Page2 add Page3
| | read Page3 add Page4
| | | read Page4
-----|------------|-----------|-----------|--------
v v v v
bio bio bio bio
Now we have four bios, each of which holds only one page since we need to
maintain consecutive pages in a bio. Thus, we can end up with far more bios
than we need.
Here we're going to
a) delay the real read-page section and
b) try to put more pages into page cache.
With that said, we can make each bio hold more pages and reduce the number
of bios we need.
Here are some numbers taken from the fio results:
READ: 745MB/s (w/o patch) -> 934MB/s (w/ patch), roughly +25%
[1]:
[global]
group_reporting
thread
numjobs=4
bs=32k
rw=read
ioengine=sync
directory=/mnt/btrfs/
[READ]
filename=foobar
size=2000M
invalidate=1
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
2012-07-21 11:43:09 +08:00
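The commit above is essentially a batching change: gather a pool of
consecutive pages first, then issue the read for the whole pool so one bio
can carry several pages instead of one. The sketch below is a self-contained
userspace illustration of that batching pattern only; submit_batch() and
POOL_SIZE are made-up stand-ins, not btrfs functions.

#include <stdio.h>

#define POOL_SIZE 16

/* Stands in for contiguous_readpages()/bio submission: one call per batch. */
static void submit_batch(const int *pages, int nr)
{
	printf("one bio for %d page(s): %d..%d\n", nr, pages[0], pages[nr - 1]);
}

int main(void)
{
	int pool[POOL_SIZE];
	int nr = 0;

	for (int page = 0; page < 40; page++) {
		pool[nr++] = page;		/* the "add to page cache" step */
		if (nr == POOL_SIZE) {		/* pool full: read it in one go */
			submit_batch(pool, nr);
			nr = 0;
		}
	}
	if (nr)					/* flush the final partial batch */
		submit_batch(pool, nr);
	return 0;
}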
|
|
|
struct page *pagepool[16];
|
2013-07-25 19:22:37 +08:00
|
|
|
struct extent_map *em_cached = NULL;
|
2018-04-19 15:46:36 +08:00
|
|
|
struct extent_io_tree *tree = &BTRFS_I(mapping->host)->io_tree;
|
2012-07-21 11:43:09 +08:00
|
|
|
int nr = 0;
|
2015-09-28 16:56:26 +08:00
|
|
|
u64 prev_em_start = (u64)-1;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2018-11-30 00:41:31 +08:00
|
|
|
while (!list_empty(pages)) {
|
2019-03-11 15:55:38 +08:00
|
|
|
u64 contig_end = 0;
|
|
|
|
|
2018-11-30 00:41:31 +08:00
|
|
|
for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) {
|
2019-01-04 07:29:02 +08:00
|
|
|
struct page *page = lru_to_page(pages);
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2018-11-30 00:41:31 +08:00
|
|
|
prefetchw(&page->flags);
|
|
|
|
list_del(&page->lru);
|
|
|
|
if (add_to_page_cache_lru(page, mapping, page->index,
|
|
|
|
readahead_gfp_mask(mapping))) {
|
|
|
|
put_page(page);
|
2019-03-11 15:55:38 +08:00
|
|
|
break;
|
2018-11-30 00:41:31 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
pagepool[nr++] = page;
|
2019-03-11 15:55:38 +08:00
|
|
|
contig_end = page_offset(page) + PAGE_SIZE - 1;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
2012-07-21 11:43:09 +08:00
|
|
|
|
2019-03-11 15:55:38 +08:00
|
|
|
if (nr) {
|
|
|
|
u64 contig_start = page_offset(pagepool[0]);
|
|
|
|
|
|
|
|
ASSERT(contig_start + nr * PAGE_SIZE - 1 == contig_end);
|
|
|
|
|
|
|
|
contiguous_readpages(tree, pagepool, nr, contig_start,
|
|
|
|
contig_end, &em_cached, &bio, &bio_flags,
|
|
|
|
&prev_em_start);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
2012-07-21 11:43:09 +08:00
|
|
|
|
2013-07-25 19:22:37 +08:00
|
|
|
if (em_cached)
|
|
|
|
free_extent_map(em_cached);
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
if (bio)
|
2016-06-06 03:31:51 +08:00
|
|
|
return submit_one_bio(bio, 0, bio_flags);
|
2008-01-25 05:13:08 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* basic invalidatepage code, this waits on any locked or writeback
|
|
|
|
* ranges corresponding to the page, and then deletes any extent state
|
|
|
|
* records from the tree
|
|
|
|
*/
|
|
|
|
int extent_invalidatepage(struct extent_io_tree *tree,
|
|
|
|
struct page *page, unsigned long offset)
|
|
|
|
{
|
2010-02-04 03:33:23 +08:00
|
|
|
struct extent_state *cached_state = NULL;
|
2012-12-21 17:17:45 +08:00
|
|
|
u64 start = page_offset(page);
|
2016-04-01 20:29:47 +08:00
|
|
|
u64 end = start + PAGE_SIZE - 1;
|
2008-01-25 05:13:08 +08:00
|
|
|
size_t blocksize = page->mapping->host->i_sb->s_blocksize;
|
|
|
|
|
2013-02-26 16:10:22 +08:00
|
|
|
start += ALIGN(offset, blocksize);
|
2008-01-25 05:13:08 +08:00
|
|
|
if (start > end)
|
|
|
|
return 0;
|
|
|
|
|
2015-12-03 21:30:40 +08:00
|
|
|
lock_extent_bits(tree, start, end, &cached_state);
|
2009-09-03 01:24:36 +08:00
|
|
|
wait_on_page_writeback(page);
|
2008-01-25 05:13:08 +08:00
|
|
|
clear_extent_bit(tree, start, end,
|
2009-10-09 01:34:05 +08:00
|
|
|
EXTENT_LOCKED | EXTENT_DIRTY | EXTENT_DELALLOC |
|
|
|
|
EXTENT_DO_ACCOUNTING,
|
2017-10-31 23:37:52 +08:00
|
|
|
1, 1, &cached_state);
|
2008-01-25 05:13:08 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-04-18 22:29:50 +08:00
|
|
|
/*
|
|
|
|
* a helper for releasepage, this tests for areas of the page that
|
|
|
|
* are locked or under IO and drops the related state bits if it is safe
|
|
|
|
* to drop the page.
|
|
|
|
*/
|
2018-04-19 15:46:35 +08:00
|
|
|
static int try_release_extent_state(struct extent_io_tree *tree,
|
2013-04-26 04:41:01 +08:00
|
|
|
struct page *page, gfp_t mask)
|
2008-04-18 22:29:50 +08:00
|
|
|
{
|
2012-12-21 17:17:45 +08:00
|
|
|
u64 start = page_offset(page);
|
2016-04-01 20:29:47 +08:00
|
|
|
u64 end = start + PAGE_SIZE - 1;
|
2008-04-18 22:29:50 +08:00
|
|
|
int ret = 1;
|
|
|
|
|
2019-03-14 21:28:31 +08:00
|
|
|
if (test_range_bit(tree, start, end, EXTENT_LOCKED, 0, NULL)) {
|
2008-04-18 22:29:50 +08:00
|
|
|
ret = 0;
|
2019-03-14 21:28:31 +08:00
|
|
|
} else {
|
2009-09-24 08:28:46 +08:00
|
|
|
/*
|
|
|
|
* at this point we can safely clear everything except the
|
|
|
|
* locked bit and the nodatasum bit
|
|
|
|
*/
|
2017-10-31 23:30:47 +08:00
|
|
|
ret = __clear_extent_bit(tree, start, end,
|
2009-09-24 08:28:46 +08:00
|
|
|
~(EXTENT_LOCKED | EXTENT_NODATASUM),
|
2017-10-31 23:30:47 +08:00
|
|
|
0, 0, NULL, mask, NULL);
|
2011-02-15 01:52:08 +08:00
|
|
|
|
|
|
|
/* if clear_extent_bit failed for enomem reasons,
|
|
|
|
* we can't allow the release to continue.
|
|
|
|
*/
|
|
|
|
if (ret < 0)
|
|
|
|
ret = 0;
|
|
|
|
else
|
|
|
|
ret = 1;
|
2008-04-18 22:29:50 +08:00
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
/*
|
|
|
|
* a helper for releasepage. As long as there are no locked extents
|
|
|
|
* in the range corresponding to the page, both state records and extent
|
|
|
|
* map records are removed
|
|
|
|
*/
|
2018-04-19 15:46:34 +08:00
|
|
|
int try_release_extent_mapping(struct page *page, gfp_t mask)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_map *em;
|
2012-12-21 17:17:45 +08:00
|
|
|
u64 start = page_offset(page);
|
2016-04-01 20:29:47 +08:00
|
|
|
u64 end = start + PAGE_SIZE - 1;
|
Btrfs: fix file data corruption after cloning a range and fsync
When we clone a range into a file we can end up dropping existing
extent maps (or trimming them) and replacing them with new ones if the
range to be cloned overlaps with a range in the destination inode.
When that happens we add the new extent maps to the list of modified
extents in the inode's extent map tree, so that a "fast" fsync (the flag
BTRFS_INODE_NEEDS_FULL_SYNC not set in the inode) will see the extent maps
and log corresponding extent items. However, at the end of range cloning
operation we do truncate all the pages in the affected range (in order to
ensure future reads will not get stale data). Sometimes this truncation
will release the corresponding extent maps besides the pages from the page
cache. If this happens, then a "fast" fsync operation will miss logging
some extent items, because it relies exclusively on the extent maps being
present in the inode's extent tree, leading to data loss/corruption if
the fsync ends up using the same transaction used by the clone operation
(that transaction was not committed in the meanwhile). An extent map is
released through the callback btrfs_invalidatepage(), which gets called by
truncate_inode_pages_range(), and it calls __btrfs_releasepage(). The
latter ends up calling try_release_extent_mapping() which will release the
extent map if some conditions are met, like the file size being greater
than 16Mb, gfp flags allow blocking and the range not being locked (which
is the case during the clone operation) nor being the extent map flagged
as pinned (also the case for cloning).
The following example, turned into a test for fstests, reproduces the
issue:
$ mkfs.btrfs -f /dev/sdb
$ mount /dev/sdb /mnt
$ xfs_io -f -c "pwrite -S 0x18 9000K 6908K" /mnt/foo
$ xfs_io -f -c "pwrite -S 0x20 2572K 156K" /mnt/bar
$ xfs_io -c "fsync" /mnt/bar
# reflink destination offset corresponds to the size of file bar,
# 2728Kb minus 4Kb.
$ xfs_io -c ""reflink ${SCRATCH_MNT}/foo 0 2724K 15908K" /mnt/bar
$ xfs_io -c "fsync" /mnt/bar
$ md5sum /mnt/bar
95a95813a8c2abc9aa75a6c2914a077e /mnt/bar
<power fail>
$ mount /dev/sdb /mnt
$ md5sum /mnt/bar
207fd8d0b161be8a84b945f0df8d5f8d /mnt/bar
# digest should be 95a95813a8c2abc9aa75a6c2914a077e like before the
# power failure
In the above example, the destination offset of the clone operation
corresponds to the size of the "bar" file minus 4Kb. So during the clone
operation, the extent map covering the range from 2572Kb to 2728Kb gets
trimmed so that it ends at offset 2724Kb, and a new extent map covering
the range from 2724Kb to 11724Kb is created. So at the end of the clone
operation when we ask to truncate the pages in the range from 2724Kb to
2724Kb + 15908Kb, the page invalidation callback ends up removing the new
extent map (through try_release_extent_mapping()) when the page at offset
2724Kb is passed to that callback.
Fix this by setting the bit BTRFS_INODE_NEEDS_FULL_SYNC whenever an extent
map is removed at try_release_extent_mapping(), forcing the next fsync to
search for modified extents in the fs/subvolume tree instead of relying on
the presence of extent maps in memory. This way we can continue doing a
"fast" fsync if the destination range of a clone operation does not
overlap with an existing range or if any of the criteria necessary to
remove an extent map at try_release_extent_mapping() is not met (file
size not bigger than 16Mb or gfp flags do not allow blocking).
CC: stable@vger.kernel.org # 3.16+
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2018-07-12 08:36:43 +08:00
|
|
|
struct btrfs_inode *btrfs_inode = BTRFS_I(page->mapping->host);
|
|
|
|
struct extent_io_tree *tree = &btrfs_inode->io_tree;
|
|
|
|
struct extent_map_tree *map = &btrfs_inode->extent_tree;
|
2008-04-18 22:29:50 +08:00
|
|
|
|
2015-11-07 08:28:21 +08:00
|
|
|
if (gfpflags_allow_blocking(mask) &&
|
2015-12-15 00:42:10 +08:00
|
|
|
page->mapping->host->i_size > SZ_16M) {
|
2008-02-15 23:40:50 +08:00
|
|
|
u64 len;
|
2008-01-29 22:59:12 +08:00
|
|
|
while (start <= end) {
|
2008-02-15 23:40:50 +08:00
|
|
|
len = end - start + 1;
|
2009-09-03 04:24:52 +08:00
|
|
|
write_lock(&map->lock);
|
2008-02-15 23:40:50 +08:00
|
|
|
em = lookup_extent_mapping(map, start, len);
|
2012-02-16 15:23:58 +08:00
|
|
|
if (!em) {
|
2009-09-03 04:24:52 +08:00
|
|
|
write_unlock(&map->lock);
|
2008-01-29 22:59:12 +08:00
|
|
|
break;
|
|
|
|
}
|
2008-07-19 00:01:11 +08:00
|
|
|
if (test_bit(EXTENT_FLAG_PINNED, &em->flags) ||
|
|
|
|
em->start != start) {
|
2009-09-03 04:24:52 +08:00
|
|
|
write_unlock(&map->lock);
|
2008-01-29 22:59:12 +08:00
|
|
|
free_extent_map(em);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
if (!test_range_bit(tree, em->start,
|
|
|
|
extent_map_end(em) - 1,
|
2019-03-14 21:28:30 +08:00
|
|
|
EXTENT_LOCKED, 0, NULL)) {
|
2018-07-12 08:36:43 +08:00
|
|
|
set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
|
|
|
|
&btrfs_inode->runtime_flags);
|
2008-01-29 22:59:12 +08:00
|
|
|
remove_extent_mapping(map, em);
|
|
|
|
/* once for the rb tree */
|
|
|
|
free_extent_map(em);
|
|
|
|
}
|
|
|
|
start = extent_map_end(em);
|
2009-09-03 04:24:52 +08:00
|
|
|
write_unlock(&map->lock);
|
2008-01-29 22:59:12 +08:00
|
|
|
|
|
|
|
/* once for us */
|
2008-01-25 05:13:08 +08:00
|
|
|
free_extent_map(em);
|
|
|
|
}
|
|
|
|
}
|
2018-04-19 15:46:35 +08:00
|
|
|
return try_release_extent_state(tree, page, mask);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2011-02-24 05:23:20 +08:00
|
|
|
/*
|
|
|
|
* helper function for fiemap, which doesn't want to see any holes.
|
|
|
|
* This maps until we find something past 'last'
|
|
|
|
*/
|
|
|
|
static struct extent_map *get_extent_skip_holes(struct inode *inode,
|
2017-06-23 10:09:57 +08:00
|
|
|
u64 offset, u64 last)
|
2011-02-24 05:23:20 +08:00
|
|
|
{
|
2016-06-15 21:22:56 +08:00
|
|
|
u64 sectorsize = btrfs_inode_sectorsize(inode);
|
2011-02-24 05:23:20 +08:00
|
|
|
struct extent_map *em;
|
|
|
|
u64 len;
|
|
|
|
|
|
|
|
if (offset >= last)
|
|
|
|
return NULL;
|
|
|
|
|
2013-10-31 13:03:04 +08:00
|
|
|
while (1) {
|
2011-02-24 05:23:20 +08:00
|
|
|
len = last - offset;
|
|
|
|
if (len == 0)
|
|
|
|
break;
|
2013-02-26 16:10:22 +08:00
|
|
|
len = ALIGN(len, sectorsize);
|
2018-12-12 15:42:32 +08:00
|
|
|
em = btrfs_get_extent_fiemap(BTRFS_I(inode), offset, len);
|
2011-04-20 00:00:01 +08:00
|
|
|
if (IS_ERR_OR_NULL(em))
|
2011-02-24 05:23:20 +08:00
|
|
|
return em;
|
|
|
|
|
|
|
|
/* if this isn't a hole return it */
|
2017-11-23 16:51:43 +08:00
|
|
|
if (em->block_start != EXTENT_MAP_HOLE)
|
2011-02-24 05:23:20 +08:00
|
|
|
return em;
|
|
|
|
|
|
|
|
/* this is a hole, advance to the next extent */
|
|
|
|
offset = extent_map_end(em);
|
|
|
|
free_extent_map(em);
|
|
|
|
if (offset >= last)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
btrfs: fiemap: Cache and merge fiemap extent before submit it to user
[BUG]
Cycle mounting btrfs can cause fiemap to return different results.
Like:
# mount /dev/vdb5 /mnt/btrfs
# dd if=/dev/zero bs=16K count=4 oflag=dsync of=/mnt/btrfs/file
# xfs_io -c "fiemap -v" /mnt/btrfs/file
/mnt/test/file:
EXT: FILE-OFFSET BLOCK-RANGE TOTAL FLAGS
0: [0..127]: 25088..25215 128 0x1
# umount /mnt/btrfs
# mount /dev/vdb5 /mnt/btrfs
# xfs_io -c "fiemap -v" /mnt/btrfs/file
/mnt/test/file:
EXT: FILE-OFFSET BLOCK-RANGE TOTAL FLAGS
0: [0..31]: 25088..25119 32 0x0
1: [32..63]: 25120..25151 32 0x0
2: [64..95]: 25152..25183 32 0x0
3: [96..127]: 25184..25215 32 0x1
But after the above fiemap call, we get the correct merged result if we
call fiemap again.
# xfs_io -c "fiemap -v" /mnt/btrfs/file
/mnt/test/file:
EXT: FILE-OFFSET BLOCK-RANGE TOTAL FLAGS
0: [0..127]: 25088..25215 128 0x1
[REASON]
Btrfs will try to merge extent maps when inserting a new extent map.
btrfs_fiemap(start=0 len=(u64)-1)
|- extent_fiemap(start=0 len=(u64)-1)
|- get_extent_skip_holes(start=0 len=64k)
| |- btrfs_get_extent_fiemap(start=0 len=64k)
| |- btrfs_get_extent(start=0 len=64k)
| | Found on-disk (ino, EXTENT_DATA, 0)
| |- add_extent_mapping()
| |- Return (em->start=0, len=16k)
|
|- fiemap_fill_next_extent(logic=0 phys=X len=16k)
|
|- get_extent_skip_holes(start=0 len=64k)
| |- btrfs_get_extent_fiemap(start=0 len=64k)
| |- btrfs_get_extent(start=16k len=48k)
| | Found on-disk (ino, EXTENT_DATA, 16k)
| |- add_extent_mapping()
| | |- try_merge_map()
| | Merge with previous em start=0 len=16k
| | resulting em start=0 len=32k
| |- Return (em->start=0, len=32K) << Merged result
|- Strip off the unrelated range (0~16K) of the returned em
|- fiemap_fill_next_extent(logic=16K phys=X+16K len=16K)
^^^ Causing split fiemap extent.
And since the em was already merged in add_extent_mapping(), the next
fiemap() call will return the merged result.
[FIX]
Here we introduce a new structure, fiemap_cache, which records the
previous fiemap extent.
We always try to merge the current fiemap extent with the cached one
before calling fiemap_fill_next_extent().
Only when we fail to merge the current fiemap extent with the cached one
do we call fiemap_fill_next_extent() to submit the cached one.
With this method, we can merge all fiemap extents.
It could also be done in fs/ioctl.c, however the problem is that if
fieinfo->fi_extents_max == 0 we have no space to cache the previous
fiemap extent.
So I chose to do the merging in btrfs.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2017-04-07 10:43:15 +08:00
|
|
|
/*
|
|
|
|
* To cache previous fiemap extent
|
|
|
|
*
|
|
|
|
* Will be used for merging fiemap extent
|
|
|
|
*/
|
|
|
|
struct fiemap_cache {
|
|
|
|
u64 offset;
|
|
|
|
u64 phys;
|
|
|
|
u64 len;
|
|
|
|
u32 flags;
|
|
|
|
bool cached;
|
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Helper to submit fiemap extent.
|
|
|
|
*
|
|
|
|
* Will try to merge current fiemap extent specified by @offset, @phys,
|
|
|
|
* @len and @flags with cached one.
|
|
|
|
* And only when we fail to merge, cached one will be submitted as
|
|
|
|
* fiemap extent.
|
|
|
|
*
|
|
|
|
* Return value is the same as fiemap_fill_next_extent().
|
|
|
|
*/
|
|
|
|
static int emit_fiemap_extent(struct fiemap_extent_info *fieinfo,
|
|
|
|
struct fiemap_cache *cache,
|
|
|
|
u64 offset, u64 phys, u64 len, u32 flags)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (!cache->cached)
|
|
|
|
goto assign;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Sanity check, extent_fiemap() should have ensured that new
|
2018-11-28 19:05:13 +08:00
|
|
|
* fiemap extent won't overlap with cached one.
|
2017-04-07 10:43:15 +08:00
|
|
|
* Not recoverable.
|
|
|
|
*
|
|
|
|
* NOTE: Physical address can overlap, due to compression
|
|
|
|
*/
|
|
|
|
if (cache->offset + cache->len > offset) {
|
|
|
|
WARN_ON(1);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Only merges fiemap extents if
|
|
|
|
* 1) Their logical addresses are continuous
|
|
|
|
*
|
|
|
|
* 2) Their physical addresses are continuous
|
|
|
|
* So truly compressed (physical size smaller than logical size)
|
|
|
|
* extents won't get merged with each other
|
|
|
|
*
|
|
|
|
* 3) Share same flags except FIEMAP_EXTENT_LAST
|
|
|
|
* So regular extent won't get merged with prealloc extent
|
|
|
|
*/
|
|
|
|
if (cache->offset + cache->len == offset &&
|
|
|
|
cache->phys + cache->len == phys &&
|
|
|
|
(cache->flags & ~FIEMAP_EXTENT_LAST) ==
|
|
|
|
(flags & ~FIEMAP_EXTENT_LAST)) {
|
|
|
|
cache->len += len;
|
|
|
|
cache->flags |= flags;
|
|
|
|
goto try_submit_last;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Not mergeable, need to submit cached one */
|
|
|
|
ret = fiemap_fill_next_extent(fieinfo, cache->offset, cache->phys,
|
|
|
|
cache->len, cache->flags);
|
|
|
|
cache->cached = false;
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
assign:
|
|
|
|
cache->cached = true;
|
|
|
|
cache->offset = offset;
|
|
|
|
cache->phys = phys;
|
|
|
|
cache->len = len;
|
|
|
|
cache->flags = flags;
|
|
|
|
try_submit_last:
|
|
|
|
if (cache->flags & FIEMAP_EXTENT_LAST) {
|
|
|
|
ret = fiemap_fill_next_extent(fieinfo, cache->offset,
|
|
|
|
cache->phys, cache->len, cache->flags);
|
|
|
|
cache->cached = false;
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2017-06-22 10:01:21 +08:00
|
|
|
* Emit last fiemap cache
|
2017-04-07 10:43:15 +08:00
|
|
|
*
|
2017-06-22 10:01:21 +08:00
|
|
|
* The last fiemap cache may still be cached in the following case:
|
|
|
|
* 0 4k 8k
|
|
|
|
* |<- Fiemap range ->|
|
|
|
|
* |<------------ First extent ----------->|
|
|
|
|
*
|
|
|
|
* In this case, the first extent range will be cached but not emitted.
|
|
|
|
* So we must emit it before ending extent_fiemap().
|
2017-04-07 10:43:15 +08:00
|
|
|
*/
|
2019-03-20 18:29:46 +08:00
|
|
|
static int emit_last_fiemap_cache(struct fiemap_extent_info *fieinfo,
|
2017-06-22 10:01:21 +08:00
|
|
|
struct fiemap_cache *cache)
|
2017-04-07 10:43:15 +08:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (!cache->cached)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
ret = fiemap_fill_next_extent(fieinfo, cache->offset, cache->phys,
|
|
|
|
cache->len, cache->flags);
|
|
|
|
cache->cached = false;
|
|
|
|
if (ret > 0)
|
|
|
|
ret = 0;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2009-01-22 03:39:14 +08:00
|
|
|
int extent_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
|
2017-06-23 10:09:57 +08:00
|
|
|
__u64 start, __u64 len)
|
2009-01-22 03:39:14 +08:00
|
|
|
{
|
2010-11-24 03:36:57 +08:00
|
|
|
int ret = 0;
|
2009-01-22 03:39:14 +08:00
|
|
|
u64 off = start;
|
|
|
|
u64 max = start + len;
|
|
|
|
u32 flags = 0;
|
2010-11-24 03:36:57 +08:00
|
|
|
u32 found_type;
|
|
|
|
u64 last;
|
2011-02-24 05:23:20 +08:00
|
|
|
u64 last_for_get_extent = 0;
|
2009-01-22 03:39:14 +08:00
|
|
|
u64 disko = 0;
|
2011-02-24 05:23:20 +08:00
|
|
|
u64 isize = i_size_read(inode);
|
2010-11-24 03:36:57 +08:00
|
|
|
struct btrfs_key found_key;
|
2009-01-22 03:39:14 +08:00
|
|
|
struct extent_map *em = NULL;
|
2010-02-04 03:33:23 +08:00
|
|
|
struct extent_state *cached_state = NULL;
|
2010-11-24 03:36:57 +08:00
|
|
|
struct btrfs_path *path;
|
2014-09-11 04:20:45 +08:00
|
|
|
struct btrfs_root *root = BTRFS_I(inode)->root;
|
2017-04-07 10:43:15 +08:00
|
|
|
struct fiemap_cache cache = { 0 };
|
2019-05-15 21:31:04 +08:00
|
|
|
struct ulist *roots;
|
|
|
|
struct ulist *tmp_ulist;
|
2009-01-22 03:39:14 +08:00
|
|
|
int end = 0;
|
2011-02-24 05:23:20 +08:00
|
|
|
u64 em_start = 0;
|
|
|
|
u64 em_len = 0;
|
|
|
|
u64 em_end = 0;
|
2009-01-22 03:39:14 +08:00
|
|
|
|
|
|
|
if (len == 0)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2010-11-24 03:36:57 +08:00
|
|
|
path = btrfs_alloc_path();
|
|
|
|
if (!path)
|
|
|
|
return -ENOMEM;
|
|
|
|
path->leave_spinning = 1;
|
|
|
|
|
2019-05-15 21:31:04 +08:00
|
|
|
roots = ulist_alloc(GFP_KERNEL);
|
|
|
|
tmp_ulist = ulist_alloc(GFP_KERNEL);
|
|
|
|
if (!roots || !tmp_ulist) {
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto out_free_ulist;
|
|
|
|
}
|
|
|
|
|
2016-06-15 21:22:56 +08:00
|
|
|
start = round_down(start, btrfs_inode_sectorsize(inode));
|
|
|
|
len = round_up(max, btrfs_inode_sectorsize(inode)) - start;
|
2011-11-18 00:34:31 +08:00
|
|
|
|
2011-02-24 05:23:20 +08:00
|
|
|
/*
|
|
|
|
* lookup the last file extent. We're not using i_size here
|
|
|
|
* because there might be preallocation past i_size
|
|
|
|
*/
|
2017-01-20 21:54:07 +08:00
|
|
|
ret = btrfs_lookup_file_extent(NULL, root, path,
|
|
|
|
btrfs_ino(BTRFS_I(inode)), -1, 0);
|
2010-11-24 03:36:57 +08:00
|
|
|
if (ret < 0) {
|
2019-05-15 21:31:04 +08:00
|
|
|
goto out_free_ulist;
|
2016-05-18 08:21:48 +08:00
|
|
|
} else {
|
|
|
|
WARN_ON(!ret);
|
|
|
|
if (ret == 1)
|
|
|
|
ret = 0;
|
2010-11-24 03:36:57 +08:00
|
|
|
}
|
2016-05-18 08:21:48 +08:00
|
|
|
|
2010-11-24 03:36:57 +08:00
|
|
|
path->slots[0]--;
|
|
|
|
btrfs_item_key_to_cpu(path->nodes[0], &found_key, path->slots[0]);
|
2014-06-05 00:41:45 +08:00
|
|
|
found_type = found_key.type;
|
2010-11-24 03:36:57 +08:00
|
|
|
|
2011-02-24 05:23:20 +08:00
|
|
|
/* No extents, but there might be delalloc bits */
|
2017-01-11 02:35:31 +08:00
|
|
|
if (found_key.objectid != btrfs_ino(BTRFS_I(inode)) ||
|
2010-11-24 03:36:57 +08:00
|
|
|
found_type != BTRFS_EXTENT_DATA_KEY) {
|
2011-02-24 05:23:20 +08:00
|
|
|
/* have to trust i_size as the end */
|
|
|
|
last = (u64)-1;
|
|
|
|
last_for_get_extent = isize;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* remember the start of the last extent. There are a
|
|
|
|
* bunch of different factors that go into the length of the
|
|
|
|
* extent, so it's much less complex to remember where it started
|
|
|
|
*/
|
|
|
|
last = found_key.offset;
|
|
|
|
last_for_get_extent = last + 1;
|
2010-11-24 03:36:57 +08:00
|
|
|
}
|
2013-09-22 12:54:23 +08:00
|
|
|
btrfs_release_path(path);
|
2010-11-24 03:36:57 +08:00
|
|
|
|
2011-02-24 05:23:20 +08:00
|
|
|
/*
|
|
|
|
* we might have some extents allocated but more delalloc past those
|
|
|
|
* extents. so, we trust isize unless the start of the last extent is
|
|
|
|
* beyond isize
|
|
|
|
*/
|
|
|
|
if (last < isize) {
|
|
|
|
last = (u64)-1;
|
|
|
|
last_for_get_extent = isize;
|
|
|
|
}
|
|
|
|
|
2015-12-03 21:30:40 +08:00
|
|
|
lock_extent_bits(&BTRFS_I(inode)->io_tree, start, start + len - 1,
|
2012-03-01 21:57:19 +08:00
|
|
|
&cached_state);
|
2011-02-24 05:23:20 +08:00
|
|
|
|
2017-06-23 10:09:57 +08:00
|
|
|
em = get_extent_skip_holes(inode, start, last_for_get_extent);
|
2009-01-22 03:39:14 +08:00
|
|
|
if (!em)
|
|
|
|
goto out;
|
|
|
|
if (IS_ERR(em)) {
|
|
|
|
ret = PTR_ERR(em);
|
|
|
|
goto out;
|
|
|
|
}
|
2010-11-24 03:36:57 +08:00
|
|
|
|
2009-01-22 03:39:14 +08:00
|
|
|
while (!end) {
|
2013-07-06 01:52:51 +08:00
|
|
|
u64 offset_in_extent = 0;
|
2011-03-09 00:54:40 +08:00
|
|
|
|
|
|
|
/* break if the extent we found is outside the range */
|
|
|
|
if (em->start >= max || extent_map_end(em) < off)
|
|
|
|
break;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* get_extent may return an extent that starts before our
|
|
|
|
* requested range. We have to make sure the ranges
|
|
|
|
* we return to fiemap always move forward and don't
|
|
|
|
* overlap, so adjust the offsets here
|
|
|
|
*/
|
|
|
|
em_start = max(em->start, off);
|
2009-01-22 03:39:14 +08:00
|
|
|
|
2011-03-09 00:54:40 +08:00
|
|
|
/*
|
|
|
|
* record the offset from the start of the extent
|
2013-07-06 01:52:51 +08:00
|
|
|
* for adjusting the disk offset below. Only do this if the
|
|
|
|
* extent isn't compressed since our in ram offset may be past
|
|
|
|
* what we have actually allocated on disk.
|
2011-03-09 00:54:40 +08:00
|
|
|
*/
|
2013-07-06 01:52:51 +08:00
|
|
|
if (!test_bit(EXTENT_FLAG_COMPRESSED, &em->flags))
|
|
|
|
offset_in_extent = em_start - em->start;
|
2011-02-24 05:23:20 +08:00
|
|
|
em_end = extent_map_end(em);
|
2011-03-09 00:54:40 +08:00
|
|
|
em_len = em_end - em_start;
|
2009-01-22 03:39:14 +08:00
|
|
|
flags = 0;
|
2018-06-20 17:02:30 +08:00
|
|
|
if (em->block_start < EXTENT_MAP_LAST_BYTE)
|
|
|
|
disko = em->block_start + offset_in_extent;
|
|
|
|
else
|
|
|
|
disko = 0;
|
2009-01-22 03:39:14 +08:00
|
|
|
|
2011-03-09 00:54:40 +08:00
|
|
|
/*
|
|
|
|
* bump off for our next call to get_extent
|
|
|
|
*/
|
|
|
|
off = extent_map_end(em);
|
|
|
|
if (off >= max)
|
|
|
|
end = 1;
|
|
|
|
|
2009-04-03 22:33:45 +08:00
|
|
|
if (em->block_start == EXTENT_MAP_LAST_BYTE) {
|
2009-01-22 03:39:14 +08:00
|
|
|
end = 1;
|
|
|
|
flags |= FIEMAP_EXTENT_LAST;
|
2009-04-03 22:33:45 +08:00
|
|
|
} else if (em->block_start == EXTENT_MAP_INLINE) {
|
2009-01-22 03:39:14 +08:00
|
|
|
flags |= (FIEMAP_EXTENT_DATA_INLINE |
|
|
|
|
FIEMAP_EXTENT_NOT_ALIGNED);
|
2009-04-03 22:33:45 +08:00
|
|
|
} else if (em->block_start == EXTENT_MAP_DELALLOC) {
|
2009-01-22 03:39:14 +08:00
|
|
|
flags |= (FIEMAP_EXTENT_DELALLOC |
|
|
|
|
FIEMAP_EXTENT_UNKNOWN);
|
2014-09-11 04:20:45 +08:00
|
|
|
} else if (fieinfo->fi_extents_max) {
|
|
|
|
u64 bytenr = em->block_start -
|
|
|
|
(em->start - em->orig_start);
|
2013-09-22 12:54:23 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* As btrfs supports shared space, this information
|
|
|
|
* can be exported to userspace tools via
|
2014-09-11 04:20:45 +08:00
|
|
|
* flag FIEMAP_EXTENT_SHARED. If fi_extents_max == 0
|
|
|
|
* then we're just getting a count and we can skip the
|
|
|
|
* lookup stuff.
|
2013-09-22 12:54:23 +08:00
|
|
|
*/
|
2017-06-29 11:56:58 +08:00
|
|
|
ret = btrfs_check_shared(root,
|
|
|
|
btrfs_ino(BTRFS_I(inode)),
|
2019-05-15 21:31:04 +08:00
|
|
|
bytenr, roots, tmp_ulist);
|
2014-09-11 04:20:45 +08:00
|
|
|
if (ret < 0)
|
2013-09-22 12:54:23 +08:00
|
|
|
goto out_free;
|
2014-09-11 04:20:45 +08:00
|
|
|
if (ret)
|
2013-09-22 12:54:23 +08:00
|
|
|
flags |= FIEMAP_EXTENT_SHARED;
|
2014-09-11 04:20:45 +08:00
|
|
|
ret = 0;
|
2009-01-22 03:39:14 +08:00
|
|
|
}
|
|
|
|
if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags))
|
|
|
|
flags |= FIEMAP_EXTENT_ENCODED;
|
2015-05-19 22:44:04 +08:00
|
|
|
if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
|
|
|
|
flags |= FIEMAP_EXTENT_UNWRITTEN;
|
2009-01-22 03:39:14 +08:00
|
|
|
|
|
|
|
free_extent_map(em);
|
|
|
|
em = NULL;
|
2011-02-24 05:23:20 +08:00
|
|
|
if ((em_start >= last) || em_len == (u64)-1 ||
|
|
|
|
(last == (u64)-1 && isize <= em_end)) {
|
2009-01-22 03:39:14 +08:00
|
|
|
flags |= FIEMAP_EXTENT_LAST;
|
|
|
|
end = 1;
|
|
|
|
}
|
|
|
|
|
2011-02-24 05:23:20 +08:00
|
|
|
/* now scan forward to see if this is really the last extent. */
|
2017-06-23 10:09:57 +08:00
|
|
|
em = get_extent_skip_holes(inode, off, last_for_get_extent);
|
2011-02-24 05:23:20 +08:00
|
|
|
if (IS_ERR(em)) {
|
|
|
|
ret = PTR_ERR(em);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
if (!em) {
|
2010-11-24 03:36:57 +08:00
|
|
|
flags |= FIEMAP_EXTENT_LAST;
|
|
|
|
end = 1;
|
|
|
|
}
|
2017-04-07 10:43:15 +08:00
|
|
|
ret = emit_fiemap_extent(fieinfo, &cache, em_start, disko,
|
|
|
|
em_len, flags);
|
2015-03-25 06:12:56 +08:00
|
|
|
if (ret) {
|
|
|
|
if (ret == 1)
|
|
|
|
ret = 0;
|
2011-02-24 05:23:20 +08:00
|
|
|
goto out_free;
|
2015-03-25 06:12:56 +08:00
|
|
|
}
|
2009-01-22 03:39:14 +08:00
|
|
|
}
|
|
|
|
out_free:
|
2017-04-07 10:43:15 +08:00
|
|
|
if (!ret)
|
2019-03-20 18:29:46 +08:00
|
|
|
ret = emit_last_fiemap_cache(fieinfo, &cache);
|
2009-01-22 03:39:14 +08:00
|
|
|
free_extent_map(em);
|
|
|
|
out:
|
2013-05-02 00:23:41 +08:00
|
|
|
unlock_extent_cached(&BTRFS_I(inode)->io_tree, start, start + len - 1,
|
2017-12-13 04:43:52 +08:00
|
|
|
&cached_state);
|
2019-05-15 21:31:04 +08:00
|
|
|
|
|
|
|
out_free_ulist:
|
2019-07-05 15:26:24 +08:00
|
|
|
btrfs_free_path(path);
|
2019-05-15 21:31:04 +08:00
|
|
|
ulist_free(roots);
|
|
|
|
ulist_free(tmp_ulist);
|
2009-01-22 03:39:14 +08:00
|
|
|
return ret;
|
|
|
|
}
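As an aside, the merged extents produced by this path are what userspace observes through the FIEMAP ioctl. A minimal, hypothetical userspace sketch using only standard UAPI headers (not part of this file):

	/* fiemap_dump.c: print the extents of a file via FS_IOC_FIEMAP. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>
	#include <linux/fiemap.h>

	int main(int argc, char **argv)
	{
		struct fiemap *fm;
		unsigned int i;
		int fd;

		if (argc < 2)
			return 1;
		fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Room for up to 32 extents in a single call. */
		fm = calloc(1, sizeof(*fm) + 32 * sizeof(struct fiemap_extent));
		fm->fm_start = 0;
		fm->fm_length = ~0ULL;		/* whole file */
		fm->fm_extent_count = 32;
		if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
			perror("fiemap");
			return 1;
		}
		for (i = 0; i < fm->fm_mapped_extents; i++)
			printf("logical=%llu physical=%llu len=%llu flags=0x%x\n",
			       (unsigned long long)fm->fm_extents[i].fe_logical,
			       (unsigned long long)fm->fm_extents[i].fe_physical,
			       (unsigned long long)fm->fm_extents[i].fe_length,
			       fm->fm_extents[i].fe_flags);
		free(fm);
		close(fd);
		return 0;
	}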
|
|
|
|
|
2010-08-07 01:21:20 +08:00
|
|
|
static void __free_extent_buffer(struct extent_buffer *eb)
|
|
|
|
{
|
2013-04-23 00:12:31 +08:00
|
|
|
btrfs_leak_debug_del(&eb->leak_list);
|
2010-08-07 01:21:20 +08:00
|
|
|
kmem_cache_free(extent_buffer_cache, eb);
|
|
|
|
}
|
|
|
|
|
2014-03-29 05:07:27 +08:00
|
|
|
int extent_buffer_under_io(struct extent_buffer *eb)
|
2013-08-08 02:54:37 +08:00
|
|
|
{
|
|
|
|
return (atomic_read(&eb->io_pages) ||
|
|
|
|
test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags) ||
|
|
|
|
test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2018-07-19 23:24:32 +08:00
|
|
|
* Release all pages attached to the extent buffer.
|
2013-08-08 02:54:37 +08:00
|
|
|
*/
|
2018-07-19 23:24:32 +08:00
|
|
|
static void btrfs_release_extent_buffer_pages(struct extent_buffer *eb)
|
2013-08-08 02:54:37 +08:00
|
|
|
{
|
2018-06-27 21:38:22 +08:00
|
|
|
int i;
|
|
|
|
int num_pages;
|
2018-06-27 21:38:24 +08:00
|
|
|
int mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
|
2013-08-08 02:54:37 +08:00
|
|
|
|
|
|
|
BUG_ON(extent_buffer_under_io(eb));
|
|
|
|
|
2018-06-27 21:38:22 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
|
|
|
for (i = 0; i < num_pages; i++) {
|
|
|
|
struct page *page = eb->pages[i];
|
2013-08-08 02:54:37 +08:00
|
|
|
|
2015-02-09 17:31:45 +08:00
|
|
|
if (!page)
|
|
|
|
continue;
|
|
|
|
if (mapped)
|
2013-08-08 02:54:37 +08:00
|
|
|
spin_lock(&page->mapping->private_lock);
|
2015-02-09 17:31:45 +08:00
|
|
|
/*
|
|
|
|
* We do this since we'll remove the pages after we've
|
|
|
|
* removed the eb from the radix tree, so we could race
|
|
|
|
* and have this page now attached to the new eb. So
|
|
|
|
* only clear page_private if it's still connected to
|
|
|
|
* this eb.
|
|
|
|
*/
|
|
|
|
if (PagePrivate(page) &&
|
|
|
|
page->private == (unsigned long)eb) {
|
|
|
|
BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
|
|
|
|
BUG_ON(PageDirty(page));
|
|
|
|
BUG_ON(PageWriteback(page));
|
2013-08-08 02:54:37 +08:00
|
|
|
/*
|
2015-02-09 17:31:45 +08:00
|
|
|
* We need to make sure we haven't been attached
|
|
|
|
* to a new eb.
|
2013-08-08 02:54:37 +08:00
|
|
|
*/
|
2015-02-09 17:31:45 +08:00
|
|
|
ClearPagePrivate(page);
|
|
|
|
set_page_private(page, 0);
|
|
|
|
/* One for the page private */
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CACHE_ALIGN definition: we are going to drop it later.
There are a few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
put_page(page);
|
2013-08-08 02:54:37 +08:00
|
|
|
}
|
2015-02-09 17:31:45 +08:00
|
|
|
|
|
|
|
if (mapped)
|
|
|
|
spin_unlock(&page->mapping->private_lock);
|
|
|
|
|
2016-05-20 09:18:45 +08:00
|
|
|
/* One for when we allocated the page */
|
2016-04-01 20:29:47 +08:00
|
|
|
put_page(page);
|
2018-06-27 21:38:22 +08:00
|
|
|
}
|
2013-08-08 02:54:37 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Helper for releasing the extent buffer.
|
|
|
|
*/
|
|
|
|
static inline void btrfs_release_extent_buffer(struct extent_buffer *eb)
|
|
|
|
{
|
2018-07-19 23:24:32 +08:00
|
|
|
btrfs_release_extent_buffer_pages(eb);
|
2013-08-08 02:54:37 +08:00
|
|
|
__free_extent_buffer(eb);
|
|
|
|
}
|
|
|
|
|
2013-12-17 02:24:27 +08:00
|
|
|
static struct extent_buffer *
|
|
|
|
__alloc_extent_buffer(struct btrfs_fs_info *fs_info, u64 start,
|
2014-06-15 08:55:29 +08:00
|
|
|
unsigned long len)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
struct extent_buffer *eb = NULL;
|
|
|
|
|
2015-08-19 20:17:40 +08:00
|
|
|
eb = kmem_cache_zalloc(extent_buffer_cache, GFP_NOFS|__GFP_NOFAIL);
|
2008-01-25 05:13:08 +08:00
|
|
|
eb->start = start;
|
|
|
|
eb->len = len;
|
2013-12-17 02:24:27 +08:00
|
|
|
eb->fs_info = fs_info;
|
2012-05-16 23:00:02 +08:00
|
|
|
eb->bflags = 0;
|
2011-07-17 03:23:14 +08:00
|
|
|
rwlock_init(&eb->lock);
|
|
|
|
atomic_set(&eb->blocking_readers, 0);
|
2019-05-02 22:47:23 +08:00
|
|
|
eb->blocking_writers = 0;
|
2018-08-24 22:31:17 +08:00
|
|
|
eb->lock_nested = false;
|
2011-07-17 03:23:14 +08:00
|
|
|
init_waitqueue_head(&eb->write_lock_wq);
|
|
|
|
init_waitqueue_head(&eb->read_lock_wq);
|
Btrfs: Change btree locking to use explicit blocking points
Most of the btrfs metadata operations can be protected by a spinlock,
but some operations still need to schedule.
So far, btrfs has been using a mutex along with a trylock loop,
most of the time it is able to avoid going for the full mutex, so
the trylock loop is a big performance gain.
This commit is step one for getting rid of the blocking locks entirely.
btrfs_tree_lock takes a spinlock, and the code explicitly switches
to a blocking lock when it starts an operation that can schedule.
We'll be able to get rid of the blocking locks in smaller pieces over time.
Tracing allows us to find the most common cause of blocking, so we
can start with the hot spots first.
The basic idea is:
btrfs_tree_lock() returns with the spin lock held
btrfs_set_lock_blocking() sets the EXTENT_BUFFER_BLOCKING bit in
the extent buffer flags, and then drops the spin lock. The buffer is
still considered locked by all of the btrfs code.
If btrfs_tree_lock gets the spinlock but finds the blocking bit set, it drops
the spin lock and waits on a wait queue for the blocking bit to go away.
Much of the code that needs to set the blocking bit finishes without actually
blocking a good percentage of the time. So, an adaptive spin is still
used against the blocking bit to avoid very high context switch rates.
btrfs_clear_lock_blocking() clears the blocking bit and returns
with the spinlock held again.
btrfs_tree_unlock() can be called on either blocking or spinning locks,
it does the right thing based on the blocking bit.
ctree.c has a helper function to set/clear all the locked buffers in a
path as blocking.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-04 22:25:08 +08:00
|
|
|
|
2013-04-23 00:12:31 +08:00
|
|
|
btrfs_leak_debug_add(&eb->leak_list, &buffers);
|
|
|
|
|
2012-03-10 05:01:49 +08:00
|
|
|
spin_lock_init(&eb->refs_lock);
|
2008-01-25 05:13:08 +08:00
|
|
|
atomic_set(&eb->refs, 1);
|
2012-03-13 21:38:00 +08:00
|
|
|
atomic_set(&eb->io_pages, 0);
|
2010-08-07 01:21:20 +08:00
|
|
|
|
2013-02-28 22:54:18 +08:00
|
|
|
/*
|
|
|
|
* Sanity checks, currently the maximum is 64k covered by 16x 4k pages
|
|
|
|
*/
|
|
|
|
BUILD_BUG_ON(BTRFS_MAX_METADATA_BLOCKSIZE
|
|
|
|
> MAX_INLINE_EXTENT_BUFFER_SIZE);
|
|
|
|
BUG_ON(len > MAX_INLINE_EXTENT_BUFFER_SIZE);
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2018-08-24 20:56:28 +08:00
|
|
|
#ifdef CONFIG_BTRFS_DEBUG
|
2019-05-02 22:51:53 +08:00
|
|
|
eb->spinning_writers = 0;
|
2018-08-24 21:57:38 +08:00
|
|
|
atomic_set(&eb->spinning_readers, 0);
|
2018-08-24 22:15:51 +08:00
|
|
|
atomic_set(&eb->read_locks, 0);
|
2019-05-02 22:53:47 +08:00
|
|
|
eb->write_locks = 0;
|
2018-08-24 20:56:28 +08:00
|
|
|
#endif
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
return eb;
|
|
|
|
}
|
|
|
|
|
2012-05-16 23:00:02 +08:00
|
|
|
struct extent_buffer *btrfs_clone_extent_buffer(struct extent_buffer *src)
|
|
|
|
{
|
2018-03-02 01:20:27 +08:00
|
|
|
int i;
|
2012-05-16 23:00:02 +08:00
|
|
|
struct page *p;
|
|
|
|
struct extent_buffer *new;
|
2018-03-02 01:20:27 +08:00
|
|
|
int num_pages = num_extent_pages(src);
|
2012-05-16 23:00:02 +08:00
|
|
|
|
2014-06-15 09:20:26 +08:00
|
|
|
new = __alloc_extent_buffer(src->fs_info, src->start, src->len);
|
2012-05-16 23:00:02 +08:00
|
|
|
if (new == NULL)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
for (i = 0; i < num_pages; i++) {
|
2013-08-08 04:57:23 +08:00
|
|
|
p = alloc_page(GFP_NOFS);
|
2013-08-08 02:54:37 +08:00
|
|
|
if (!p) {
|
|
|
|
btrfs_release_extent_buffer(new);
|
|
|
|
return NULL;
|
|
|
|
}
|
2012-05-16 23:00:02 +08:00
|
|
|
attach_extent_buffer_page(new, p);
|
|
|
|
WARN_ON(PageDirty(p));
|
|
|
|
SetPageUptodate(p);
|
|
|
|
new->pages[i] = p;
|
2016-11-09 00:56:24 +08:00
|
|
|
copy_page(page_address(p), page_address(src->pages[i]));
|
2012-05-16 23:00:02 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
set_bit(EXTENT_BUFFER_UPTODATE, &new->bflags);
|
2018-06-27 21:38:24 +08:00
|
|
|
set_bit(EXTENT_BUFFER_UNMAPPED, &new->bflags);
|
2012-05-16 23:00:02 +08:00
|
|
|
|
|
|
|
return new;
|
|
|
|
}
|
|
|
|
|
2015-09-30 11:50:31 +08:00
|
|
|
struct extent_buffer *__alloc_dummy_extent_buffer(struct btrfs_fs_info *fs_info,
|
|
|
|
u64 start, unsigned long len)
|
2012-05-16 23:00:02 +08:00
|
|
|
{
|
|
|
|
struct extent_buffer *eb;
|
2018-03-02 01:20:27 +08:00
|
|
|
int num_pages;
|
|
|
|
int i;
|
2012-05-16 23:00:02 +08:00
|
|
|
|
2014-06-15 09:20:26 +08:00
|
|
|
eb = __alloc_extent_buffer(fs_info, start, len);
|
2012-05-16 23:00:02 +08:00
|
|
|
if (!eb)
|
|
|
|
return NULL;
|
|
|
|
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2012-05-16 23:00:02 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2013-08-08 04:57:23 +08:00
|
|
|
eb->pages[i] = alloc_page(GFP_NOFS);
|
2012-05-16 23:00:02 +08:00
|
|
|
if (!eb->pages[i])
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
set_extent_buffer_uptodate(eb);
|
|
|
|
btrfs_set_header_nritems(eb, 0);
|
2018-06-27 21:38:24 +08:00
|
|
|
set_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
|
2012-05-16 23:00:02 +08:00
|
|
|
|
|
|
|
return eb;
|
|
|
|
err:
|
2012-10-11 21:25:16 +08:00
|
|
|
for (; i > 0; i--)
|
|
|
|
__free_page(eb->pages[i - 1]);
|
2012-05-16 23:00:02 +08:00
|
|
|
__free_extent_buffer(eb);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2015-09-30 11:50:31 +08:00
|
|
|
struct extent_buffer *alloc_dummy_extent_buffer(struct btrfs_fs_info *fs_info,
|
2016-06-15 21:22:56 +08:00
|
|
|
u64 start)
|
2015-09-30 11:50:31 +08:00
|
|
|
{
|
2016-06-15 21:22:56 +08:00
|
|
|
return __alloc_dummy_extent_buffer(fs_info, start, fs_info->nodesize);
|
2015-09-30 11:50:31 +08:00
|
|
|
}
|
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
static void check_buffer_tree_ref(struct extent_buffer *eb)
|
|
|
|
{
|
2013-01-30 06:49:37 +08:00
|
|
|
int refs;
|
2012-03-13 21:38:00 +08:00
|
|
|
/* the ref bit is tricky. We have to make sure it is set
|
|
|
|
* if we have the buffer dirty. Otherwise the
|
|
|
|
* code to free a buffer can end up dropping a dirty
|
|
|
|
* page
|
|
|
|
*
|
|
|
|
* Once the ref bit is set, it won't go away while the
|
|
|
|
* buffer is dirty or in writeback, and it also won't
|
|
|
|
* go away while we have the reference count on the
|
|
|
|
* eb bumped.
|
|
|
|
*
|
|
|
|
* We can't just set the ref bit without bumping the
|
|
|
|
* ref on the eb because free_extent_buffer might
|
|
|
|
* see the ref bit and try to clear it. If this happens
|
|
|
|
* free_extent_buffer might end up dropping our original
|
|
|
|
* ref by mistake and freeing the page before we are able
|
|
|
|
* to add one more ref.
|
|
|
|
*
|
|
|
|
* So bump the ref count first, then set the bit. If someone
|
|
|
|
* beat us to it, drop the ref we added.
|
|
|
|
*/
|
2013-01-30 06:49:37 +08:00
|
|
|
refs = atomic_read(&eb->refs);
|
|
|
|
if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
|
|
|
|
return;
|
|
|
|
|
2012-07-21 04:11:08 +08:00
|
|
|
spin_lock(&eb->refs_lock);
|
|
|
|
if (!test_and_set_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
|
2012-03-13 21:38:00 +08:00
|
|
|
atomic_inc(&eb->refs);
|
2012-07-21 04:11:08 +08:00
|
|
|
spin_unlock(&eb->refs_lock);
|
2012-03-13 21:38:00 +08:00
|
|
|
}
|
|
|
|
|
2014-06-05 07:10:31 +08:00
|
|
|
static void mark_extent_buffer_accessed(struct extent_buffer *eb,
|
|
|
|
struct page *accessed)
|
2012-03-16 06:24:42 +08:00
|
|
|
{
|
2018-03-02 01:20:27 +08:00
|
|
|
int num_pages, i;
|
2012-03-16 06:24:42 +08:00
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
check_buffer_tree_ref(eb);
|
|
|
|
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2012-03-16 06:24:42 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
struct page *p = eb->pages[i];
|
|
|
|
|
2014-06-05 07:10:31 +08:00
|
|
|
if (p != accessed)
|
|
|
|
mark_page_accessed(p);
|
2012-03-16 06:24:42 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-12-17 02:24:27 +08:00
|
|
|
struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
|
|
|
|
u64 start)
|
2013-10-07 23:45:25 +08:00
|
|
|
{
|
|
|
|
struct extent_buffer *eb;
|
|
|
|
|
|
|
|
rcu_read_lock();
|
2013-12-17 02:24:27 +08:00
|
|
|
eb = radix_tree_lookup(&fs_info->buffer_radix,
|
2016-04-01 20:29:47 +08:00
|
|
|
start >> PAGE_SHIFT);
|
2013-10-07 23:45:25 +08:00
|
|
|
if (eb && atomic_inc_not_zero(&eb->refs)) {
|
|
|
|
rcu_read_unlock();
|
2015-04-23 18:28:48 +08:00
|
|
|
/*
|
|
|
|
* Lock our eb's refs_lock to avoid races with
|
|
|
|
* free_extent_buffer. When we get our eb it might be flagged
|
|
|
|
* with EXTENT_BUFFER_STALE and another task running
|
|
|
|
* free_extent_buffer might have seen that flag set,
|
|
|
|
* eb->refs == 2, that the buffer isn't under IO (dirty and
|
|
|
|
* writeback flags not set) and it's still in the tree (flag
|
|
|
|
* EXTENT_BUFFER_TREE_REF set), therefore being in the process
|
|
|
|
* of decrementing the extent buffer's reference count twice.
|
|
|
|
* So here we could race and increment the eb's reference count,
|
|
|
|
* clear its stale flag, mark it as dirty and drop our reference
|
|
|
|
* before the other task finishes executing free_extent_buffer,
|
|
|
|
* which would later result in an attempt to free an extent
|
|
|
|
* buffer that is dirty.
|
|
|
|
*/
|
|
|
|
if (test_bit(EXTENT_BUFFER_STALE, &eb->bflags)) {
|
|
|
|
spin_lock(&eb->refs_lock);
|
|
|
|
spin_unlock(&eb->refs_lock);
|
|
|
|
}
|
2014-06-05 07:10:31 +08:00
|
|
|
mark_extent_buffer_accessed(eb, NULL);
|
2013-10-07 23:45:25 +08:00
|
|
|
return eb;
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
|
|
|
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2014-05-08 05:06:09 +08:00
|
|
|
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
|
|
|
|
struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
|
2016-06-15 21:22:56 +08:00
|
|
|
u64 start)
|
2014-05-08 05:06:09 +08:00
|
|
|
{
|
|
|
|
struct extent_buffer *eb, *exists = NULL;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
eb = find_extent_buffer(fs_info, start);
|
|
|
|
if (eb)
|
|
|
|
return eb;
|
2016-06-15 21:22:56 +08:00
|
|
|
eb = alloc_dummy_extent_buffer(fs_info, start);
|
2014-05-08 05:06:09 +08:00
|
|
|
if (!eb)
|
|
|
|
return NULL;
|
|
|
|
eb->fs_info = fs_info;
|
|
|
|
again:
|
2016-05-09 20:11:38 +08:00
|
|
|
ret = radix_tree_preload(GFP_NOFS);
|
2014-05-08 05:06:09 +08:00
|
|
|
if (ret)
|
|
|
|
goto free_eb;
|
|
|
|
spin_lock(&fs_info->buffer_lock);
|
|
|
|
ret = radix_tree_insert(&fs_info->buffer_radix,
|
2016-04-01 20:29:47 +08:00
|
|
|
start >> PAGE_SHIFT, eb);
|
2014-05-08 05:06:09 +08:00
|
|
|
spin_unlock(&fs_info->buffer_lock);
|
|
|
|
radix_tree_preload_end();
|
|
|
|
if (ret == -EEXIST) {
|
|
|
|
exists = find_extent_buffer(fs_info, start);
|
|
|
|
if (exists)
|
|
|
|
goto free_eb;
|
|
|
|
else
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
check_buffer_tree_ref(eb);
|
|
|
|
set_bit(EXTENT_BUFFER_IN_TREE, &eb->bflags);
|
|
|
|
|
|
|
|
return eb;
|
|
|
|
free_eb:
|
|
|
|
btrfs_release_extent_buffer(eb);
|
|
|
|
return exists;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2013-12-17 02:24:27 +08:00
|
|
|
struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
|
2014-06-15 09:00:04 +08:00
|
|
|
u64 start)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2016-06-15 21:22:56 +08:00
|
|
|
unsigned long len = fs_info->nodesize;
|
2018-03-02 01:20:27 +08:00
|
|
|
int num_pages;
|
|
|
|
int i;
|
2016-04-01 20:29:47 +08:00
|
|
|
unsigned long index = start >> PAGE_SHIFT;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct extent_buffer *eb;
|
2008-07-22 23:18:07 +08:00
|
|
|
struct extent_buffer *exists = NULL;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct page *p;
|
2013-12-17 02:24:27 +08:00
|
|
|
struct address_space *mapping = fs_info->btree_inode->i_mapping;
|
2008-01-25 05:13:08 +08:00
|
|
|
int uptodate = 1;
|
2010-10-27 08:57:29 +08:00
|
|
|
int ret;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2016-06-15 21:22:56 +08:00
|
|
|
if (!IS_ALIGNED(start, fs_info->sectorsize)) {
|
2016-06-07 03:01:23 +08:00
|
|
|
btrfs_err(fs_info, "bad tree block start %llu", start);
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
}
|
|
|
|
|
2013-12-17 02:24:27 +08:00
|
|
|
eb = find_extent_buffer(fs_info, start);
|
2013-10-07 23:45:25 +08:00
|
|
|
if (eb)
|
2008-07-22 23:18:07 +08:00
|
|
|
return eb;
|
|
|
|
|
2014-06-15 08:55:29 +08:00
|
|
|
eb = __alloc_extent_buffer(fs_info, start, len);
|
2008-04-01 23:21:40 +08:00
|
|
|
if (!eb)
|
2016-06-07 03:01:23 +08:00
|
|
|
return ERR_PTR(-ENOMEM);
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2010-08-07 01:21:20 +08:00
|
|
|
for (i = 0; i < num_pages; i++, index++) {
|
2015-08-19 20:17:40 +08:00
|
|
|
p = find_or_create_page(mapping, index, GFP_NOFS|__GFP_NOFAIL);
|
2016-06-07 03:01:23 +08:00
|
|
|
if (!p) {
|
|
|
|
exists = ERR_PTR(-ENOMEM);
|
2008-07-22 23:18:07 +08:00
|
|
|
goto free_eb;
|
2016-06-07 03:01:23 +08:00
|
|
|
}
|
2012-03-08 05:20:05 +08:00
|
|
|
|
|
|
|
spin_lock(&mapping->private_lock);
|
|
|
|
if (PagePrivate(p)) {
|
|
|
|
/*
|
|
|
|
* We could have already allocated an eb for this page
|
|
|
|
* and attached one so lets see if we can get a ref on
|
|
|
|
* the existing eb, and if we can we know it's good and
|
|
|
|
* we can just return that one, else we know we can just
|
|
|
|
* overwrite page->private.
|
|
|
|
*/
|
|
|
|
exists = (struct extent_buffer *)p->private;
|
|
|
|
if (atomic_inc_not_zero(&exists->refs)) {
|
|
|
|
spin_unlock(&mapping->private_lock);
|
|
|
|
unlock_page(p);
|
2016-04-01 20:29:47 +08:00
|
|
|
put_page(p);
|
2014-06-05 07:10:31 +08:00
|
|
|
mark_extent_buffer_accessed(exists, p);
|
2012-03-08 05:20:05 +08:00
|
|
|
goto free_eb;
|
|
|
|
}
|
2015-02-24 18:47:05 +08:00
|
|
|
exists = NULL;
|
2012-03-08 05:20:05 +08:00
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
/*
|
2012-03-08 05:20:05 +08:00
|
|
|
* Do this so attach doesn't complain and we need to
|
|
|
|
* drop the ref the old guy had.
|
|
|
|
*/
|
|
|
|
ClearPagePrivate(p);
|
2012-03-13 21:38:00 +08:00
|
|
|
WARN_ON(PageDirty(p));
|
2016-04-01 20:29:47 +08:00
|
|
|
put_page(p);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
2012-03-08 05:20:05 +08:00
|
|
|
attach_extent_buffer_page(eb, p);
|
|
|
|
spin_unlock(&mapping->private_lock);
|
2012-03-13 21:38:00 +08:00
|
|
|
WARN_ON(PageDirty(p));
|
2010-08-07 01:21:20 +08:00
|
|
|
eb->pages[i] = p;
|
2008-01-25 05:13:08 +08:00
|
|
|
if (!PageUptodate(p))
|
|
|
|
uptodate = 0;
|
2011-02-11 01:35:00 +08:00
|
|
|
|
|
|
|
/*
|
2018-07-04 15:24:52 +08:00
|
|
|
* We can't unlock the pages just yet since the extent buffer
|
|
|
|
* hasn't been properly inserted in the radix tree, this
|
|
|
|
* opens a race with btree_releasepage which can free a page
|
|
|
|
* while we are still filling in all pages for the buffer and
|
|
|
|
* we could crash.
|
2011-02-11 01:35:00 +08:00
|
|
|
*/
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
if (uptodate)
|
Btrfs: Change btree locking to use explicit blocking points
Most of the btrfs metadata operations can be protected by a spinlock,
but some operations still need to schedule.
So far, btrfs has been using a mutex along with a trylock loop;
most of the time it is able to avoid going for the full mutex, so
the trylock loop is a big performance gain.
This commit is step one for getting rid of the blocking locks entirely.
btrfs_tree_lock takes a spinlock, and the code explicitly switches
to a blocking lock when it starts an operation that can schedule.
We'll be able to get rid of the blocking locks in smaller pieces over time.
Tracing allows us to find the most common cause of blocking, so we
can start with the hot spots first.
The basic idea is:
btrfs_tree_lock() returns with the spin lock held
btrfs_set_lock_blocking() sets the EXTENT_BUFFER_BLOCKING bit in
the extent buffer flags, and then drops the spin lock. The buffer is
still considered locked by all of the btrfs code.
If btrfs_tree_lock gets the spinlock but finds the blocking bit set, it drops
the spin lock and waits on a wait queue for the blocking bit to go away.
Much of the code that needs to set the blocking bit finishes without actually
blocking a good percentage of the time. So, an adaptive spin is still
used against the blocking bit to avoid very high context switch rates.
btrfs_clear_lock_blocking() clears the blocking bit and returns
with the spinlock held again.
btrfs_tree_unlock() can be called on either blocking or spinning locks,
it does the right thing based on the blocking bit.
ctree.c has a helper function to set/clear all the locked buffers in a
path as blocking.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-04 22:25:08 +08:00
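As a rough userspace analogy of the locking scheme described above (this is not the btrfs implementation; the struct and function names are invented, with a pthread mutex standing in for the spinlock and a condition variable for the wait queue):

#include <pthread.h>
#include <stdbool.h>

struct eb_lock {
    pthread_mutex_t lock;     /* stands in for the spinlock */
    pthread_cond_t  wq;       /* stands in for the wait queue */
    bool            blocking; /* analogue of the blocking bit */
};

#define EB_LOCK_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false }

/* like btrfs_tree_lock: returns with the spin-style lock held */
static void eb_tree_lock(struct eb_lock *l)
{
    pthread_mutex_lock(&l->lock);
    while (l->blocking)                 /* holder went blocking: sleep */
        pthread_cond_wait(&l->wq, &l->lock);
}

/* like btrfs_set_lock_blocking: mark the buffer blocking and drop the
 * spin-style lock; by convention the buffer is still "locked" */
static void eb_set_lock_blocking(struct eb_lock *l)
{
    l->blocking = true;
    pthread_mutex_unlock(&l->lock);
}

/* like btrfs_clear_lock_blocking: clear the bit, wake waiters and
 * return with the spin-style lock held again */
static void eb_clear_lock_blocking(struct eb_lock *l)
{
    pthread_mutex_lock(&l->lock);
    l->blocking = false;
    pthread_cond_broadcast(&l->wq);
}

/* unlock for the spinning case; the real btrfs_tree_unlock also
 * handles the blocking case */
static void eb_tree_unlock(struct eb_lock *l)
{
    pthread_mutex_unlock(&l->lock);
}

The adaptive spin the commit mentions (spinning briefly on the blocking bit before going to sleep) is omitted here for brevity.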
|
|
|
set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
|
2012-03-09 22:51:43 +08:00
|
|
|
again:
|
2016-05-09 20:11:38 +08:00
|
|
|
ret = radix_tree_preload(GFP_NOFS);
|
2016-06-07 03:01:23 +08:00
|
|
|
if (ret) {
|
|
|
|
exists = ERR_PTR(ret);
|
2010-10-27 08:57:29 +08:00
|
|
|
goto free_eb;
|
2016-06-07 03:01:23 +08:00
|
|
|
}
|
2010-10-27 08:57:29 +08:00
|
|
|
|
2013-12-17 02:24:27 +08:00
|
|
|
spin_lock(&fs_info->buffer_lock);
|
|
|
|
ret = radix_tree_insert(&fs_info->buffer_radix,
|
2016-04-01 20:29:47 +08:00
|
|
|
start >> PAGE_SHIFT, eb);
|
2013-12-17 02:24:27 +08:00
|
|
|
spin_unlock(&fs_info->buffer_lock);
|
2013-10-07 23:45:25 +08:00
|
|
|
radix_tree_preload_end();
|
2010-10-27 08:57:29 +08:00
|
|
|
if (ret == -EEXIST) {
|
2013-12-17 02:24:27 +08:00
|
|
|
exists = find_extent_buffer(fs_info, start);
|
2013-10-07 23:45:25 +08:00
|
|
|
if (exists)
|
|
|
|
goto free_eb;
|
|
|
|
else
|
2012-03-09 22:51:43 +08:00
|
|
|
goto again;
|
2008-07-22 23:18:07 +08:00
|
|
|
}
|
|
|
|
/* add one reference for the tree */
|
2012-03-13 21:38:00 +08:00
|
|
|
check_buffer_tree_ref(eb);
|
2013-12-13 23:41:51 +08:00
|
|
|
set_bit(EXTENT_BUFFER_IN_TREE, &eb->bflags);
|
2011-02-11 01:35:00 +08:00
|
|
|
|
|
|
|
/*
|
2018-07-04 15:24:52 +08:00
|
|
|
* Now it's safe to unlock the pages because any calls to
|
|
|
|
* btree_releasepage will correctly detect that a page belongs to a
|
|
|
|
* live buffer and won't free them prematurely.
|
2011-02-11 01:35:00 +08:00
|
|
|
*/
|
2018-07-04 15:24:51 +08:00
|
|
|
for (i = 0; i < num_pages; i++)
|
|
|
|
unlock_page(eb->pages[i]);
|
2008-01-25 05:13:08 +08:00
|
|
|
return eb;
|
|
|
|
|
2008-07-22 23:18:07 +08:00
|
|
|
free_eb:
|
2015-02-24 18:47:05 +08:00
|
|
|
WARN_ON(!atomic_dec_and_test(&eb->refs));
|
2010-08-07 01:21:20 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
|
|
|
if (eb->pages[i])
|
|
|
|
unlock_page(eb->pages[i]);
|
|
|
|
}
|
2011-02-11 01:35:00 +08:00
|
|
|
|
2010-10-27 08:57:29 +08:00
|
|
|
btrfs_release_extent_buffer(eb);
|
2008-07-22 23:18:07 +08:00
|
|
|
return exists;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2012-03-10 05:01:49 +08:00
|
|
|
static inline void btrfs_release_extent_buffer_rcu(struct rcu_head *head)
|
|
|
|
{
|
|
|
|
struct extent_buffer *eb =
|
|
|
|
container_of(head, struct extent_buffer, rcu_head);
|
|
|
|
|
|
|
|
__free_extent_buffer(eb);
|
|
|
|
}
|
|
|
|
|
2013-04-26 22:56:29 +08:00
|
|
|
static int release_extent_buffer(struct extent_buffer *eb)
|
2012-03-10 05:01:49 +08:00
|
|
|
{
|
2018-06-27 21:38:23 +08:00
|
|
|
lockdep_assert_held(&eb->refs_lock);
|
|
|
|
|
2012-03-10 05:01:49 +08:00
|
|
|
WARN_ON(atomic_read(&eb->refs) == 0);
|
|
|
|
if (atomic_dec_and_test(&eb->refs)) {
|
2013-12-13 23:41:51 +08:00
|
|
|
if (test_and_clear_bit(EXTENT_BUFFER_IN_TREE, &eb->bflags)) {
|
2013-12-17 02:24:27 +08:00
|
|
|
struct btrfs_fs_info *fs_info = eb->fs_info;
|
2012-03-10 05:01:49 +08:00
|
|
|
|
2012-05-16 23:00:02 +08:00
|
|
|
spin_unlock(&eb->refs_lock);
|
2012-03-10 05:01:49 +08:00
|
|
|
|
2013-12-17 02:24:27 +08:00
|
|
|
spin_lock(&fs_info->buffer_lock);
|
|
|
|
radix_tree_delete(&fs_info->buffer_radix,
|
2016-04-01 20:29:47 +08:00
|
|
|
eb->start >> PAGE_SHIFT);
|
2013-12-17 02:24:27 +08:00
|
|
|
spin_unlock(&fs_info->buffer_lock);
|
2013-12-13 23:41:51 +08:00
|
|
|
} else {
|
|
|
|
spin_unlock(&eb->refs_lock);
|
2012-05-16 23:00:02 +08:00
|
|
|
}
|
2012-03-10 05:01:49 +08:00
|
|
|
|
|
|
|
/* Should be safe to release our pages at this point */
|
2018-07-19 23:24:32 +08:00
|
|
|
btrfs_release_extent_buffer_pages(eb);
|
2015-03-17 05:38:02 +08:00
|
|
|
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
|
2018-06-27 21:38:24 +08:00
|
|
|
if (unlikely(test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags))) {
|
2015-03-17 05:38:02 +08:00
|
|
|
__free_extent_buffer(eb);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
#endif
|
2012-03-10 05:01:49 +08:00
|
|
|
call_rcu(&eb->rcu_head, btrfs_release_extent_buffer_rcu);
|
2012-07-21 04:05:36 +08:00
|
|
|
return 1;
|
2012-03-10 05:01:49 +08:00
|
|
|
}
|
|
|
|
spin_unlock(&eb->refs_lock);
|
2012-07-21 04:05:36 +08:00
|
|
|
|
|
|
|
return 0;
|
2012-03-10 05:01:49 +08:00
|
|
|
}
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
void free_extent_buffer(struct extent_buffer *eb)
|
|
|
|
{
|
2013-01-30 06:49:37 +08:00
|
|
|
int refs;
|
|
|
|
int old;
|
2008-01-25 05:13:08 +08:00
|
|
|
if (!eb)
|
|
|
|
return;
|
|
|
|
|
2013-01-30 06:49:37 +08:00
|
|
|
while (1) {
|
|
|
|
refs = atomic_read(&eb->refs);
|
2018-10-15 22:04:01 +08:00
|
|
|
if ((!test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) && refs <= 3)
|
|
|
|
|| (test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) &&
|
|
|
|
refs == 1))
|
2013-01-30 06:49:37 +08:00
|
|
|
break;
|
|
|
|
old = atomic_cmpxchg(&eb->refs, refs, refs - 1);
|
|
|
|
if (old == refs)
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2012-03-10 05:01:49 +08:00
|
|
|
spin_lock(&eb->refs_lock);
|
|
|
|
if (atomic_read(&eb->refs) == 2 &&
|
|
|
|
test_bit(EXTENT_BUFFER_STALE, &eb->bflags) &&
|
2012-03-13 21:38:00 +08:00
|
|
|
!extent_buffer_under_io(eb) &&
|
2012-03-10 05:01:49 +08:00
|
|
|
test_and_clear_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
|
|
|
|
atomic_dec(&eb->refs);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* I know this is terrible, but it's temporary until we stop tracking
|
|
|
|
* the uptodate bits and such for the extent buffers.
|
|
|
|
*/
|
2013-04-26 22:56:29 +08:00
|
|
|
release_extent_buffer(eb);
|
2012-03-10 05:01:49 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
void free_extent_buffer_stale(struct extent_buffer *eb)
|
|
|
|
{
|
|
|
|
if (!eb)
|
2008-01-25 05:13:08 +08:00
|
|
|
return;
|
|
|
|
|
2012-03-10 05:01:49 +08:00
|
|
|
spin_lock(&eb->refs_lock);
|
|
|
|
set_bit(EXTENT_BUFFER_STALE, &eb->bflags);
|
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
if (atomic_read(&eb->refs) == 2 && !extent_buffer_under_io(eb) &&
|
2012-03-10 05:01:49 +08:00
|
|
|
test_and_clear_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
|
|
|
|
atomic_dec(&eb->refs);
|
2013-04-26 22:56:29 +08:00
|
|
|
release_extent_buffer(eb);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2012-03-29 08:31:37 +08:00
|
|
|
void clear_extent_buffer_dirty(struct extent_buffer *eb)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2018-03-02 01:20:27 +08:00
|
|
|
int i;
|
|
|
|
int num_pages;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct page *page;
|
|
|
|
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2008-01-25 05:13:08 +08:00
|
|
|
|
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2009-03-13 23:00:37 +08:00
|
|
|
if (!PageDirty(page))
|
2008-11-20 01:44:22 +08:00
|
|
|
continue;
|
|
|
|
|
2008-07-22 23:18:08 +08:00
|
|
|
lock_page(page);
|
2011-02-11 01:35:00 +08:00
|
|
|
WARN_ON(!PagePrivate(page));
|
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
clear_page_dirty_for_io(page);
|
2018-04-11 07:36:56 +08:00
|
|
|
xa_lock_irq(&page->mapping->i_pages);
|
2017-12-04 23:37:22 +08:00
|
|
|
if (!PageDirty(page))
|
|
|
|
__xa_clear_mark(&page->mapping->i_pages,
|
|
|
|
page_index(page), PAGECACHE_TAG_DIRTY);
|
2018-04-11 07:36:56 +08:00
|
|
|
xa_unlock_irq(&page->mapping->i_pages);
|
2011-11-05 00:29:37 +08:00
|
|
|
ClearPageError(page);
|
2008-07-22 23:18:08 +08:00
|
|
|
unlock_page(page);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
2012-03-13 21:38:00 +08:00
|
|
|
WARN_ON(atomic_read(&eb->refs) == 0);
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2018-09-14 01:44:42 +08:00
|
|
|
bool set_extent_buffer_dirty(struct extent_buffer *eb)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2018-03-02 01:20:27 +08:00
|
|
|
int i;
|
|
|
|
int num_pages;
|
2018-09-14 01:44:42 +08:00
|
|
|
bool was_dirty;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
check_buffer_tree_ref(eb);
|
|
|
|
|
2009-03-13 23:00:37 +08:00
|
|
|
was_dirty = test_and_set_bit(EXTENT_BUFFER_DIRTY, &eb->bflags);
|
2012-03-13 21:38:00 +08:00
|
|
|
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2012-03-10 05:01:49 +08:00
|
|
|
WARN_ON(atomic_read(&eb->refs) == 0);
|
2012-03-13 21:38:00 +08:00
|
|
|
WARN_ON(!test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags));
|
|
|
|
|
2018-09-14 01:44:42 +08:00
|
|
|
if (!was_dirty)
|
|
|
|
for (i = 0; i < num_pages; i++)
|
|
|
|
set_page_dirty(eb->pages[i]);
|
2018-09-14 01:46:08 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_BTRFS_DEBUG
|
|
|
|
for (i = 0; i < num_pages; i++)
|
|
|
|
ASSERT(PageDirty(eb->pages[i]));
|
|
|
|
#endif
|
|
|
|
|
2009-03-13 23:00:37 +08:00
|
|
|
return was_dirty;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2015-12-03 20:08:59 +08:00
|
|
|
void clear_extent_buffer_uptodate(struct extent_buffer *eb)
|
2008-05-13 01:39:03 +08:00
|
|
|
{
|
2018-03-02 01:20:27 +08:00
|
|
|
int i;
|
2008-05-13 01:39:03 +08:00
|
|
|
struct page *page;
|
2018-03-02 01:20:27 +08:00
|
|
|
int num_pages;
|
2008-05-13 01:39:03 +08:00
|
|
|
|
2009-02-04 22:25:08 +08:00
|
|
|
clear_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2008-05-13 01:39:03 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2008-07-30 22:29:12 +08:00
|
|
|
if (page)
|
|
|
|
ClearPageUptodate(page);
|
2008-05-13 01:39:03 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-12-03 20:08:59 +08:00
|
|
|
void set_extent_buffer_uptodate(struct extent_buffer *eb)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2018-03-02 01:20:27 +08:00
|
|
|
int i;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct page *page;
|
2018-03-02 01:20:27 +08:00
|
|
|
int num_pages;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2012-03-13 21:38:00 +08:00
|
|
|
set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2008-01-25 05:13:08 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2008-01-25 05:13:08 +08:00
|
|
|
SetPageUptodate(page);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-04-10 22:24:40 +08:00
|
|
|
int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
2018-03-02 01:20:27 +08:00
|
|
|
int i;
|
2008-01-25 05:13:08 +08:00
|
|
|
struct page *page;
|
|
|
|
int err;
|
|
|
|
int ret = 0;
|
2008-04-10 04:28:12 +08:00
|
|
|
int locked_pages = 0;
|
|
|
|
int all_uptodate = 1;
|
2018-03-02 01:20:27 +08:00
|
|
|
int num_pages;
|
2010-08-07 01:21:20 +08:00
|
|
|
unsigned long num_reads = 0;
|
2008-02-07 23:50:54 +08:00
|
|
|
struct bio *bio = NULL;
|
Btrfs: Add zlib compression support
This is a large change for adding compression on reading and writing,
both for inline and regular extents. It does some fairly large
surgery to the writeback paths.
Compression is off by default and enabled by mount -o compress. Even
when the -o compress mount option is not used, it is possible to read
compressed extents off the disk.
If compression for a given set of pages fails to make them smaller, the
file is flagged to avoid future compression attempts later.
* While finding delalloc extents, the pages are locked before being sent down
to the delalloc handler. This allows the delalloc handler to do complex things
such as cleaning the pages, marking them writeback and starting IO on their
behalf.
* Inline extents are inserted at delalloc time now. This allows us to compress
the data before inserting the inline extent, and it allows us to insert
an inline extent that spans multiple pages.
* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
are changed to record both an in-memory size and an on disk size, as well
as a flag for compression.
From a disk format point of view, the extent pointers in the file are changed
to record the on disk size of a given extent and some encoding flags.
Space in the disk format is allocated for compression encoding, as well
as encryption and a generic 'other' field. Neither the encryption or the
'other' field are currently used.
In order to limit the amount of data read for a single random read in the
file, the size of a compressed extent is limited to 128k. This is a
software only limit, the disk format supports u64 sized compressed extents.
In order to limit the ram consumed while processing extents, the uncompressed
size of a compressed extent is limited to 256k. This is a software only limit
and will be subject to tuning later.
Checksumming is still done on compressed extents, and it is done on the
uncompressed version of the data. This way additional encodings can be
layered on without having to figure out which encoding to checksum.
Compression happens at delalloc time, which is basically single threaded because
it is usually done by a single pdflush thread. This makes it tricky to
spread the compression load across all the cpus on the box. We'll have to
look at parallel pdflush walks of dirty inodes at a later time.
Decompression is hooked into readpages and it does spread across CPUs nicely.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2008-10-30 02:49:59 +08:00
|
|
|
unsigned long bio_flags = 0;
|
2019-04-10 22:24:40 +08:00
|
|
|
struct extent_io_tree *tree = &BTRFS_I(eb->fs_info->btree_inode)->io_tree;
|
2008-02-07 23:50:54 +08:00
|
|
|
|
2009-02-04 22:25:08 +08:00
|
|
|
if (test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags))
|
2008-01-25 05:13:08 +08:00
|
|
|
return 0;
|
|
|
|
|
2018-06-29 16:56:49 +08:00
|
|
|
num_pages = num_extent_pages(eb);
|
2016-09-03 03:40:03 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2011-06-10 20:06:53 +08:00
|
|
|
if (wait == WAIT_NONE) {
|
2008-08-07 23:19:43 +08:00
|
|
|
if (!trylock_page(page))
|
2008-04-10 04:28:12 +08:00
|
|
|
goto unlock_exit;
|
2008-01-25 05:13:08 +08:00
|
|
|
} else {
|
|
|
|
lock_page(page);
|
|
|
|
}
|
2008-04-10 04:28:12 +08:00
|
|
|
locked_pages++;
|
Btrfs: fix memory leak in reading btree blocks
So we can read a btree block via readahead or intentional read,
and we can end up with a memory leak when something happens as
follows,
1) readahead starts to read block A but does not wait for read
completion,
2) btree_readpage_end_io_hook finds that block A is corrupted,
and it needs to clear all block A's pages' uptodate bit.
3) meanwhile an intentional read kicks in and checks block A's
pages' uptodate to decide which page needs to be read.
4) some pages have the uptodate bit set during 3)'s check, so
3) doesn't count them for eb->io_pages, but they are later
cleared by 2); we then have to read those pages after all, so we get
the wrong eb->io_pages, which results in a memory leak of
this block.
This fixes the problem by first taking all the pages' locks and
only then checking the pages' uptodate bits.
The race, as a timeline of the three tasks:
t1 (readahead): read_extent_buffer_pages -
for pg in eb: if pg is uptodate: num_reads++;
eb->io_pages = num_reads;
for pg in eb: if pg is NOT uptodate: __extent_read_full_page(pg)
t2 (readahead endio): end_bio_extent_readpage -
for page 0,1,2 in eb: btree_readpage_end_io_hook(pg); if uptodate: SetPageUptodate(pg);
for page 3 in eb: btree_readpage_end_io_hook(pg); sanity check reports something wrong;
clear_extent_buffer_uptodate(eb); for pg in eb: ClearPageUptodate(page)
t3 (the following read), with its first loop racing against t2's SetPageUptodate
and its second loop running after t2's ClearPageUptodate: read_extent_buffer_pages -
for pg in eb: if pg is uptodate: num_reads++;
eb->io_pages = num_reads;
for pg in eb: if pg is NOT uptodate: __extent_read_full_page(pg)
So t3's eb->io_pages is not consistent with the number of pages it's reading,
and during endio(), atomic_dec_and_test(&eb->io_pages) will get a negative
number so that we're not able to free the eb.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-08-04 03:33:01 +08:00
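Stated generically, the fix above boils down to an ordering rule: take every per-page lock before sampling the uptodate bits, so the count and the reads issued afterwards are based on the same values. A toy sketch of that rule (not btrfs code; every name here is invented):

#include <pthread.h>
#include <stdbool.h>

#define NPAGES 4

struct toy_page {
    pthread_mutex_t lock;
    bool uptodate;
};

/* Fixed ordering: lock all pages first, then count the ones that still
 * need reading.  Counting before locking (the old ordering) allows a
 * concurrent completion path to flip the uptodate bits between the
 * count and the reads, so the count no longer matches the work done. */
static int count_reads_locked(struct toy_page pages[NPAGES])
{
    int num_reads = 0;
    int i;

    for (i = 0; i < NPAGES; i++)
        pthread_mutex_lock(&pages[i].lock);
    for (i = 0; i < NPAGES; i++)
        if (!pages[i].uptodate)
            num_reads++;
    /* the caller issues num_reads reads, then unlocks every page */
    return num_reads;
}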
|
|
|
}
|
|
|
|
/*
|
|
|
|
* We need to firstly lock all pages to make sure that
|
|
|
|
* the uptodate bit of our pages won't be affected by
|
|
|
|
* clear_extent_buffer_uptodate().
|
|
|
|
*/
|
2016-09-03 03:40:03 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2016-08-04 03:33:01 +08:00
|
|
|
page = eb->pages[i];
|
2010-08-07 01:21:20 +08:00
|
|
|
if (!PageUptodate(page)) {
|
|
|
|
num_reads++;
|
2008-04-10 04:28:12 +08:00
|
|
|
all_uptodate = 0;
|
2010-08-07 01:21:20 +08:00
|
|
|
}
|
2008-04-10 04:28:12 +08:00
|
|
|
}
|
2016-08-04 03:33:01 +08:00
|
|
|
|
2008-04-10 04:28:12 +08:00
|
|
|
if (all_uptodate) {
|
2016-09-03 03:40:03 +08:00
|
|
|
set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
|
2008-04-10 04:28:12 +08:00
|
|
|
goto unlock_exit;
|
|
|
|
}
|
|
|
|
|
Btrfs: be aware of btree inode write errors to avoid fs corruption
While we have a transaction ongoing, the VM might decide at any time
to call btree_inode->i_mapping->a_ops->writepages(), which will start
writeback of dirty pages belonging to btree nodes/leafs. This call
might return an error or the writeback might finish with an error
before we attempt to commit the running transaction. If this happens,
we might have no way of knowing that such error happened when we are
committing the transaction - because the pages might no longer be
marked dirty nor tagged for writeback (if a subsequent modification
to the extent buffer didn't happen before the transaction commit) which
makes filemap_fdata[write|wait]_range unable to find such pages (even
if they're marked with SetPageError).
So if this happens we must abort the transaction, otherwise we commit
a super block with btree roots that point to btree nodes/leafs whose
content on disk is invalid - either garbage or the content of some
node/leaf from a past generation that got cowed or deleted and is no
longer valid (for this later case we end up getting error messages like
"parent transid verify failed on 10826481664 wanted 25748 found 29562"
when reading btree nodes/leafs from disk).
Note that setting and checking AS_EIO/AS_ENOSPC in the btree inode's
i_mapping would not be enough because we need to distinguish between
log tree extents (not fatal) vs non-log tree extents (fatal) and
because the next call to filemap_fdatawait_range() will catch and clear
such errors in the mapping - and that call might be from a log sync and
not from a transaction commit, which means we would not know about the
error at transaction commit time. Also, checking for the eb flag
EXTENT_BUFFER_IOERR at transaction commit time isn't done and would
not be completely reliable, as the eb might be removed from memory and
read back when trying to get it, which clears that flag right before
reading the eb's pages from disk, making us not know about the previous
write error.
Using the new 3 flags for the btree inode also makes us achieve the
goal of AS_EIO/AS_ENOSPC when writepages() returns success, started
writeback for all dirty pages and before filemap_fdatawait_range() is
called, the writeback for all dirty pages had already finished with
errors - because we were not using AS_EIO/AS_ENOSPC,
filemap_fdatawait_range() would return success, as it could not know
that writeback errors happened (the pages were no longer tagged for
writeback).
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
2014-09-26 19:25:56 +08:00
|
|
|
clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
|
2012-04-16 21:42:26 +08:00
|
|
|
eb->read_mirror = 0;
|
2012-03-13 21:38:00 +08:00
|
|
|
atomic_set(&eb->io_pages, num_reads);
|
2016-09-03 03:40:03 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2016-07-12 01:39:07 +08:00
|
|
|
|
2008-04-10 04:28:12 +08:00
|
|
|
if (!PageUptodate(page)) {
|
2016-07-12 01:39:07 +08:00
|
|
|
if (ret) {
|
|
|
|
atomic_dec(&eb->io_pages);
|
|
|
|
unlock_page(page);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2008-04-10 04:28:12 +08:00
|
|
|
ClearPageError(page);
|
2008-02-07 23:50:54 +08:00
|
|
|
err = __extent_read_full_page(tree, page,
|
2017-06-23 10:09:57 +08:00
|
|
|
btree_get_extent, &bio,
|
2013-04-20 07:49:09 +08:00
|
|
|
mirror_num, &bio_flags,
|
2016-06-06 03:31:51 +08:00
|
|
|
REQ_META);
|
2016-07-12 01:39:07 +08:00
|
|
|
if (err) {
|
2008-01-25 05:13:08 +08:00
|
|
|
ret = err;
|
2016-07-12 01:39:07 +08:00
|
|
|
/*
|
|
|
|
* We use &bio in above __extent_read_full_page,
|
|
|
|
* so we ensure that if it returns error, the
|
|
|
|
* current page fails to add itself to bio and
|
|
|
|
* it's been unlocked.
|
|
|
|
*
|
|
|
|
* We must dec io_pages by ourselves.
|
|
|
|
*/
|
|
|
|
atomic_dec(&eb->io_pages);
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
} else {
|
|
|
|
unlock_page(page);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-10-04 11:23:14 +08:00
|
|
|
if (bio) {
|
2016-06-06 03:31:51 +08:00
|
|
|
err = submit_one_bio(bio, mirror_num, bio_flags);
|
2012-03-12 23:03:00 +08:00
|
|
|
if (err)
|
|
|
|
return err;
|
2011-10-04 11:23:14 +08:00
|
|
|
}
|
2008-02-07 23:50:54 +08:00
|
|
|
|
2011-06-10 20:06:53 +08:00
|
|
|
if (ret || wait != WAIT_COMPLETE)
|
2008-01-25 05:13:08 +08:00
|
|
|
return ret;
|
2009-01-06 10:25:51 +08:00
|
|
|
|
2016-09-03 03:40:03 +08:00
|
|
|
for (i = 0; i < num_pages; i++) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2008-01-25 05:13:08 +08:00
|
|
|
wait_on_page_locked(page);
|
2009-01-06 10:25:51 +08:00
|
|
|
if (!PageUptodate(page))
|
2008-01-25 05:13:08 +08:00
|
|
|
ret = -EIO;
|
|
|
|
}
|
2009-01-06 10:25:51 +08:00
|
|
|
|
2008-01-25 05:13:08 +08:00
|
|
|
return ret;
|
2008-04-10 04:28:12 +08:00
|
|
|
|
|
|
|
unlock_exit:
|
2009-01-06 10:25:51 +08:00
|
|
|
while (locked_pages > 0) {
|
2008-04-10 04:28:12 +08:00
|
|
|
locked_pages--;
|
2016-09-03 03:40:03 +08:00
|
|
|
page = eb->pages[locked_pages];
|
|
|
|
unlock_page(page);
|
2008-04-10 04:28:12 +08:00
|
|
|
}
|
|
|
|
return ret;
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2017-06-29 11:56:53 +08:00
|
|
|
void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
|
|
|
|
unsigned long start, unsigned long len)
|
2008-01-25 05:13:08 +08:00
|
|
|
{
|
|
|
|
size_t cur;
|
|
|
|
size_t offset;
|
|
|
|
struct page *page;
|
|
|
|
char *kaddr;
|
|
|
|
char *dst = (char *)dstv;
|
2018-12-05 22:23:03 +08:00
|
|
|
size_t start_offset = offset_in_page(eb->start);
|
2016-04-01 20:29:47 +08:00
|
|
|
unsigned long i = (start_offset + start) >> PAGE_SHIFT;
|
2008-01-25 05:13:08 +08:00
|
|
|
|
Btrfs: fix out of bounds array access while reading extent buffer
There is a corner case that slips through the checkers in functions
reading extent buffer, ie.
if (start < eb->len) and (start + len > eb->len),
then
a) map_private_extent_buffer() returns immediately because
it thinks the range spans two pages,
b) and the checkers in read_extent_buffer(), WARN_ON(start > eb->len)
and WARN_ON(start + len > eb->start + eb->len), both are OK in this
corner case, but it'd actually try to access the eb->pages out of
bounds because of (start + len > eb->len).
The case is found by switching extent inline ref type from shared data
ref to non-shared data ref, which is a kind of metadata corruption.
It'd use the wrong helper to access the eb,
eg. btrfs_extent_data_ref_root(eb, ref) is used but the %ref passed
here is "struct btrfs_shared_data_ref". And if the extent item
happens to be the first item in the eb, then offset/length will go
past eb->len, which ends up as an invalid memory access.
This adds proper checks in order to avoid the invalid memory access,
ie. 'general protection fault', before it's too late.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-10 01:10:16 +08:00
|
|
|
if (start + len > eb->len) {
|
|
|
|
WARN(1, KERN_ERR "btrfs bad mapping eb start %llu len %lu, wanted %lu %lu\n",
|
|
|
|
eb->start, eb->len, start, len);
|
|
|
|
memset(dst, 0, len);
|
|
|
|
return;
|
|
|
|
}
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2018-12-05 22:23:03 +08:00
|
|
|
offset = offset_in_page(start_offset + start);
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2009-01-06 10:25:51 +08:00
|
|
|
while (len > 0) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2008-01-25 05:13:08 +08:00
|
|
|
|
2016-04-01 20:29:47 +08:00
|
|
|
cur = min(len, (PAGE_SIZE - offset));
|
2011-07-20 00:04:14 +08:00
|
|
|
kaddr = page_address(page);
|
2008-01-25 05:13:08 +08:00
|
|
|
memcpy(dst, kaddr + offset, cur);
|
|
|
|
|
|
|
|
dst += cur;
|
|
|
|
len -= cur;
|
|
|
|
offset = 0;
|
|
|
|
i++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-06-29 11:56:53 +08:00
|
|
|
int read_extent_buffer_to_user(const struct extent_buffer *eb,
|
|
|
|
void __user *dstv,
|
|
|
|
unsigned long start, unsigned long len)
|
2014-01-30 23:24:01 +08:00
|
|
|
{
|
|
|
|
size_t cur;
|
|
|
|
size_t offset;
|
|
|
|
struct page *page;
|
|
|
|
char *kaddr;
|
|
|
|
char __user *dst = (char __user *)dstv;
|
2018-12-05 22:23:03 +08:00
|
|
|
size_t start_offset = offset_in_page(eb->start);
|
2016-04-01 20:29:47 +08:00
|
|
|
unsigned long i = (start_offset + start) >> PAGE_SHIFT;
|
2014-01-30 23:24:01 +08:00
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
WARN_ON(start > eb->len);
|
|
|
|
WARN_ON(start + len > eb->start + eb->len);
|
|
|
|
|
2018-12-05 22:23:03 +08:00
|
|
|
offset = offset_in_page(start_offset + start);
|
2014-01-30 23:24:01 +08:00
|
|
|
|
|
|
|
while (len > 0) {
|
2014-07-31 07:03:53 +08:00
|
|
|
page = eb->pages[i];
|
2014-01-30 23:24:01 +08:00
|
|
|
|
2016-04-01 20:29:47 +08:00
|
|
|
cur = min(len, (PAGE_SIZE - offset));
|
2014-01-30 23:24:01 +08:00
|
|
|
kaddr = page_address(page);
|
|
|
|
if (copy_to_user(dst, kaddr + offset, cur)) {
|
|
|
|
ret = -EFAULT;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
dst += cur;
|
|
|
|
len -= cur;
|
|
|
|
offset = 0;
|
|
|
|
i++;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-06-18 10:16:21 +08:00
|
|
|
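
The copy_to_user() loop above follows the page-walk pattern shared by most helpers in this file: compute the starting page index from (start_offset + start), copy at most PAGE_SIZE - offset bytes per iteration, then reset offset to 0 and move to the next page. A minimal userspace sketch of just that arithmetic (eb_copy(), SKETCH_PAGE_SIZE and the flat pages[] array are made up for illustration, a tiny 16-byte page is assumed, and the eb->start offset into the first page is ignored; this is not the kernel code):

#include <stdio.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 16UL	/* tiny "page" so the walk is visible */

/* Copy len bytes starting at byte 'start' of a buffer split into
 * SKETCH_PAGE_SIZE chunks, the way the extent-buffer helpers walk
 * eb->pages[]. */
static void eb_copy(char *dst, const char *pages[], unsigned long start,
		    unsigned long len)
{
	unsigned long i = start / SKETCH_PAGE_SIZE;
	size_t offset = start % SKETCH_PAGE_SIZE;

	while (len > 0) {
		size_t cur = len < SKETCH_PAGE_SIZE - offset ?
			     len : SKETCH_PAGE_SIZE - offset;

		memcpy(dst, pages[i] + offset, cur);
		dst += cur;
		len -= cur;
		offset = 0;	/* later pages are read from their start */
		i++;
	}
}

int main(void)
{
	const char *pages[] = { "0123456789abcdef", "ghijklmnopqrstuv" };
	char out[33] = { 0 };

	eb_copy(out, pages, 12, 8);	/* crosses the page boundary */
	printf("%s\n", out);		/* prints "cdefghij" */
	return 0;
}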

Btrfs: fix out of bounds array access while reading extent buffer
There is a corner case that slips through the checkers in the functions
reading an extent buffer, i.e.
if (start < eb->len) and (start + len > eb->len),
then
a) map_private_extent_buffer() returns immediately because
it thinks the range spans two pages,
b) and the checkers in read_extent_buffer(), WARN_ON(start > eb->len)
and WARN_ON(start + len > eb->start + eb->len), are both fine in this
corner case, but it would actually access eb->pages out of bounds
because of (start + len > eb->len).
The case was found by switching an extent inline ref type from shared data
ref to non-shared data ref, which is a kind of metadata corruption.
It would use the wrong helper to access the eb,
e.g. btrfs_extent_data_ref_root(eb, ref) is used but the %ref passed
here is a "struct btrfs_shared_data_ref". And if the extent item
happens to be the first item in the eb, then offset/length will get
past eb->len, which ends up as an invalid memory access.
This adds proper checks in order to avoid the invalid memory access,
i.e. a 'general protection fault', before it is too late.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>

/*
 * return 0 if the item is found within a page.
 * return 1 if the item spans two pages.
 * return -EINVAL otherwise.
 */
int map_private_extent_buffer(const struct extent_buffer *eb,
			      unsigned long start, unsigned long min_len,
			      char **map, unsigned long *map_start,
			      unsigned long *map_len)
{
	size_t offset;
	char *kaddr;
	struct page *p;
	size_t start_offset = offset_in_page(eb->start);
	unsigned long i = (start_offset + start) >> PAGE_SHIFT;
	unsigned long end_i = (start_offset + start + min_len - 1) >>
		PAGE_SHIFT;

	if (start + min_len > eb->len) {
		WARN(1, KERN_ERR "btrfs bad mapping eb start %llu len %lu, wanted %lu %lu\n",
		     eb->start, eb->len, start, min_len);
		return -EINVAL;
	}

	if (i != end_i)
		return 1;

	if (i == 0) {
		offset = start_offset;
		*map_start = 0;
	} else {
		offset = 0;
		*map_start = ((u64)i << PAGE_SHIFT) - start_offset;
	}

	p = eb->pages[i];
	kaddr = page_address(p);
	*map = kaddr + offset;
	*map_len = PAGE_SIZE - offset;
	return 0;
}
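
map_private_extent_buffer() only hands back a direct pointer when the whole [start, start + min_len) range sits inside one page; callers must fall back to read_extent_buffer()/write_extent_buffer() when it returns 1. A small standalone sketch of that decision, including the (start + min_len > eb->len) corner case described in the commit message above (map_decision(), SKETCH_PAGE_SIZE and the -1 stand-in for -EINVAL are made up for illustration; a 4K page and a zero start_offset are assumed):

#include <stdio.h>

#define SKETCH_PAGE_SIZE 4096UL

/* Mirror of the map_private_extent_buffer() decision: 0 if the range
 * [start, start + min_len) sits inside one page, 1 if it spans two,
 * -1 (standing in for -EINVAL) if it runs past eb_len. */
static int map_decision(unsigned long start_offset, unsigned long start,
			unsigned long min_len, unsigned long eb_len)
{
	unsigned long i = (start_offset + start) / SKETCH_PAGE_SIZE;
	unsigned long end_i = (start_offset + start + min_len - 1) /
			      SKETCH_PAGE_SIZE;

	if (start + min_len > eb_len)
		return -1;
	if (i != end_i)
		return 1;
	return 0;
}

int main(void)
{
	/* A 16K buffer that starts exactly at a page boundary. */
	printf("%d\n", map_decision(0, 100, 8, 16384));	/* 0: one page  */
	printf("%d\n", map_decision(0, 4090, 16, 16384));	/* 1: two pages */
	printf("%d\n", map_decision(0, 16380, 8, 16384));	/* -1: past end */
	return 0;
}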

int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
			 unsigned long start, unsigned long len)
{
	size_t cur;
	size_t offset;
	struct page *page;
	char *kaddr;
	char *ptr = (char *)ptrv;
	size_t start_offset = offset_in_page(eb->start);
	unsigned long i = (start_offset + start) >> PAGE_SHIFT;
	int ret = 0;

	WARN_ON(start > eb->len);
	WARN_ON(start + len > eb->start + eb->len);

	offset = offset_in_page(start_offset + start);

	while (len > 0) {
		page = eb->pages[i];

		cur = min(len, (PAGE_SIZE - offset));

		kaddr = page_address(page);
		ret = memcmp(ptr, kaddr + offset, cur);
		if (ret)
			break;

		ptr += cur;
		len -= cur;
		offset = 0;
		i++;
	}
	return ret;
}

void write_extent_buffer_chunk_tree_uuid(struct extent_buffer *eb,
					 const void *srcv)
{
	char *kaddr;

	WARN_ON(!PageUptodate(eb->pages[0]));
	kaddr = page_address(eb->pages[0]);
	memcpy(kaddr + offsetof(struct btrfs_header, chunk_tree_uuid), srcv,
	       BTRFS_FSID_SIZE);
}

void write_extent_buffer_fsid(struct extent_buffer *eb, const void *srcv)
{
	char *kaddr;

	WARN_ON(!PageUptodate(eb->pages[0]));
	kaddr = page_address(eb->pages[0]);
	memcpy(kaddr + offsetof(struct btrfs_header, fsid), srcv,
	       BTRFS_FSID_SIZE);
}
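
Both helpers above patch a single field of the header that sits at the start of page 0, addressing it by offsetof() rather than by casting to the header type. A hedged userspace sketch of the same offsetof() pattern (struct sketch_header is an invented layout for illustration, not the real struct btrfs_header):

#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Made-up stand-in for struct btrfs_header: only the layout idea matters. */
struct sketch_header {
	unsigned char csum[32];
	unsigned char fsid[16];
	unsigned long long bytenr;
	unsigned char chunk_tree_uuid[16];
};

int main(void)
{
	unsigned char page0[4096] = { 0 };
	unsigned char fsid[16] = "0123456789abcde";

	/* Same pattern as write_extent_buffer_fsid(): write the field by
	 * its byte offset inside the header at the start of page 0. */
	memcpy(page0 + offsetof(struct sketch_header, fsid), fsid,
	       sizeof(fsid));

	printf("fsid lives at byte %zu of page 0\n",
	       offsetof(struct sketch_header, fsid));
	return 0;
}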

void write_extent_buffer(struct extent_buffer *eb, const void *srcv,
			 unsigned long start, unsigned long len)
{
	size_t cur;
	size_t offset;
	struct page *page;
	char *kaddr;
	char *src = (char *)srcv;
	size_t start_offset = offset_in_page(eb->start);
	unsigned long i = (start_offset + start) >> PAGE_SHIFT;

	WARN_ON(start > eb->len);
	WARN_ON(start + len > eb->start + eb->len);

	offset = offset_in_page(start_offset + start);

	while (len > 0) {
		page = eb->pages[i];
		WARN_ON(!PageUptodate(page));

		cur = min(len, PAGE_SIZE - offset);
		kaddr = page_address(page);
		memcpy(kaddr + offset, src, cur);

		src += cur;
		len -= cur;
		offset = 0;
		i++;
	}
}

void memzero_extent_buffer(struct extent_buffer *eb, unsigned long start,
			   unsigned long len)
{
	size_t cur;
	size_t offset;
	struct page *page;
	char *kaddr;
	size_t start_offset = offset_in_page(eb->start);
	unsigned long i = (start_offset + start) >> PAGE_SHIFT;

	WARN_ON(start > eb->len);
	WARN_ON(start + len > eb->start + eb->len);

	offset = offset_in_page(start_offset + start);

	while (len > 0) {
		page = eb->pages[i];
		WARN_ON(!PageUptodate(page));

		cur = min(len, PAGE_SIZE - offset);
		kaddr = page_address(page);
		memset(kaddr + offset, 0, cur);

		len -= cur;
		offset = 0;
		i++;
	}
}

void copy_extent_buffer_full(struct extent_buffer *dst,
			     struct extent_buffer *src)
{
	int i;
	int num_pages;

	ASSERT(dst->len == src->len);

	num_pages = num_extent_pages(dst);
	for (i = 0; i < num_pages; i++)
		copy_page(page_address(dst->pages[i]),
			  page_address(src->pages[i]));
}

void copy_extent_buffer(struct extent_buffer *dst, struct extent_buffer *src,
			unsigned long dst_offset, unsigned long src_offset,
			unsigned long len)
{
	u64 dst_len = dst->len;
	size_t cur;
	size_t offset;
	struct page *page;
	char *kaddr;
	size_t start_offset = offset_in_page(dst->start);
	unsigned long i = (start_offset + dst_offset) >> PAGE_SHIFT;

	WARN_ON(src->len != dst_len);

	offset = offset_in_page(start_offset + dst_offset);

	while (len > 0) {
		page = dst->pages[i];
		WARN_ON(!PageUptodate(page));

		cur = min(len, (unsigned long)(PAGE_SIZE - offset));

		kaddr = page_address(page);
		read_extent_buffer(src, kaddr + offset, src_offset, cur);

		src_offset += cur;
		len -= cur;
		offset = 0;
		i++;
	}
}

/*
 * eb_bitmap_offset() - calculate the page and offset of the byte containing the
 * given bit number
 * @eb: the extent buffer
 * @start: offset of the bitmap item in the extent buffer
 * @nr: bit number
 * @page_index: return index of the page in the extent buffer that contains the
 * given bit number
 * @page_offset: return offset into the page given by page_index
 *
 * This helper hides the ugliness of finding the byte in an extent buffer which
 * contains a given bit.
 */
static inline void eb_bitmap_offset(struct extent_buffer *eb,
				    unsigned long start, unsigned long nr,
				    unsigned long *page_index,
				    size_t *page_offset)
{
	size_t start_offset = offset_in_page(eb->start);
	size_t byte_offset = BIT_BYTE(nr);
	size_t offset;

	/*
	 * The byte we want is the offset of the extent buffer + the offset of
	 * the bitmap item in the extent buffer + the offset of the byte in the
	 * bitmap item.
	 */
	offset = start_offset + start + byte_offset;

	*page_index = offset >> PAGE_SHIFT;
	*page_offset = offset_in_page(offset);
}

/**
 * extent_buffer_test_bit - determine whether a bit in a bitmap item is set
 * @eb: the extent buffer
 * @start: offset of the bitmap item in the extent buffer
 * @nr: bit number to test
 */
int extent_buffer_test_bit(struct extent_buffer *eb, unsigned long start,
			   unsigned long nr)
{
	u8 *kaddr;
	struct page *page;
	unsigned long i;
	size_t offset;

	eb_bitmap_offset(eb, start, nr, &i, &offset);
	page = eb->pages[i];
	WARN_ON(!PageUptodate(page));
	kaddr = page_address(page);
	return 1U & (kaddr[offset] >> (nr & (BITS_PER_BYTE - 1)));
}
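
eb_bitmap_offset() reduces a (bitmap item offset, bit number) pair to a byte position, and extent_buffer_test_bit() then extracts bit (nr % BITS_PER_BYTE) of that byte. The same arithmetic over a flat byte array, as a runnable sketch (sketch_test_bit() and SKETCH_PAGE_SIZE are made up for illustration; the eb->start offset into the first page is ignored):

#include <stdio.h>

#define SKETCH_PAGE_SIZE 4096UL
#define BITS_PER_BYTE 8

/* Same math as eb_bitmap_offset() + extent_buffer_test_bit(), but over a
 * flat byte array instead of eb->pages[]. */
static int sketch_test_bit(const unsigned char *bytes, unsigned long start,
			   unsigned long nr)
{
	unsigned long byte = start + nr / BITS_PER_BYTE;
	unsigned long page_index = byte / SKETCH_PAGE_SIZE;
	unsigned long page_offset = byte % SKETCH_PAGE_SIZE;

	/* In the kernel, page_index/page_offset select eb->pages[i] and the
	 * byte within that page; a flat array needs no page lookup. */
	(void)page_index;
	(void)page_offset;
	return 1U & (bytes[byte] >> (nr & (BITS_PER_BYTE - 1)));
}

int main(void)
{
	unsigned char bitmap[16] = { 0 };

	bitmap[1] = 0x20;	/* bit 5 of byte 1, i.e. bit number 13 */
	printf("%d %d\n", sketch_test_bit(bitmap, 0, 13),
	       sketch_test_bit(bitmap, 0, 12));	/* prints "1 0" */
	return 0;
}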

/**
 * extent_buffer_bitmap_set - set an area of a bitmap
 * @eb: the extent buffer
 * @start: offset of the bitmap item in the extent buffer
 * @pos: bit number of the first bit
 * @len: number of bits to set
 */
void extent_buffer_bitmap_set(struct extent_buffer *eb, unsigned long start,
			      unsigned long pos, unsigned long len)
{
	u8 *kaddr;
	struct page *page;
	unsigned long i;
	size_t offset;
	const unsigned int size = pos + len;
	int bits_to_set = BITS_PER_BYTE - (pos % BITS_PER_BYTE);
	u8 mask_to_set = BITMAP_FIRST_BYTE_MASK(pos);

	eb_bitmap_offset(eb, start, pos, &i, &offset);
	page = eb->pages[i];
	WARN_ON(!PageUptodate(page));
	kaddr = page_address(page);

	while (len >= bits_to_set) {
		kaddr[offset] |= mask_to_set;
		len -= bits_to_set;
		bits_to_set = BITS_PER_BYTE;
		mask_to_set = ~0;
		if (++offset >= PAGE_SIZE && len > 0) {
			offset = 0;
			page = eb->pages[++i];
			WARN_ON(!PageUptodate(page));
			kaddr = page_address(page);
		}
	}
	if (len) {
		mask_to_set &= BITMAP_LAST_BYTE_MASK(size);
		kaddr[offset] |= mask_to_set;
	}
}
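
The loop above sets a partial leading byte through BITMAP_FIRST_BYTE_MASK(pos), whole bytes with ~0, and a partial trailing byte through BITMAP_LAST_BYTE_MASK(pos + len). A flat-array sketch of the same masking (FIRST_BYTE_MASK, LAST_BYTE_MASK and sketch_bitmap_set() are invented names that assume the least-significant-bit-first byte layout used here; the page-crossing kaddr reload of the kernel loop is left out):

#include <stdio.h>

#define BITS_PER_BYTE 8
/* Byte-granular masks in the spirit of BITMAP_FIRST_BYTE_MASK() and
 * BITMAP_LAST_BYTE_MASK(); bit 0 is the least significant bit of byte 0. */
#define FIRST_BYTE_MASK(pos)	((0xff << ((pos) & 7)) & 0xff)
#define LAST_BYTE_MASK(nbits)	(0xff >> (-(nbits) & 7))

/* Set 'len' bits starting at bit 'pos' of a flat bitmap, the same way
 * extent_buffer_bitmap_set() does: partial first byte, whole bytes, then
 * a partial last byte. */
static void sketch_bitmap_set(unsigned char *kaddr, unsigned long pos,
			      unsigned long len)
{
	const unsigned int size = pos + len;
	unsigned long offset = pos / BITS_PER_BYTE;
	int bits_to_set = BITS_PER_BYTE - (pos % BITS_PER_BYTE);
	unsigned char mask_to_set = FIRST_BYTE_MASK(pos);

	while (len >= bits_to_set) {
		kaddr[offset] |= mask_to_set;
		len -= bits_to_set;
		bits_to_set = BITS_PER_BYTE;
		mask_to_set = 0xff;
		offset++;
	}
	if (len) {
		mask_to_set &= LAST_BYTE_MASK(size);
		kaddr[offset] |= mask_to_set;
	}
}

int main(void)
{
	unsigned char bm[4] = { 0 };

	sketch_bitmap_set(bm, 5, 13);	/* bits 5..17 */
	printf("%02x %02x %02x\n", bm[0], bm[1], bm[2]);	/* e0 ff 03 */
	return 0;
}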

/**
 * extent_buffer_bitmap_clear - clear an area of a bitmap
 * @eb: the extent buffer
 * @start: offset of the bitmap item in the extent buffer
 * @pos: bit number of the first bit
 * @len: number of bits to clear
 */
void extent_buffer_bitmap_clear(struct extent_buffer *eb, unsigned long start,
				unsigned long pos, unsigned long len)
{
	u8 *kaddr;
	struct page *page;
	unsigned long i;
	size_t offset;
	const unsigned int size = pos + len;
	int bits_to_clear = BITS_PER_BYTE - (pos % BITS_PER_BYTE);
	u8 mask_to_clear = BITMAP_FIRST_BYTE_MASK(pos);

	eb_bitmap_offset(eb, start, pos, &i, &offset);
	page = eb->pages[i];
	WARN_ON(!PageUptodate(page));
	kaddr = page_address(page);

	while (len >= bits_to_clear) {
		kaddr[offset] &= ~mask_to_clear;
		len -= bits_to_clear;
		bits_to_clear = BITS_PER_BYTE;
		mask_to_clear = ~0;
		if (++offset >= PAGE_SIZE && len > 0) {
			offset = 0;
			page = eb->pages[++i];
			WARN_ON(!PageUptodate(page));
			kaddr = page_address(page);
		}
	}
	if (len) {
		mask_to_clear &= BITMAP_LAST_BYTE_MASK(size);
		kaddr[offset] &= ~mask_to_clear;
	}
}

static inline bool areas_overlap(unsigned long src, unsigned long dst, unsigned long len)
{
	unsigned long distance = (src > dst) ? src - dst : dst - src;
	return distance < len;
}

static void copy_pages(struct page *dst_page, struct page *src_page,
		       unsigned long dst_off, unsigned long src_off,
		       unsigned long len)
{
	char *dst_kaddr = page_address(dst_page);
	char *src_kaddr;
	int must_memmove = 0;

	if (dst_page != src_page) {
		src_kaddr = page_address(src_page);
	} else {
		src_kaddr = dst_kaddr;
		if (areas_overlap(src_off, dst_off, len))
			must_memmove = 1;
	}

	if (must_memmove)
		memmove(dst_kaddr + dst_off, src_kaddr + src_off, len);
	else
		memcpy(dst_kaddr + dst_off, src_kaddr + src_off, len);
}
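
copy_pages() only needs memmove() when source and destination live in the same page and areas_overlap() reports that the two ranges start closer together than their length; otherwise a plain memcpy() is safe. A small standalone illustration of that decision (sketch_areas_overlap() and the 32-byte page stand-in are made up for the example):

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Same test as areas_overlap(): two ranges of length 'len' overlap iff
 * their start offsets are closer together than 'len'. */
static bool sketch_areas_overlap(unsigned long src, unsigned long dst,
				 unsigned long len)
{
	unsigned long distance = (src > dst) ? src - dst : dst - src;

	return distance < len;
}

int main(void)
{
	char page[32] = "abcdefghijklmnopqrstuvwxyz";

	/* Overlapping copy inside one "page": memcpy() would be undefined
	 * behaviour here, so the overlap check forces memmove(). */
	if (sketch_areas_overlap(0, 4, 10))
		memmove(page + 4, page + 0, 10);
	else
		memcpy(page + 4, page + 0, 10);

	printf("%s\n", page);	/* abcdabcdefghijopqrstuvwxyz */
	return 0;
}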
void memcpy_extent_buffer(struct extent_buffer *dst, unsigned long dst_offset,
|
|
|
|
unsigned long src_offset, unsigned long len)
|
|
|
|
{
|
2016-06-23 06:54:23 +08:00
|
|
|
struct btrfs_fs_info *fs_info = dst->fs_info;
|
2008-01-25 05:13:08 +08:00
|
|
|
size_t cur;
|
|
|
|
size_t dst_off_in_page;
|
|
|
|
size_t src_off_in_page;
|
2018-12-05 22:23:03 +08:00
|
|
|
size_t start_offset = offset_in_page(dst->start);
|
2008-01-25 05:13:08 +08:00
|
|
|
unsigned long dst_i;
|
|
|
|
unsigned long src_i;
|
|
|
|
|
|
|
|
if (src_offset + len > dst->len) {
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_err(fs_info,
|
2016-09-20 22:05:00 +08:00
|
|
|
"memmove bogus src_offset %lu move len %lu dst len %lu",
|
|
|
|
src_offset, len, dst->len);
|
btrfs: use BUG() instead of BUG_ON(1)
BUG_ON(1) leads to bogus warnings from clang when
CONFIG_PROFILE_ANNOTATED_BRANCHES is set:
fs/btrfs/volumes.c:5041:3: error: variable 'max_chunk_size' is used uninitialized whenever 'if' condition is false
[-Werror,-Wsometimes-uninitialized]
BUG_ON(1);
^~~~~~~~~
include/asm-generic/bug.h:61:36: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^~~~~~~~~~~~~~~~~~~
include/linux/compiler.h:48:23: note: expanded from macro 'unlikely'
# define unlikely(x) (__branch_check__(x, 0, __builtin_constant_p(x)))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fs/btrfs/volumes.c:5046:9: note: uninitialized use occurs here
max_chunk_size);
^~~~~~~~~~~~~~
include/linux/kernel.h:860:36: note: expanded from macro 'min'
#define min(x, y) __careful_cmp(x, y, <)
^
include/linux/kernel.h:853:17: note: expanded from macro '__careful_cmp'
__cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op))
^
include/linux/kernel.h:847:25: note: expanded from macro '__cmp_once'
typeof(y) unique_y = (y); \
^
fs/btrfs/volumes.c:5041:3: note: remove the 'if' if its condition is always true
BUG_ON(1);
^
include/asm-generic/bug.h:61:32: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^
fs/btrfs/volumes.c:4993:20: note: initialize the variable 'max_chunk_size' to silence this warning
u64 max_chunk_size;
^
= 0
Change it to BUG() so clang can see that this code path can never
continue.
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David Sterba <dsterba@suse.com>
2019-03-25 21:02:25 +08:00
|
|
|
BUG();
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
if (dst_offset + len > dst->len) {
|
2016-06-23 06:54:23 +08:00
|
|
|
btrfs_err(fs_info,
|
2016-09-20 22:05:00 +08:00
|
|
|
"memmove bogus dst_offset %lu move len %lu dst len %lu",
|
|
|
|
dst_offset, len, dst->len);
|
btrfs: use BUG() instead of BUG_ON(1)
BUG_ON(1) leads to bogus warnings from clang when
CONFIG_PROFILE_ANNOTATED_BRANCHES is set:
fs/btrfs/volumes.c:5041:3: error: variable 'max_chunk_size' is used uninitialized whenever 'if' condition is false
[-Werror,-Wsometimes-uninitialized]
BUG_ON(1);
^~~~~~~~~
include/asm-generic/bug.h:61:36: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^~~~~~~~~~~~~~~~~~~
include/linux/compiler.h:48:23: note: expanded from macro 'unlikely'
# define unlikely(x) (__branch_check__(x, 0, __builtin_constant_p(x)))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fs/btrfs/volumes.c:5046:9: note: uninitialized use occurs here
max_chunk_size);
^~~~~~~~~~~~~~
include/linux/kernel.h:860:36: note: expanded from macro 'min'
#define min(x, y) __careful_cmp(x, y, <)
^
include/linux/kernel.h:853:17: note: expanded from macro '__careful_cmp'
__cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op))
^
include/linux/kernel.h:847:25: note: expanded from macro '__cmp_once'
typeof(y) unique_y = (y); \
^
fs/btrfs/volumes.c:5041:3: note: remove the 'if' if its condition is always true
BUG_ON(1);
^
include/asm-generic/bug.h:61:32: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^
fs/btrfs/volumes.c:4993:20: note: initialize the variable 'max_chunk_size' to silence this warning
u64 max_chunk_size;
^
= 0
Change it to BUG() so clang can see that this code path can never
continue.
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David Sterba <dsterba@suse.com>
2019-03-25 21:02:25 +08:00
|
|
|
BUG();
|
2008-01-25 05:13:08 +08:00
|
|
|
}
|
|
|
|
|
2009-01-06 10:25:51 +08:00
|
|
|
while (len > 0) {
|
2018-12-05 22:23:03 +08:00
|
|
|
dst_off_in_page = offset_in_page(start_offset + dst_offset);
|
|
|
|
src_off_in_page = offset_in_page(start_offset + src_offset);
|
2008-01-25 05:13:08 +08:00
|
|
|
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CAHCE_ALIGN definition: we are going to drop it later.
There are few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 20:29:47 +08:00
|
|
|
dst_i = (start_offset + dst_offset) >> PAGE_SHIFT;
|
|
|
|
src_i = (start_offset + src_offset) >> PAGE_SHIFT;
|
2008-01-25 05:13:08 +08:00
|
|
|
|

                cur = min(len, (unsigned long)(PAGE_SIZE -
                                               src_off_in_page));
                cur = min_t(unsigned long, cur,
                            (unsigned long)(PAGE_SIZE - dst_off_in_page));

                copy_pages(dst->pages[dst_i], dst->pages[src_i],
                           dst_off_in_page, src_off_in_page, cur);

                src_offset += cur;
                dst_offset += cur;
                len -= cur;
        }
}

void memmove_extent_buffer(struct extent_buffer *dst, unsigned long dst_offset,
                           unsigned long src_offset, unsigned long len)
{
        struct btrfs_fs_info *fs_info = dst->fs_info;
        size_t cur;
        size_t dst_off_in_page;
        size_t src_off_in_page;
        unsigned long dst_end = dst_offset + len - 1;
        unsigned long src_end = src_offset + len - 1;
        size_t start_offset = offset_in_page(dst->start);
        unsigned long dst_i;
        unsigned long src_i;

        if (src_offset + len > dst->len) {
                btrfs_err(fs_info,
                          "memmove bogus src_offset %lu move len %lu len %lu",
                          src_offset, len, dst->len);
btrfs: use BUG() instead of BUG_ON(1)
BUG_ON(1) leads to bogus warnings from clang when
CONFIG_PROFILE_ANNOTATED_BRANCHES is set:
fs/btrfs/volumes.c:5041:3: error: variable 'max_chunk_size' is used uninitialized whenever 'if' condition is false
[-Werror,-Wsometimes-uninitialized]
BUG_ON(1);
^~~~~~~~~
include/asm-generic/bug.h:61:36: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^~~~~~~~~~~~~~~~~~~
include/linux/compiler.h:48:23: note: expanded from macro 'unlikely'
# define unlikely(x) (__branch_check__(x, 0, __builtin_constant_p(x)))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fs/btrfs/volumes.c:5046:9: note: uninitialized use occurs here
max_chunk_size);
^~~~~~~~~~~~~~
include/linux/kernel.h:860:36: note: expanded from macro 'min'
#define min(x, y) __careful_cmp(x, y, <)
^
include/linux/kernel.h:853:17: note: expanded from macro '__careful_cmp'
__cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op))
^
include/linux/kernel.h:847:25: note: expanded from macro '__cmp_once'
typeof(y) unique_y = (y); \
^
fs/btrfs/volumes.c:5041:3: note: remove the 'if' if its condition is always true
BUG_ON(1);
^
include/asm-generic/bug.h:61:32: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^
fs/btrfs/volumes.c:4993:20: note: initialize the variable 'max_chunk_size' to silence this warning
u64 max_chunk_size;
^
= 0
Change it to BUG() so clang can see that this code path can never
continue.
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David Sterba <dsterba@suse.com>
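The failure mode is easier to see in a stand-alone sketch. The following
hypothetical C program (abort() stands in for the kernel's BUG(); the function
and macro names are invented for illustration) shows why an always-true
BUG_ON() can hide the fact that a branch never falls through, while a direct
noreturn call makes the dead end obvious to the compiler:

        #include <stdlib.h>
        #include <stdio.h>

        /* abort() is declared noreturn, so the compiler knows this path ends. */
        #define SIMPLE_BUG()        abort()
        /* Once the condition is wrapped in instrumentation (as unlikely() is
         * under CONFIG_PROFILE_ANNOTATED_BRANCHES), it no longer folds to a
         * constant and the "if" looks like it might fall through. */
        #define SIMPLE_BUG_ON(cond) do { if (cond) abort(); } while (0)

        static unsigned long pick_chunk_size(int type)
        {
                unsigned long max_chunk_size;

                if (type == 0)
                        max_chunk_size = 256;
                else
                        SIMPLE_BUG();   /* with SIMPLE_BUG_ON(1) here, max_chunk_size
                                           can look uninitialized to the compiler */

                return max_chunk_size;
        }

        int main(void)
        {
                printf("%lu\n", pick_chunk_size(0));
                return 0;
        }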
                BUG();
        }
        if (dst_offset + len > dst->len) {
                btrfs_err(fs_info,
                          "memmove bogus dst_offset %lu move len %lu len %lu",
                          dst_offset, len, dst->len);
                BUG();
        }

        if (dst_offset < src_offset) {
                memcpy_extent_buffer(dst, dst_offset, src_offset, len);
                return;
        }
        while (len > 0) {
                dst_i = (start_offset + dst_end) >> PAGE_SHIFT;
                src_i = (start_offset + src_end) >> PAGE_SHIFT;

                dst_off_in_page = offset_in_page(start_offset + dst_end);
                src_off_in_page = offset_in_page(start_offset + src_end);

                cur = min_t(unsigned long, len, src_off_in_page + 1);
                cur = min(cur, dst_off_in_page + 1);
                copy_pages(dst->pages[dst_i], dst->pages[src_i],
                           dst_off_in_page - cur + 1,
                           src_off_in_page - cur + 1, cur);

                dst_end -= cur;
                src_end -= cur;
                len -= cur;
        }
}

int try_release_extent_buffer(struct page *page)
{
        struct extent_buffer *eb;

        /*
         * We need to make sure nobody is attaching this page to an eb right
         * now.
         */
        spin_lock(&page->mapping->private_lock);
        if (!PagePrivate(page)) {
                spin_unlock(&page->mapping->private_lock);
                return 1;
        }

        eb = (struct extent_buffer *)page->private;
        BUG_ON(!eb);

        /*
         * This is a little awful but should be ok, we need to make sure that
         * the eb doesn't disappear out from under us while we're looking at
         * this page.
         */
        spin_lock(&eb->refs_lock);
        if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
                spin_unlock(&eb->refs_lock);
                spin_unlock(&page->mapping->private_lock);
                return 0;
        }
        spin_unlock(&page->mapping->private_lock);

        /*
         * If tree ref isn't set then we know the ref on this eb is a real ref,
         * so just return, this page will likely be freed soon anyway.
         */
        if (!test_and_clear_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)) {
                spin_unlock(&eb->refs_lock);
                return 0;
        }

        return release_extent_buffer(eb);
}