f2fs-for-4.18-rc1

In this round, we've mainly focused on discard, aka unmap, control along with fstrim for the Android-specific usage model. In addition, we've fixed the writepage flow, which previously returned EAGAIN and resulted in EIO from fsync(2) due to the mapping's error state. In order to avoid an old MM bug [1], we decided not to use __GFP_ZERO for the node and meta page cache mappings. As always, we've cleaned up many places in preparation for fsverity and to avoid symbol conflicts.

Enhancements:
 - do discard/fstrim at lower priority, considering fs utilization
 - split large discard commands into smaller ones for better responsiveness
 - add more sanity checks to address syzbot reports
 - add a mount option, fsync_mode=nobarrier, which can reduce the number of cache flushes
 - clean up the symbol namespace with renamed functions
 - be strict about block allocation and IO control in corner cases

Bug fixes:
 - don't use __GFP_ZERO for mappings
 - fix error reporting in writepage to avoid fsync() failures
 - avoid a selinux denial on CAP_RESOURCE for resgid/resuid
 - fix some subtle race conditions in GC/atomic writes/shutdown
 - fix overflow bugs in sanity_check_raw_super
 - fix missing bits in get_flags

Clean-ups:
 - prepare the generic flow for future fsverity integration
 - fix some broken coding style

[1] https://lkml.org/lkml/2018/4/8/661

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEE00UqedjCtOrGVvQiQBSofoJIUNIFAlsepb8ACgkQQBSofoJI
UNJdSw/+IhrYJFkJEN/pV4M5xSjYirl/P2WJ4AGi6HcpjEGmaDiBi2whod1Jw2NE
1auSMiby7K91VAmPvxMmmLhOdC8XgJ8jwY1nEaZMfmMXohlaD3FDY5bzYf5rJDF4
J184P6xUZ2IKlFVA4prwNQgYi3awPthVu1lxbFPp8GUHDbmr5ZXEysxPDzz2O0Em
oE7WmklmyCHJPhmg/EcVXfF/Ekf3zMOVR+EI2otcDjnWIQioVetIK8CKi0MM4bkG
X8Z318ANjGTd42woupXIzsiTrMRONY7zzkUvE+S6tfUjKZoIdofDM5OIXMdOxpxL
DZ53WrwfeB74igD8jDZgqD6OaonIfDfCuKrwUASFAC2Ou4h3apj3ckUzoHtAhEUL
z5yTSKTrtfuoSufhBp+nKKs3ijDgms76arw8x/pPdN6D6xDwIJtBPxC2sObPaj35
damv4GyM4+sbhGO/Gbie2q6za55IvYFZc7JNCC2D2K5tnBmUaa7/XdvxcyigniGk
AZgkaddHePkAZpa5AYYirZR8bd7IFds0+m6VcybG0/pYb0qPEcI6U4mujBSCIwVy
kXuD7su3jNjj6hWnCl5PSQo8yBWS5H8c6/o+5XHozzYA91dsLAmD8entuCreg6Hp
NaIFio0qKULweLK86f66qQTsRPMpYRAtqPS0Ew0+3llKMcrlRp4=
=JrW7
-----END PGP SIGNATURE-----

Merge tag 'f2fs-for-4.18' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
 "In this round, we've mainly focused on discard, aka unmap, control along
  with fstrim for the Android-specific usage model. In addition, we've
  fixed the writepage flow, which previously returned EAGAIN and resulted
  in EIO from fsync(2) due to the mapping's error state. In order to avoid
  an old MM bug [1], we decided not to use __GFP_ZERO for the node and
  meta page cache mappings. As always, we've cleaned up many places in
  preparation for fsverity and to avoid symbol conflicts.

  Enhancements:
   - do discard/fstrim at lower priority, considering fs utilization
   - split large discard commands into smaller ones for better responsiveness
   - add more sanity checks to address syzbot reports
   - add a mount option, fsync_mode=nobarrier, which can reduce the number of cache flushes
   - clean up the symbol namespace with renamed functions
   - be strict about block allocation and IO control in corner cases

  Bug fixes:
   - don't use __GFP_ZERO for mappings
   - fix error reporting in writepage to avoid fsync() failures
   - avoid a selinux denial on CAP_RESOURCE for resgid/resuid
   - fix some subtle race conditions in GC/atomic writes/shutdown
   - fix overflow bugs in sanity_check_raw_super
   - fix missing bits in get_flags

  Clean-ups:
   - prepare the generic flow for future fsverity integration
   - fix some broken coding style"

[1] https://lkml.org/lkml/2018/4/8/661

* tag 'f2fs-for-4.18' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (79 commits)
  f2fs: fix to clear FI_VOLATILE_FILE correctly
  f2fs: let sync node IO interrupt async one
  f2fs: don't change wbc->sync_mode
  f2fs: fix to update mtime correctly
  fs: f2fs: insert space around that ':' and ', '
  fs: f2fs: add missing blank lines after declarations
  fs: f2fs: changed variable type of offset "unsigned" to "loff_t"
  f2fs: clean up symbol namespace
  f2fs: make set_de_type() static
  f2fs: make __f2fs_write_data_pages() static
  f2fs: fix to avoid accessing cross the boundary
  f2fs: fix to let caller retry allocating block address
  disable loading f2fs module on PAGE_SIZE > 4KB
  f2fs: fix error path of move_data_page
  f2fs: don't drop dentry pages after fs shutdown
  f2fs: fix to avoid race during access gc_thread pointer
  f2fs: clean up with clear_radix_tree_dirty_tag
  f2fs: fix to don't trigger writeback during recovery
  f2fs: clear discard_wake earlier
  f2fs: let discard thread wait a little longer if dev is busy
  ...
This commit is contained in:

commit d54d35c501
@@ -101,6 +101,7 @@ Date:		February 2015
 Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
 Description:
 		Controls the trimming rate in batch mode.
+		<deprecated>
 
 What:		/sys/fs/f2fs/<disk>/cp_interval
 Date:		October 2015
@@ -140,7 +141,7 @@ Contact:	"Shuoran Liu" <liushuoran@huawei.com>
 Description:
 		Shows total written kbytes issued to disk.
 
-What:		/sys/fs/f2fs/<disk>/feature
+What:		/sys/fs/f2fs/<disk>/features
 Date:		July 2017
 Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
 Description:
@@ -182,13 +182,15 @@ whint_mode=%s	Control which write hints are passed down to block
 			passes down hints with its policy.
 alloc_mode=%s		Adjust block allocation policy, which supports "reuse"
 			and "default".
-fsync_mode=%s		Control the policy of fsync. Currently supports "posix"
-			and "strict". In "posix" mode, which is default, fsync
-			will follow POSIX semantics and does a light operation
-			to improve the filesystem performance. In "strict" mode,
-			fsync will be heavy and behaves in line with xfs, ext4
-			and btrfs, where xfstest generic/342 will pass, but the
-			performance will regress.
+fsync_mode=%s		Control the policy of fsync. Currently supports "posix",
+			"strict", and "nobarrier". In "posix" mode, which is
+			default, fsync will follow POSIX semantics and does a
+			light operation to improve the filesystem performance.
+			In "strict" mode, fsync will be heavy and behaves in line
+			with xfs, ext4 and btrfs, where xfstest generic/342 will
+			pass, but the performance will regress. "nobarrier" is
+			based on "posix", but doesn't issue flush command for
+			non-atomic files likewise "nobarrier" mount option.
 test_dummy_encryption	Enable dummy encryption, which provides a fake fscrypt
 			context. The fake fscrypt context is used by xfstests.
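As a usage illustration of the documented option, a "nobarrier" fsync policy would be requested at mount time roughly like this (the device and mountpoint names are hypothetical, not taken from this patch):

```
# Hypothetical device and mountpoint. fsync_mode=nobarrier follows the
# "posix" fsync policy but skips the cache-flush command for non-atomic
# files, trading flush-level durability for fewer device flushes.
mount -t f2fs -o fsync_mode=nobarrier /dev/mmcblk0p30 /data
```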
@@ -26,15 +26,8 @@
 #include <linux/namei.h>
 #include "fscrypt_private.h"
 
-/*
- * Call fscrypt_decrypt_page on every single page, reusing the encryption
- * context.
- */
-static void completion_pages(struct work_struct *work)
+static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
 {
-	struct fscrypt_ctx *ctx =
-		container_of(work, struct fscrypt_ctx, r.work);
-	struct bio *bio = ctx->r.bio;
 	struct bio_vec *bv;
 	int i;
@@ -46,22 +39,38 @@ static void completion_pages(struct work_struct *work)
 		if (ret) {
 			WARN_ON_ONCE(1);
 			SetPageError(page);
-		} else {
+		} else if (done) {
 			SetPageUptodate(page);
 		}
-		unlock_page(page);
+		if (done)
+			unlock_page(page);
 	}
+}
+
+void fscrypt_decrypt_bio(struct bio *bio)
+{
+	__fscrypt_decrypt_bio(bio, false);
+}
+EXPORT_SYMBOL(fscrypt_decrypt_bio);
+
+static void completion_pages(struct work_struct *work)
+{
+	struct fscrypt_ctx *ctx =
+		container_of(work, struct fscrypt_ctx, r.work);
+	struct bio *bio = ctx->r.bio;
+
+	__fscrypt_decrypt_bio(bio, true);
 	fscrypt_release_ctx(ctx);
 	bio_put(bio);
 }
 
-void fscrypt_decrypt_bio_pages(struct fscrypt_ctx *ctx, struct bio *bio)
+void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
 {
 	INIT_WORK(&ctx->r.work, completion_pages);
 	ctx->r.bio = bio;
-	queue_work(fscrypt_read_workqueue, &ctx->r.work);
+	fscrypt_enqueue_decrypt_work(&ctx->r.work);
 }
-EXPORT_SYMBOL(fscrypt_decrypt_bio_pages);
+EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);
 
 void fscrypt_pullback_bio_page(struct page **page, bool restore)
 {
@@ -45,12 +45,18 @@ static mempool_t *fscrypt_bounce_page_pool = NULL;
 static LIST_HEAD(fscrypt_free_ctxs);
 static DEFINE_SPINLOCK(fscrypt_ctx_lock);
 
-struct workqueue_struct *fscrypt_read_workqueue;
+static struct workqueue_struct *fscrypt_read_workqueue;
 static DEFINE_MUTEX(fscrypt_init_mutex);
 
 static struct kmem_cache *fscrypt_ctx_cachep;
 struct kmem_cache *fscrypt_info_cachep;
 
+void fscrypt_enqueue_decrypt_work(struct work_struct *work)
+{
+	queue_work(fscrypt_read_workqueue, work);
+}
+EXPORT_SYMBOL(fscrypt_enqueue_decrypt_work);
+
 /**
  * fscrypt_release_ctx() - Releases an encryption context
  * @ctx: The encryption context to release.
@@ -93,7 +93,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
 /* crypto.c */
 extern struct kmem_cache *fscrypt_info_cachep;
 extern int fscrypt_initialize(unsigned int cop_flags);
-extern struct workqueue_struct *fscrypt_read_workqueue;
 extern int fscrypt_do_page_crypto(const struct inode *inode,
 				fscrypt_direction_t rw, u64 lblk_num,
 				struct page *src_page,
@@ -77,7 +77,7 @@ static void mpage_end_io(struct bio *bio)
 	if (bio->bi_status) {
 		fscrypt_release_ctx(bio->bi_private);
 	} else {
-		fscrypt_decrypt_bio_pages(bio->bi_private, bio);
+		fscrypt_enqueue_decrypt_bio(bio->bi_private, bio);
 		return;
 	}
 }
@@ -24,7 +24,7 @@
 #include <trace/events/f2fs.h>
 
 static struct kmem_cache *ino_entry_slab;
-struct kmem_cache *inode_entry_slab;
+struct kmem_cache *f2fs_inode_entry_slab;
 
 void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io)
 {
@@ -36,7 +36,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io)
 /*
  * We guarantee no failure on the returned page.
  */
-struct page *grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	struct address_space *mapping = META_MAPPING(sbi);
 	struct page *page = NULL;
@@ -100,24 +100,27 @@ repeat:
 	 * readonly and make sure do not write checkpoint with non-uptodate
 	 * meta page.
 	 */
-	if (unlikely(!PageUptodate(page)))
+	if (unlikely(!PageUptodate(page))) {
+		memset(page_address(page), 0, PAGE_SIZE);
 		f2fs_stop_checkpoint(sbi, false);
+	}
 out:
 	return page;
 }
 
-struct page *get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	return __get_meta_page(sbi, index, true);
 }
 
 /* for POR only */
-struct page *get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	return __get_meta_page(sbi, index, false);
 }
 
-bool is_valid_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr, int type)
+bool f2fs_is_valid_meta_blkaddr(struct f2fs_sb_info *sbi,
+					block_t blkaddr, int type)
 {
 	switch (type) {
 	case META_NAT:
@@ -151,7 +154,7 @@ bool is_valid_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr, int type)
 /*
  * Readahead CP/NAT/SIT/SSA pages
  */
-int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
+int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 							int type, bool sync)
 {
 	struct page *page;
@@ -173,7 +176,7 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 	blk_start_plug(&plug);
 	for (; nrpages-- > 0; blkno++) {
 
-		if (!is_valid_blkaddr(sbi, blkno, type))
+		if (!f2fs_is_valid_meta_blkaddr(sbi, blkno, type))
 			goto out;
 
 		switch (type) {
@@ -217,7 +220,7 @@ out:
 	return blkno - start;
 }
 
-void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
+void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	struct page *page;
 	bool readahead = false;
@@ -228,7 +231,7 @@ void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
 	f2fs_put_page(page, 0);
 
 	if (readahead)
-		ra_meta_pages(sbi, index, BIO_MAX_PAGES, META_POR, true);
+		f2fs_ra_meta_pages(sbi, index, BIO_MAX_PAGES, META_POR, true);
 }
 
 static int __f2fs_write_meta_page(struct page *page,
@@ -249,7 +252,7 @@ static int __f2fs_write_meta_page(struct page *page,
 	if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
 		goto redirty_out;
 
-	write_meta_page(sbi, page, io_type);
+	f2fs_do_write_meta_page(sbi, page, io_type);
 	dec_page_count(sbi, F2FS_DIRTY_META);
 
 	if (wbc->for_reclaim)
@@ -294,7 +297,7 @@ static int f2fs_write_meta_pages(struct address_space *mapping,
 
 	trace_f2fs_writepages(mapping->host, wbc, META);
 	diff = nr_pages_to_write(sbi, META, wbc);
-	written = sync_meta_pages(sbi, META, wbc->nr_to_write, FS_META_IO);
+	written = f2fs_sync_meta_pages(sbi, META, wbc->nr_to_write, FS_META_IO);
 	mutex_unlock(&sbi->cp_mutex);
 	wbc->nr_to_write = max((long)0, wbc->nr_to_write - written - diff);
 	return 0;
@@ -305,7 +308,7 @@ skip_write:
 	return 0;
 }
 
-long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
+long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
 				long nr_to_write, enum iostat_type io_type)
 {
 	struct address_space *mapping = META_MAPPING(sbi);
@@ -382,7 +385,7 @@ static int f2fs_set_meta_page_dirty(struct page *page)
 	if (!PageUptodate(page))
 		SetPageUptodate(page);
 	if (!PageDirty(page)) {
-		f2fs_set_page_dirty_nobuffers(page);
+		__set_page_dirty_nobuffers(page);
 		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_META);
 		SetPagePrivate(page);
 		f2fs_trace_pid(page);
@@ -455,20 +458,20 @@ static void __remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
 	spin_unlock(&im->ino_lock);
 }
 
-void add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
+void f2fs_add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
 {
 	/* add new dirty ino entry into list */
 	__add_ino_entry(sbi, ino, 0, type);
 }
 
-void remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
+void f2fs_remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
 {
 	/* remove dirty ino entry from list */
 	__remove_ino_entry(sbi, ino, type);
 }
 
 /* mode should be APPEND_INO or UPDATE_INO */
-bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
+bool f2fs_exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
 {
 	struct inode_management *im = &sbi->im[mode];
 	struct ino_entry *e;
@@ -479,7 +482,7 @@ bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
 	return e ? true : false;
 }
 
-void release_ino_entry(struct f2fs_sb_info *sbi, bool all)
+void f2fs_release_ino_entry(struct f2fs_sb_info *sbi, bool all)
 {
 	struct ino_entry *e, *tmp;
 	int i;
@@ -498,13 +501,13 @@ void release_ino_entry(struct f2fs_sb_info *sbi, bool all)
 	}
 }
 
-void set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+void f2fs_set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
 					unsigned int devidx, int type)
 {
 	__add_ino_entry(sbi, ino, devidx, type);
 }
 
-bool is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+bool f2fs_is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
 					unsigned int devidx, int type)
 {
 	struct inode_management *im = &sbi->im[type];
@@ -519,7 +522,7 @@ bool is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
 	return is_dirty;
 }
 
-int acquire_orphan_inode(struct f2fs_sb_info *sbi)
+int f2fs_acquire_orphan_inode(struct f2fs_sb_info *sbi)
 {
 	struct inode_management *im = &sbi->im[ORPHAN_INO];
 	int err = 0;
@@ -542,7 +545,7 @@ int acquire_orphan_inode(struct f2fs_sb_info *sbi)
 	return err;
 }
 
-void release_orphan_inode(struct f2fs_sb_info *sbi)
+void f2fs_release_orphan_inode(struct f2fs_sb_info *sbi)
 {
 	struct inode_management *im = &sbi->im[ORPHAN_INO];
 
@@ -552,14 +555,14 @@ void release_orphan_inode(struct f2fs_sb_info *sbi)
 	spin_unlock(&im->ino_lock);
 }
 
-void add_orphan_inode(struct inode *inode)
+void f2fs_add_orphan_inode(struct inode *inode)
 {
 	/* add new orphan ino entry into list */
 	__add_ino_entry(F2FS_I_SB(inode), inode->i_ino, 0, ORPHAN_INO);
-	update_inode_page(inode);
+	f2fs_update_inode_page(inode);
 }
 
-void remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
+void f2fs_remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	/* remove orphan entry from orphan list */
 	__remove_ino_entry(sbi, ino, ORPHAN_INO);
@@ -569,7 +572,7 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	struct inode *inode;
 	struct node_info ni;
-	int err = acquire_orphan_inode(sbi);
+	int err = f2fs_acquire_orphan_inode(sbi);
 
 	if (err)
 		goto err_out;
@@ -587,16 +590,17 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 	}
 
 	err = dquot_initialize(inode);
-	if (err)
+	if (err) {
+		iput(inode);
 		goto err_out;
+	}
 
-	dquot_initialize(inode);
 	clear_nlink(inode);
 
 	/* truncate all the data during iput */
 	iput(inode);
 
-	get_node_info(sbi, ino, &ni);
+	f2fs_get_node_info(sbi, ino, &ni);
 
 	/* ENOMEM was fully retried in f2fs_evict_inode. */
 	if (ni.blk_addr != NULL_ADDR) {
@@ -614,7 +618,7 @@ err_out:
 	return err;
 }
 
-int recover_orphan_inodes(struct f2fs_sb_info *sbi)
+int f2fs_recover_orphan_inodes(struct f2fs_sb_info *sbi)
 {
 	block_t start_blk, orphan_blocks, i, j;
 	unsigned int s_flags = sbi->sb->s_flags;
@@ -642,10 +646,10 @@ int recover_orphan_inodes(struct f2fs_sb_info *sbi)
 	start_blk = __start_cp_addr(sbi) + 1 + __cp_payload(sbi);
 	orphan_blocks = __start_sum_addr(sbi) - 1 - __cp_payload(sbi);
 
-	ra_meta_pages(sbi, start_blk, orphan_blocks, META_CP, true);
+	f2fs_ra_meta_pages(sbi, start_blk, orphan_blocks, META_CP, true);
 
 	for (i = 0; i < orphan_blocks; i++) {
-		struct page *page = get_meta_page(sbi, start_blk + i);
+		struct page *page = f2fs_get_meta_page(sbi, start_blk + i);
 		struct f2fs_orphan_block *orphan_blk;
 
 		orphan_blk = (struct f2fs_orphan_block *)page_address(page);
@@ -695,7 +699,7 @@ static void write_orphan_inodes(struct f2fs_sb_info *sbi, block_t start_blk)
 	/* loop for each orphan inode entry and write them in Jornal block */
 	list_for_each_entry(orphan, head, list) {
 		if (!page) {
-			page = grab_meta_page(sbi, start_blk++);
+			page = f2fs_grab_meta_page(sbi, start_blk++);
 			orphan_blk =
 				(struct f2fs_orphan_block *)page_address(page);
 			memset(orphan_blk, 0, sizeof(*orphan_blk));
@@ -737,7 +741,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
 	size_t crc_offset = 0;
 	__u32 crc = 0;
 
-	*cp_page = get_meta_page(sbi, cp_addr);
+	*cp_page = f2fs_get_meta_page(sbi, cp_addr);
 	*cp_block = (struct f2fs_checkpoint *)page_address(*cp_page);
 
 	crc_offset = le32_to_cpu((*cp_block)->checksum_offset);
@@ -790,7 +794,7 @@ invalid_cp1:
 	return NULL;
 }
 
-int get_valid_checkpoint(struct f2fs_sb_info *sbi)
+int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi)
 {
 	struct f2fs_checkpoint *cp_block;
 	struct f2fs_super_block *fsb = sbi->raw_super;
@@ -834,7 +838,7 @@ int get_valid_checkpoint(struct f2fs_sb_info *sbi)
 	memcpy(sbi->ckpt, cp_block, blk_size);
 
 	/* Sanity checking of checkpoint */
-	if (sanity_check_ckpt(sbi))
+	if (f2fs_sanity_check_ckpt(sbi))
 		goto free_fail_no_cp;
 
 	if (cur_page == cp1)
@@ -853,7 +857,7 @@ int get_valid_checkpoint(struct f2fs_sb_info *sbi)
 		void *sit_bitmap_ptr;
 		unsigned char *ckpt = (unsigned char *)sbi->ckpt;
 
-		cur_page = get_meta_page(sbi, cp_blk_no + i);
+		cur_page = f2fs_get_meta_page(sbi, cp_blk_no + i);
 		sit_bitmap_ptr = page_address(cur_page);
 		memcpy(ckpt + i * blk_size, sit_bitmap_ptr, blk_size);
 		f2fs_put_page(cur_page, 1);
@@ -898,7 +902,7 @@ static void __remove_dirty_inode(struct inode *inode, enum inode_type type)
 	stat_dec_dirty_inode(F2FS_I_SB(inode), type);
 }
 
-void update_dirty_page(struct inode *inode, struct page *page)
+void f2fs_update_dirty_page(struct inode *inode, struct page *page)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	enum inode_type type = S_ISDIR(inode->i_mode) ? DIR_INODE : FILE_INODE;
@@ -917,7 +921,7 @@ void update_dirty_page(struct inode *inode, struct page *page)
 	f2fs_trace_pid(page);
 }
 
-void remove_dirty_inode(struct inode *inode)
+void f2fs_remove_dirty_inode(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	enum inode_type type = S_ISDIR(inode->i_mode) ? DIR_INODE : FILE_INODE;
@@ -934,7 +938,7 @@ void remove_dirty_inode(struct inode *inode)
 	spin_unlock(&sbi->inode_lock[type]);
 }
 
-int sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
+int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
 {
 	struct list_head *head;
 	struct inode *inode;
@@ -1017,7 +1021,7 @@ int f2fs_sync_inode_meta(struct f2fs_sb_info *sbi)
 
 		/* it's on eviction */
 		if (is_inode_flag_set(inode, FI_DIRTY_INODE))
-			update_inode_page(inode);
+			f2fs_update_inode_page(inode);
 		iput(inode);
 	}
 }
@@ -1057,7 +1061,7 @@ retry_flush_dents:
 	/* write all the dirty dentry pages */
 	if (get_pages(sbi, F2FS_DIRTY_DENTS)) {
 		f2fs_unlock_all(sbi);
-		err = sync_dirty_inodes(sbi, DIR_INODE);
+		err = f2fs_sync_dirty_inodes(sbi, DIR_INODE);
 		if (err)
 			goto out;
 		cond_resched();
@@ -1085,7 +1089,9 @@ retry_flush_nodes:
 
 	if (get_pages(sbi, F2FS_DIRTY_NODES)) {
 		up_write(&sbi->node_write);
-		err = sync_node_pages(sbi, &wbc, false, FS_CP_NODE_IO);
+		atomic_inc(&sbi->wb_sync_req[NODE]);
+		err = f2fs_sync_node_pages(sbi, &wbc, false, FS_CP_NODE_IO);
+		atomic_dec(&sbi->wb_sync_req[NODE]);
 		if (err) {
 			up_write(&sbi->node_change);
 			f2fs_unlock_all(sbi);
@@ -1179,10 +1185,10 @@ static void commit_checkpoint(struct f2fs_sb_info *sbi,
 
 	/*
 	 * pagevec_lookup_tag and lock_page again will take
-	 * some extra time. Therefore, update_meta_pages and
-	 * sync_meta_pages are combined in this function.
+	 * some extra time. Therefore, f2fs_update_meta_pages and
+	 * f2fs_sync_meta_pages are combined in this function.
	 */
-	struct page *page = grab_meta_page(sbi, blk_addr);
+	struct page *page = f2fs_grab_meta_page(sbi, blk_addr);
 	int err;
 
 	memcpy(page_address(page), src, PAGE_SIZE);
@@ -1220,7 +1226,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 
 	/* Flush all the NAT/SIT pages */
 	while (get_pages(sbi, F2FS_DIRTY_META)) {
-		sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+		f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
 		if (unlikely(f2fs_cp_error(sbi)))
 			return -EIO;
 	}
@@ -1229,7 +1235,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	 * modify checkpoint
 	 * version number is already updated
 	 */
-	ckpt->elapsed_time = cpu_to_le64(get_mtime(sbi));
+	ckpt->elapsed_time = cpu_to_le64(get_mtime(sbi, true));
 	ckpt->free_segment_count = cpu_to_le32(free_segments(sbi));
 	for (i = 0; i < NR_CURSEG_NODE_TYPE; i++) {
 		ckpt->cur_node_segno[i] =
@@ -1249,7 +1255,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	}
 
 	/* 2 cp + n data seg summary + orphan inode blocks */
-	data_sum_blocks = npages_for_summary_flush(sbi, false);
+	data_sum_blocks = f2fs_npages_for_summary_flush(sbi, false);
 	spin_lock_irqsave(&sbi->cp_lock, flags);
 	if (data_sum_blocks < NR_CURSEG_DATA_TYPE)
 		__set_ckpt_flags(ckpt, CP_COMPACT_SUM_FLAG);
@@ -1294,22 +1300,23 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 
 		blk = start_blk + sbi->blocks_per_seg - nm_i->nat_bits_blocks;
 		for (i = 0; i < nm_i->nat_bits_blocks; i++)
-			update_meta_page(sbi, nm_i->nat_bits +
+			f2fs_update_meta_page(sbi, nm_i->nat_bits +
 					(i << F2FS_BLKSIZE_BITS), blk + i);
 
 		/* Flush all the NAT BITS pages */
 		while (get_pages(sbi, F2FS_DIRTY_META)) {
-			sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+			f2fs_sync_meta_pages(sbi, META, LONG_MAX,
+							FS_CP_META_IO);
 			if (unlikely(f2fs_cp_error(sbi)))
 				return -EIO;
 		}
 	}
 
 	/* write out checkpoint buffer at block 0 */
-	update_meta_page(sbi, ckpt, start_blk++);
+	f2fs_update_meta_page(sbi, ckpt, start_blk++);
 
 	for (i = 1; i < 1 + cp_payload_blks; i++)
-		update_meta_page(sbi, (char *)ckpt + i * F2FS_BLKSIZE,
+		f2fs_update_meta_page(sbi, (char *)ckpt + i * F2FS_BLKSIZE,
 							start_blk++);
 
 	if (orphan_num) {
@@ -1317,7 +1324,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		start_blk += orphan_blocks;
 	}
 
-	write_data_summaries(sbi, start_blk);
+	f2fs_write_data_summaries(sbi, start_blk);
 	start_blk += data_sum_blocks;
 
 	/* Record write statistics in the hot node summary */
@@ -1328,7 +1335,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	seg_i->journal->info.kbytes_written = cpu_to_le64(kbytes_written);
 
 	if (__remain_node_summaries(cpc->reason)) {
-		write_node_summaries(sbi, start_blk);
+		f2fs_write_node_summaries(sbi, start_blk);
 		start_blk += NR_CURSEG_NODE_TYPE;
 	}
 
@@ -1337,7 +1344,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
 
 	/* Here, we have one bio having CP pack except cp pack 2 page */
-	sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
 
 	/* wait for previous submitted meta pages writeback */
 	wait_on_all_pages_writeback(sbi);
@@ -1354,7 +1361,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	commit_checkpoint(sbi, ckpt, start_blk);
 	wait_on_all_pages_writeback(sbi);
 
-	release_ino_entry(sbi, false);
+	f2fs_release_ino_entry(sbi, false);
 
 	if (unlikely(f2fs_cp_error(sbi)))
 		return -EIO;
@@ -1379,7 +1386,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 /*
  * We guarantee that this checkpoint procedure will not fail.
  */
-int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 {
 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
 	unsigned long long ckpt_ver;
@@ -1412,7 +1419,7 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 
 	/* this is the case of multiple fstrims without any changes */
 	if (cpc->reason & CP_DISCARD) {
-		if (!exist_trim_candidates(sbi, cpc)) {
+		if (!f2fs_exist_trim_candidates(sbi, cpc)) {
 			unblock_operations(sbi);
 			goto out;
 		}
@@ -1420,8 +1427,8 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		if (NM_I(sbi)->dirty_nat_cnt == 0 &&
 				SIT_I(sbi)->dirty_sentries == 0 &&
 				prefree_segments(sbi) == 0) {
-			flush_sit_entries(sbi, cpc);
-			clear_prefree_segments(sbi, cpc);
+			f2fs_flush_sit_entries(sbi, cpc);
+			f2fs_clear_prefree_segments(sbi, cpc);
 			unblock_operations(sbi);
 			goto out;
 		}
@@ -1436,15 +1443,15 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	ckpt->checkpoint_ver = cpu_to_le64(++ckpt_ver);
 
 	/* write cached NAT/SIT entries to NAT/SIT area */
-	flush_nat_entries(sbi, cpc);
-	flush_sit_entries(sbi, cpc);
+	f2fs_flush_nat_entries(sbi, cpc);
+	f2fs_flush_sit_entries(sbi, cpc);
 
 	/* unlock all the fs_lock[] in do_checkpoint() */
 	err = do_checkpoint(sbi, cpc);
 	if (err)
-		release_discard_addrs(sbi);
+		f2fs_release_discard_addrs(sbi);
 	else
-		clear_prefree_segments(sbi, cpc);
+		f2fs_clear_prefree_segments(sbi, cpc);
 
 	unblock_operations(sbi);
 	stat_inc_cp_count(sbi->stat_info);
@@ -1461,7 +1468,7 @@ out:
 	return err;
 }
 
-void init_ino_entry_info(struct f2fs_sb_info *sbi)
+void f2fs_init_ino_entry_info(struct f2fs_sb_info *sbi)
 {
 	int i;
 
@@ -1479,23 +1486,23 @@ void init_ino_entry_info(struct f2fs_sb_info *sbi)
 					F2FS_ORPHANS_PER_BLOCK;
 }
 
-int __init create_checkpoint_caches(void)
+int __init f2fs_create_checkpoint_caches(void)
 {
 	ino_entry_slab = f2fs_kmem_cache_create("f2fs_ino_entry",
 			sizeof(struct ino_entry));
 	if (!ino_entry_slab)
 		return -ENOMEM;
-	inode_entry_slab = f2fs_kmem_cache_create("f2fs_inode_entry",
+	f2fs_inode_entry_slab = f2fs_kmem_cache_create("f2fs_inode_entry",
 			sizeof(struct inode_entry));
-	if (!inode_entry_slab) {
+	if (!f2fs_inode_entry_slab) {
 		kmem_cache_destroy(ino_entry_slab);
 		return -ENOMEM;
 	}
 	return 0;
 }
 
-void destroy_checkpoint_caches(void)
+void f2fs_destroy_checkpoint_caches(void)
 {
 	kmem_cache_destroy(ino_entry_slab);
-	kmem_cache_destroy(inode_entry_slab);
+	kmem_cache_destroy(f2fs_inode_entry_slab);
 }
fs/f2fs/data.c (401 changed lines)
@@ -19,8 +19,6 @@
 #include <linux/bio.h>
 #include <linux/prefetch.h>
 #include <linux/uio.h>
-#include <linux/mm.h>
-#include <linux/memcontrol.h>
 #include <linux/cleancache.h>
 #include <linux/sched/signal.h>
 
@@ -30,6 +28,11 @@
 #include "trace.h"
 #include <trace/events/f2fs.h>
 
+#define NUM_PREALLOC_POST_READ_CTXS	128
+
+static struct kmem_cache *bio_post_read_ctx_cache;
+static mempool_t *bio_post_read_ctx_pool;
+
 static bool __is_cp_guaranteed(struct page *page)
 {
 	struct address_space *mapping = page->mapping;
@ -45,16 +48,84 @@ static bool __is_cp_guaranteed(struct page *page)
|
|||
if (inode->i_ino == F2FS_META_INO(sbi) ||
|
||||
inode->i_ino == F2FS_NODE_INO(sbi) ||
|
||||
S_ISDIR(inode->i_mode) ||
|
||||
(S_ISREG(inode->i_mode) &&
|
||||
is_inode_flag_set(inode, FI_ATOMIC_FILE)) ||
|
||||
is_cold_data(page))
|
||||
return true;
|
||||
return false;
|
||||
}
|
||||
|
||||
static void f2fs_read_end_io(struct bio *bio)
|
||||
/* postprocessing steps for read bios */
|
||||
enum bio_post_read_step {
|
||||
STEP_INITIAL = 0,
|
||||
STEP_DECRYPT,
|
||||
};
|
||||
|
||||
struct bio_post_read_ctx {
|
||||
struct bio *bio;
|
||||
struct work_struct work;
|
||||
unsigned int cur_step;
|
||||
unsigned int enabled_steps;
|
||||
};
|
||||
|
||||
static void __read_end_io(struct bio *bio)
|
||||
{
|
||||
struct bio_vec *bvec;
|
||||
struct page *page;
|
||||
struct bio_vec *bv;
|
||||
int i;
|
||||
|
||||
bio_for_each_segment_all(bv, bio, i) {
|
||||
page = bv->bv_page;
|
||||
|
||||
/* PG_error was set if any post_read step failed */
|
||||
if (bio->bi_status || PageError(page)) {
|
||||
ClearPageUptodate(page);
|
||||
SetPageError(page);
|
||||
} else {
|
||||
SetPageUptodate(page);
|
||||
}
|
||||
unlock_page(page);
|
||||
}
|
||||
if (bio->bi_private)
|
||||
mempool_free(bio->bi_private, bio_post_read_ctx_pool);
|
||||
bio_put(bio);
|
||||
}
|
||||
|
||||
static void bio_post_read_processing(struct bio_post_read_ctx *ctx);
|
||||
|
||||
static void decrypt_work(struct work_struct *work)
|
||||
{
|
||||
struct bio_post_read_ctx *ctx =
|
||||
container_of(work, struct bio_post_read_ctx, work);
|
||||
|
||||
fscrypt_decrypt_bio(ctx->bio);
|
||||
|
||||
bio_post_read_processing(ctx);
|
||||
}
|
||||
|
||||
static void bio_post_read_processing(struct bio_post_read_ctx *ctx)
|
||||
{
|
||||
switch (++ctx->cur_step) {
|
||||
case STEP_DECRYPT:
|
||||
if (ctx->enabled_steps & (1 << STEP_DECRYPT)) {
|
||||
INIT_WORK(&ctx->work, decrypt_work);
|
||||
fscrypt_enqueue_decrypt_work(&ctx->work);
|
||||
return;
|
||||
}
|
||||
ctx->cur_step++;
|
||||
/* fall-through */
|
||||
default:
|
||||
__read_end_io(ctx->bio);
|
||||
}
|
||||
}
|
||||
|
||||
static bool f2fs_bio_post_read_required(struct bio *bio)
|
||||
{
|
||||
return bio->bi_private && !bio->bi_status;
|
||||
}
|
||||
|
||||
static void f2fs_read_end_io(struct bio *bio)
|
||||
{
|
||||
#ifdef CONFIG_F2FS_FAULT_INJECTION
|
||||
if (time_to_inject(F2FS_P_SB(bio_first_page_all(bio)), FAULT_IO)) {
|
||||
f2fs_show_injection_info(FAULT_IO);
|
||||
|
@ -62,28 +133,15 @@ static void f2fs_read_end_io(struct bio *bio)
|
|||
}
|
||||
#endif
|
||||
|
||||
if (f2fs_bio_encrypted(bio)) {
|
||||
if (bio->bi_status) {
|
||||
fscrypt_release_ctx(bio->bi_private);
|
||||
} else {
|
||||
fscrypt_decrypt_bio_pages(bio->bi_private, bio);
|
||||
return;
|
||||
}
|
||||
if (f2fs_bio_post_read_required(bio)) {
|
||||
struct bio_post_read_ctx *ctx = bio->bi_private;
|
||||
|
||||
ctx->cur_step = STEP_INITIAL;
|
||||
bio_post_read_processing(ctx);
|
||||
return;
|
||||
}
|
||||
|
||||
bio_for_each_segment_all(bvec, bio, i) {
|
||||
struct page *page = bvec->bv_page;
|
||||
|
||||
if (!bio->bi_status) {
|
||||
if (!PageUptodate(page))
|
||||
SetPageUptodate(page);
|
||||
} else {
|
||||
ClearPageUptodate(page);
|
||||
SetPageError(page);
|
||||
}
|
||||
unlock_page(page);
|
||||
}
|
||||
bio_put(bio);
|
||||
__read_end_io(bio);
|
||||
}
|
||||
|
||||
static void f2fs_write_end_io(struct bio *bio)
|
||||
|
@ -189,7 +247,7 @@ static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
|
|||
} else {
|
||||
bio->bi_end_io = f2fs_write_end_io;
|
||||
bio->bi_private = sbi;
|
||||
bio->bi_write_hint = io_type_to_rw_hint(sbi, type, temp);
|
||||
bio->bi_write_hint = f2fs_io_type_to_rw_hint(sbi, type, temp);
|
||||
}
|
||||
if (wbc)
|
||||
wbc_init_bio(wbc, bio);
|
||||
|
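The refactor above replaces the single decrypt callback with a generic per-bio pipeline: each bio carries a context holding a bitmask of enabled post-read steps, and `bio_post_read_processing()` advances `cur_step`, dispatching deferred work for each enabled step and completing the bio once no step remains. A minimal userspace sketch of that state machine follows; the names, the synchronous dispatch, and the integer flags are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>

/* mirrors enum bio_post_read_step in the patch; STEP_INITIAL is a sentinel */
enum step { STEP_INITIAL = 0, STEP_DECRYPT, STEP_DONE };

struct ctx {
	unsigned int cur_step;      /* last step that ran */
	unsigned int enabled_steps; /* bitmask of steps this bio needs */
	int decrypted;              /* stands in for the queued decrypt work */
	int completed;              /* stands in for __read_end_io() */
};

/* advance to the next enabled step, or complete the bio */
static void post_read_processing(struct ctx *c)
{
	switch (++c->cur_step) {
	case STEP_DECRYPT:
		if (c->enabled_steps & (1u << STEP_DECRYPT)) {
			c->decrypted = 1;        /* kernel enqueues decrypt_work here */
			post_read_processing(c); /* the work item re-enters the pipeline */
			return;
		}
		c->cur_step++;
		/* fall through */
	default:
		c->completed = 1; /* __read_end_io(): unlock pages, free ctx */
	}
}
```

An encrypted bio passes through the decrypt step before completing; a plain bio falls straight through to completion, which is exactly why unencrypted reads pay no mempool or workqueue cost in the patch.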
@@ -404,13 +462,12 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
 	return 0;
 }
 
-int f2fs_submit_page_write(struct f2fs_io_info *fio)
+void f2fs_submit_page_write(struct f2fs_io_info *fio)
 {
 	struct f2fs_sb_info *sbi = fio->sbi;
 	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
 	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
 	struct page *bio_page;
-	int err = 0;
 
 	f2fs_bug_on(sbi, is_read_io(fio->op));
 
@@ -420,7 +477,7 @@ next:
 		spin_lock(&io->io_lock);
 		if (list_empty(&io->io_list)) {
 			spin_unlock(&io->io_lock);
-			goto out_fail;
+			goto out;
 		}
 		fio = list_first_entry(&io->io_list,
 						struct f2fs_io_info, list);
 
@@ -428,7 +485,7 @@ next:
 		spin_unlock(&io->io_lock);
 	}
 
-	if (fio->old_blkaddr != NEW_ADDR)
+	if (is_valid_blkaddr(fio->old_blkaddr))
 		verify_block_addr(fio, fio->old_blkaddr);
 	verify_block_addr(fio, fio->new_blkaddr);
 
@@ -447,9 +504,9 @@ alloc_new:
 	if (io->bio == NULL) {
 		if ((fio->type == DATA || fio->type == NODE) &&
 				fio->new_blkaddr & F2FS_IO_SIZE_MASK(sbi)) {
-			err = -EAGAIN;
 			dec_page_count(sbi, WB_DATA_TYPE(bio_page));
-			goto out_fail;
+			fio->retry = true;
+			goto skip;
 		}
 		io->bio = __bio_alloc(sbi, fio->new_blkaddr, fio->io_wbc,
 						BIO_MAX_PAGES, false,
 
@@ -469,41 +526,44 @@ alloc_new:
 	f2fs_trace_ios(fio, 0);
 
 	trace_f2fs_submit_page_write(fio->page, fio);
-
+skip:
 	if (fio->in_list)
 		goto next;
-out_fail:
+out:
 	up_write(&io->io_rwsem);
-	return err;
 }
 
 static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 							unsigned nr_pages)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-	struct fscrypt_ctx *ctx = NULL;
 	struct bio *bio;
+	struct bio_post_read_ctx *ctx;
+	unsigned int post_read_steps = 0;
 
-	if (f2fs_encrypted_file(inode)) {
-		ctx = fscrypt_get_ctx(inode, GFP_NOFS);
-		if (IS_ERR(ctx))
-			return ERR_CAST(ctx);
+	bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
+	if (!bio)
+		return ERR_PTR(-ENOMEM);
+	f2fs_target_device(sbi, blkaddr, bio);
+	bio->bi_end_io = f2fs_read_end_io;
+	bio_set_op_attrs(bio, REQ_OP_READ, 0);
+
+	if (f2fs_encrypted_file(inode))
+		post_read_steps |= 1 << STEP_DECRYPT;
+	if (post_read_steps) {
+		ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);
+		if (!ctx) {
+			bio_put(bio);
+			return ERR_PTR(-ENOMEM);
+		}
+		ctx->bio = bio;
+		ctx->enabled_steps = post_read_steps;
+		bio->bi_private = ctx;
 
 		/* wait the page to be moved by cleaning */
 		f2fs_wait_on_block_writeback(sbi, blkaddr);
 	}
 
-	bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
-	if (!bio) {
-		if (ctx)
-			fscrypt_release_ctx(ctx);
-		return ERR_PTR(-ENOMEM);
-	}
-	f2fs_target_device(sbi, blkaddr, bio);
-	bio->bi_end_io = f2fs_read_end_io;
-	bio->bi_private = ctx;
-	bio_set_op_attrs(bio, REQ_OP_READ, 0);
-
 	return bio;
 }
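Several hunks in this series replace open-coded comparisons such as `fio->old_blkaddr != NEW_ADDR` or `blkaddr == NEW_ADDR || blkaddr == NULL_ADDR` with a single `is_valid_blkaddr()` helper, so every caller applies the same notion of "really allocated on disk". A hedged sketch of the idea; the sentinel values here are stand-ins, the real constants live in the f2fs headers:

```c
#include <assert.h>

typedef unsigned int block_t;

/* stand-in sentinels; f2fs defines these in its own headers */
#define NULL_ADDR	((block_t)0)	/* hole: no block allocated */
#define NEW_ADDR	((block_t)-1)	/* allocated, but not yet on disk */

/* one helper instead of scattered != NEW_ADDR / == NULL_ADDR checks */
static int is_valid_blkaddr(block_t blkaddr)
{
	return blkaddr != NEW_ADDR && blkaddr != NULL_ADDR;
}
```

Centralizing the test is what lets later patches add stricter sanity checks on block addresses in one place instead of auditing every call site.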
@@ -544,7 +604,7 @@ static void __set_data_blkaddr(struct dnode_of_data *dn)
  *  ->node_page
  *    update block addresses in the node page
  */
-void set_data_blkaddr(struct dnode_of_data *dn)
+void f2fs_set_data_blkaddr(struct dnode_of_data *dn)
 {
 	f2fs_wait_on_page_writeback(dn->node_page, NODE, true);
 	__set_data_blkaddr(dn);
 
@@ -555,12 +615,12 @@ void set_data_blkaddr(struct dnode_of_data *dn)
 void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr)
 {
 	dn->data_blkaddr = blkaddr;
-	set_data_blkaddr(dn);
+	f2fs_set_data_blkaddr(dn);
 	f2fs_update_extent_cache(dn);
 }
 
 /* dn->ofs_in_node will be returned with up-to-date last block pointer */
-int reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
+int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	int err;
 
@@ -594,12 +654,12 @@ int reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
 }
 
 /* Should keep dn->ofs_in_node unchanged */
-int reserve_new_block(struct dnode_of_data *dn)
+int f2fs_reserve_new_block(struct dnode_of_data *dn)
 {
 	unsigned int ofs_in_node = dn->ofs_in_node;
 	int ret;
 
-	ret = reserve_new_blocks(dn, 1);
+	ret = f2fs_reserve_new_blocks(dn, 1);
 	dn->ofs_in_node = ofs_in_node;
 	return ret;
 }
 
@@ -609,12 +669,12 @@ int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index)
 	bool need_put = dn->inode_page ? false : true;
 	int err;
 
-	err = get_dnode_of_data(dn, index, ALLOC_NODE);
+	err = f2fs_get_dnode_of_data(dn, index, ALLOC_NODE);
 	if (err)
 		return err;
 
 	if (dn->data_blkaddr == NULL_ADDR)
-		err = reserve_new_block(dn);
+		err = f2fs_reserve_new_block(dn);
 	if (err || need_put)
 		f2fs_put_dnode(dn);
 	return err;
 
@@ -633,7 +693,7 @@ int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index)
 	return f2fs_reserve_block(dn, index);
 }
 
-struct page *get_read_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
 						int op_flags, bool for_write)
 {
 	struct address_space *mapping = inode->i_mapping;
 
@@ -652,7 +712,7 @@ struct page *get_read_data_page(struct inode *inode, pgoff_t index,
 	}
 
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE);
 	if (err)
 		goto put_err;
 	f2fs_put_dnode(&dn);
 
@@ -671,7 +731,8 @@ got_it:
 	 * A new dentry page is allocated but not able to be written, since its
 	 * new inode page couldn't be allocated due to -ENOSPC.
 	 * In such the case, its blkaddr can be remained as NEW_ADDR.
-	 * see, f2fs_add_link -> get_new_data_page -> init_inode_metadata.
+	 * see, f2fs_add_link -> f2fs_get_new_data_page ->
+	 * f2fs_init_inode_metadata.
 	 */
 	if (dn.data_blkaddr == NEW_ADDR) {
 		zero_user_segment(page, 0, PAGE_SIZE);
 
@@ -691,7 +752,7 @@ put_err:
 	return ERR_PTR(err);
 }
 
-struct page *find_data_page(struct inode *inode, pgoff_t index)
+struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct page *page;
 
@@ -701,7 +762,7 @@ struct page *find_data_page(struct inode *inode, pgoff_t index)
 		return page;
 	f2fs_put_page(page, 0);
 
-	page = get_read_data_page(inode, index, 0, false);
+	page = f2fs_get_read_data_page(inode, index, 0, false);
 	if (IS_ERR(page))
 		return page;
 
@@ -721,13 +782,13 @@ struct page *find_data_page(struct inode *inode, pgoff_t index)
  * Because, the callers, functions in dir.c and GC, should be able to know
  * whether this page exists or not.
  */
-struct page *get_lock_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_get_lock_data_page(struct inode *inode, pgoff_t index,
 							bool for_write)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct page *page;
 repeat:
-	page = get_read_data_page(inode, index, 0, for_write);
+	page = f2fs_get_read_data_page(inode, index, 0, for_write);
 	if (IS_ERR(page))
 		return page;
 
@@ -753,7 +814,7 @@ repeat:
  * Note that, ipage is set only by make_empty_dir, and if any error occur,
  * ipage should be released by this function.
  */
-struct page *get_new_data_page(struct inode *inode,
+struct page *f2fs_get_new_data_page(struct inode *inode,
 		struct page *ipage, pgoff_t index, bool new_i_size)
 {
 	struct address_space *mapping = inode->i_mapping;
 
@@ -792,7 +853,7 @@ struct page *get_new_data_page(struct inode *inode,
 
 		/* if ipage exists, blkaddr should be NEW_ADDR */
 		f2fs_bug_on(F2FS_I_SB(inode), ipage);
-		page = get_lock_data_page(inode, index, true);
+		page = f2fs_get_lock_data_page(inode, index, true);
 		if (IS_ERR(page))
 			return page;
 	}
 
@@ -824,15 +885,15 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
 		return err;
 
 alloc:
-	get_node_info(sbi, dn->nid, &ni);
+	f2fs_get_node_info(sbi, dn->nid, &ni);
 	set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);
 
-	allocate_data_block(sbi, NULL, dn->data_blkaddr, &dn->data_blkaddr,
+	f2fs_allocate_data_block(sbi, NULL, dn->data_blkaddr, &dn->data_blkaddr,
 					&sum, seg_type, NULL, false);
-	set_data_blkaddr(dn);
+	f2fs_set_data_blkaddr(dn);
 
 	/* update i_size */
-	fofs = start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
+	fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
 							dn->ofs_in_node;
 	if (i_size_read(dn->inode) < ((loff_t)(fofs + 1) << PAGE_SHIFT))
 		f2fs_i_size_write(dn->inode,
 
@@ -870,7 +931,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
 	map.m_seg_type = NO_CHECK_TYPE;
 
 	if (direct_io) {
-		map.m_seg_type = rw_hint_to_seg_type(iocb->ki_hint);
+		map.m_seg_type = f2fs_rw_hint_to_seg_type(iocb->ki_hint);
 		flag = f2fs_force_buffered_io(inode, WRITE) ?
 					F2FS_GET_BLOCK_PRE_AIO :
 					F2FS_GET_BLOCK_PRE_DIO;
 
@@ -960,7 +1021,7 @@ next_dnode:
 
 	/* When reading holes, we need its node page */
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, pgofs, mode);
+	err = f2fs_get_dnode_of_data(&dn, pgofs, mode);
 	if (err) {
 		if (flag == F2FS_GET_BLOCK_BMAP)
 			map->m_pblk = 0;
 
@@ -968,10 +1029,10 @@ next_dnode:
 			err = 0;
 			if (map->m_next_pgofs)
 				*map->m_next_pgofs =
-					get_next_page_offset(&dn, pgofs);
+					f2fs_get_next_page_offset(&dn, pgofs);
 			if (map->m_next_extent)
 				*map->m_next_extent =
-					get_next_page_offset(&dn, pgofs);
+					f2fs_get_next_page_offset(&dn, pgofs);
 		}
 		goto unlock_out;
 	}
 
@@ -984,7 +1045,7 @@ next_dnode:
 next_block:
 	blkaddr = datablock_addr(dn.inode, dn.node_page, dn.ofs_in_node);
 
-	if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR) {
+	if (!is_valid_blkaddr(blkaddr)) {
 		if (create) {
 			if (unlikely(f2fs_cp_error(sbi))) {
 				err = -EIO;
 
@@ -1057,7 +1118,7 @@ skip:
 			(pgofs == end || dn.ofs_in_node == end_offset)) {
 
 		dn.ofs_in_node = ofs_in_node;
-		err = reserve_new_blocks(&dn, prealloc);
+		err = f2fs_reserve_new_blocks(&dn, prealloc);
 		if (err)
 			goto sync_out;
 
@@ -1176,7 +1237,7 @@ static int get_data_block_dio(struct inode *inode, sector_t iblock,
 {
 	return __get_data_block(inode, iblock, bh_result, create,
 						F2FS_GET_BLOCK_DEFAULT, NULL,
-						rw_hint_to_seg_type(
+						f2fs_rw_hint_to_seg_type(
 							inode->i_write_hint));
 }
@@ -1221,7 +1282,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
 		if (!page)
 			return -ENOMEM;
 
-		get_node_info(sbi, inode->i_ino, &ni);
+		f2fs_get_node_info(sbi, inode->i_ino, &ni);
 
 		phys = (__u64)blk_to_logical(inode, ni.blk_addr);
 		offset = offsetof(struct f2fs_inode, i_addr) +
 
@@ -1248,7 +1309,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
 		if (!page)
 			return -ENOMEM;
 
-		get_node_info(sbi, xnid, &ni);
+		f2fs_get_node_info(sbi, xnid, &ni);
 
 		phys = (__u64)blk_to_logical(inode, ni.blk_addr);
 		len = inode->i_sb->s_blocksize;
@@ -1525,7 +1586,7 @@ static int encrypt_one_page(struct f2fs_io_info *fio)
 	if (!f2fs_encrypted_file(inode))
 		return 0;
 
-	/* wait for GCed encrypted page writeback */
+	/* wait for GCed page writeback via META_MAPPING */
 	f2fs_wait_on_block_writeback(fio->sbi, fio->old_blkaddr);
 
 retry_encrypt:
 
@@ -1552,12 +1613,12 @@ static inline bool check_inplace_update_policy(struct inode *inode,
 
 	if (policy & (0x1 << F2FS_IPU_FORCE))
 		return true;
-	if (policy & (0x1 << F2FS_IPU_SSR) && need_SSR(sbi))
+	if (policy & (0x1 << F2FS_IPU_SSR) && f2fs_need_SSR(sbi))
 		return true;
 	if (policy & (0x1 << F2FS_IPU_UTIL) &&
 			utilization(sbi) > SM_I(sbi)->min_ipu_util)
 		return true;
-	if (policy & (0x1 << F2FS_IPU_SSR_UTIL) && need_SSR(sbi) &&
+	if (policy & (0x1 << F2FS_IPU_SSR_UTIL) && f2fs_need_SSR(sbi) &&
 			utilization(sbi) > SM_I(sbi)->min_ipu_util)
 		return true;
 
@@ -1578,7 +1639,7 @@ static inline bool check_inplace_update_policy(struct inode *inode,
 	return false;
 }
 
-bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio)
+bool f2fs_should_update_inplace(struct inode *inode, struct f2fs_io_info *fio)
 {
 	if (f2fs_is_pinned_file(inode))
 		return true;
 
@@ -1590,7 +1651,7 @@ bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio)
 	return check_inplace_update_policy(inode, fio);
 }
 
-bool should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
+bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 
@@ -1613,22 +1674,13 @@ static inline bool need_inplace_update(struct f2fs_io_info *fio)
 {
 	struct inode *inode = fio->page->mapping->host;
 
-	if (should_update_outplace(inode, fio))
+	if (f2fs_should_update_outplace(inode, fio))
 		return false;
 
-	return should_update_inplace(inode, fio);
+	return f2fs_should_update_inplace(inode, fio);
 }
 
-static inline bool valid_ipu_blkaddr(struct f2fs_io_info *fio)
-{
-	if (fio->old_blkaddr == NEW_ADDR)
-		return false;
-	if (fio->old_blkaddr == NULL_ADDR)
-		return false;
-	return true;
-}
-
-int do_write_data_page(struct f2fs_io_info *fio)
+int f2fs_do_write_data_page(struct f2fs_io_info *fio)
 {
 	struct page *page = fio->page;
 	struct inode *inode = page->mapping->host;
 
@@ -1642,7 +1694,7 @@ int do_write_data_page(struct f2fs_io_info *fio)
 			f2fs_lookup_extent_cache(inode, page->index, &ei)) {
 		fio->old_blkaddr = ei.blk + page->index - ei.fofs;
 
-		if (valid_ipu_blkaddr(fio)) {
+		if (is_valid_blkaddr(fio->old_blkaddr)) {
 			ipu_force = true;
 			fio->need_lock = LOCK_DONE;
 			goto got_it;
 
@@ -1653,7 +1705,7 @@ int do_write_data_page(struct f2fs_io_info *fio)
 	if (fio->need_lock == LOCK_REQ && !f2fs_trylock_op(fio->sbi))
 		return -EAGAIN;
 
-	err = get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
 	if (err)
 		goto out;
 
@@ -1669,16 +1721,18 @@ got_it:
 	 * If current allocation needs SSR,
 	 * it had better in-place writes for updated data.
 	 */
-	if (ipu_force || (valid_ipu_blkaddr(fio) && need_inplace_update(fio))) {
+	if (ipu_force || (is_valid_blkaddr(fio->old_blkaddr) &&
+					need_inplace_update(fio))) {
 		err = encrypt_one_page(fio);
 		if (err)
 			goto out_writepage;
 
 		set_page_writeback(page);
+		ClearPageError(page);
 		f2fs_put_dnode(&dn);
 		if (fio->need_lock == LOCK_REQ)
 			f2fs_unlock_op(fio->sbi);
-		err = rewrite_data_page(fio);
+		err = f2fs_inplace_write_data(fio);
 		trace_f2fs_do_write_data_page(fio->page, IPU);
 		set_inode_flag(inode, FI_UPDATE_WRITE);
 		return err;
 
@@ -1697,9 +1751,10 @@ got_it:
 		goto out_writepage;
 
 	set_page_writeback(page);
+	ClearPageError(page);
 
 	/* LFS mode write path */
-	write_data_page(&dn, fio);
+	f2fs_outplace_write_data(&dn, fio);
 	trace_f2fs_do_write_data_page(page, OPU);
 	set_inode_flag(inode, FI_APPEND_WRITE);
 	if (page->index == 0)
@@ -1745,6 +1800,12 @@ static int __write_data_page(struct page *page, bool *submitted,
 	/* we should bypass data pages to proceed the kworker jobs */
 	if (unlikely(f2fs_cp_error(sbi))) {
 		mapping_set_error(page->mapping, -EIO);
+		/*
+		 * don't drop any dirty dentry pages for keeping latest
+		 * directory structure.
+		 */
+		if (S_ISDIR(inode->i_mode))
+			goto redirty_out;
 		goto out;
 	}
 
@@ -1769,13 +1830,13 @@ write:
 	/* we should not write 0'th page having journal header */
 	if (f2fs_is_volatile_file(inode) && (!page->index ||
 			(!wbc->for_reclaim &&
-			available_free_memory(sbi, BASE_CHECK))))
+			f2fs_available_free_memory(sbi, BASE_CHECK))))
 		goto redirty_out;
 
 	/* Dentry blocks are controlled by checkpoint */
 	if (S_ISDIR(inode->i_mode)) {
 		fio.need_lock = LOCK_DONE;
-		err = do_write_data_page(&fio);
+		err = f2fs_do_write_data_page(&fio);
 		goto done;
 	}
 
@@ -1794,10 +1855,10 @@ write:
 	}
 
 	if (err == -EAGAIN) {
-		err = do_write_data_page(&fio);
+		err = f2fs_do_write_data_page(&fio);
 		if (err == -EAGAIN) {
 			fio.need_lock = LOCK_REQ;
-			err = do_write_data_page(&fio);
+			err = f2fs_do_write_data_page(&fio);
 		}
 	}
 
@@ -1822,7 +1883,7 @@ out:
 	if (wbc->for_reclaim) {
 		f2fs_submit_merged_write_cond(sbi, inode, 0, page->index, DATA);
 		clear_inode_flag(inode, FI_HOT_DATA);
-		remove_dirty_inode(inode);
+		f2fs_remove_dirty_inode(inode);
 		submitted = NULL;
 	}
 
@@ -1842,7 +1903,13 @@ out:
 
 redirty_out:
 	redirty_page_for_writepage(wbc, page);
-	if (!err)
+	/*
+	 * pageout() in MM translates EAGAIN, so calls handle_write_error()
+	 * -> mapping_set_error() -> set_bit(AS_EIO, ...).
+	 * file_write_and_wait_range() will see EIO error, which is critical
+	 * to return value of fsync() followed by atomic_write failure to user.
+	 */
+	if (!err || wbc->for_reclaim)
		return AOP_WRITEPAGE_ACTIVATE;
 	unlock_page(page);
 	return err;
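The comment in the hunk above explains the fsync bug mentioned in the pull summary: if `->writepage` returns a negative error such as -EAGAIN, the MM's pageout path marks the whole mapping with AS_EIO, so a later fsync(2) reports EIO even though the page was merely redirtied. Returning AOP_WRITEPAGE_ACTIVATE instead keeps the page dirty without poisoning the mapping. A toy single-threaded model of that propagation (the struct and error constants are stand-ins for the kernel's mapping flags):

```c
#include <assert.h>

#define AOP_WRITEPAGE_ACTIVATE	0x80000	/* value used by the kernel's aops */

struct mapping { int as_eio; };	/* models AS_EIO in mapping->flags */

/* models pageout(): any negative writepage result poisons the mapping */
static void pageout(struct mapping *m, int writepage_ret)
{
	if (writepage_ret < 0)
		m->as_eio = 1;	/* handle_write_error() -> mapping_set_error() */
	/* AOP_WRITEPAGE_ACTIVATE just keeps the page dirty, no error bit */
}

/* models file_write_and_wait_range() checking the mapping at fsync time */
static int fsync_result(struct mapping *m)
{
	return m->as_eio ? -5 /* -EIO */ : 0;
}
```

With the old behavior (returning -EAGAIN) the next fsync fails; with the fixed behavior it succeeds, which is exactly the symptom the patch removes.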
@@ -1866,6 +1933,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	int ret = 0;
 	int done = 0;
 	struct pagevec pvec;
+	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
 	int nr_pages;
 	pgoff_t uninitialized_var(writeback_index);
 	pgoff_t index;
 
@@ -1919,6 +1987,13 @@ retry:
 			struct page *page = pvec.pages[i];
 			bool submitted = false;
 
+			/* give a priority to WB_SYNC threads */
+			if (atomic_read(&sbi->wb_sync_req[DATA]) &&
+					wbc->sync_mode == WB_SYNC_NONE) {
+				done = 1;
+				break;
+			}
+
 			done_index = page->index;
 retry_write:
 			lock_page(page);
 
@@ -1973,9 +2048,7 @@ continue_unlock:
 				last_idx = page->index;
 			}
 
-			/* give a priority to WB_SYNC threads */
-			if ((atomic_read(&F2FS_M_SB(mapping)->wb_sync_req) ||
-					--wbc->nr_to_write <= 0) &&
+			if (--wbc->nr_to_write <= 0 &&
 					wbc->sync_mode == WB_SYNC_NONE) {
 				done = 1;
 				break;
 
@@ -2001,7 +2074,7 @@ continue_unlock:
 	return ret;
 }
 
-int __f2fs_write_data_pages(struct address_space *mapping,
+static int __f2fs_write_data_pages(struct address_space *mapping,
 				struct writeback_control *wbc,
 				enum iostat_type io_type)
 {
 
@@ -2024,7 +2097,7 @@ int __f2fs_write_data_pages(struct address_space *mapping,
 
 	if (S_ISDIR(inode->i_mode) && wbc->sync_mode == WB_SYNC_NONE &&
 			get_dirty_pages(inode) < nr_pages_to_skip(sbi, DATA) &&
-			available_free_memory(sbi, DIRTY_DENTS))
+			f2fs_available_free_memory(sbi, DIRTY_DENTS))
 		goto skip_write;
 
 	/* skip writing during file defragment */
 
@@ -2035,8 +2108,8 @@ int __f2fs_write_data_pages(struct address_space *mapping,
 
 	/* to avoid splitting IOs due to mixed WB_SYNC_ALL and WB_SYNC_NONE */
 	if (wbc->sync_mode == WB_SYNC_ALL)
-		atomic_inc(&sbi->wb_sync_req);
-	else if (atomic_read(&sbi->wb_sync_req))
+		atomic_inc(&sbi->wb_sync_req[DATA]);
+	else if (atomic_read(&sbi->wb_sync_req[DATA]))
 		goto skip_write;
 
 	blk_start_plug(&plug);
 
@@ -2044,13 +2117,13 @@ int __f2fs_write_data_pages(struct address_space *mapping,
 	blk_finish_plug(&plug);
 
 	if (wbc->sync_mode == WB_SYNC_ALL)
-		atomic_dec(&sbi->wb_sync_req);
+		atomic_dec(&sbi->wb_sync_req[DATA]);
 	/*
 	 * if some pages were truncated, we cannot guarantee its mapping->host
 	 * to detect pending bios.
 	 */
 
-	remove_dirty_inode(inode);
+	f2fs_remove_dirty_inode(inode);
 	return ret;
 
 skip_write:
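The writeback hunks above make WB_SYNC_ALL writers take priority per mapping type: a sync writer bumps `sbi->wb_sync_req[DATA]` around its pass, while WB_SYNC_NONE writers bail out on entry and also per page while that counter is nonzero. A small single-threaded model of that gating; the atomics, pagevec walk, and flush logic are omitted, and the names are stand-ins:

```c
#include <assert.h>

struct sb { int wb_sync_req; };	/* models sbi->wb_sync_req[DATA] */

/* returns the number of pages "written"; sync writers suppress async ones */
static int write_pages(struct sb *sbi, int sync, int nr_pages)
{
	int written = 0;

	if (!sync && sbi->wb_sync_req)
		return 0;		/* skip_write: a sync pass is in flight */
	if (sync)
		sbi->wb_sync_req++;	/* announce the WB_SYNC_ALL pass */
	for (int i = 0; i < nr_pages; i++) {
		if (!sync && sbi->wb_sync_req)
			break;		/* async pass yields mid-stream too */
		written++;
	}
	if (sync)
		sbi->wb_sync_req--;
	return written;
}
```

The point of the patch is to move the yield check to the top of the per-page loop, so an fsync-driven pass is not forced to wait behind a long background pass that mixes the two modes and splits its IOs.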
@@ -2077,7 +2150,7 @@ static void f2fs_write_failed(struct address_space *mapping, loff_t to)
 	if (to > i_size) {
 		down_write(&F2FS_I(inode)->i_mmap_sem);
 		truncate_pagecache(inode, i_size);
-		truncate_blocks(inode, i_size, true);
+		f2fs_truncate_blocks(inode, i_size, true);
 		up_write(&F2FS_I(inode)->i_mmap_sem);
 	}
 }
 
@@ -2109,7 +2182,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
 	}
 restart:
 	/* check inline_data */
-	ipage = get_node_page(sbi, inode->i_ino);
+	ipage = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(ipage)) {
 		err = PTR_ERR(ipage);
 		goto unlock_out;
 
@@ -2119,7 +2192,7 @@ restart:
 
 	if (f2fs_has_inline_data(inode)) {
 		if (pos + len <= MAX_INLINE_DATA(inode)) {
-			read_inline_data(page, ipage);
+			f2fs_do_read_inline_data(page, ipage);
 			set_inode_flag(inode, FI_DATA_EXIST);
 			if (inode->i_nlink)
 				set_inline_node(ipage);
 
@@ -2137,7 +2210,7 @@ restart:
 			dn.data_blkaddr = ei.blk + index - ei.fofs;
 		} else {
 			/* hole case */
-			err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
+			err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE);
 			if (err || dn.data_blkaddr == NULL_ADDR) {
 				f2fs_put_dnode(&dn);
 				__do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO,
 
@@ -2174,7 +2247,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
 	trace_f2fs_write_begin(inode, pos, len, flags);
 
 	if (f2fs_is_atomic_file(inode) &&
-			!available_free_memory(sbi, INMEM_PAGES)) {
+			!f2fs_available_free_memory(sbi, INMEM_PAGES)) {
 		err = -ENOMEM;
 		drop_atomic = true;
 		goto fail;
 
@@ -2222,8 +2295,8 @@ repeat:
 
 	f2fs_wait_on_page_writeback(page, DATA, false);
 
-	/* wait for GCed encrypted page writeback */
-	if (f2fs_encrypted_file(inode))
+	/* wait for GCed page writeback via META_MAPPING */
+	if (f2fs_post_read_required(inode))
 		f2fs_wait_on_block_writeback(sbi, blkaddr);
 
 	if (len == PAGE_SIZE || PageUptodate(page))
 
@@ -2258,7 +2331,7 @@ fail:
 	f2fs_put_page(page, 1);
 	f2fs_write_failed(mapping, pos + len);
 	if (drop_atomic)
-		drop_inmem_pages_all(sbi);
+		f2fs_drop_inmem_pages_all(sbi, false);
 	return err;
 }
@@ -2333,17 +2406,17 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 	if (rw == WRITE && whint_mode == WHINT_MODE_OFF)
 		iocb->ki_hint = WRITE_LIFE_NOT_SET;
 
-	if (!down_read_trylock(&F2FS_I(inode)->dio_rwsem[rw])) {
+	if (!down_read_trylock(&F2FS_I(inode)->i_gc_rwsem[rw])) {
 		if (iocb->ki_flags & IOCB_NOWAIT) {
 			iocb->ki_hint = hint;
 			err = -EAGAIN;
 			goto out;
 		}
-		down_read(&F2FS_I(inode)->dio_rwsem[rw]);
+		down_read(&F2FS_I(inode)->i_gc_rwsem[rw]);
 	}
 
 	err = blockdev_direct_IO(iocb, inode, iter, get_data_block_dio);
-	up_read(&F2FS_I(inode)->dio_rwsem[rw]);
+	up_read(&F2FS_I(inode)->i_gc_rwsem[rw]);
 
 	if (rw == WRITE) {
 		if (whint_mode == WHINT_MODE_OFF)
 
@@ -2380,13 +2453,13 @@ void f2fs_invalidate_page(struct page *page, unsigned int offset,
 			dec_page_count(sbi, F2FS_DIRTY_NODES);
 		} else {
 			inode_dec_dirty_pages(inode);
-			remove_dirty_inode(inode);
+			f2fs_remove_dirty_inode(inode);
 		}
 	}
 
 	/* This is atomic written page, keep Private */
 	if (IS_ATOMIC_WRITTEN_PAGE(page))
-		return drop_inmem_page(inode, page);
+		return f2fs_drop_inmem_page(inode, page);
 
 	set_page_private(page, 0);
 	ClearPagePrivate(page);
@@ -2407,35 +2480,6 @@ int f2fs_release_page(struct page *page, gfp_t wait)
 	return 1;
 }
 
-/*
- * This was copied from __set_page_dirty_buffers which gives higher performance
- * in very high speed storages. (e.g., pmem)
- */
-void f2fs_set_page_dirty_nobuffers(struct page *page)
-{
-	struct address_space *mapping = page->mapping;
-	unsigned long flags;
-
-	if (unlikely(!mapping))
-		return;
-
-	spin_lock(&mapping->private_lock);
-	lock_page_memcg(page);
-	SetPageDirty(page);
-	spin_unlock(&mapping->private_lock);
-
-	xa_lock_irqsave(&mapping->i_pages, flags);
-	WARN_ON_ONCE(!PageUptodate(page));
-	account_page_dirtied(page, mapping);
-	radix_tree_tag_set(&mapping->i_pages,
-			page_index(page), PAGECACHE_TAG_DIRTY);
-	xa_unlock_irqrestore(&mapping->i_pages, flags);
-	unlock_page_memcg(page);
-
-	__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
-	return;
-}
-
 static int f2fs_set_data_page_dirty(struct page *page)
 {
 	struct address_space *mapping = page->mapping;
 
@@ -2448,7 +2492,7 @@ static int f2fs_set_data_page_dirty(struct page *page)
 
 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
-			register_inmem_page(inode, page);
+			f2fs_register_inmem_page(inode, page);
 			return 1;
 		}
 		/*
 
@@ -2459,8 +2503,8 @@ static int f2fs_set_data_page_dirty(struct page *page)
 	}
 
 	if (!PageDirty(page)) {
-		f2fs_set_page_dirty_nobuffers(page);
-		update_dirty_page(inode, page);
+		__set_page_dirty_nobuffers(page);
+		f2fs_update_dirty_page(inode, page);
 		return 1;
 	}
 	return 0;
@ -2555,3 +2599,38 @@ const struct address_space_operations f2fs_dblock_aops = {
|
|||
.migratepage = f2fs_migrate_page,
|
||||
#endif
|
||||
};
|
||||
|
||||
void f2fs_clear_radix_tree_dirty_tag(struct page *page)
|
||||
{
|
||||
struct address_space *mapping = page_mapping(page);
|
||||
unsigned long flags;
|
||||
|
||||
xa_lock_irqsave(&mapping->i_pages, flags);
|
||||
radix_tree_tag_clear(&mapping->i_pages, page_index(page),
|
||||
PAGECACHE_TAG_DIRTY);
|
||||
xa_unlock_irqrestore(&mapping->i_pages, flags);
|
||||
}
|
||||
|
||||
int __init f2fs_init_post_read_processing(void)
|
||||
{
|
||||
bio_post_read_ctx_cache = KMEM_CACHE(bio_post_read_ctx, 0);
|
||||
if (!bio_post_read_ctx_cache)
|
||||
goto fail;
|
||||
bio_post_read_ctx_pool =
|
||||
mempool_create_slab_pool(NUM_PREALLOC_POST_READ_CTXS,
|
||||
bio_post_read_ctx_cache);
|
||||
if (!bio_post_read_ctx_pool)
|
||||
goto fail_free_cache;
|
||||
return 0;
|
||||
|
||||
fail_free_cache:
|
||||
kmem_cache_destroy(bio_post_read_ctx_cache);
|
||||
fail:
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
void __exit f2fs_destroy_post_read_processing(void)
|
||||
{
|
||||
mempool_destroy(bio_post_read_ctx_pool);
|
||||
kmem_cache_destroy(bio_post_read_ctx_cache);
|
||||
}
|
||||
|
|
|
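The error handling in f2fs_init_post_read_processing() above is the kernel's standard goto-unwind ladder: each allocation gets its own failure label, and a later failure frees only what earlier steps already set up. A minimal userspace sketch of the same pattern (plain malloc() standing in for KMEM_CACHE/mempool_create_slab_pool; all names here are illustrative, not f2fs API):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for the two pools built in f2fs_init_post_read_processing();
 * plain heap allocations here, kmem_cache + mempool in the kernel. */
static void *ctx_cache;
static void *ctx_pool;

/* Each allocation gets its own unwind label: jumping to a label frees
 * exactly the resources acquired before the failing step. */
static int init_post_read(void)
{
	ctx_cache = malloc(64);
	if (!ctx_cache)
		goto fail;
	ctx_pool = malloc(256);
	if (!ctx_pool)
		goto fail_free_cache;
	return 0;

fail_free_cache:
	free(ctx_cache);
	ctx_cache = NULL;
fail:
	return -1;	/* the kernel code returns -ENOMEM */
}

static void destroy_post_read(void)
{
	/* teardown releases in reverse order of construction */
	free(ctx_pool);
	free(ctx_cache);
	ctx_pool = NULL;
	ctx_cache = NULL;
}
```

The same shape scales to any number of steps: each new resource adds one allocation, one `goto`, and one label, keeping a single exit path per outcome.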
@@ -104,6 +104,8 @@ static void update_general_status(struct f2fs_sb_info *sbi)
 	si->avail_nids = NM_I(sbi)->available_nids;
 	si->alloc_nids = NM_I(sbi)->nid_cnt[PREALLOC_NID];
 	si->bg_gc = sbi->bg_gc;
+	si->skipped_atomic_files[BG_GC] = sbi->skipped_atomic_files[BG_GC];
+	si->skipped_atomic_files[FG_GC] = sbi->skipped_atomic_files[FG_GC];
 	si->util_free = (int)(free_user_blocks(sbi) >> sbi->log_blocks_per_seg)
 		* 100 / (int)(sbi->user_block_count >> sbi->log_blocks_per_seg)
 		/ 2;
@@ -342,6 +344,10 @@ static int stat_show(struct seq_file *s, void *v)
 				si->bg_data_blks);
 		seq_printf(s, "  - node blocks : %d (%d)\n", si->node_blks,
 				si->bg_node_blks);
+		seq_printf(s, "Skipped : atomic write %llu (%llu)\n",
+				si->skipped_atomic_files[BG_GC] +
+				si->skipped_atomic_files[FG_GC],
+				si->skipped_atomic_files[BG_GC]);
 		seq_puts(s, "\nExtent Cache:\n");
 		seq_printf(s, "  - Hit Count: L1-1:%llu L1-2:%llu L2:%llu\n",
 				si->hit_largest, si->hit_cached,
@@ -60,12 +60,12 @@ static unsigned char f2fs_type_by_mode[S_IFMT >> S_SHIFT] = {
 	[S_IFLNK >> S_SHIFT]	= F2FS_FT_SYMLINK,
 };
 
-void set_de_type(struct f2fs_dir_entry *de, umode_t mode)
+static void set_de_type(struct f2fs_dir_entry *de, umode_t mode)
 {
 	de->file_type = f2fs_type_by_mode[(mode & S_IFMT) >> S_SHIFT];
 }
 
-unsigned char get_de_type(struct f2fs_dir_entry *de)
+unsigned char f2fs_get_de_type(struct f2fs_dir_entry *de)
 {
 	if (de->file_type < F2FS_FT_MAX)
 		return f2fs_filetype_table[de->file_type];
@@ -97,14 +97,14 @@ static struct f2fs_dir_entry *find_in_block(struct page *dentry_page,
 	dentry_blk = (struct f2fs_dentry_block *)page_address(dentry_page);
 
 	make_dentry_ptr_block(NULL, &d, dentry_blk);
-	de = find_target_dentry(fname, namehash, max_slots, &d);
+	de = f2fs_find_target_dentry(fname, namehash, max_slots, &d);
 	if (de)
 		*res_page = dentry_page;
 
 	return de;
 }
 
-struct f2fs_dir_entry *find_target_dentry(struct fscrypt_name *fname,
+struct f2fs_dir_entry *f2fs_find_target_dentry(struct fscrypt_name *fname,
 			f2fs_hash_t namehash, int *max_slots,
 			struct f2fs_dentry_ptr *d)
 {
@@ -171,7 +171,7 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
 
 	for (; bidx < end_block; bidx++) {
 		/* no need to allocate new dentry pages to all the indices */
-		dentry_page = find_data_page(dir, bidx);
+		dentry_page = f2fs_find_data_page(dir, bidx);
 		if (IS_ERR(dentry_page)) {
 			if (PTR_ERR(dentry_page) == -ENOENT) {
 				room = true;
@@ -210,7 +210,7 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
 
 	if (f2fs_has_inline_dentry(dir)) {
 		*res_page = NULL;
-		de = find_in_inline_dir(dir, fname, res_page);
+		de = f2fs_find_in_inline_dir(dir, fname, res_page);
 		goto out;
 	}
 
@@ -319,7 +319,7 @@ static void init_dent_inode(const struct qstr *name, struct page *ipage)
 	set_page_dirty(ipage);
 }
 
-void do_make_empty_dir(struct inode *inode, struct inode *parent,
+void f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent,
 			struct f2fs_dentry_ptr *d)
 {
 	struct qstr dot = QSTR_INIT(".", 1);
@@ -340,23 +340,23 @@ static int make_empty_dir(struct inode *inode,
 	struct f2fs_dentry_ptr d;
 
 	if (f2fs_has_inline_dentry(inode))
-		return make_empty_inline_dir(inode, parent, page);
+		return f2fs_make_empty_inline_dir(inode, parent, page);
 
-	dentry_page = get_new_data_page(inode, page, 0, true);
+	dentry_page = f2fs_get_new_data_page(inode, page, 0, true);
 	if (IS_ERR(dentry_page))
 		return PTR_ERR(dentry_page);
 
 	dentry_blk = page_address(dentry_page);
 
 	make_dentry_ptr_block(NULL, &d, dentry_blk);
-	do_make_empty_dir(inode, parent, &d);
+	f2fs_do_make_empty_dir(inode, parent, &d);
 
 	set_page_dirty(dentry_page);
 	f2fs_put_page(dentry_page, 1);
 	return 0;
 }
 
-struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
+struct page *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir,
 			const struct qstr *new_name, const struct qstr *orig_name,
 			struct page *dpage)
 {
@@ -365,7 +365,7 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
 	int err;
 
 	if (is_inode_flag_set(inode, FI_NEW_INODE)) {
-		page = new_inode_page(inode);
+		page = f2fs_new_inode_page(inode);
 		if (IS_ERR(page))
 			return page;
 
@@ -395,7 +395,7 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
 			goto put_error;
 		}
 	} else {
-		page = get_node_page(F2FS_I_SB(dir), inode->i_ino);
+		page = f2fs_get_node_page(F2FS_I_SB(dir), inode->i_ino);
 		if (IS_ERR(page))
 			return page;
 	}
@@ -418,19 +418,19 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
 		 * we should remove this inode from orphan list.
 		 */
 		if (inode->i_nlink == 0)
-			remove_orphan_inode(F2FS_I_SB(dir), inode->i_ino);
+			f2fs_remove_orphan_inode(F2FS_I_SB(dir), inode->i_ino);
 		f2fs_i_links_write(inode, true);
 	}
 	return page;
 
 put_error:
 	clear_nlink(inode);
-	update_inode(inode, page);
+	f2fs_update_inode(inode, page);
 	f2fs_put_page(page, 1);
 	return ERR_PTR(err);
 }
 
-void update_parent_metadata(struct inode *dir, struct inode *inode,
+void f2fs_update_parent_metadata(struct inode *dir, struct inode *inode,
 			unsigned int current_depth)
 {
 	if (inode && is_inode_flag_set(inode, FI_NEW_INODE)) {
@@ -448,7 +448,7 @@ void update_parent_metadata(struct inode *dir, struct inode *inode,
 		clear_inode_flag(inode, FI_INC_LINK);
 }
 
-int room_for_filename(const void *bitmap, int slots, int max_slots)
+int f2fs_room_for_filename(const void *bitmap, int slots, int max_slots)
 {
 	int bit_start = 0;
 	int zero_start, zero_end;
@@ -537,12 +537,12 @@ start:
 				(le32_to_cpu(dentry_hash) % nbucket));
 
 	for (block = bidx; block <= (bidx + nblock - 1); block++) {
-		dentry_page = get_new_data_page(dir, NULL, block, true);
+		dentry_page = f2fs_get_new_data_page(dir, NULL, block, true);
 		if (IS_ERR(dentry_page))
 			return PTR_ERR(dentry_page);
 
 		dentry_blk = page_address(dentry_page);
-		bit_pos = room_for_filename(&dentry_blk->dentry_bitmap,
+		bit_pos = f2fs_room_for_filename(&dentry_blk->dentry_bitmap,
 						slots, NR_DENTRY_IN_BLOCK);
 		if (bit_pos < NR_DENTRY_IN_BLOCK)
 			goto add_dentry;
@@ -558,7 +558,7 @@ add_dentry:
 
 	if (inode) {
 		down_write(&F2FS_I(inode)->i_sem);
-		page = init_inode_metadata(inode, dir, new_name,
+		page = f2fs_init_inode_metadata(inode, dir, new_name,
 						orig_name, NULL);
 		if (IS_ERR(page)) {
 			err = PTR_ERR(page);
@@ -576,7 +576,7 @@ add_dentry:
 		f2fs_put_page(page, 1);
 	}
 
-	update_parent_metadata(dir, inode, current_depth);
+	f2fs_update_parent_metadata(dir, inode, current_depth);
 fail:
 	if (inode)
 		up_write(&F2FS_I(inode)->i_sem);
@@ -586,7 +586,7 @@ fail:
 	return err;
 }
 
-int __f2fs_do_add_link(struct inode *dir, struct fscrypt_name *fname,
+int f2fs_add_dentry(struct inode *dir, struct fscrypt_name *fname,
 				struct inode *inode, nid_t ino, umode_t mode)
 {
 	struct qstr new_name;
@@ -610,7 +610,7 @@ int __f2fs_do_add_link(struct inode *dir, struct fscrypt_name *fname,
  * Caller should grab and release a rwsem by calling f2fs_lock_op() and
  * f2fs_unlock_op().
  */
-int __f2fs_add_link(struct inode *dir, const struct qstr *name,
+int f2fs_do_add_link(struct inode *dir, const struct qstr *name,
 				struct inode *inode, nid_t ino, umode_t mode)
 {
 	struct fscrypt_name fname;
@@ -639,7 +639,7 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name,
 	} else if (IS_ERR(page)) {
 		err = PTR_ERR(page);
 	} else {
-		err = __f2fs_do_add_link(dir, &fname, inode, ino, mode);
+		err = f2fs_add_dentry(dir, &fname, inode, ino, mode);
 	}
 	fscrypt_free_filename(&fname);
 	return err;
@@ -651,7 +651,7 @@ int f2fs_do_tmpfile(struct inode *inode, struct inode *dir)
 	int err = 0;
 
 	down_write(&F2FS_I(inode)->i_sem);
-	page = init_inode_metadata(inode, dir, NULL, NULL, NULL);
+	page = f2fs_init_inode_metadata(inode, dir, NULL, NULL, NULL);
 	if (IS_ERR(page)) {
 		err = PTR_ERR(page);
 		goto fail;
@@ -683,9 +683,9 @@ void f2fs_drop_nlink(struct inode *dir, struct inode *inode)
 	up_write(&F2FS_I(inode)->i_sem);
 
 	if (inode->i_nlink == 0)
-		add_orphan_inode(inode);
+		f2fs_add_orphan_inode(inode);
 	else
-		release_orphan_inode(sbi);
+		f2fs_release_orphan_inode(sbi);
 }
 
 /*
@@ -698,14 +698,12 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
 	struct f2fs_dentry_block *dentry_blk;
 	unsigned int bit_pos;
 	int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len));
-	struct address_space *mapping = page_mapping(page);
-	unsigned long flags;
 	int i;
 
 	f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);
 
 	if (F2FS_OPTION(F2FS_I_SB(dir)).fsync_mode == FSYNC_MODE_STRICT)
-		add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);
+		f2fs_add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);
 
 	if (f2fs_has_inline_dentry(dir))
 		return f2fs_delete_inline_entry(dentry, page, dir, inode);
@@ -731,17 +729,13 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
 		f2fs_drop_nlink(dir, inode);
 
 	if (bit_pos == NR_DENTRY_IN_BLOCK &&
-		!truncate_hole(dir, page->index, page->index + 1)) {
-		xa_lock_irqsave(&mapping->i_pages, flags);
-		radix_tree_tag_clear(&mapping->i_pages, page_index(page),
-				PAGECACHE_TAG_DIRTY);
-		xa_unlock_irqrestore(&mapping->i_pages, flags);
-
+		!f2fs_truncate_hole(dir, page->index, page->index + 1)) {
+		f2fs_clear_radix_tree_dirty_tag(page);
 		clear_page_dirty_for_io(page);
 		ClearPagePrivate(page);
 		ClearPageUptodate(page);
 		inode_dec_dirty_pages(dir);
-		remove_dirty_inode(dir);
+		f2fs_remove_dirty_inode(dir);
 	}
 	f2fs_put_page(page, 1);
 }
@@ -758,7 +752,7 @@ bool f2fs_empty_dir(struct inode *dir)
 		return f2fs_empty_inline_dir(dir);
 
 	for (bidx = 0; bidx < nblock; bidx++) {
-		dentry_page = get_lock_data_page(dir, bidx, false);
+		dentry_page = f2fs_get_lock_data_page(dir, bidx, false);
 		if (IS_ERR(dentry_page)) {
 			if (PTR_ERR(dentry_page) == -ENOENT)
 				continue;
@@ -806,7 +800,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
 			continue;
 		}
 
-		d_type = get_de_type(de);
+		d_type = f2fs_get_de_type(de);
 
 		de_name.name = d->filename[bit_pos];
 		de_name.len = le16_to_cpu(de->name_len);
@@ -830,7 +824,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
 			return 1;
 
 		if (sbi->readdir_ra == 1)
-			ra_node_page(sbi, le32_to_cpu(de->ino));
+			f2fs_ra_node_page(sbi, le32_to_cpu(de->ino));
 
 		bit_pos += GET_DENTRY_SLOTS(le16_to_cpu(de->name_len));
 		ctx->pos = start_pos + bit_pos;
@@ -880,7 +874,7 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
 		page_cache_sync_readahead(inode->i_mapping, ra, file, n,
 				min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES));
 
-		dentry_page = get_lock_data_page(inode, n, false);
+		dentry_page = f2fs_get_lock_data_page(inode, n, false);
 		if (IS_ERR(dentry_page)) {
 			err = PTR_ERR(dentry_page);
 			if (err == -ENOENT) {
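f2fs_room_for_filename() above scans a dentry bitmap for a run of `slots` consecutive free entries, returning the starting bit or `max_slots` when nothing fits. A standalone sketch of that scan (byte-wise bit testing and hypothetical helper names here; the kernel code uses the find_next_bit_le machinery instead):

```c
#include <assert.h>

/* Test one bit in a little-endian-style byte bitmap. */
static int slot_taken(const unsigned char *bitmap, int pos)
{
	return (bitmap[pos >> 3] >> (pos & 7)) & 1;
}

/* Find `slots` consecutive clear bits in [0, max_slots); return the
 * starting bit, or max_slots if no run is long enough. */
static int room_for_slots(const unsigned char *bitmap, int slots, int max_slots)
{
	int bit_start = 0;

	while (bit_start < max_slots) {
		int zero_start = bit_start;
		int zero_end;

		/* skip over occupied slots */
		while (zero_start < max_slots && slot_taken(bitmap, zero_start))
			zero_start++;
		if (zero_start >= max_slots)
			return max_slots;

		/* measure the run of free slots */
		zero_end = zero_start;
		while (zero_end < max_slots && !slot_taken(bitmap, zero_end))
			zero_end++;
		if (zero_end - zero_start >= slots)
			return zero_start;
		bit_start = zero_end + 1;
	}
	return max_slots;
}

/* sample data: slots 0 and 2 are occupied (0b00000101); all 8 taken */
static const unsigned char sample_bitmap[1] = { 0x05 };
static const unsigned char full_bitmap[1] = { 0xff };
```

A single-slot name fits at bit 1 of `sample_bitmap`, while a two-slot name has to skip the one-bit gap and start at bit 3.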
@@ -49,7 +49,7 @@ static struct rb_entry *__lookup_rb_tree_slow(struct rb_root *root,
 	return NULL;
 }
 
-struct rb_entry *__lookup_rb_tree(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree(struct rb_root *root,
 				struct rb_entry *cached_re, unsigned int ofs)
 {
 	struct rb_entry *re;
@@ -61,7 +61,7 @@ struct rb_entry *__lookup_rb_tree(struct rb_root *root,
 	return re;
 }
 
-struct rb_node **__lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
+struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
 				struct rb_root *root, struct rb_node **parent,
 				unsigned int ofs)
 {
@@ -92,7 +92,7 @@ struct rb_node **__lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
  * in order to simpfy the insertion after.
  * tree must stay unchanged between lookup and insertion.
 */
-struct rb_entry *__lookup_rb_tree_ret(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root *root,
 				struct rb_entry *cached_re,
 				unsigned int ofs,
 				struct rb_entry **prev_entry,
@@ -159,7 +159,7 @@ lookup_neighbors:
 	return re;
 }
 
-bool __check_rb_tree_consistence(struct f2fs_sb_info *sbi,
+bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi,
 						struct rb_root *root)
 {
 #ifdef CONFIG_F2FS_CHECK_FS
@@ -390,7 +390,7 @@ static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
 		goto out;
 	}
 
-	en = (struct extent_node *)__lookup_rb_tree(&et->root,
+	en = (struct extent_node *)f2fs_lookup_rb_tree(&et->root,
 				(struct rb_entry *)et->cached_en, pgofs);
 	if (!en)
 		goto out;
@@ -470,7 +470,7 @@ static struct extent_node *__insert_extent_tree(struct inode *inode,
 		goto do_insert;
 	}
 
-	p = __lookup_rb_tree_for_insert(sbi, &et->root, &parent, ei->fofs);
+	p = f2fs_lookup_rb_tree_for_insert(sbi, &et->root, &parent, ei->fofs);
 do_insert:
 	en = __attach_extent_node(sbi, et, ei, parent, p);
 	if (!en)
@@ -520,7 +520,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
 	__drop_largest_extent(inode, fofs, len);
 
 	/* 1. lookup first extent node in range [fofs, fofs + len - 1] */
-	en = (struct extent_node *)__lookup_rb_tree_ret(&et->root,
+	en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
 				(struct rb_entry *)et->cached_en, fofs,
 				(struct rb_entry **)&prev_en,
 				(struct rb_entry **)&next_en,
@@ -773,7 +773,7 @@ void f2fs_update_extent_cache(struct dnode_of_data *dn)
 	else
 		blkaddr = dn->data_blkaddr;
 
-	fofs = start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
+	fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
 						dn->ofs_in_node;
 	f2fs_update_extent_tree_range(dn->inode, fofs, blkaddr, 1);
 }
@@ -788,7 +788,7 @@ void f2fs_update_extent_cache_range(struct dnode_of_data *dn,
 	f2fs_update_extent_tree_range(dn->inode, fofs, blkaddr, len);
 }
 
-void init_extent_cache_info(struct f2fs_sb_info *sbi)
+void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi)
 {
 	INIT_RADIX_TREE(&sbi->extent_tree_root, GFP_NOIO);
 	mutex_init(&sbi->extent_tree_lock);
@@ -800,7 +800,7 @@ void init_extent_cache_info(struct f2fs_sb_info *sbi)
 	atomic_set(&sbi->total_ext_node, 0);
 }
 
-int __init create_extent_cache(void)
+int __init f2fs_create_extent_cache(void)
 {
 	extent_tree_slab = f2fs_kmem_cache_create("f2fs_extent_tree",
 			sizeof(struct extent_tree));
@@ -815,7 +815,7 @@ int __init create_extent_cache(void)
 	return 0;
 }
 
-void destroy_extent_cache(void)
+void f2fs_destroy_extent_cache(void)
 {
 	kmem_cache_destroy(extent_node_slab);
 	kmem_cache_destroy(extent_tree_slab);
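The renamed f2fs_lookup_rb_tree() above consults the cached entry (`cached_re`) before walking the rb-tree, so repeated lookups near the same file offset skip the tree walk. A toy illustration of that cached-entry fast path (a plain array stands in for the rb-tree; names and types here are hypothetical, not f2fs API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy extent: covers file offsets [fofs, fofs + len). */
struct extent {
	unsigned int fofs;
	unsigned int len;
};

/* Try the cached entry first; fall back to a linear scan standing in
 * for the rb-tree walk. Returns NULL when no extent covers ofs. */
static struct extent *lookup_extent(struct extent *tab, size_t n,
				struct extent *cached, unsigned int ofs)
{
	size_t i;

	/* fast path: last hit often covers the next lookup too */
	if (cached && ofs >= cached->fofs && ofs < cached->fofs + cached->len)
		return cached;

	/* slow path: search the full set */
	for (i = 0; i < n; i++)
		if (ofs >= tab[i].fofs && ofs < tab[i].fofs + tab[i].len)
			return &tab[i];
	return NULL;
}

/* sample extents: [0,4) and [10,16) */
static struct extent sample_tab[2] = { { 0, 4 }, { 10, 6 } };
```

The payoff is the same as in the extent cache: sequential I/O keeps hitting the cached extent, and only a miss pays for the full lookup.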

fs/f2fs/f2fs.h (460 lines changed)

@@ -176,15 +176,13 @@ enum {
 #define CP_DISCARD	0x00000010
 #define CP_TRIMMED	0x00000020
 
-#define DEF_BATCHED_TRIM_SECTIONS	2048
-#define BATCHED_TRIM_SEGMENTS(sbi)	\
-		(GET_SEG_FROM_SEC(sbi, SM_I(sbi)->trim_sections))
-#define BATCHED_TRIM_BLOCKS(sbi)	\
-		(BATCHED_TRIM_SEGMENTS(sbi) << (sbi)->log_blocks_per_seg)
 #define MAX_DISCARD_BLOCKS(sbi)		BLKS_PER_SEC(sbi)
 #define DEF_MAX_DISCARD_REQUEST		8	/* issue 8 discards per round */
+#define DEF_MAX_DISCARD_LEN		512	/* Max. 2MB per discard */
 #define DEF_MIN_DISCARD_ISSUE_TIME	50	/* 50 ms, if exists */
+#define DEF_MID_DISCARD_ISSUE_TIME	500	/* 500 ms, if device busy */
 #define DEF_MAX_DISCARD_ISSUE_TIME	60000	/* 60 s, if no candidates */
+#define DEF_DISCARD_URGENT_UTIL		80	/* do more discard over 80% */
 #define DEF_CP_INTERVAL			60	/* 60 secs */
 #define DEF_IDLE_INTERVAL		5	/* 5 secs */
 
@@ -285,6 +283,7 @@ enum {
 struct discard_policy {
 	int type;			/* type of discard */
 	unsigned int min_interval;	/* used for candidates exist */
+	unsigned int mid_interval;	/* used for device busy */
 	unsigned int max_interval;	/* used for candidates not exist */
 	unsigned int max_requests;	/* # of discards issued per round */
 	unsigned int io_aware_gran;	/* minimum granularity discard not be aware of I/O */
@@ -620,15 +619,20 @@ enum {
 
 #define DEF_DIR_LEVEL		0
 
+enum {
+	GC_FAILURE_PIN,
+	GC_FAILURE_ATOMIC,
+	MAX_GC_FAILURE
+};
+
 struct f2fs_inode_info {
 	struct inode vfs_inode;		/* serve a vfs inode */
 	unsigned long i_flags;		/* keep an inode flags for ioctl */
 	unsigned char i_advise;		/* use to give file attribute hints */
 	unsigned char i_dir_level;	/* use for dentry level for large dir */
-	union {
-		unsigned int i_current_depth;	/* only for directory depth */
-		unsigned short i_gc_failures;	/* only for regular file */
-	};
+	unsigned int i_current_depth;	/* only for directory depth */
+	/* for gc failure statistic */
+	unsigned int i_gc_failures[MAX_GC_FAILURE];
 	unsigned int i_pino;		/* parent inode number */
 	umode_t i_acl_mode;		/* keep file acl mode temporarily */
 
@@ -656,7 +660,9 @@ struct f2fs_inode_info {
 	struct task_struct *inmem_task;	/* store inmemory task */
 	struct mutex inmem_lock;	/* lock for inmemory pages */
 	struct extent_tree *extent_tree;	/* cached extent_tree entry */
-	struct rw_semaphore dio_rwsem[2];/* avoid racing between dio and gc */
+
+	/* avoid racing between foreground op and gc */
+	struct rw_semaphore i_gc_rwsem[2];
 	struct rw_semaphore i_mmap_sem;
 	struct rw_semaphore i_xattr_sem; /* avoid racing between reading and changing EAs */
 
@@ -694,7 +700,8 @@ static inline void set_extent_info(struct extent_info *ei, unsigned int fofs,
 static inline bool __is_discard_mergeable(struct discard_info *back,
 						struct discard_info *front)
 {
-	return back->lstart + back->len == front->lstart;
+	return (back->lstart + back->len == front->lstart) &&
+		(back->len + front->len < DEF_MAX_DISCARD_LEN);
 }
 
 static inline bool __is_discard_back_mergeable(struct discard_info *cur,
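The amended __is_discard_mergeable() above is where the cover letter's "split large discard commands into smaller ones" lands: two adjacent ranges merge only while the combined length stays under DEF_MAX_DISCARD_LEN, so no single discard grows unboundedly. A self-contained restatement of that predicate (plain userspace types; the 512-block constant is taken from the hunk above):

```c
#include <assert.h>
#include <stdbool.h>

#define DEF_MAX_DISCARD_LEN	512	/* Max. 2MB per discard, as in the patch */

struct discard_info {
	unsigned int lstart;	/* starting block of the range */
	unsigned int len;	/* length in blocks */
};

/* Merge requires both adjacency and a bounded combined length. */
static bool discard_mergeable(struct discard_info *back,
				struct discard_info *front)
{
	return (back->lstart + back->len == front->lstart) &&
		(back->len + front->len < DEF_MAX_DISCARD_LEN);
}

/* sample ranges for the three cases */
static struct discard_info adj_back = { 0, 100 };
static struct discard_info adj_front = { 100, 100 };	/* adjacent, small */
static struct discard_info gap_front = { 101, 100 };	/* not adjacent */
static struct discard_info big_back = { 0, 300 };
static struct discard_info big_front = { 300, 300 };	/* adjacent, too big */
```

Capping the merged length keeps each queued discard small, which is what lets the issue thread stay responsive and preempt discards under I/O load.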
@ -1005,6 +1012,7 @@ struct f2fs_io_info {
|
|||
int need_lock; /* indicate we need to lock cp_rwsem */
|
||||
bool in_list; /* indicate fio is in io_list */
|
||||
bool is_meta; /* indicate borrow meta inode mapping or not */
|
||||
bool retry; /* need to reallocate block address */
|
||||
enum iostat_type io_type; /* io type */
|
||||
struct writeback_control *io_wbc; /* writeback control */
|
||||
};
|
||||
|
@ -1066,6 +1074,13 @@ enum {
|
|||
MAX_TIME,
|
||||
};
|
||||
|
||||
enum {
|
||||
GC_NORMAL,
|
||||
GC_IDLE_CB,
|
||||
GC_IDLE_GREEDY,
|
||||
GC_URGENT,
|
||||
};
|
||||
|
||||
enum {
|
||||
WHINT_MODE_OFF, /* not pass down write hints */
|
||||
WHINT_MODE_USER, /* try to pass down hints given by users */
|
||||
|
@ -1080,6 +1095,7 @@ enum {
|
|||
enum fsync_mode {
|
||||
FSYNC_MODE_POSIX, /* fsync follows posix semantics */
|
||||
FSYNC_MODE_STRICT, /* fsync behaves in line with ext4 */
|
||||
FSYNC_MODE_NOBARRIER, /* fsync behaves nobarrier based on posix */
|
||||
};
|
||||
|
||||
#ifdef CONFIG_F2FS_FS_ENCRYPTION
|
||||
|
@ -1113,6 +1129,8 @@ struct f2fs_sb_info {
|
|||
struct f2fs_bio_info *write_io[NR_PAGE_TYPE]; /* for write bios */
|
||||
struct mutex wio_mutex[NR_PAGE_TYPE - 1][NR_TEMP_TYPE];
|
||||
/* bio ordering for NODE/DATA */
|
||||
/* keep migration IO order for LFS mode */
|
||||
struct rw_semaphore io_order_lock;
|
||||
mempool_t *write_io_dummy; /* Dummy pages */
|
||||
|
||||
/* for checkpoint */
|
||||
|
@ -1183,7 +1201,7 @@ struct f2fs_sb_info {
|
|||
struct percpu_counter alloc_valid_block_count;
|
||||
|
||||
/* writeback control */
|
||||
atomic_t wb_sync_req; /* count # of WB_SYNC threads */
|
||||
atomic_t wb_sync_req[META]; /* count # of WB_SYNC threads */
|
||||
|
||||
/* valid inode count */
|
||||
struct percpu_counter total_valid_inode_count;
|
||||
|
@ -1194,9 +1212,9 @@ struct f2fs_sb_info {
|
|||
struct mutex gc_mutex; /* mutex for GC */
|
||||
struct f2fs_gc_kthread *gc_thread; /* GC thread */
|
||||
unsigned int cur_victim_sec; /* current victim section num */
|
||||
|
||||
/* threshold for converting bg victims for fg */
|
||||
u64 fggc_threshold;
|
||||
unsigned int gc_mode; /* current GC state */
|
||||
/* for skip statistic */
|
||||
unsigned long long skipped_atomic_files[2]; /* FG_GC and BG_GC */
|
||||
|
||||
/* threshold for gc trials on pinned files */
|
||||
u64 gc_pin_file_threshold;
|
||||
|
@ -1586,18 +1604,6 @@ static inline bool __exist_node_summaries(struct f2fs_sb_info *sbi)
|
|||
is_set_ckpt_flags(sbi, CP_FASTBOOT_FLAG));
|
||||
}
|
||||
|
||||
/*
|
||||
* Check whether the given nid is within node id range.
|
||||
*/
|
||||
static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)
|
||||
{
|
||||
if (unlikely(nid < F2FS_ROOT_INO(sbi)))
|
||||
return -EINVAL;
|
||||
if (unlikely(nid >= NM_I(sbi)->max_nid))
|
||||
return -EINVAL;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Check whether the inode has blocks or not
|
||||
*/
|
||||
|
@ -1614,7 +1620,7 @@ static inline bool f2fs_has_xattr_block(unsigned int ofs)
|
|||
}
|
||||
|
||||
static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
|
||||
struct inode *inode)
|
||||
struct inode *inode, bool cap)
|
||||
{
|
||||
if (!inode)
|
||||
return true;
|
||||
|
@ -1627,7 +1633,7 @@ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
|
|||
if (!gid_eq(F2FS_OPTION(sbi).s_resgid, GLOBAL_ROOT_GID) &&
|
||||
in_group_p(F2FS_OPTION(sbi).s_resgid))
|
||||
return true;
|
||||
if (capable(CAP_SYS_RESOURCE))
|
||||
if (cap && capable(CAP_SYS_RESOURCE))
|
||||
return true;
|
||||
return false;
|
||||
}
|
||||
|
@ -1662,7 +1668,7 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
|
|||
avail_user_block_count = sbi->user_block_count -
|
||||
sbi->current_reserved_blocks;
|
||||
|
||||
if (!__allow_reserved_blocks(sbi, inode))
|
||||
if (!__allow_reserved_blocks(sbi, inode, true))
|
||||
avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
|
||||
|
||||
if (unlikely(sbi->total_valid_block_count > avail_user_block_count)) {
|
||||
|
@ -1869,7 +1875,7 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
|
|||
valid_block_count = sbi->total_valid_block_count +
|
||||
sbi->current_reserved_blocks + 1;
|
||||
|
||||
if (!__allow_reserved_blocks(sbi, inode))
|
||||
if (!__allow_reserved_blocks(sbi, inode, false))
|
||||
valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
|
||||
|
||||
if (unlikely(valid_block_count > sbi->user_block_count)) {
|
||||
|
@ -2156,9 +2162,60 @@ static inline void f2fs_change_bit(unsigned int nr, char *addr)
|
|||
*addr ^= mask;
|
||||
}
|
||||
|
||||
#define F2FS_REG_FLMASK (~(FS_DIRSYNC_FL | FS_TOPDIR_FL))
|
||||
#define F2FS_OTHER_FLMASK (FS_NODUMP_FL | FS_NOATIME_FL)
|
||||
#define F2FS_FL_INHERITED (FS_PROJINHERIT_FL)
|
||||
/*
|
||||
* Inode flags
|
||||
*/
|
||||
#define F2FS_SECRM_FL 0x00000001 /* Secure deletion */
|
||||
#define F2FS_UNRM_FL 0x00000002 /* Undelete */
|
||||
#define F2FS_COMPR_FL 0x00000004 /* Compress file */
|
||||
#define F2FS_SYNC_FL 0x00000008 /* Synchronous updates */
|
||||
#define F2FS_IMMUTABLE_FL 0x00000010 /* Immutable file */
|
||||
#define F2FS_APPEND_FL 0x00000020 /* writes to file may only append */
|
||||
#define F2FS_NODUMP_FL 0x00000040 /* do not dump file */
|
||||
#define F2FS_NOATIME_FL 0x00000080 /* do not update atime */
|
||||
/* Reserved for compression usage... */
|
||||
#define F2FS_DIRTY_FL 0x00000100
|
||||
#define F2FS_COMPRBLK_FL 0x00000200 /* One or more compressed clusters */
|
||||
#define F2FS_NOCOMPR_FL 0x00000400 /* Don't compress */
|
||||
#define F2FS_ENCRYPT_FL 0x00000800 /* encrypted file */
|
||||
/* End compression flags --- maybe not all used */
|
||||
#define F2FS_INDEX_FL 0x00001000 /* hash-indexed directory */
|
||||
#define F2FS_IMAGIC_FL 0x00002000 /* AFS directory */
|
||||
#define F2FS_JOURNAL_DATA_FL 0x00004000 /* file data should be journaled */
|
||||
#define F2FS_NOTAIL_FL 0x00008000 /* file tail should not be merged */
|
||||
#define F2FS_DIRSYNC_FL 0x00010000 /* dirsync behaviour (directories only) */
|
||||
#define F2FS_TOPDIR_FL 0x00020000 /* Top of directory hierarchies*/
|
||||
#define F2FS_HUGE_FILE_FL 0x00040000 /* Set to each huge file */
|
||||
#define F2FS_EXTENTS_FL 0x00080000 /* Inode uses extents */
|
||||
#define F2FS_EA_INODE_FL 0x00200000 /* Inode used for large EA */
|
||||
#define F2FS_EOFBLOCKS_FL 0x00400000 /* Blocks allocated beyond EOF */
|
||||
#define F2FS_INLINE_DATA_FL 0x10000000 /* Inode has inline data. */
|
||||
#define F2FS_PROJINHERIT_FL 0x20000000 /* Create with parents projid */
|
||||
#define F2FS_RESERVED_FL 0x80000000 /* reserved for ext4 lib */
|
||||
|
||||
#define F2FS_FL_USER_VISIBLE 0x304BDFFF /* User visible flags */
|
||||
#define F2FS_FL_USER_MODIFIABLE 0x204BC0FF /* User modifiable flags */
|
||||
|
||||
/* Flags we can manipulate with through F2FS_IOC_FSSETXATTR */
|
||||
#define F2FS_FL_XFLAG_VISIBLE (F2FS_SYNC_FL | \
|
||||
F2FS_IMMUTABLE_FL | \
|
||||
F2FS_APPEND_FL | \
|
||||
F2FS_NODUMP_FL | \
|
||||
F2FS_NOATIME_FL | \
|
||||
F2FS_PROJINHERIT_FL)
|
||||
|
||||
/* Flags that should be inherited by new inodes from their parent. */
|
||||
#define F2FS_FL_INHERITED (F2FS_SECRM_FL | F2FS_UNRM_FL | F2FS_COMPR_FL |\
|
||||
F2FS_SYNC_FL | F2FS_NODUMP_FL | F2FS_NOATIME_FL |\
|
||||
F2FS_NOCOMPR_FL | F2FS_JOURNAL_DATA_FL |\
|
||||
F2FS_NOTAIL_FL | F2FS_DIRSYNC_FL |\
|
||||
F2FS_PROJINHERIT_FL)
|
||||
|
||||
/* Flags that are appropriate for regular files (all but dir-specific ones). */
|
||||
#define F2FS_REG_FLMASK (~(F2FS_DIRSYNC_FL | F2FS_TOPDIR_FL))
|
||||
|
||||
/* Flags that are appropriate for non-directories/regular files. */
|
||||
#define F2FS_OTHER_FLMASK (F2FS_NODUMP_FL | F2FS_NOATIME_FL)
|
||||
|
||||
static inline __u32 f2fs_mask_flags(umode_t mode, __u32 flags)
|
||||
{
|
||||
|
@ -2201,6 +2258,7 @@ enum {
|
|||
FI_EXTRA_ATTR, /* indicate file has extra attribute */
|
||||
FI_PROJ_INHERIT, /* indicate file inherits projectid */
|
||||
FI_PIN_FILE, /* indicate file should not be gced */
|
||||
FI_ATOMIC_REVOKE_REQUEST, /* request to drop atomic data */
|
||||
};
|
||||
|
||||
static inline void __mark_inode_dirty_flag(struct inode *inode,
|
||||
|
@ -2299,7 +2357,7 @@ static inline void f2fs_i_depth_write(struct inode *inode, unsigned int depth)
|
|||
static inline void f2fs_i_gc_failures_write(struct inode *inode,
|
||||
unsigned int count)
|
||||
{
|
||||
F2FS_I(inode)->i_gc_failures = count;
|
||||
F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN] = count;
|
||||
f2fs_mark_inode_dirty_sync(inode, true);
|
||||
}
|
||||
|
||||
|
@ -2568,7 +2626,7 @@ static inline int get_inline_xattr_addrs(struct inode *inode)
|
|||
return F2FS_I(inode)->i_inline_xattr_size;
|
||||
}
|
||||
|
||||
#define get_inode_mode(i) \
|
||||
#define f2fs_get_inode_mode(i) \
|
||||
((is_inode_flag_set(i, FI_ACL_MODE)) ? \
|
||||
(F2FS_I(i)->i_acl_mode) : ((i)->i_mode))
|
||||
|
||||
|
@ -2607,18 +2665,25 @@ static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi,
|
|||
spin_unlock(&sbi->iostat_lock);
|
||||
}
|
||||
|
||||
static inline bool is_valid_blkaddr(block_t blkaddr)
|
||||
{
|
||||
if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR)
|
||||
return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* file.c
|
||||
*/
|
||||
int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync);
|
||||
void truncate_data_blocks(struct dnode_of_data *dn);
|
||||
int truncate_blocks(struct inode *inode, u64 from, bool lock);
|
||||
void f2fs_truncate_data_blocks(struct dnode_of_data *dn);
|
||||
int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock);
|
||||
int f2fs_truncate(struct inode *inode);
|
||||
int f2fs_getattr(const struct path *path, struct kstat *stat,
|
||||
u32 request_mask, unsigned int flags);
|
||||
int f2fs_setattr(struct dentry *dentry, struct iattr *attr);
|
||||
int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end);
|
||||
void truncate_data_blocks_range(struct dnode_of_data *dn, int count);
|
||||
int f2fs_truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end);
|
||||
void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count);
|
||||
int f2fs_precache_extents(struct inode *inode);
|
||||
long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
|
||||
long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
|
||||
|
@@ -2632,38 +2697,37 @@ bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page);
 void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page);
 struct inode *f2fs_iget(struct super_block *sb, unsigned long ino);
 struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino);
-int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink);
-void update_inode(struct inode *inode, struct page *node_page);
-void update_inode_page(struct inode *inode);
+int f2fs_try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink);
+void f2fs_update_inode(struct inode *inode, struct page *node_page);
+void f2fs_update_inode_page(struct inode *inode);
 int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc);
 void f2fs_evict_inode(struct inode *inode);
-void handle_failed_inode(struct inode *inode);
+void f2fs_handle_failed_inode(struct inode *inode);
 
 /*
  * namei.c
  */
-int update_extension_list(struct f2fs_sb_info *sbi, const char *name,
+int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
 			bool hot, bool set);
 struct dentry *f2fs_get_parent(struct dentry *child);
 
 /*
  * dir.c
  */
-void set_de_type(struct f2fs_dir_entry *de, umode_t mode);
-unsigned char get_de_type(struct f2fs_dir_entry *de);
-struct f2fs_dir_entry *find_target_dentry(struct fscrypt_name *fname,
+unsigned char f2fs_get_de_type(struct f2fs_dir_entry *de);
+struct f2fs_dir_entry *f2fs_find_target_dentry(struct fscrypt_name *fname,
 			f2fs_hash_t namehash, int *max_slots,
 			struct f2fs_dentry_ptr *d);
 int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
 			unsigned int start_pos, struct fscrypt_str *fstr);
-void do_make_empty_dir(struct inode *inode, struct inode *parent,
+void f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent,
 			struct f2fs_dentry_ptr *d);
-struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
+struct page *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir,
 			const struct qstr *new_name,
 			const struct qstr *orig_name, struct page *dpage);
-void update_parent_metadata(struct inode *dir, struct inode *inode,
+void f2fs_update_parent_metadata(struct inode *dir, struct inode *inode,
 			unsigned int current_depth);
-int room_for_filename(const void *bitmap, int slots, int max_slots);
+int f2fs_room_for_filename(const void *bitmap, int slots, int max_slots);
 void f2fs_drop_nlink(struct inode *dir, struct inode *inode);
 struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
 			struct fscrypt_name *fname, struct page **res_page);
@@ -2680,9 +2744,9 @@ void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d,
 int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
 			const struct qstr *orig_name,
 			struct inode *inode, nid_t ino, umode_t mode);
-int __f2fs_do_add_link(struct inode *dir, struct fscrypt_name *fname,
+int f2fs_add_dentry(struct inode *dir, struct fscrypt_name *fname,
 			struct inode *inode, nid_t ino, umode_t mode);
-int __f2fs_add_link(struct inode *dir, const struct qstr *name,
+int f2fs_do_add_link(struct inode *dir, const struct qstr *name,
 			struct inode *inode, nid_t ino, umode_t mode);
 void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
 			struct inode *dir, struct inode *inode);
@@ -2691,7 +2755,7 @@ bool f2fs_empty_dir(struct inode *dir);
 
 static inline int f2fs_add_link(struct dentry *dentry, struct inode *inode)
 {
-	return __f2fs_add_link(d_inode(dentry->d_parent), &dentry->d_name,
+	return f2fs_do_add_link(d_inode(dentry->d_parent), &dentry->d_name,
 				inode, inode->i_ino, inode->i_mode);
 }
 
@@ -2706,7 +2770,7 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
 int f2fs_sync_fs(struct super_block *sb, int sync);
 extern __printf(3, 4)
 void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...);
-int sanity_check_ckpt(struct f2fs_sb_info *sbi);
+int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi);
 
 /*
  * hash.c
@@ -2720,179 +2784,183 @@ f2fs_hash_t f2fs_dentry_hash(const struct qstr *name_info,
 struct dnode_of_data;
 struct node_info;
 
-bool available_free_memory(struct f2fs_sb_info *sbi, int type);
-int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid);
-bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid);
-bool need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino);
-void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni);
-pgoff_t get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs);
-int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode);
-int truncate_inode_blocks(struct inode *inode, pgoff_t from);
-int truncate_xattr_node(struct inode *inode);
-int wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino);
-int remove_inode_page(struct inode *inode);
-struct page *new_inode_page(struct inode *inode);
-struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs);
-void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid);
-struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid);
-struct page *get_node_page_ra(struct page *parent, int start);
-void move_node_page(struct page *node_page, int gc_type);
-int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
+int f2fs_check_nid_range(struct f2fs_sb_info *sbi, nid_t nid);
+bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type);
+int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid);
+bool f2fs_is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid);
+bool f2fs_need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino);
+void f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
+						struct node_info *ni);
+pgoff_t f2fs_get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs);
+int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode);
+int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from);
+int f2fs_truncate_xattr_node(struct inode *inode);
+int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino);
+int f2fs_remove_inode_page(struct inode *inode);
+struct page *f2fs_new_inode_page(struct inode *inode);
+struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs);
+void f2fs_ra_node_page(struct f2fs_sb_info *sbi, nid_t nid);
+struct page *f2fs_get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid);
+struct page *f2fs_get_node_page_ra(struct page *parent, int start);
+void f2fs_move_node_page(struct page *node_page, int gc_type);
+int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 			struct writeback_control *wbc, bool atomic);
-int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
+int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
+			struct writeback_control *wbc,
 			bool do_balance, enum iostat_type io_type);
-void build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
-bool alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
-void alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
-void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
-int try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink);
-void recover_inline_xattr(struct inode *inode, struct page *page);
-int recover_xattr_data(struct inode *inode, struct page *page);
-int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page);
-void restore_node_summary(struct f2fs_sb_info *sbi,
+void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
+bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
+void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
+void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
+int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink);
+void f2fs_recover_inline_xattr(struct inode *inode, struct page *page);
+int f2fs_recover_xattr_data(struct inode *inode, struct page *page);
+int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page);
+void f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
 			unsigned int segno, struct f2fs_summary_block *sum);
-void flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-int build_node_manager(struct f2fs_sb_info *sbi);
-void destroy_node_manager(struct f2fs_sb_info *sbi);
-int __init create_node_manager_caches(void);
-void destroy_node_manager_caches(void);
+void f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+int f2fs_build_node_manager(struct f2fs_sb_info *sbi);
+void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi);
+int __init f2fs_create_node_manager_caches(void);
+void f2fs_destroy_node_manager_caches(void);
 
 /*
  * segment.c
  */
-bool need_SSR(struct f2fs_sb_info *sbi);
-void register_inmem_page(struct inode *inode, struct page *page);
-void drop_inmem_pages_all(struct f2fs_sb_info *sbi);
-void drop_inmem_pages(struct inode *inode);
-void drop_inmem_page(struct inode *inode, struct page *page);
-int commit_inmem_pages(struct inode *inode);
+bool f2fs_need_SSR(struct f2fs_sb_info *sbi);
+void f2fs_register_inmem_page(struct inode *inode, struct page *page);
+void f2fs_drop_inmem_pages_all(struct f2fs_sb_info *sbi, bool gc_failure);
+void f2fs_drop_inmem_pages(struct inode *inode);
+void f2fs_drop_inmem_page(struct inode *inode, struct page *page);
+int f2fs_commit_inmem_pages(struct inode *inode);
 void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need);
 void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi);
 int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino);
-int create_flush_cmd_control(struct f2fs_sb_info *sbi);
+int f2fs_create_flush_cmd_control(struct f2fs_sb_info *sbi);
 int f2fs_flush_device_cache(struct f2fs_sb_info *sbi);
-void destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free);
-void invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr);
-bool is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr);
-void init_discard_policy(struct discard_policy *dpolicy, int discard_type,
-			unsigned int granularity);
-void drop_discard_cmd(struct f2fs_sb_info *sbi);
-void stop_discard_thread(struct f2fs_sb_info *sbi);
+void f2fs_destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free);
+void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr);
+bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr);
+void f2fs_drop_discard_cmd(struct f2fs_sb_info *sbi);
+void f2fs_stop_discard_thread(struct f2fs_sb_info *sbi);
 bool f2fs_wait_discard_bios(struct f2fs_sb_info *sbi);
-void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-void release_discard_addrs(struct f2fs_sb_info *sbi);
-int npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra);
-void allocate_new_segments(struct f2fs_sb_info *sbi);
+void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi,
+					struct cp_control *cpc);
+void f2fs_release_discard_addrs(struct f2fs_sb_info *sbi);
+int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra);
+void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi);
 int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range);
-bool exist_trim_candidates(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-struct page *get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno);
-void update_meta_page(struct f2fs_sb_info *sbi, void *src, block_t blk_addr);
-void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
+bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi,
+					struct cp_control *cpc);
+struct page *f2fs_get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno);
+void f2fs_update_meta_page(struct f2fs_sb_info *sbi, void *src,
+					block_t blk_addr);
+void f2fs_do_write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
 			enum iostat_type io_type);
-void write_node_page(unsigned int nid, struct f2fs_io_info *fio);
-void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio);
-int rewrite_data_page(struct f2fs_io_info *fio);
-void __f2fs_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+void f2fs_do_write_node_page(unsigned int nid, struct f2fs_io_info *fio);
+void f2fs_outplace_write_data(struct dnode_of_data *dn,
+			struct f2fs_io_info *fio);
+int f2fs_inplace_write_data(struct f2fs_io_info *fio);
+void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 			block_t old_blkaddr, block_t new_blkaddr,
 			bool recover_curseg, bool recover_newaddr);
 void f2fs_replace_block(struct f2fs_sb_info *sbi, struct dnode_of_data *dn,
 			block_t old_addr, block_t new_addr,
 			unsigned char version, bool recover_curseg,
 			bool recover_newaddr);
-void allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
 			block_t old_blkaddr, block_t *new_blkaddr,
 			struct f2fs_summary *sum, int type,
 			struct f2fs_io_info *fio, bool add_list);
 void f2fs_wait_on_page_writeback(struct page *page,
 			enum page_type type, bool ordered);
 void f2fs_wait_on_block_writeback(struct f2fs_sb_info *sbi, block_t blkaddr);
-void write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
-void write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
-int lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
+void f2fs_write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
+void f2fs_write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
+int f2fs_lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
 			unsigned int val, int alloc);
-void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-int build_segment_manager(struct f2fs_sb_info *sbi);
-void destroy_segment_manager(struct f2fs_sb_info *sbi);
-int __init create_segment_manager_caches(void);
-void destroy_segment_manager_caches(void);
-int rw_hint_to_seg_type(enum rw_hint hint);
-enum rw_hint io_type_to_rw_hint(struct f2fs_sb_info *sbi, enum page_type type,
-			enum temp_type temp);
+void f2fs_flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+int f2fs_build_segment_manager(struct f2fs_sb_info *sbi);
+void f2fs_destroy_segment_manager(struct f2fs_sb_info *sbi);
+int __init f2fs_create_segment_manager_caches(void);
+void f2fs_destroy_segment_manager_caches(void);
+int f2fs_rw_hint_to_seg_type(enum rw_hint hint);
+enum rw_hint f2fs_io_type_to_rw_hint(struct f2fs_sb_info *sbi,
+			enum page_type type, enum temp_type temp);
 
 /*
  * checkpoint.c
  */
 void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io);
-struct page *grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
-struct page *get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
-struct page *get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index);
-bool is_valid_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr, int type);
-int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
+struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index);
+bool f2fs_is_valid_meta_blkaddr(struct f2fs_sb_info *sbi,
+			block_t blkaddr, int type);
+int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 			int type, bool sync);
-void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index);
-long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
+void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index);
+long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
 			long nr_to_write, enum iostat_type io_type);
-void add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
-void remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
-void release_ino_entry(struct f2fs_sb_info *sbi, bool all);
-bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode);
-void set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+void f2fs_add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
+void f2fs_remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
+void f2fs_release_ino_entry(struct f2fs_sb_info *sbi, bool all);
+bool f2fs_exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode);
+void f2fs_set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
			unsigned int devidx, int type);
-bool is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+bool f2fs_is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
			unsigned int devidx, int type);
 int f2fs_sync_inode_meta(struct f2fs_sb_info *sbi);
-int acquire_orphan_inode(struct f2fs_sb_info *sbi);
-void release_orphan_inode(struct f2fs_sb_info *sbi);
-void add_orphan_inode(struct inode *inode);
-void remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino);
-int recover_orphan_inodes(struct f2fs_sb_info *sbi);
-int get_valid_checkpoint(struct f2fs_sb_info *sbi);
-void update_dirty_page(struct inode *inode, struct page *page);
-void remove_dirty_inode(struct inode *inode);
-int sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type);
-int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-void init_ino_entry_info(struct f2fs_sb_info *sbi);
-int __init create_checkpoint_caches(void);
-void destroy_checkpoint_caches(void);
+int f2fs_acquire_orphan_inode(struct f2fs_sb_info *sbi);
+void f2fs_release_orphan_inode(struct f2fs_sb_info *sbi);
+void f2fs_add_orphan_inode(struct inode *inode);
+void f2fs_remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino);
+int f2fs_recover_orphan_inodes(struct f2fs_sb_info *sbi);
+int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi);
+void f2fs_update_dirty_page(struct inode *inode, struct page *page);
+void f2fs_remove_dirty_inode(struct inode *inode);
+int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type);
+int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+void f2fs_init_ino_entry_info(struct f2fs_sb_info *sbi);
+int __init f2fs_create_checkpoint_caches(void);
+void f2fs_destroy_checkpoint_caches(void);
 
 /*
  * data.c
  */
+int f2fs_init_post_read_processing(void);
+void f2fs_destroy_post_read_processing(void);
 void f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum page_type type);
 void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi,
 			struct inode *inode, nid_t ino, pgoff_t idx,
 			enum page_type type);
 void f2fs_flush_merged_writes(struct f2fs_sb_info *sbi);
 int f2fs_submit_page_bio(struct f2fs_io_info *fio);
-int f2fs_submit_page_write(struct f2fs_io_info *fio);
+void f2fs_submit_page_write(struct f2fs_io_info *fio);
 struct block_device *f2fs_target_device(struct f2fs_sb_info *sbi,
 			block_t blk_addr, struct bio *bio);
 int f2fs_target_device_index(struct f2fs_sb_info *sbi, block_t blkaddr);
-void set_data_blkaddr(struct dnode_of_data *dn);
+void f2fs_set_data_blkaddr(struct dnode_of_data *dn);
 void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr);
-int reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count);
-int reserve_new_block(struct dnode_of_data *dn);
+int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count);
+int f2fs_reserve_new_block(struct dnode_of_data *dn);
 int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index);
 int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from);
 int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index);
-struct page *get_read_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
 			int op_flags, bool for_write);
-struct page *find_data_page(struct inode *inode, pgoff_t index);
-struct page *get_lock_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index);
+struct page *f2fs_get_lock_data_page(struct inode *inode, pgoff_t index,
 			bool for_write);
-struct page *get_new_data_page(struct inode *inode,
+struct page *f2fs_get_new_data_page(struct inode *inode,
 			struct page *ipage, pgoff_t index, bool new_i_size);
-int do_write_data_page(struct f2fs_io_info *fio);
+int f2fs_do_write_data_page(struct f2fs_io_info *fio);
 int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
 			int create, int flag);
 int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 			u64 start, u64 len);
-bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio);
-bool should_update_outplace(struct inode *inode, struct f2fs_io_info *fio);
-void f2fs_set_page_dirty_nobuffers(struct page *page);
-int __f2fs_write_data_pages(struct address_space *mapping,
-			struct writeback_control *wbc,
-			enum iostat_type io_type);
+bool f2fs_should_update_inplace(struct inode *inode, struct f2fs_io_info *fio);
+bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio);
 void f2fs_invalidate_page(struct page *page, unsigned int offset,
 			unsigned int length);
 int f2fs_release_page(struct page *page, gfp_t wait);
@@ -2901,22 +2969,23 @@ int f2fs_migrate_page(struct address_space *mapping, struct page *newpage,
 			struct page *page, enum migrate_mode mode);
 #endif
 bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len);
+void f2fs_clear_radix_tree_dirty_tag(struct page *page);
 
 /*
  * gc.c
  */
-int start_gc_thread(struct f2fs_sb_info *sbi);
-void stop_gc_thread(struct f2fs_sb_info *sbi);
-block_t start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
+int f2fs_start_gc_thread(struct f2fs_sb_info *sbi);
+void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi);
+block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
 int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background,
 			unsigned int segno);
-void build_gc_manager(struct f2fs_sb_info *sbi);
+void f2fs_build_gc_manager(struct f2fs_sb_info *sbi);
 
 /*
  * recovery.c
  */
-int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only);
-bool space_for_roll_forward(struct f2fs_sb_info *sbi);
+int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only);
+bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi);
 
 /*
  * debug.c
@@ -2954,6 +3023,7 @@ struct f2fs_stat_info {
 	int bg_node_segs, bg_data_segs;
 	int tot_blks, data_blks, node_blks;
 	int bg_data_blks, bg_node_blks;
+	unsigned long long skipped_atomic_files[2];
 	int curseg[NR_CURSEG_TYPE];
 	int cursec[NR_CURSEG_TYPE];
 	int curzone[NR_CURSEG_TYPE];
@@ -3120,29 +3190,31 @@ extern const struct inode_operations f2fs_dir_inode_operations;
 extern const struct inode_operations f2fs_symlink_inode_operations;
 extern const struct inode_operations f2fs_encrypted_symlink_inode_operations;
 extern const struct inode_operations f2fs_special_inode_operations;
-extern struct kmem_cache *inode_entry_slab;
+extern struct kmem_cache *f2fs_inode_entry_slab;
 
 /*
  * inline.c
  */
 bool f2fs_may_inline_data(struct inode *inode);
 bool f2fs_may_inline_dentry(struct inode *inode);
-void read_inline_data(struct page *page, struct page *ipage);
-void truncate_inline_inode(struct inode *inode, struct page *ipage, u64 from);
+void f2fs_do_read_inline_data(struct page *page, struct page *ipage);
+void f2fs_truncate_inline_inode(struct inode *inode,
 						struct page *ipage, u64 from);
 int f2fs_read_inline_data(struct inode *inode, struct page *page);
 int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page);
 int f2fs_convert_inline_inode(struct inode *inode);
 int f2fs_write_inline_data(struct inode *inode, struct page *page);
-bool recover_inline_data(struct inode *inode, struct page *npage);
-struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
+bool f2fs_recover_inline_data(struct inode *inode, struct page *npage);
+struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
 			struct fscrypt_name *fname, struct page **res_page);
-int make_empty_inline_dir(struct inode *inode, struct inode *parent,
+int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent,
 			struct page *ipage);
 int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
 			const struct qstr *orig_name,
 			struct inode *inode, nid_t ino, umode_t mode);
-void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page,
-			struct inode *dir, struct inode *inode);
+void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry,
+				struct page *page, struct inode *dir,
+				struct inode *inode);
 bool f2fs_empty_inline_dir(struct inode *dir);
 int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
 			struct fscrypt_str *fstr);
@@ -3163,17 +3235,17 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi);
 /*
  * extent_cache.c
  */
-struct rb_entry *__lookup_rb_tree(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree(struct rb_root *root,
 			struct rb_entry *cached_re, unsigned int ofs);
-struct rb_node **__lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
+struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
 			struct rb_root *root, struct rb_node **parent,
 			unsigned int ofs);
-struct rb_entry *__lookup_rb_tree_ret(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root *root,
 			struct rb_entry *cached_re, unsigned int ofs,
 			struct rb_entry **prev_entry, struct rb_entry **next_entry,
 			struct rb_node ***insert_p, struct rb_node **insert_parent,
 			bool force);
-bool __check_rb_tree_consistence(struct f2fs_sb_info *sbi,
+bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi,
 			struct rb_root *root);
 unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink);
 bool f2fs_init_extent_tree(struct inode *inode, struct f2fs_extent *i_ext);
@@ -3185,9 +3257,9 @@ bool f2fs_lookup_extent_cache(struct inode *inode, pgoff_t pgofs,
 void f2fs_update_extent_cache(struct dnode_of_data *dn);
 void f2fs_update_extent_cache_range(struct dnode_of_data *dn,
 			pgoff_t fofs, block_t blkaddr, unsigned int len);
-void init_extent_cache_info(struct f2fs_sb_info *sbi);
-int __init create_extent_cache(void);
-void destroy_extent_cache(void);
+void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi);
+int __init f2fs_create_extent_cache(void);
+void f2fs_destroy_extent_cache(void);
 
 /*
  * sysfs.c
@@ -3218,9 +3290,13 @@ static inline void f2fs_set_encrypted_inode(struct inode *inode)
 #endif
 }
 
-static inline bool f2fs_bio_encrypted(struct bio *bio)
+/*
+ * Returns true if the reads of the inode's data need to undergo some
+ * postprocessing step, like decryption or authenticity verification.
+ */
+static inline bool f2fs_post_read_required(struct inode *inode)
 {
-	return bio->bi_private != NULL;
+	return f2fs_encrypted_file(inode);
 }
 
 #define F2FS_FEATURE_FUNCS(name, flagname) \
@@ -3288,7 +3364,7 @@ static inline bool f2fs_may_encrypt(struct inode *inode)
 
 static inline bool f2fs_force_buffered_io(struct inode *inode, int rw)
 {
-	return (f2fs_encrypted_file(inode) ||
+	return (f2fs_post_read_required(inode) ||
 			(rw == WRITE && test_opt(F2FS_I_SB(inode), LFS)) ||
 			F2FS_I_SB(inode)->s_ndevs);
 }
fs/f2fs/file.c
@@ -33,19 +33,19 @@
 #include "trace.h"
 #include <trace/events/f2fs.h>
 
-static int f2fs_filemap_fault(struct vm_fault *vmf)
+static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
 {
 	struct inode *inode = file_inode(vmf->vma->vm_file);
-	int err;
+	vm_fault_t ret;
 
 	down_read(&F2FS_I(inode)->i_mmap_sem);
-	err = filemap_fault(vmf);
+	ret = filemap_fault(vmf);
 	up_read(&F2FS_I(inode)->i_mmap_sem);
 
-	return err;
+	return ret;
 }
 
-static int f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
 {
 	struct page *page = vmf->page;
 	struct inode *inode = file_inode(vmf->vma->vm_file);
@@ -95,7 +95,8 @@ static int f2fs_vm_page_mkwrite(struct vm_fault *vmf)
 	/* page is wholly or partially inside EOF */
 	if (((loff_t)(page->index + 1) << PAGE_SHIFT) >
 						i_size_read(inode)) {
-		unsigned offset;
+		loff_t offset;
+
 		offset = i_size_read(inode) & ~PAGE_MASK;
 		zero_user_segment(page, offset, PAGE_SIZE);
 	}
@@ -110,8 +111,8 @@ mapped:
 	/* fill the page */
 	f2fs_wait_on_page_writeback(page, DATA, false);
 
-	/* wait for GCed encrypted page writeback */
-	if (f2fs_encrypted_file(inode))
+	/* wait for GCed page writeback via META_MAPPING */
+	if (f2fs_post_read_required(inode))
 		f2fs_wait_on_block_writeback(sbi, dn.data_blkaddr);
 
 out_sem:
@@ -157,17 +158,18 @@ static inline enum cp_reason_type need_do_checkpoint(struct inode *inode)
 		cp_reason = CP_SB_NEED_CP;
 	else if (file_wrong_pino(inode))
 		cp_reason = CP_WRONG_PINO;
-	else if (!space_for_roll_forward(sbi))
+	else if (!f2fs_space_for_roll_forward(sbi))
 		cp_reason = CP_NO_SPC_ROLL;
-	else if (!is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
+	else if (!f2fs_is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
 		cp_reason = CP_NODE_NEED_CP;
 	else if (test_opt(sbi, FASTBOOT))
 		cp_reason = CP_FASTBOOT_MODE;
 	else if (F2FS_OPTION(sbi).active_logs == 2)
 		cp_reason = CP_SPEC_LOG_NUM;
 	else if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT &&
-		need_dentry_mark(sbi, inode->i_ino) &&
-		exist_written_data(sbi, F2FS_I(inode)->i_pino, TRANS_DIR_INO))
+		f2fs_need_dentry_mark(sbi, inode->i_ino) &&
+		f2fs_exist_written_data(sbi, F2FS_I(inode)->i_pino,
+							TRANS_DIR_INO))
 		cp_reason = CP_RECOVER_DIR;
 
 	return cp_reason;
@ -178,7 +180,7 @@ static bool need_inode_page_update(struct f2fs_sb_info *sbi, nid_t ino)
|
|||
struct page *i = find_get_page(NODE_MAPPING(sbi), ino);
|
||||
bool ret = false;
|
||||
/* But we need to avoid that there are some inode updates */
|
||||
if ((i && PageDirty(i)) || need_inode_block_update(sbi, ino))
|
||||
if ((i && PageDirty(i)) || f2fs_need_inode_block_update(sbi, ino))
|
||||
ret = true;
|
||||
f2fs_put_page(i, 0);
|
||||
return ret;
|
||||
|
@ -238,14 +240,14 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
|
|||
* if there is no written data, don't waste time to write recovery info.
|
||||
*/
|
||||
if (!is_inode_flag_set(inode, FI_APPEND_WRITE) &&
|
||||
!exist_written_data(sbi, ino, APPEND_INO)) {
|
||||
!f2fs_exist_written_data(sbi, ino, APPEND_INO)) {
|
||||
|
||||
/* it may call write_inode just prior to fsync */
|
||||
if (need_inode_page_update(sbi, ino))
|
||||
goto go_write;
|
||||
|
||||
if (is_inode_flag_set(inode, FI_UPDATE_WRITE) ||
|
||||
exist_written_data(sbi, ino, UPDATE_INO))
|
||||
f2fs_exist_written_data(sbi, ino, UPDATE_INO))
|
||||
goto flush_out;
|
||||
goto out;
|
||||
}
|
||||
|
@ -272,7 +274,9 @@ go_write:
|
|||
goto out;
|
||||
}
|
||||
sync_nodes:
|
||||
ret = fsync_node_pages(sbi, inode, &wbc, atomic);
|
||||
atomic_inc(&sbi->wb_sync_req[NODE]);
|
||||
ret = f2fs_fsync_node_pages(sbi, inode, &wbc, atomic);
|
||||
atomic_dec(&sbi->wb_sync_req[NODE]);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
|
@ -282,7 +286,7 @@ sync_nodes:
|
|||
goto out;
|
||||
}
|
||||
|
||||
if (need_inode_block_update(sbi, ino)) {
|
||||
if (f2fs_need_inode_block_update(sbi, ino)) {
|
||||
f2fs_mark_inode_dirty_sync(inode, true);
|
||||
f2fs_write_inode(inode, NULL);
|
||||
goto sync_nodes;
|
||||
|
@ -297,21 +301,21 @@ sync_nodes:
|
|||
* given fsync mark.
|
||||
*/
|
||||
if (!atomic) {
|
||||
ret = wait_on_node_pages_writeback(sbi, ino);
|
||||
ret = f2fs_wait_on_node_pages_writeback(sbi, ino);
|
||||
if (ret)
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* once recovery info is written, don't need to tack this */
|
||||
remove_ino_entry(sbi, ino, APPEND_INO);
|
||||
f2fs_remove_ino_entry(sbi, ino, APPEND_INO);
|
||||
clear_inode_flag(inode, FI_APPEND_WRITE);
|
||||
flush_out:
|
||||
if (!atomic)
|
||||
if (!atomic && F2FS_OPTION(sbi).fsync_mode != FSYNC_MODE_NOBARRIER)
|
||||
ret = f2fs_issue_flush(sbi, inode->i_ino);
|
||||
if (!ret) {
|
||||
remove_ino_entry(sbi, ino, UPDATE_INO);
|
||||
f2fs_remove_ino_entry(sbi, ino, UPDATE_INO);
|
||||
clear_inode_flag(inode, FI_UPDATE_WRITE);
|
||||
remove_ino_entry(sbi, ino, FLUSH_INO);
|
||||
f2fs_remove_ino_entry(sbi, ino, FLUSH_INO);
|
||||
}
|
||||
f2fs_update_time(sbi, REQ_TIME);
|
||||
out:
|
||||
|
@ -352,7 +356,7 @@ static bool __found_offset(block_t blkaddr, pgoff_t dirty, pgoff_t pgofs,
|
|||
switch (whence) {
|
||||
case SEEK_DATA:
|
||||
if ((blkaddr == NEW_ADDR && dirty == pgofs) ||
|
||||
(blkaddr != NEW_ADDR && blkaddr != NULL_ADDR))
|
||||
is_valid_blkaddr(blkaddr))
|
||||
return true;
|
||||
break;
|
||||
case SEEK_HOLE:
|
||||
|
@ -392,13 +396,13 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
|
|||
|
||||
for (; data_ofs < isize; data_ofs = (loff_t)pgofs << PAGE_SHIFT) {
|
||||
set_new_dnode(&dn, inode, NULL, NULL, 0);
|
||||
err = get_dnode_of_data(&dn, pgofs, LOOKUP_NODE);
|
||||
err = f2fs_get_dnode_of_data(&dn, pgofs, LOOKUP_NODE);
|
||||
if (err && err != -ENOENT) {
|
||||
goto fail;
|
||||
} else if (err == -ENOENT) {
|
||||
/* direct node does not exists */
|
||||
if (whence == SEEK_DATA) {
|
||||
pgofs = get_next_page_offset(&dn, pgofs);
|
||||
pgofs = f2fs_get_next_page_offset(&dn, pgofs);
|
||||
continue;
|
||||
} else {
|
||||
goto found;
|
||||
|
@ -412,6 +416,7 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
|
|||
dn.ofs_in_node++, pgofs++,
|
||||
data_ofs = (loff_t)pgofs << PAGE_SHIFT) {
|
||||
block_t blkaddr;
|
||||
|
||||
blkaddr = datablock_addr(dn.inode,
|
||||
dn.node_page, dn.ofs_in_node);
|
||||
|
||||
|
@ -486,7 +491,7 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
|
|||
return dquot_file_open(inode, filp);
|
||||
}
|
||||
|
||||
void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
|
||||
void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
|
||||
struct f2fs_node *raw_node;
|
||||
|
@ -502,12 +507,13 @@ void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
|
|||
|
||||
for (; count > 0; count--, addr++, dn->ofs_in_node++) {
|
||||
block_t blkaddr = le32_to_cpu(*addr);
|
||||
|
||||
if (blkaddr == NULL_ADDR)
|
||||
continue;
|
||||
|
||||
dn->data_blkaddr = NULL_ADDR;
|
||||
set_data_blkaddr(dn);
|
||||
invalidate_blocks(sbi, blkaddr);
|
||||
f2fs_set_data_blkaddr(dn);
|
||||
f2fs_invalidate_blocks(sbi, blkaddr);
|
||||
if (dn->ofs_in_node == 0 && IS_INODE(dn->node_page))
|
||||
clear_inode_flag(dn->inode, FI_FIRST_BLOCK_WRITTEN);
|
||||
nr_free++;
|
||||
|
@ -519,7 +525,7 @@ void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
|
|||
* once we invalidate valid blkaddr in range [ofs, ofs + count],
|
||||
* we will invalidate all blkaddr in the whole range.
|
||||
*/
|
||||
fofs = start_bidx_of_node(ofs_of_node(dn->node_page),
|
||||
fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page),
|
||||
dn->inode) + ofs;
|
||||
f2fs_update_extent_cache_range(dn, fofs, 0, len);
|
||||
dec_valid_block_count(sbi, dn->inode, nr_free);
|
||||
|
@ -531,15 +537,15 @@ void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
|
|||
dn->ofs_in_node, nr_free);
|
||||
}
|
||||
|
||||
void truncate_data_blocks(struct dnode_of_data *dn)
|
||||
void f2fs_truncate_data_blocks(struct dnode_of_data *dn)
|
||||
{
|
||||
truncate_data_blocks_range(dn, ADDRS_PER_BLOCK);
|
||||
f2fs_truncate_data_blocks_range(dn, ADDRS_PER_BLOCK);
|
||||
}
|
||||
|
||||
static int truncate_partial_data_page(struct inode *inode, u64 from,
|
||||
bool cache_only)
|
||||
{
|
||||
unsigned offset = from & (PAGE_SIZE - 1);
|
||||
loff_t offset = from & (PAGE_SIZE - 1);
|
||||
pgoff_t index = from >> PAGE_SHIFT;
|
||||
struct address_space *mapping = inode->i_mapping;
|
||||
struct page *page;
|
||||
|
@ -555,7 +561,7 @@ static int truncate_partial_data_page(struct inode *inode, u64 from,
|
|||
return 0;
|
||||
}
|
||||
|
||||
page = get_lock_data_page(inode, index, true);
|
||||
page = f2fs_get_lock_data_page(inode, index, true);
|
||||
if (IS_ERR(page))
|
||||
return PTR_ERR(page) == -ENOENT ? 0 : PTR_ERR(page);
|
||||
truncate_out:
|
||||
|
@ -570,7 +576,7 @@ truncate_out:
|
|||
return 0;
|
||||
}
|
||||
|
||||
int truncate_blocks(struct inode *inode, u64 from, bool lock)
|
||||
int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
struct dnode_of_data dn;
|
||||
|
@ -589,21 +595,21 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
|
|||
if (lock)
|
||||
f2fs_lock_op(sbi);
|
||||
|
||||
ipage = get_node_page(sbi, inode->i_ino);
|
||||
ipage = f2fs_get_node_page(sbi, inode->i_ino);
|
||||
if (IS_ERR(ipage)) {
|
||||
err = PTR_ERR(ipage);
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (f2fs_has_inline_data(inode)) {
|
||||
truncate_inline_inode(inode, ipage, from);
|
||||
f2fs_truncate_inline_inode(inode, ipage, from);
|
||||
f2fs_put_page(ipage, 1);
|
||||
truncate_page = true;
|
||||
goto out;
|
||||
}
|
||||
|
||||
set_new_dnode(&dn, inode, ipage, NULL, 0);
|
||||
err = get_dnode_of_data(&dn, free_from, LOOKUP_NODE_RA);
|
||||
err = f2fs_get_dnode_of_data(&dn, free_from, LOOKUP_NODE_RA);
|
||||
if (err) {
|
||||
if (err == -ENOENT)
|
||||
goto free_next;
|
||||
|
@ -616,13 +622,13 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
|
|||
f2fs_bug_on(sbi, count < 0);
|
||||
|
||||
if (dn.ofs_in_node || IS_INODE(dn.node_page)) {
|
||||
truncate_data_blocks_range(&dn, count);
|
||||
f2fs_truncate_data_blocks_range(&dn, count);
|
||||
free_from += count;
|
||||
}
|
||||
|
||||
f2fs_put_dnode(&dn);
|
||||
free_next:
|
||||
err = truncate_inode_blocks(inode, free_from);
|
||||
err = f2fs_truncate_inode_blocks(inode, free_from);
|
||||
out:
|
||||
if (lock)
|
||||
f2fs_unlock_op(sbi);
|
||||
|
@ -661,7 +667,7 @@ int f2fs_truncate(struct inode *inode)
|
|||
return err;
|
||||
}
|
||||
|
||||
err = truncate_blocks(inode, i_size_read(inode), true);
|
||||
err = f2fs_truncate_blocks(inode, i_size_read(inode), true);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
|
@ -686,16 +692,16 @@ int f2fs_getattr(const struct path *path, struct kstat *stat,
|
|||
stat->btime.tv_nsec = fi->i_crtime.tv_nsec;
|
||||
}
|
||||
|
||||
flags = fi->i_flags & (FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL);
|
||||
if (flags & FS_APPEND_FL)
|
||||
flags = fi->i_flags & F2FS_FL_USER_VISIBLE;
|
||||
if (flags & F2FS_APPEND_FL)
|
||||
stat->attributes |= STATX_ATTR_APPEND;
|
||||
if (flags & FS_COMPR_FL)
|
||||
if (flags & F2FS_COMPR_FL)
|
||||
stat->attributes |= STATX_ATTR_COMPRESSED;
|
||||
if (f2fs_encrypted_inode(inode))
|
||||
stat->attributes |= STATX_ATTR_ENCRYPTED;
|
||||
if (flags & FS_IMMUTABLE_FL)
|
||||
if (flags & F2FS_IMMUTABLE_FL)
|
||||
stat->attributes |= STATX_ATTR_IMMUTABLE;
|
||||
if (flags & FS_NODUMP_FL)
|
||||
if (flags & F2FS_NODUMP_FL)
|
||||
stat->attributes |= STATX_ATTR_NODUMP;
|
||||
|
||||
stat->attributes_mask |= (STATX_ATTR_APPEND |
|
||||
|
@ -811,7 +817,7 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
|
|||
__setattr_copy(inode, attr);
|
||||
|
||||
if (attr->ia_valid & ATTR_MODE) {
|
||||
err = posix_acl_chmod(inode, get_inode_mode(inode));
|
||||
err = posix_acl_chmod(inode, f2fs_get_inode_mode(inode));
|
||||
if (err || is_inode_flag_set(inode, FI_ACL_MODE)) {
|
||||
inode->i_mode = F2FS_I(inode)->i_acl_mode;
|
||||
clear_inode_flag(inode, FI_ACL_MODE);
|
||||
|
@ -850,7 +856,7 @@ static int fill_zero(struct inode *inode, pgoff_t index,
|
|||
f2fs_balance_fs(sbi, true);
|
||||
|
||||
f2fs_lock_op(sbi);
|
||||
page = get_new_data_page(inode, NULL, index, false);
|
||||
page = f2fs_get_new_data_page(inode, NULL, index, false);
|
||||
f2fs_unlock_op(sbi);
|
||||
|
||||
if (IS_ERR(page))
|
||||
|
@ -863,7 +869,7 @@ static int fill_zero(struct inode *inode, pgoff_t index,
|
|||
return 0;
|
||||
}
|
||||
|
||||
int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
|
||||
int f2fs_truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
|
||||
{
|
||||
int err;
|
||||
|
||||
|
@ -872,10 +878,11 @@ int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
|
|||
pgoff_t end_offset, count;
|
||||
|
||||
set_new_dnode(&dn, inode, NULL, NULL, 0);
|
||||
err = get_dnode_of_data(&dn, pg_start, LOOKUP_NODE);
|
||||
err = f2fs_get_dnode_of_data(&dn, pg_start, LOOKUP_NODE);
|
||||
if (err) {
|
||||
if (err == -ENOENT) {
|
||||
pg_start = get_next_page_offset(&dn, pg_start);
|
||||
pg_start = f2fs_get_next_page_offset(&dn,
|
||||
pg_start);
|
||||
continue;
|
||||
}
|
||||
return err;
|
||||
|
@ -886,7 +893,7 @@ int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
|
|||
|
||||
f2fs_bug_on(F2FS_I_SB(inode), count == 0 || count > end_offset);
|
||||
|
||||
truncate_data_blocks_range(&dn, count);
|
||||
f2fs_truncate_data_blocks_range(&dn, count);
|
||||
f2fs_put_dnode(&dn);
|
||||
|
||||
pg_start += count;
|
||||
|
@ -942,7 +949,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
|
|||
blk_end - 1);
|
||||
|
||||
f2fs_lock_op(sbi);
|
||||
ret = truncate_hole(inode, pg_start, pg_end);
|
||||
ret = f2fs_truncate_hole(inode, pg_start, pg_end);
|
||||
f2fs_unlock_op(sbi);
|
||||
up_write(&F2FS_I(inode)->i_mmap_sem);
|
||||
}
|
||||
|
@ -960,7 +967,7 @@ static int __read_out_blkaddrs(struct inode *inode, block_t *blkaddr,
|
|||
|
||||
next_dnode:
|
||||
set_new_dnode(&dn, inode, NULL, NULL, 0);
|
||||
ret = get_dnode_of_data(&dn, off, LOOKUP_NODE_RA);
|
||||
ret = f2fs_get_dnode_of_data(&dn, off, LOOKUP_NODE_RA);
|
||||
if (ret && ret != -ENOENT) {
|
||||
return ret;
|
||||
} else if (ret == -ENOENT) {
|
||||
|
@ -977,7 +984,7 @@ next_dnode:
|
|||
for (i = 0; i < done; i++, blkaddr++, do_replace++, dn.ofs_in_node++) {
|
||||
*blkaddr = datablock_addr(dn.inode,
|
||||
dn.node_page, dn.ofs_in_node);
|
||||
if (!is_checkpointed_data(sbi, *blkaddr)) {
|
||||
if (!f2fs_is_checkpointed_data(sbi, *blkaddr)) {
|
||||
|
||||
if (test_opt(sbi, LFS)) {
|
||||
f2fs_put_dnode(&dn);
|
||||
|
@ -1010,10 +1017,10 @@ static int __roll_back_blkaddrs(struct inode *inode, block_t *blkaddr,
|
|||
continue;
|
||||
|
||||
set_new_dnode(&dn, inode, NULL, NULL, 0);
|
||||
ret = get_dnode_of_data(&dn, off + i, LOOKUP_NODE_RA);
|
||||
ret = f2fs_get_dnode_of_data(&dn, off + i, LOOKUP_NODE_RA);
|
||||
if (ret) {
|
||||
dec_valid_block_count(sbi, inode, 1);
|
||||
invalidate_blocks(sbi, *blkaddr);
|
||||
f2fs_invalidate_blocks(sbi, *blkaddr);
|
||||
} else {
|
||||
f2fs_update_data_blkaddr(&dn, *blkaddr);
|
||||
}
|
||||
|
@ -1043,18 +1050,18 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
|
|||
pgoff_t ilen;
|
||||
|
||||
set_new_dnode(&dn, dst_inode, NULL, NULL, 0);
|
||||
ret = get_dnode_of_data(&dn, dst + i, ALLOC_NODE);
|
||||
ret = f2fs_get_dnode_of_data(&dn, dst + i, ALLOC_NODE);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
get_node_info(sbi, dn.nid, &ni);
|
||||
f2fs_get_node_info(sbi, dn.nid, &ni);
|
||||
ilen = min((pgoff_t)
|
||||
ADDRS_PER_PAGE(dn.node_page, dst_inode) -
|
||||
dn.ofs_in_node, len - i);
|
||||
do {
|
||||
dn.data_blkaddr = datablock_addr(dn.inode,
|
||||
dn.node_page, dn.ofs_in_node);
|
||||
truncate_data_blocks_range(&dn, 1);
|
||||
f2fs_truncate_data_blocks_range(&dn, 1);
|
||||
|
||||
if (do_replace[i]) {
|
||||
f2fs_i_blocks_write(src_inode,
|
||||
|
@ -1077,10 +1084,11 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
|
|||
} else {
|
||||
struct page *psrc, *pdst;
|
||||
|
||||
psrc = get_lock_data_page(src_inode, src + i, true);
|
||||
psrc = f2fs_get_lock_data_page(src_inode,
|
||||
src + i, true);
|
||||
if (IS_ERR(psrc))
|
||||
return PTR_ERR(psrc);
|
||||
pdst = get_new_data_page(dst_inode, NULL, dst + i,
|
||||
pdst = f2fs_get_new_data_page(dst_inode, NULL, dst + i,
|
||||
true);
|
||||
if (IS_ERR(pdst)) {
|
||||
f2fs_put_page(psrc, 1);
|
||||
|
@ -1091,7 +1099,8 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
|
|||
f2fs_put_page(pdst, 1);
|
||||
f2fs_put_page(psrc, 1);
|
||||
|
||||
ret = truncate_hole(src_inode, src + i, src + i + 1);
|
||||
ret = f2fs_truncate_hole(src_inode,
|
||||
src + i, src + i + 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
i++;
|
||||
|
@ -1144,7 +1153,7 @@ static int __exchange_data_block(struct inode *src_inode,
|
|||
return 0;
|
||||
|
||||
roll_back:
|
||||
__roll_back_blkaddrs(src_inode, src_blkaddr, do_replace, src, len);
|
||||
__roll_back_blkaddrs(src_inode, src_blkaddr, do_replace, src, olen);
|
||||
kvfree(src_blkaddr);
|
||||
kvfree(do_replace);
|
||||
return ret;
|
||||
|
@ -1187,7 +1196,7 @@ static int f2fs_collapse_range(struct inode *inode, loff_t offset, loff_t len)
|
|||
pg_end = (offset + len) >> PAGE_SHIFT;
|
||||
|
||||
/* avoid gc operation during block exchange */
|
||||
down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
|
||||
down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
|
||||
down_write(&F2FS_I(inode)->i_mmap_sem);
|
||||
/* write out all dirty pages from offset */
|
||||
|
@ -1208,12 +1217,12 @@ static int f2fs_collapse_range(struct inode *inode, loff_t offset, loff_t len)
|
|||
new_size = i_size_read(inode) - len;
|
||||
truncate_pagecache(inode, new_size);
|
||||
|
||||
ret = truncate_blocks(inode, new_size, true);
|
||||
ret = f2fs_truncate_blocks(inode, new_size, true);
|
||||
if (!ret)
|
||||
f2fs_i_size_write(inode, new_size);
|
||||
out_unlock:
|
||||
up_write(&F2FS_I(inode)->i_mmap_sem);
|
||||
up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
|
||||
up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1233,7 +1242,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
|
|||
}
|
||||
|
||||
dn->ofs_in_node = ofs_in_node;
|
||||
ret = reserve_new_blocks(dn, count);
|
||||
ret = f2fs_reserve_new_blocks(dn, count);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
@ -1242,7 +1251,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
|
|||
dn->data_blkaddr = datablock_addr(dn->inode,
|
||||
dn->node_page, dn->ofs_in_node);
|
||||
/*
|
||||
* reserve_new_blocks will not guarantee entire block
|
||||
* f2fs_reserve_new_blocks will not guarantee entire block
|
||||
* allocation.
|
||||
*/
|
||||
if (dn->data_blkaddr == NULL_ADDR) {
|
||||
|
@ -1250,9 +1259,9 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
|
|||
break;
|
||||
}
|
||||
if (dn->data_blkaddr != NEW_ADDR) {
|
||||
invalidate_blocks(sbi, dn->data_blkaddr);
|
||||
f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
|
||||
dn->data_blkaddr = NEW_ADDR;
|
||||
set_data_blkaddr(dn);
|
||||
f2fs_set_data_blkaddr(dn);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1318,7 +1327,7 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
|
|||
f2fs_lock_op(sbi);
|
||||
|
||||
set_new_dnode(&dn, inode, NULL, NULL, 0);
|
||||
ret = get_dnode_of_data(&dn, index, ALLOC_NODE);
|
||||
ret = f2fs_get_dnode_of_data(&dn, index, ALLOC_NODE);
|
||||
if (ret) {
|
||||
f2fs_unlock_op(sbi);
|
||||
goto out;
|
||||
|
@ -1389,10 +1398,10 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
|
|||
f2fs_balance_fs(sbi, true);
|
||||
|
||||
/* avoid gc operation during block exchange */
|
||||
down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
|
||||
down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
|
||||
down_write(&F2FS_I(inode)->i_mmap_sem);
|
||||
ret = truncate_blocks(inode, i_size_read(inode), true);
|
||||
ret = f2fs_truncate_blocks(inode, i_size_read(inode), true);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
|
@ -1430,7 +1439,7 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
|
|||
f2fs_i_size_write(inode, new_size);
|
||||
out:
|
||||
up_write(&F2FS_I(inode)->i_mmap_sem);
|
||||
up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
|
||||
up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1473,7 +1482,7 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
|
|||
last_off = map.m_lblk + map.m_len - 1;
|
||||
|
||||
/* update new size to the failed position */
|
||||
new_size = (last_off == pg_end) ? offset + len:
|
||||
new_size = (last_off == pg_end) ? offset + len :
|
||||
(loff_t)(last_off + 1) << PAGE_SHIFT;
|
||||
} else {
|
||||
new_size = ((loff_t)pg_end << PAGE_SHIFT) + off_end;
|
||||
|
@ -1553,13 +1562,13 @@ static int f2fs_release_file(struct inode *inode, struct file *filp)
|
|||
|
||||
/* some remained atomic pages should discarded */
|
||||
if (f2fs_is_atomic_file(inode))
|
||||
drop_inmem_pages(inode);
|
||||
f2fs_drop_inmem_pages(inode);
|
||||
if (f2fs_is_volatile_file(inode)) {
|
||||
clear_inode_flag(inode, FI_VOLATILE_FILE);
|
||||
stat_dec_volatile_write(inode);
|
||||
set_inode_flag(inode, FI_DROP_CACHE);
|
||||
filemap_fdatawrite(inode->i_mapping);
|
||||
clear_inode_flag(inode, FI_DROP_CACHE);
|
||||
clear_inode_flag(inode, FI_VOLATILE_FILE);
|
||||
stat_dec_volatile_write(inode);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
@ -1576,7 +1585,7 @@ static int f2fs_file_flush(struct file *file, fl_owner_t id)
|
|||
*/
|
||||
if (f2fs_is_atomic_file(inode) &&
|
||||
F2FS_I(inode)->inmem_task == current)
|
||||
drop_inmem_pages(inode);
|
||||
f2fs_drop_inmem_pages(inode);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1584,8 +1593,15 @@ static int f2fs_ioc_getflags(struct file *filp, unsigned long arg)
|
|||
{
|
||||
struct inode *inode = file_inode(filp);
|
||||
struct f2fs_inode_info *fi = F2FS_I(inode);
|
||||
unsigned int flags = fi->i_flags &
|
||||
(FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL);
|
||||
unsigned int flags = fi->i_flags;
|
||||
|
||||
if (file_is_encrypt(inode))
|
||||
flags |= F2FS_ENCRYPT_FL;
|
||||
if (f2fs_has_inline_data(inode) || f2fs_has_inline_dentry(inode))
|
||||
flags |= F2FS_INLINE_DATA_FL;
|
||||
|
||||
flags &= F2FS_FL_USER_VISIBLE;
|
||||
|
||||
return put_user(flags, (int __user *)arg);
|
||||
}
|
||||
|
||||
|
@ -1602,15 +1618,15 @@ static int __f2fs_ioc_setflags(struct inode *inode, unsigned int flags)
|
|||
|
||||
oldflags = fi->i_flags;
|
||||
|
||||
if ((flags ^ oldflags) & (FS_APPEND_FL | FS_IMMUTABLE_FL))
|
||||
if ((flags ^ oldflags) & (F2FS_APPEND_FL | F2FS_IMMUTABLE_FL))
|
||||
if (!capable(CAP_LINUX_IMMUTABLE))
|
||||
return -EPERM;
|
||||
|
||||
flags = flags & (FS_FL_USER_MODIFIABLE | FS_PROJINHERIT_FL);
|
||||
flags |= oldflags & ~(FS_FL_USER_MODIFIABLE | FS_PROJINHERIT_FL);
|
||||
flags = flags & F2FS_FL_USER_MODIFIABLE;
|
||||
flags |= oldflags & ~F2FS_FL_USER_MODIFIABLE;
|
||||
fi->i_flags = flags;
|
||||
|
||||
if (fi->i_flags & FS_PROJINHERIT_FL)
|
||||
if (fi->i_flags & F2FS_PROJINHERIT_FL)
|
||||
set_inode_flag(inode, FI_PROJ_INHERIT);
|
||||
else
|
||||
clear_inode_flag(inode, FI_PROJ_INHERIT);
|
||||
|
@ -1670,6 +1686,8 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
|
|||
|
||||
inode_lock(inode);
|
||||
|
||||
down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
|
||||
if (f2fs_is_atomic_file(inode))
|
||||
goto out;
|
||||
|
||||
|
@ -1677,28 +1695,25 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
|
|||
if (ret)
|
||||
goto out;
|
||||
|
||||
set_inode_flag(inode, FI_ATOMIC_FILE);
|
||||
set_inode_flag(inode, FI_HOT_DATA);
|
||||
f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
|
||||
|
||||
if (!get_dirty_pages(inode))
|
||||
goto inc_stat;
|
||||
goto skip_flush;
|
||||
|
||||
f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
|
||||
"Unexpected flush for atomic writes: ino=%lu, npages=%u",
|
||||
inode->i_ino, get_dirty_pages(inode));
|
||||
ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
|
||||
if (ret) {
|
||||
clear_inode_flag(inode, FI_ATOMIC_FILE);
|
||||
clear_inode_flag(inode, FI_HOT_DATA);
|
||||
if (ret)
|
||||
goto out;
|
||||
}
|
||||
skip_flush:
|
||||
set_inode_flag(inode, FI_ATOMIC_FILE);
|
||||
clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
|
||||
f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
|
||||
|
||||
inc_stat:
|
||||
F2FS_I(inode)->inmem_task = current;
|
||||
stat_inc_atomic_write(inode);
|
||||
stat_update_max_atomic_write(inode);
|
||||
out:
|
||||
up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
inode_unlock(inode);
|
||||
mnt_drop_write_file(filp);
|
||||
return ret;
|
||||
|
@ -1718,27 +1733,33 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
|
|||
|
||||
inode_lock(inode);
|
||||
|
||||
down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
|
||||
down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
|
||||
if (f2fs_is_volatile_file(inode))
|
||||
if (f2fs_is_volatile_file(inode)) {
|
||||
ret = -EINVAL;
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
if (f2fs_is_atomic_file(inode)) {
|
||||
ret = commit_inmem_pages(inode);
|
||||
ret = f2fs_commit_inmem_pages(inode);
|
||||
if (ret)
|
||||
goto err_out;
|
||||
|
||||
ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 0, true);
|
||||
if (!ret) {
|
||||
clear_inode_flag(inode, FI_ATOMIC_FILE);
|
||||
clear_inode_flag(inode, FI_HOT_DATA);
|
||||
F2FS_I(inode)->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
|
||||
stat_dec_atomic_write(inode);
|
||||
}
|
||||
} else {
|
||||
ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 1, false);
|
||||
}
|
||||
err_out:
|
||||
up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
|
||||
if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST)) {
|
||||
clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
|
||||
ret = -EINVAL;
|
||||
}
|
||||
up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
|
||||
inode_unlock(inode);
|
||||
mnt_drop_write_file(filp);
|
||||
return ret;
|
||||
|
@ -1823,7 +1844,7 @@ static int f2fs_ioc_abort_volatile_write(struct file *filp)
|
|||
inode_lock(inode);
|
||||
|
||||
if (f2fs_is_atomic_file(inode))
|
||||
drop_inmem_pages(inode);
|
||||
f2fs_drop_inmem_pages(inode);
|
||||
if (f2fs_is_volatile_file(inode)) {
|
||||
clear_inode_flag(inode, FI_VOLATILE_FILE);
|
||||
stat_dec_volatile_write(inode);
|
||||
|
@ -1851,9 +1872,11 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
|
|||
if (get_user(in, (__u32 __user *)arg))
|
||||
return -EFAULT;
|
||||
|
||||
ret = mnt_want_write_file(filp);
|
||||
if (ret)
|
||||
return ret;
|
||||
if (in != F2FS_GOING_DOWN_FULLSYNC) {
|
||||
ret = mnt_want_write_file(filp);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
switch (in) {
|
||||
case F2FS_GOING_DOWN_FULLSYNC:
|
||||
|
@ -1878,7 +1901,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
|
|||
f2fs_stop_checkpoint(sbi, false);
|
||||
break;
|
||||
case F2FS_GOING_DOWN_METAFLUSH:
|
||||
sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO);
|
||||
f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO);
|
||||
f2fs_stop_checkpoint(sbi, false);
|
||||
break;
|
||||
default:
|
||||
|
@ -1886,15 +1909,16 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
|
|||
goto out;
|
||||
}
|
||||
|
||||
stop_gc_thread(sbi);
|
||||
stop_discard_thread(sbi);
|
||||
f2fs_stop_gc_thread(sbi);
|
||||
f2fs_stop_discard_thread(sbi);
|
||||
|
||||
drop_discard_cmd(sbi);
|
||||
f2fs_drop_discard_cmd(sbi);
|
||||
clear_opt(sbi, DISCARD);
|
||||
|
||||
f2fs_update_time(sbi, REQ_TIME);
|
||||
out:
|
||||
mnt_drop_write_file(filp);
|
||||
if (in != F2FS_GOING_DOWN_FULLSYNC)
|
||||
mnt_drop_write_file(filp);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -2053,15 +2077,15 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
|
|||
if (f2fs_readonly(sbi->sb))
|
||||
return -EROFS;
|
||||
|
||||
end = range.start + range.len;
|
||||
if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi)) {
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = mnt_want_write_file(filp);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
end = range.start + range.len;
|
||||
if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
do_more:
|
||||
if (!range.sync) {
|
||||
if (!mutex_trylock(&sbi->gc_mutex)) {
|
||||
|
@ -2081,7 +2105,7 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int f2fs_ioc_write_checkpoint(struct file *filp, unsigned long arg)
|
||||
static int f2fs_ioc_f2fs_write_checkpoint(struct file *filp, unsigned long arg)
|
||||
{
|
||||
struct inode *inode = file_inode(filp);
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
|
@ -2110,7 +2134,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
|
|||
struct inode *inode = file_inode(filp);
|
||||
struct f2fs_map_blocks map = { .m_next_extent = NULL,
|
||||
.m_seg_type = NO_CHECK_TYPE };
|
||||
struct extent_info ei = {0,0,0};
|
||||
struct extent_info ei = {0, 0, 0};
|
||||
pgoff_t pg_start, pg_end, next_pgofs;
|
||||
unsigned int blk_per_seg = sbi->blocks_per_seg;
|
||||
unsigned int total = 0, sec_num;
|
||||
|
@ -2119,7 +2143,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
|
|||
int err;
|
||||
|
||||
/* if in-place-update policy is enabled, don't waste time here */
|
||||
if (should_update_inplace(inode, NULL))
|
||||
if (f2fs_should_update_inplace(inode, NULL))
|
||||
return -EINVAL;
|
||||
|
||||
pg_start = range->start >> PAGE_SHIFT;
|
||||
|
@ -2214,7 +2238,7 @@ do_map:
|
|||
while (idx < map.m_lblk + map.m_len && cnt < blk_per_seg) {
|
||||
struct page *page;
|
||||
|
||||
page = get_lock_data_page(inode, idx, true);
|
||||
page = f2fs_get_lock_data_page(inode, idx, true);
|
||||
if (IS_ERR(page)) {
|
||||
err = PTR_ERR(page);
|
||||
goto clear_out;
|
||||
|
@ -2325,12 +2349,12 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
|
|||
}
|
||||
|
||||
inode_lock(src);
|
||||
down_write(&F2FS_I(src)->dio_rwsem[WRITE]);
|
||||
down_write(&F2FS_I(src)->i_gc_rwsem[WRITE]);
|
||||
if (src != dst) {
|
||||
ret = -EBUSY;
|
||||
if (!inode_trylock(dst))
|
||||
goto out;
|
||||
if (!down_write_trylock(&F2FS_I(dst)->dio_rwsem[WRITE])) {
|
||||
if (!down_write_trylock(&F2FS_I(dst)->i_gc_rwsem[WRITE])) {
|
||||
inode_unlock(dst);
|
||||
goto out;
|
||||
}
|
||||
|
@ -2392,11 +2416,11 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
|
|||
f2fs_unlock_op(sbi);
|
||||
out_unlock:
|
||||
if (src != dst) {
|
||||
up_write(&F2FS_I(dst)->dio_rwsem[WRITE]);
|
||||
up_write(&F2FS_I(dst)->i_gc_rwsem[WRITE]);
|
||||
inode_unlock(dst);
|
||||
}
|
||||
out:
|
||||
up_write(&F2FS_I(src)->dio_rwsem[WRITE]);
|
||||
up_write(&F2FS_I(src)->i_gc_rwsem[WRITE]);
|
||||
inode_unlock(src);
|
||||
return ret;
|
||||
}
|
||||
|
@ -2554,7 +2578,7 @@ static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
|
|||
if (IS_NOQUOTA(inode))
|
||||
goto out_unlock;
|
||||
|
||||
ipage = get_node_page(sbi, inode->i_ino);
|
||||
ipage = f2fs_get_node_page(sbi, inode->i_ino);
|
||||
if (IS_ERR(ipage)) {
|
||||
err = PTR_ERR(ipage);
|
||||
goto out_unlock;
|
||||
|
@ -2568,7 +2592,9 @@ static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
|
|||
}
|
||||
f2fs_put_page(ipage, 1);
|
||||
|
||||
dquot_initialize(inode);
|
||||
err = dquot_initialize(inode);
|
||||
if (err)
|
||||
goto out_unlock;
|
||||
|
||||
transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
|
||||
if (!IS_ERR(transfer_to[PRJQUOTA])) {
|
||||
|
@ -2601,17 +2627,17 @@ static inline __u32 f2fs_iflags_to_xflags(unsigned long iflags)
|
|||
{
|
||||
__u32 xflags = 0;
|
||||
|
||||
if (iflags & FS_SYNC_FL)
|
||||
if (iflags & F2FS_SYNC_FL)
|
||||
xflags |= FS_XFLAG_SYNC;
|
||||
if (iflags & FS_IMMUTABLE_FL)
|
||||
if (iflags & F2FS_IMMUTABLE_FL)
|
||||
xflags |= FS_XFLAG_IMMUTABLE;
|
||||
if (iflags & FS_APPEND_FL)
|
||||
if (iflags & F2FS_APPEND_FL)
|
||||
xflags |= FS_XFLAG_APPEND;
|
||||
if (iflags & FS_NODUMP_FL)
|
||||
if (iflags & F2FS_NODUMP_FL)
|
||||
xflags |= FS_XFLAG_NODUMP;
|
||||
if (iflags & FS_NOATIME_FL)
|
||||
if (iflags & F2FS_NOATIME_FL)
|
||||
xflags |= FS_XFLAG_NOATIME;
|
||||
if (iflags & FS_PROJINHERIT_FL)
|
||||
if (iflags & F2FS_PROJINHERIT_FL)
|
||||
xflags |= FS_XFLAG_PROJINHERIT;
|
||||
return xflags;
|
||||
}
|
||||
|
@@ -2620,31 +2646,23 @@ static inline __u32 f2fs_iflags_to_xflags(unsigned long iflags)
			  FS_XFLAG_APPEND | FS_XFLAG_NODUMP | \
			  FS_XFLAG_NOATIME | FS_XFLAG_PROJINHERIT)

-/* Flags we can manipulate with through EXT4_IOC_FSSETXATTR */
-#define F2FS_FL_XFLAG_VISIBLE		(FS_SYNC_FL | \
-					 FS_IMMUTABLE_FL | \
-					 FS_APPEND_FL | \
-					 FS_NODUMP_FL | \
-					 FS_NOATIME_FL | \
-					 FS_PROJINHERIT_FL)
-
 /* Transfer xflags flags to internal */
 static inline unsigned long f2fs_xflags_to_iflags(__u32 xflags)
 {
	unsigned long iflags = 0;

	if (xflags & FS_XFLAG_SYNC)
-		iflags |= FS_SYNC_FL;
+		iflags |= F2FS_SYNC_FL;
	if (xflags & FS_XFLAG_IMMUTABLE)
-		iflags |= FS_IMMUTABLE_FL;
+		iflags |= F2FS_IMMUTABLE_FL;
	if (xflags & FS_XFLAG_APPEND)
-		iflags |= FS_APPEND_FL;
+		iflags |= F2FS_APPEND_FL;
	if (xflags & FS_XFLAG_NODUMP)
-		iflags |= FS_NODUMP_FL;
+		iflags |= F2FS_NODUMP_FL;
	if (xflags & FS_XFLAG_NOATIME)
-		iflags |= FS_NOATIME_FL;
+		iflags |= F2FS_NOATIME_FL;
	if (xflags & FS_XFLAG_PROJINHERIT)
-		iflags |= FS_PROJINHERIT_FL;
+		iflags |= F2FS_PROJINHERIT_FL;

	return iflags;
 }
@@ -2657,7 +2675,7 @@ static int f2fs_ioc_fsgetxattr(struct file *filp, unsigned long arg)

	memset(&fa, 0, sizeof(struct fsxattr));
	fa.fsx_xflags = f2fs_iflags_to_xflags(fi->i_flags &
-				(FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL));
+				F2FS_FL_USER_VISIBLE);

	if (f2fs_sb_has_project_quota(inode->i_sb))
		fa.fsx_projid = (__u32)from_kprojid(&init_user_ns,
@@ -2717,12 +2735,14 @@ int f2fs_pin_file_control(struct inode *inode, bool inc)

	/* Use i_gc_failures for normal file as a risk signal. */
	if (inc)
-		f2fs_i_gc_failures_write(inode, fi->i_gc_failures + 1);
+		f2fs_i_gc_failures_write(inode,
+				fi->i_gc_failures[GC_FAILURE_PIN] + 1);

-	if (fi->i_gc_failures > sbi->gc_pin_file_threshold) {
+	if (fi->i_gc_failures[GC_FAILURE_PIN] > sbi->gc_pin_file_threshold) {
		f2fs_msg(sbi->sb, KERN_WARNING,
			"%s: Enable GC = ino %lx after %x GC trials\n",
-			__func__, inode->i_ino, fi->i_gc_failures);
+			__func__, inode->i_ino,
+			fi->i_gc_failures[GC_FAILURE_PIN]);
		clear_inode_flag(inode, FI_PIN_FILE);
		return -EAGAIN;
	}
@@ -2753,14 +2773,14 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)

	inode_lock(inode);

-	if (should_update_outplace(inode, NULL)) {
+	if (f2fs_should_update_outplace(inode, NULL)) {
		ret = -EINVAL;
		goto out;
	}

	if (!pin) {
		clear_inode_flag(inode, FI_PIN_FILE);
-		F2FS_I(inode)->i_gc_failures = 1;
+		F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN] = 1;
		goto done;
	}

@@ -2773,7 +2793,7 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
		goto out;

	set_inode_flag(inode, FI_PIN_FILE);
-	ret = F2FS_I(inode)->i_gc_failures;
+	ret = F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN];
 done:
	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
 out:
@@ -2788,7 +2808,7 @@ static int f2fs_ioc_get_pin_file(struct file *filp, unsigned long arg)
	__u32 pin = 0;

	if (is_inode_flag_set(inode, FI_PIN_FILE))
-		pin = F2FS_I(inode)->i_gc_failures;
+		pin = F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN];
	return put_user(pin, (u32 __user *)arg);
 }

@@ -2812,9 +2832,9 @@ int f2fs_precache_extents(struct inode *inode)
	while (map.m_lblk < end) {
		map.m_len = end - map.m_lblk;

-		down_write(&fi->dio_rwsem[WRITE]);
+		down_write(&fi->i_gc_rwsem[WRITE]);
		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_PRECACHE);
-		up_write(&fi->dio_rwsem[WRITE]);
+		up_write(&fi->i_gc_rwsem[WRITE]);
		if (err)
			return err;

@@ -2866,7 +2886,7 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
	case F2FS_IOC_GARBAGE_COLLECT_RANGE:
		return f2fs_ioc_gc_range(filp, arg);
	case F2FS_IOC_WRITE_CHECKPOINT:
-		return f2fs_ioc_write_checkpoint(filp, arg);
+		return f2fs_ioc_f2fs_write_checkpoint(filp, arg);
	case F2FS_IOC_DEFRAGMENT:
		return f2fs_ioc_defragment(filp, arg);
	case F2FS_IOC_MOVE_RANGE:
@@ -2894,7 +2914,6 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
	struct file *file = iocb->ki_filp;
	struct inode *inode = file_inode(file);
-	struct blk_plug plug;
	ssize_t ret;

	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
@@ -2924,6 +2943,8 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
					iov_iter_count(from)) ||
				f2fs_has_inline_data(inode) ||
				f2fs_force_buffered_io(inode, WRITE)) {
+				clear_inode_flag(inode,
+						FI_NO_PREALLOC);
				inode_unlock(inode);
				return -EAGAIN;
			}
@@ -2939,9 +2960,7 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
				return err;
			}
		}
-		blk_start_plug(&plug);
		ret = __generic_file_write_iter(iocb, from);
-		blk_finish_plug(&plug);
		clear_inode_flag(inode, FI_NO_PREALLOC);

		/* if we couldn't write data, we should deallocate blocks. */
fs/f2fs/gc.c
@@ -76,7 +76,7 @@ static int gc_thread_func(void *data)
		 * invalidated soon after by user update or deletion.
		 * So, I'd like to wait some time to collect dirty segments.
		 */
-		if (gc_th->gc_urgent) {
+		if (sbi->gc_mode == GC_URGENT) {
			wait_ms = gc_th->urgent_sleep_time;
			mutex_lock(&sbi->gc_mutex);
			goto do_gc;
@@ -114,7 +114,7 @@ next:
	return 0;
 }

-int start_gc_thread(struct f2fs_sb_info *sbi)
+int f2fs_start_gc_thread(struct f2fs_sb_info *sbi)
 {
	struct f2fs_gc_kthread *gc_th;
	dev_t dev = sbi->sb->s_bdev->bd_dev;
@@ -131,8 +131,6 @@ int f2fs_start_gc_thread(struct f2fs_sb_info *sbi)
	gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
	gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;

-	gc_th->gc_idle = 0;
-	gc_th->gc_urgent = 0;
	gc_th->gc_wake= 0;

	sbi->gc_thread = gc_th;
@@ -148,7 +146,7 @@ out:
	return err;
 }

-void stop_gc_thread(struct f2fs_sb_info *sbi)
+void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi)
 {
	struct f2fs_gc_kthread *gc_th = sbi->gc_thread;
	if (!gc_th)
@@ -158,21 +156,19 @@ void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi)
	sbi->gc_thread = NULL;
 }

-static int select_gc_type(struct f2fs_gc_kthread *gc_th, int gc_type)
+static int select_gc_type(struct f2fs_sb_info *sbi, int gc_type)
 {
	int gc_mode = (gc_type == BG_GC) ? GC_CB : GC_GREEDY;

-	if (!gc_th)
-		return gc_mode;
-
-	if (gc_th->gc_idle) {
-		if (gc_th->gc_idle == 1)
-			gc_mode = GC_CB;
-		else if (gc_th->gc_idle == 2)
-			gc_mode = GC_GREEDY;
-	}
-	if (gc_th->gc_urgent)
+	switch (sbi->gc_mode) {
+	case GC_IDLE_CB:
+		gc_mode = GC_CB;
+		break;
+	case GC_IDLE_GREEDY:
+	case GC_URGENT:
		gc_mode = GC_GREEDY;
+		break;
+	}
	return gc_mode;
 }

@@ -187,7 +183,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
		p->max_search = dirty_i->nr_dirty[type];
		p->ofs_unit = 1;
	} else {
-		p->gc_mode = select_gc_type(sbi->gc_thread, gc_type);
+		p->gc_mode = select_gc_type(sbi, gc_type);
		p->dirty_segmap = dirty_i->dirty_segmap[DIRTY];
		p->max_search = dirty_i->nr_dirty[DIRTY];
		p->ofs_unit = sbi->segs_per_sec;
@@ -195,7 +191,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,

	/* we need to check every dirty segments in the FG_GC case */
	if (gc_type != FG_GC &&
-			(sbi->gc_thread && !sbi->gc_thread->gc_urgent) &&
+			(sbi->gc_mode != GC_URGENT) &&
			p->max_search > sbi->max_victim_search)
		p->max_search = sbi->max_victim_search;

@@ -234,10 +230,6 @@ static unsigned int check_bg_victims(struct f2fs_sb_info *sbi)
	for_each_set_bit(secno, dirty_i->victim_secmap, MAIN_SECS(sbi)) {
		if (sec_usage_check(sbi, secno))
			continue;
-
-		if (no_fggc_candidate(sbi, secno))
-			continue;
-
		clear_bit(secno, dirty_i->victim_secmap);
		return GET_SEG_FROM_SEC(sbi, secno);
	}
@@ -377,9 +369,6 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi,
			goto next;
		if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap))
			goto next;
-		if (gc_type == FG_GC && p.alloc_mode == LFS &&
-					no_fggc_candidate(sbi, secno))
-			goto next;

		cost = get_gc_cost(sbi, segno, &p);

@@ -440,7 +429,7 @@ static void add_gc_inode(struct gc_inode_list *gc_list, struct inode *inode)
		iput(inode);
		return;
	}
-	new_ie = f2fs_kmem_cache_alloc(inode_entry_slab, GFP_NOFS);
+	new_ie = f2fs_kmem_cache_alloc(f2fs_inode_entry_slab, GFP_NOFS);
	new_ie->inode = inode;

	f2fs_radix_tree_insert(&gc_list->iroot, inode->i_ino, new_ie);
@@ -454,7 +443,7 @@ static void put_gc_inode(struct gc_inode_list *gc_list)
		radix_tree_delete(&gc_list->iroot, ie->inode->i_ino);
		iput(ie->inode);
		list_del(&ie->list);
-		kmem_cache_free(inode_entry_slab, ie);
+		kmem_cache_free(f2fs_inode_entry_slab, ie);
	}
 }

@@ -484,12 +473,16 @@ static void gc_node_segment(struct f2fs_sb_info *sbi,
	block_t start_addr;
	int off;
	int phase = 0;
+	bool fggc = (gc_type == FG_GC);

	start_addr = START_BLOCK(sbi, segno);

 next_step:
	entry = sum;

+	if (fggc && phase == 2)
+		atomic_inc(&sbi->wb_sync_req[NODE]);
+
	for (off = 0; off < sbi->blocks_per_seg; off++, entry++) {
		nid_t nid = le32_to_cpu(entry->nid);
		struct page *node_page;
@@ -503,39 +496,42 @@ next_step:
			continue;

		if (phase == 0) {
-			ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1,
+			f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1,
							META_NAT, true);
			continue;
		}

		if (phase == 1) {
-			ra_node_page(sbi, nid);
+			f2fs_ra_node_page(sbi, nid);
			continue;
		}

		/* phase == 2 */
-		node_page = get_node_page(sbi, nid);
+		node_page = f2fs_get_node_page(sbi, nid);
		if (IS_ERR(node_page))
			continue;

-		/* block may become invalid during get_node_page */
+		/* block may become invalid during f2fs_get_node_page */
		if (check_valid_map(sbi, segno, off) == 0) {
			f2fs_put_page(node_page, 1);
			continue;
		}

-		get_node_info(sbi, nid, &ni);
+		f2fs_get_node_info(sbi, nid, &ni);
		if (ni.blk_addr != start_addr + off) {
			f2fs_put_page(node_page, 1);
			continue;
		}

-		move_node_page(node_page, gc_type);
+		f2fs_move_node_page(node_page, gc_type);
		stat_inc_node_blk_count(sbi, 1, gc_type);
	}

	if (++phase < 3)
		goto next_step;
+
+	if (fggc)
+		atomic_dec(&sbi->wb_sync_req[NODE]);
 }

 /*
@@ -545,7 +541,7 @@ next_step:
 * as indirect or double indirect node blocks, are given, it must be a caller's
 * bug.
 */
-block_t start_bidx_of_node(unsigned int node_ofs, struct inode *inode)
+block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode)
 {
	unsigned int indirect_blks = 2 * NIDS_PER_BLOCK + 4;
	unsigned int bidx;
@@ -576,11 +572,11 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
	nid = le32_to_cpu(sum->nid);
	ofs_in_node = le16_to_cpu(sum->ofs_in_node);

-	node_page = get_node_page(sbi, nid);
+	node_page = f2fs_get_node_page(sbi, nid);
	if (IS_ERR(node_page))
		return false;

-	get_node_info(sbi, nid, dni);
+	f2fs_get_node_info(sbi, nid, dni);

	if (sum->version != dni->version) {
		f2fs_msg(sbi->sb, KERN_WARNING,
@@ -603,7 +599,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 * This can be used to move blocks, aka LBAs, directly on disk.
 */
 static void move_data_block(struct inode *inode, block_t bidx,
-					unsigned int segno, int off)
+				int gc_type, unsigned int segno, int off)
 {
	struct f2fs_io_info fio = {
		.sbi = F2FS_I_SB(inode),
@@ -614,6 +610,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
		.op_flags = 0,
		.encrypted_page = NULL,
		.in_list = false,
+		.retry = false,
	};
	struct dnode_of_data dn;
	struct f2fs_summary sum;
@@ -621,6 +618,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
	struct page *page;
	block_t newaddr;
	int err;
+	bool lfs_mode = test_opt(fio.sbi, LFS);

	/* do not read out */
	page = f2fs_grab_cache_page(inode->i_mapping, bidx, false);
@@ -630,8 +628,11 @@ static void move_data_block(struct inode *inode, block_t bidx,
	if (!check_valid_map(F2FS_I_SB(inode), segno, off))
		goto out;

-	if (f2fs_is_atomic_file(inode))
+	if (f2fs_is_atomic_file(inode)) {
+		F2FS_I(inode)->i_gc_failures[GC_FAILURE_ATOMIC]++;
+		F2FS_I_SB(inode)->skipped_atomic_files[gc_type]++;
		goto out;
+	}

	if (f2fs_is_pinned_file(inode)) {
		f2fs_pin_file_control(inode, true);
@@ -639,7 +640,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
	}

	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, bidx, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, bidx, LOOKUP_NODE);
	if (err)
		goto out;

@@ -654,14 +655,17 @@ static void move_data_block(struct inode *inode, block_t bidx,
	 */
	f2fs_wait_on_page_writeback(page, DATA, true);

-	get_node_info(fio.sbi, dn.nid, &ni);
+	f2fs_get_node_info(fio.sbi, dn.nid, &ni);
	set_summary(&sum, dn.nid, dn.ofs_in_node, ni.version);

	/* read page */
	fio.page = page;
	fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr;

-	allocate_data_block(fio.sbi, NULL, fio.old_blkaddr, &newaddr,
+	if (lfs_mode)
+		down_write(&fio.sbi->io_order_lock);
+
+	f2fs_allocate_data_block(fio.sbi, NULL, fio.old_blkaddr, &newaddr,
					&sum, CURSEG_COLD_DATA, NULL, false);

	fio.encrypted_page = f2fs_pagecache_get_page(META_MAPPING(fio.sbi),
@@ -693,6 +697,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
	dec_page_count(fio.sbi, F2FS_DIRTY_META);

	set_page_writeback(fio.encrypted_page);
+	ClearPageError(page);

	/* allocate block address */
	f2fs_wait_on_page_writeback(dn.node_page, NODE, true);
@@ -700,8 +705,8 @@ static void move_data_block(struct inode *inode, block_t bidx,
	fio.op = REQ_OP_WRITE;
	fio.op_flags = REQ_SYNC;
	fio.new_blkaddr = newaddr;
-	err = f2fs_submit_page_write(&fio);
-	if (err) {
+	f2fs_submit_page_write(&fio);
+	if (fio.retry) {
		if (PageWriteback(fio.encrypted_page))
			end_page_writeback(fio.encrypted_page);
		goto put_page_out;
@@ -716,8 +721,10 @@ static void move_data_block(struct inode *inode, block_t bidx,
 put_page_out:
	f2fs_put_page(fio.encrypted_page, 1);
 recover_block:
+	if (lfs_mode)
+		up_write(&fio.sbi->io_order_lock);
	if (err)
-		__f2fs_replace_block(fio.sbi, &sum, newaddr, fio.old_blkaddr,
+		f2fs_do_replace_block(fio.sbi, &sum, newaddr, fio.old_blkaddr,
								true, true);
 put_out:
	f2fs_put_dnode(&dn);
@@ -730,15 +737,18 @@ static void move_data_page(struct inode *inode, block_t bidx, int gc_type,
 {
	struct page *page;

-	page = get_lock_data_page(inode, bidx, true);
+	page = f2fs_get_lock_data_page(inode, bidx, true);
	if (IS_ERR(page))
		return;

	if (!check_valid_map(F2FS_I_SB(inode), segno, off))
		goto out;

-	if (f2fs_is_atomic_file(inode))
+	if (f2fs_is_atomic_file(inode)) {
+		F2FS_I(inode)->i_gc_failures[GC_FAILURE_ATOMIC]++;
+		F2FS_I_SB(inode)->skipped_atomic_files[gc_type]++;
		goto out;
+	}
	if (f2fs_is_pinned_file(inode)) {
		if (gc_type == FG_GC)
			f2fs_pin_file_control(inode, true);
@@ -772,15 +782,20 @@ retry:
		f2fs_wait_on_page_writeback(page, DATA, true);
		if (clear_page_dirty_for_io(page)) {
			inode_dec_dirty_pages(inode);
-			remove_dirty_inode(inode);
+			f2fs_remove_dirty_inode(inode);
		}

		set_cold_data(page);

-		err = do_write_data_page(&fio);
-		if (err == -ENOMEM && is_dirty) {
-			congestion_wait(BLK_RW_ASYNC, HZ/50);
-			goto retry;
+		err = f2fs_do_write_data_page(&fio);
+		if (err) {
+			clear_cold_data(page);
+			if (err == -ENOMEM) {
+				congestion_wait(BLK_RW_ASYNC, HZ/50);
+				goto retry;
+			}
+			if (is_dirty)
+				set_page_dirty(page);
		}
	}
 out:
@@ -824,13 +839,13 @@ next_step:
			continue;

		if (phase == 0) {
-			ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1,
+			f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1,
							META_NAT, true);
			continue;
		}

		if (phase == 1) {
-			ra_node_page(sbi, nid);
+			f2fs_ra_node_page(sbi, nid);
			continue;
		}

@@ -839,7 +854,7 @@ next_step:
			continue;

		if (phase == 2) {
-			ra_node_page(sbi, dni.ino);
+			f2fs_ra_node_page(sbi, dni.ino);
			continue;
		}

@@ -850,23 +865,23 @@ next_step:
			if (IS_ERR(inode) || is_bad_inode(inode))
				continue;

-			/* if encrypted inode, let's go phase 3 */
-			if (f2fs_encrypted_file(inode)) {
+			/* if inode uses special I/O path, let's go phase 3 */
+			if (f2fs_post_read_required(inode)) {
				add_gc_inode(gc_list, inode);
				continue;
			}

			if (!down_write_trylock(
-				&F2FS_I(inode)->dio_rwsem[WRITE])) {
+				&F2FS_I(inode)->i_gc_rwsem[WRITE])) {
				iput(inode);
				continue;
			}

-			start_bidx = start_bidx_of_node(nofs, inode);
-			data_page = get_read_data_page(inode,
+			start_bidx = f2fs_start_bidx_of_node(nofs, inode);
+			data_page = f2fs_get_read_data_page(inode,
					start_bidx + ofs_in_node, REQ_RAHEAD,
					true);
-			up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
			if (IS_ERR(data_page)) {
				iput(inode);
				continue;
@@ -884,11 +899,11 @@ next_step:
			bool locked = false;

			if (S_ISREG(inode->i_mode)) {
-				if (!down_write_trylock(&fi->dio_rwsem[READ]))
+				if (!down_write_trylock(&fi->i_gc_rwsem[READ]))
					continue;
				if (!down_write_trylock(
-						&fi->dio_rwsem[WRITE])) {
-					up_write(&fi->dio_rwsem[READ]);
+						&fi->i_gc_rwsem[WRITE])) {
+					up_write(&fi->i_gc_rwsem[READ]);
					continue;
				}
				locked = true;
@@ -897,17 +912,18 @@ next_step:
				inode_dio_wait(inode);
			}

-			start_bidx = start_bidx_of_node(nofs, inode)
+			start_bidx = f2fs_start_bidx_of_node(nofs, inode)
								+ ofs_in_node;
-			if (f2fs_encrypted_file(inode))
-				move_data_block(inode, start_bidx, segno, off);
+			if (f2fs_post_read_required(inode))
+				move_data_block(inode, start_bidx, gc_type,
								segno, off);
			else
				move_data_page(inode, start_bidx, gc_type,
								segno, off);

			if (locked) {
-				up_write(&fi->dio_rwsem[WRITE]);
-				up_write(&fi->dio_rwsem[READ]);
+				up_write(&fi->i_gc_rwsem[WRITE]);
+				up_write(&fi->i_gc_rwsem[READ]);
			}

			stat_inc_data_blk_count(sbi, 1, gc_type);
@@ -946,12 +962,12 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,

	/* readahead multi ssa blocks those have contiguous address */
	if (sbi->segs_per_sec > 1)
-		ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, segno),
+		f2fs_ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, segno),
					sbi->segs_per_sec, META_SSA, true);

	/* reference all summary page */
	while (segno < end_segno) {
-		sum_page = get_sum_page(sbi, segno++);
+		sum_page = f2fs_get_sum_page(sbi, segno++);
		unlock_page(sum_page);
	}

@@ -1017,6 +1033,8 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
		.ilist = LIST_HEAD_INIT(gc_list.ilist),
		.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
	};
+	unsigned long long last_skipped = sbi->skipped_atomic_files[FG_GC];
+	unsigned int skipped_round = 0, round = 0;

	trace_f2fs_gc_begin(sbi->sb, sync, background,
				get_pages(sbi, F2FS_DIRTY_NODES),
@@ -1045,7 +1063,7 @@ gc_more:
		 * secure free segments which doesn't need fggc any more.
		 */
		if (prefree_segments(sbi)) {
-			ret = write_checkpoint(sbi, &cpc);
+			ret = f2fs_write_checkpoint(sbi, &cpc);
			if (ret)
				goto stop;
		}
@@ -1068,17 +1086,27 @@ gc_more:
		sec_freed++;
	total_freed += seg_freed;

+	if (gc_type == FG_GC) {
+		if (sbi->skipped_atomic_files[FG_GC] > last_skipped)
+			skipped_round++;
+		last_skipped = sbi->skipped_atomic_files[FG_GC];
+		round++;
+	}
+
	if (gc_type == FG_GC)
		sbi->cur_victim_sec = NULL_SEGNO;

	if (!sync) {
		if (has_not_enough_free_secs(sbi, sec_freed, 0)) {
+			if (skipped_round > MAX_SKIP_ATOMIC_COUNT &&
+				skipped_round * 2 >= round)
+				f2fs_drop_inmem_pages_all(sbi, true);
			segno = NULL_SEGNO;
			goto gc_more;
		}

		if (gc_type == FG_GC)
-			ret = write_checkpoint(sbi, &cpc);
+			ret = f2fs_write_checkpoint(sbi, &cpc);
	}
 stop:
	SIT_I(sbi)->last_victim[ALLOC_NEXT] = 0;
@@ -1102,19 +1130,10 @@ stop:
	return ret;
 }

-void build_gc_manager(struct f2fs_sb_info *sbi)
+void f2fs_build_gc_manager(struct f2fs_sb_info *sbi)
 {
-	u64 main_count, resv_count, ovp_count;
-
	DIRTY_I(sbi)->v_ops = &default_v_ops;

-	/* threshold of # of valid blocks in a section for victims of FG_GC */
-	main_count = SM_I(sbi)->main_segments << sbi->log_blocks_per_seg;
-	resv_count = SM_I(sbi)->reserved_segments << sbi->log_blocks_per_seg;
-	ovp_count = SM_I(sbi)->ovp_segments << sbi->log_blocks_per_seg;
-
-	sbi->fggc_threshold = div64_u64((main_count - ovp_count) *
-				BLKS_PER_SEC(sbi), (main_count - resv_count));
-
	sbi->gc_pin_file_threshold = DEF_GC_FAILED_PINNED_FILES;

	/* give warm/cold data area from slower device */
fs/f2fs/gc.h
@@ -36,8 +36,6 @@ struct f2fs_gc_kthread {
	unsigned int no_gc_sleep_time;

	/* for changing gc mode */
-	unsigned int gc_idle;
-	unsigned int gc_urgent;
	unsigned int gc_wake;
 };

fs/f2fs/inline.c
@@ -25,7 +25,7 @@ bool f2fs_may_inline_data(struct inode *inode)
	if (i_size_read(inode) > MAX_INLINE_DATA(inode))
		return false;

-	if (f2fs_encrypted_file(inode))
+	if (f2fs_post_read_required(inode))
		return false;

	return true;
@@ -42,7 +42,7 @@ bool f2fs_may_inline_dentry(struct inode *inode)
	return true;
 }

-void read_inline_data(struct page *page, struct page *ipage)
+void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
 {
	struct inode *inode = page->mapping->host;
	void *src_addr, *dst_addr;
@@ -64,7 +64,8 @@ void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
	SetPageUptodate(page);
 }

-void truncate_inline_inode(struct inode *inode, struct page *ipage, u64 from)
+void f2fs_truncate_inline_inode(struct inode *inode,
+					struct page *ipage, u64 from)
 {
	void *addr;

@@ -85,7 +86,7 @@ int f2fs_read_inline_data(struct inode *inode, struct page *page)
 {
	struct page *ipage;

-	ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+	ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
	if (IS_ERR(ipage)) {
		unlock_page(page);
		return PTR_ERR(ipage);
@@ -99,7 +100,7 @@ int f2fs_read_inline_data(struct inode *inode, struct page *page)
	if (page->index)
		zero_user_segment(page, 0, PAGE_SIZE);
	else
-		read_inline_data(page, ipage);
+		f2fs_do_read_inline_data(page, ipage);

	if (!PageUptodate(page))
		SetPageUptodate(page);
@@ -131,7 +132,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)

	f2fs_bug_on(F2FS_P_SB(page), PageWriteback(page));

-	read_inline_data(page, dn->inode_page);
+	f2fs_do_read_inline_data(page, dn->inode_page);
	set_page_dirty(page);

	/* clear dirty state */
@@ -139,20 +140,21 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)

	/* write data page to try to make data consistent */
	set_page_writeback(page);
+	ClearPageError(page);
	fio.old_blkaddr = dn->data_blkaddr;
	set_inode_flag(dn->inode, FI_HOT_DATA);
-	write_data_page(dn, &fio);
+	f2fs_outplace_write_data(dn, &fio);
	f2fs_wait_on_page_writeback(page, DATA, true);
	if (dirty) {
		inode_dec_dirty_pages(dn->inode);
-		remove_dirty_inode(dn->inode);
+		f2fs_remove_dirty_inode(dn->inode);
	}

	/* this converted inline_data should be recovered. */
	set_inode_flag(dn->inode, FI_APPEND_WRITE);

	/* clear inline data and flag after data writeback */
-	truncate_inline_inode(dn->inode, dn->inode_page, 0);
+	f2fs_truncate_inline_inode(dn->inode, dn->inode_page, 0);
	clear_inline_node(dn->inode_page);
 clear_out:
	stat_dec_inline_inode(dn->inode);
@@ -177,7 +179,7 @@ int f2fs_convert_inline_inode(struct inode *inode)

	f2fs_lock_op(sbi);

-	ipage = get_node_page(sbi, inode->i_ino);
+	ipage = f2fs_get_node_page(sbi, inode->i_ino);
	if (IS_ERR(ipage)) {
		err = PTR_ERR(ipage);
		goto out;
@@ -203,12 +205,10 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
 {
	void *src_addr, *dst_addr;
	struct dnode_of_data dn;
-	struct address_space *mapping = page_mapping(page);
-	unsigned long flags;
	int err;

	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, 0, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, 0, LOOKUP_NODE);
	if (err)
		return err;

@@ -226,10 +226,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
	kunmap_atomic(src_addr);
	set_page_dirty(dn.inode_page);

-	xa_lock_irqsave(&mapping->i_pages, flags);
-	radix_tree_tag_clear(&mapping->i_pages, page_index(page),
-			  PAGECACHE_TAG_DIRTY);
-	xa_unlock_irqrestore(&mapping->i_pages, flags);
+	f2fs_clear_radix_tree_dirty_tag(page);

	set_inode_flag(inode, FI_APPEND_WRITE);
	set_inode_flag(inode, FI_DATA_EXIST);
@@ -239,7 +236,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
	return 0;
 }

-bool recover_inline_data(struct inode *inode, struct page *npage)
+bool f2fs_recover_inline_data(struct inode *inode, struct page *npage)
 {
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	struct f2fs_inode *ri = NULL;
@@ -260,7 +257,7 @@ bool f2fs_recover_inline_data(struct inode *inode, struct page *npage)
	if (f2fs_has_inline_data(inode) &&
			ri && (ri->i_inline & F2FS_INLINE_DATA)) {
 process_inline:
-		ipage = get_node_page(sbi, inode->i_ino);
+		ipage = f2fs_get_node_page(sbi, inode->i_ino);
		f2fs_bug_on(sbi, IS_ERR(ipage));

		f2fs_wait_on_page_writeback(ipage, NODE, true);
@@ -278,20 +275,20 @@ process_inline:
	}

	if (f2fs_has_inline_data(inode)) {
-		ipage = get_node_page(sbi, inode->i_ino);
+		ipage = f2fs_get_node_page(sbi, inode->i_ino);
		f2fs_bug_on(sbi, IS_ERR(ipage));
-		truncate_inline_inode(inode, ipage, 0);
+		f2fs_truncate_inline_inode(inode, ipage, 0);
		clear_inode_flag(inode, FI_INLINE_DATA);
		f2fs_put_page(ipage, 1);
	} else if (ri && (ri->i_inline & F2FS_INLINE_DATA)) {
-		if (truncate_blocks(inode, 0, false))
+		if (f2fs_truncate_blocks(inode, 0, false))
			return false;
		goto process_inline;
	}
	return false;
 }

-struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
+struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
			struct fscrypt_name *fname, struct page **res_page)
 {
	struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
@@ -302,7 +299,7 @@ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
	void *inline_dentry;
	f2fs_hash_t namehash;

-	ipage = get_node_page(sbi, dir->i_ino);
+	ipage = f2fs_get_node_page(sbi, dir->i_ino);
	if (IS_ERR(ipage)) {
		*res_page = ipage;
		return NULL;
@@ -313,7 +310,7 @@ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
	inline_dentry = inline_data_addr(dir, ipage);

	make_dentry_ptr_inline(dir, &d, inline_dentry);
-	de = find_target_dentry(fname, namehash, NULL, &d);
+	de = f2fs_find_target_dentry(fname, namehash, NULL, &d);
	unlock_page(ipage);
	if (de)
		*res_page = ipage;
@@ -323,7 +320,7 @@ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
	return de;
 }

-int make_empty_inline_dir(struct inode *inode, struct inode *parent,
+int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent,
							struct page *ipage)
 {
	struct f2fs_dentry_ptr d;
@@ -332,7 +329,7 @@ int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent,
	inline_dentry = inline_data_addr(inode, ipage);

	make_dentry_ptr_inline(inode, &d, inline_dentry);
-	do_make_empty_dir(inode, parent, &d);
+	f2fs_do_make_empty_dir(inode, parent, &d);

	set_page_dirty(ipage);

@@ -367,7 +364,6 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
		goto out;

	f2fs_wait_on_page_writeback(page, DATA, true);
-	zero_user_segment(page, MAX_INLINE_DATA(dir), PAGE_SIZE);

	dentry_blk = page_address(page);

@@ -391,7 +387,7 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
	set_page_dirty(page);

	/* clear inline dir and flag after data writeback */
-	truncate_inline_inode(dir, ipage, 0);
+	f2fs_truncate_inline_inode(dir, ipage, 0);

	stat_dec_inline_dir(dir);
	clear_inode_flag(dir, FI_INLINE_DENTRY);
@@ -434,7 +430,7 @@ static int f2fs_add_inline_entries(struct inode *dir, void *inline_dentry)
		new_name.len = le16_to_cpu(de->name_len);

		ino = le32_to_cpu(de->ino);
-		fake_mode = get_de_type(de) << S_SHIFT;
+		fake_mode = f2fs_get_de_type(de) << S_SHIFT;

		err = f2fs_add_regular_entry(dir, &new_name, NULL, NULL,
							ino, fake_mode);
@@ -446,8 +442,8 @@ static int f2fs_add_inline_entries(struct inode *dir, void *inline_dentry)
	return 0;
 punch_dentry_pages:
	truncate_inode_pages(&dir->i_data, 0);
-	truncate_blocks(dir, 0, false);
-	remove_dirty_inode(dir);
+	f2fs_truncate_blocks(dir, 0, false);
+	f2fs_remove_dirty_inode(dir);
	return err;
 }

@@ -465,7 +461,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
	}

	memcpy(backup_dentry, inline_dentry, MAX_INLINE_DATA(dir));
-	truncate_inline_inode(dir, ipage, 0);
+	f2fs_truncate_inline_inode(dir, ipage, 0);

	unlock_page(ipage);

@@ -514,14 +510,14 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
	struct page *page = NULL;
	int err = 0;

-	ipage = get_node_page(sbi, dir->i_ino);
+	ipage = f2fs_get_node_page(sbi, dir->i_ino);
	if (IS_ERR(ipage))
		return PTR_ERR(ipage);

	inline_dentry = inline_data_addr(dir, ipage);
	make_dentry_ptr_inline(dir, &d, inline_dentry);

-	bit_pos = room_for_filename(d.bitmap, slots, d.max);
+	bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max);
	if (bit_pos >= d.max) {
		err = f2fs_convert_inline_dir(dir, ipage, inline_dentry);
		if (err)
@ -532,7 +528,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
|
|||
|
||||
if (inode) {
|
||||
down_write(&F2FS_I(inode)->i_sem);
|
||||
page = init_inode_metadata(inode, dir, new_name,
|
||||
page = f2fs_init_inode_metadata(inode, dir, new_name,
|
||||
orig_name, ipage);
|
||||
if (IS_ERR(page)) {
|
||||
err = PTR_ERR(page);
|
||||
|
@ -553,7 +549,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
|
|||
f2fs_put_page(page, 1);
|
||||
}
|
||||
|
||||
update_parent_metadata(dir, inode, 0);
|
||||
f2fs_update_parent_metadata(dir, inode, 0);
|
||||
fail:
|
||||
if (inode)
|
||||
up_write(&F2FS_I(inode)->i_sem);
|
||||
|
@ -599,7 +595,7 @@ bool f2fs_empty_inline_dir(struct inode *dir)
|
|||
void *inline_dentry;
|
||||
struct f2fs_dentry_ptr d;
|
||||
|
||||
ipage = get_node_page(sbi, dir->i_ino);
|
||||
ipage = f2fs_get_node_page(sbi, dir->i_ino);
|
||||
if (IS_ERR(ipage))
|
||||
return false;
|
||||
|
||||
|
@ -630,7 +626,7 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
|
|||
if (ctx->pos == d.max)
|
||||
return 0;
|
||||
|
||||
ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
|
||||
ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
|
||||
if (IS_ERR(ipage))
|
||||
return PTR_ERR(ipage);
|
||||
|
||||
|
@ -656,7 +652,7 @@ int f2fs_inline_data_fiemap(struct inode *inode,
|
|||
struct page *ipage;
|
||||
int err = 0;
|
||||
|
||||
ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
|
||||
ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
|
||||
if (IS_ERR(ipage))
|
||||
return PTR_ERR(ipage);
|
||||
|
||||
|
@ -672,7 +668,7 @@ int f2fs_inline_data_fiemap(struct inode *inode,
|
|||
ilen = start + len;
|
||||
ilen -= start;
|
||||
|
||||
get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni);
|
||||
f2fs_get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni);
|
||||
byteaddr = (__u64)ni.blk_addr << inode->i_sb->s_blocksize_bits;
|
||||
byteaddr += (char *)inline_data_addr(inode, ipage) -
|
||||
(char *)F2FS_INODE(ipage);
|
||||
|
diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
@@ -36,15 +36,15 @@ void f2fs_set_inode_flags(struct inode *inode)
 	unsigned int flags = F2FS_I(inode)->i_flags;
 	unsigned int new_fl = 0;
 
-	if (flags & FS_SYNC_FL)
+	if (flags & F2FS_SYNC_FL)
 		new_fl |= S_SYNC;
-	if (flags & FS_APPEND_FL)
+	if (flags & F2FS_APPEND_FL)
 		new_fl |= S_APPEND;
-	if (flags & FS_IMMUTABLE_FL)
+	if (flags & F2FS_IMMUTABLE_FL)
 		new_fl |= S_IMMUTABLE;
-	if (flags & FS_NOATIME_FL)
+	if (flags & F2FS_NOATIME_FL)
 		new_fl |= S_NOATIME;
-	if (flags & FS_DIRSYNC_FL)
+	if (flags & F2FS_DIRSYNC_FL)
 		new_fl |= S_DIRSYNC;
 	if (f2fs_encrypted_inode(inode))
 		new_fl |= S_ENCRYPTED;
@@ -72,7 +72,7 @@ static bool __written_first_block(struct f2fs_inode *ri)
 {
 	block_t addr = le32_to_cpu(ri->i_addr[offset_in_addr(ri)]);
 
-	if (addr != NEW_ADDR && addr != NULL_ADDR)
+	if (is_valid_blkaddr(addr))
 		return true;
 	return false;
 }
@@ -117,7 +117,6 @@ static void __recover_inline_status(struct inode *inode, struct page *ipage)
 static bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct page *page)
 {
 	struct f2fs_inode *ri = &F2FS_NODE(page)->i;
-	int extra_isize = le32_to_cpu(ri->i_extra_isize);
 
 	if (!f2fs_sb_has_inode_chksum(sbi->sb))
 		return false;
@@ -125,7 +124,8 @@ static bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct page *page
 	if (!RAW_IS_INODE(F2FS_NODE(page)) || !(ri->i_inline & F2FS_EXTRA_ATTR))
 		return false;
 
-	if (!F2FS_FITS_IN_INODE(ri, extra_isize, i_inode_checksum))
+	if (!F2FS_FITS_IN_INODE(ri, le16_to_cpu(ri->i_extra_isize),
+				i_inode_checksum))
 		return false;
 
 	return true;
@@ -185,6 +185,21 @@ void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page)
 	ri->i_inode_checksum = cpu_to_le32(f2fs_inode_chksum(sbi, page));
 }
 
+static bool sanity_check_inode(struct inode *inode)
+{
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+
+	if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)
+			&& !f2fs_has_extra_attr(inode)) {
+		set_sbi_flag(sbi, SBI_NEED_FSCK);
+		f2fs_msg(sbi->sb, KERN_WARNING,
+			"%s: corrupted inode ino=%lx, run fsck to fix.",
+			__func__, inode->i_ino);
+		return false;
+	}
+	return true;
+}
+
 static int do_read_inode(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -194,14 +209,10 @@ static int do_read_inode(struct inode *inode)
 	projid_t i_projid;
 
 	/* Check if ino is within scope */
-	if (check_nid_range(sbi, inode->i_ino)) {
-		f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu",
-			(unsigned long) inode->i_ino);
-		WARN_ON(1);
+	if (f2fs_check_nid_range(sbi, inode->i_ino))
 		return -EINVAL;
-	}
 
-	node_page = get_node_page(sbi, inode->i_ino);
+	node_page = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(node_page))
 		return PTR_ERR(node_page);
@@ -221,8 +232,11 @@ static int do_read_inode(struct inode *inode)
 	inode->i_ctime.tv_nsec = le32_to_cpu(ri->i_ctime_nsec);
 	inode->i_mtime.tv_nsec = le32_to_cpu(ri->i_mtime_nsec);
 	inode->i_generation = le32_to_cpu(ri->i_generation);
-	fi->i_current_depth = le32_to_cpu(ri->i_current_depth);
+	if (S_ISDIR(inode->i_mode))
+		fi->i_current_depth = le32_to_cpu(ri->i_current_depth);
+	else if (S_ISREG(inode->i_mode))
+		fi->i_gc_failures[GC_FAILURE_PIN] =
+					le16_to_cpu(ri->i_gc_failures);
 	fi->i_xattr_nid = le32_to_cpu(ri->i_xattr_nid);
 	fi->i_flags = le32_to_cpu(ri->i_flags);
 	fi->flags = 0;
@@ -239,7 +253,6 @@ static int do_read_inode(struct inode *inode)
 					le16_to_cpu(ri->i_extra_isize) : 0;
 
 	if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)) {
-		f2fs_bug_on(sbi, !f2fs_has_extra_attr(inode));
 		fi->i_inline_xattr_size = le16_to_cpu(ri->i_inline_xattr_size);
 	} else if (f2fs_has_inline_xattr(inode) ||
 			f2fs_has_inline_dentry(inode)) {
@@ -265,10 +278,10 @@ static int do_read_inode(struct inode *inode)
 	if (__written_first_block(ri))
 		set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
 
-	if (!need_inode_block_update(sbi, inode->i_ino))
+	if (!f2fs_need_inode_block_update(sbi, inode->i_ino))
 		fi->last_disk_size = inode->i_size;
 
-	if (fi->i_flags & FS_PROJINHERIT_FL)
+	if (fi->i_flags & F2FS_PROJINHERIT_FL)
 		set_inode_flag(inode, FI_PROJ_INHERIT);
 
 	if (f2fs_has_extra_attr(inode) && f2fs_sb_has_project_quota(sbi->sb) &&
@@ -317,13 +330,17 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
 	ret = do_read_inode(inode);
 	if (ret)
 		goto bad_inode;
+	if (!sanity_check_inode(inode)) {
+		ret = -EINVAL;
+		goto bad_inode;
+	}
 make_now:
 	if (ino == F2FS_NODE_INO(sbi)) {
 		inode->i_mapping->a_ops = &f2fs_node_aops;
-		mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
+		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
 	} else if (ino == F2FS_META_INO(sbi)) {
 		inode->i_mapping->a_ops = &f2fs_meta_aops;
-		mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
+		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
 	} else if (S_ISREG(inode->i_mode)) {
 		inode->i_op = &f2fs_file_inode_operations;
 		inode->i_fop = &f2fs_file_operations;
@@ -373,7 +390,7 @@ retry:
 	return inode;
 }
 
-void update_inode(struct inode *inode, struct page *node_page)
+void f2fs_update_inode(struct inode *inode, struct page *node_page)
 {
 	struct f2fs_inode *ri;
 	struct extent_tree *et = F2FS_I(inode)->extent_tree;
@@ -408,7 +425,12 @@ void update_inode(struct inode *inode, struct page *node_page)
 	ri->i_atime_nsec = cpu_to_le32(inode->i_atime.tv_nsec);
 	ri->i_ctime_nsec = cpu_to_le32(inode->i_ctime.tv_nsec);
 	ri->i_mtime_nsec = cpu_to_le32(inode->i_mtime.tv_nsec);
-	ri->i_current_depth = cpu_to_le32(F2FS_I(inode)->i_current_depth);
+	if (S_ISDIR(inode->i_mode))
+		ri->i_current_depth =
+			cpu_to_le32(F2FS_I(inode)->i_current_depth);
+	else if (S_ISREG(inode->i_mode))
+		ri->i_gc_failures =
+			cpu_to_le16(F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN]);
 	ri->i_xattr_nid = cpu_to_le32(F2FS_I(inode)->i_xattr_nid);
 	ri->i_flags = cpu_to_le32(F2FS_I(inode)->i_flags);
 	ri->i_pino = cpu_to_le32(F2FS_I(inode)->i_pino);
@@ -454,12 +476,12 @@ void update_inode(struct inode *inode, struct page *node_page)
 	F2FS_I(inode)->i_disk_time[3] = F2FS_I(inode)->i_crtime;
 }
 
-void update_inode_page(struct inode *inode)
+void f2fs_update_inode_page(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct page *node_page;
 retry:
-	node_page = get_node_page(sbi, inode->i_ino);
+	node_page = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(node_page)) {
 		int err = PTR_ERR(node_page);
 		if (err == -ENOMEM) {
@@ -470,7 +492,7 @@ retry:
 		}
 		return;
 	}
-	update_inode(inode, node_page);
+	f2fs_update_inode(inode, node_page);
 	f2fs_put_page(node_page, 1);
 }
@@ -489,7 +511,7 @@ int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc)
 	 * We need to balance fs here to prevent from producing dirty node pages
 	 * during the urgent cleaning time when runing out of free sections.
 	 */
-	update_inode_page(inode);
+	f2fs_update_inode_page(inode);
 	if (wbc && wbc->nr_to_write)
 		f2fs_balance_fs(sbi, true);
 	return 0;
@@ -506,7 +528,7 @@ void f2fs_evict_inode(struct inode *inode)
 
 	/* some remained atomic pages should discarded */
 	if (f2fs_is_atomic_file(inode))
-		drop_inmem_pages(inode);
+		f2fs_drop_inmem_pages(inode);
 
 	trace_f2fs_evict_inode(inode);
 	truncate_inode_pages_final(&inode->i_data);
@@ -516,7 +538,7 @@ void f2fs_evict_inode(struct inode *inode)
 		goto out_clear;
 
 	f2fs_bug_on(sbi, get_dirty_pages(inode));
-	remove_dirty_inode(inode);
+	f2fs_remove_dirty_inode(inode);
 
 	f2fs_destroy_extent_tree(inode);
@@ -525,9 +547,9 @@ void f2fs_evict_inode(struct inode *inode)
 
 	dquot_initialize(inode);
 
-	remove_ino_entry(sbi, inode->i_ino, APPEND_INO);
-	remove_ino_entry(sbi, inode->i_ino, UPDATE_INO);
-	remove_ino_entry(sbi, inode->i_ino, FLUSH_INO);
+	f2fs_remove_ino_entry(sbi, inode->i_ino, APPEND_INO);
+	f2fs_remove_ino_entry(sbi, inode->i_ino, UPDATE_INO);
+	f2fs_remove_ino_entry(sbi, inode->i_ino, FLUSH_INO);
 
 	sb_start_intwrite(inode->i_sb);
 	set_inode_flag(inode, FI_NO_ALLOC);
@@ -544,7 +566,7 @@ retry:
 #endif
 	if (!err) {
 		f2fs_lock_op(sbi);
-		err = remove_inode_page(inode);
+		err = f2fs_remove_inode_page(inode);
 		f2fs_unlock_op(sbi);
 		if (err == -ENOENT)
 			err = 0;
@@ -557,7 +579,7 @@ retry:
 	}
 
 	if (err)
-		update_inode_page(inode);
+		f2fs_update_inode_page(inode);
 	dquot_free_inode(inode);
 	sb_end_intwrite(inode->i_sb);
 no_delete:
@@ -580,16 +602,19 @@ no_delete:
 		invalidate_mapping_pages(NODE_MAPPING(sbi), xnid, xnid);
 	if (inode->i_nlink) {
 		if (is_inode_flag_set(inode, FI_APPEND_WRITE))
-			add_ino_entry(sbi, inode->i_ino, APPEND_INO);
+			f2fs_add_ino_entry(sbi, inode->i_ino, APPEND_INO);
 		if (is_inode_flag_set(inode, FI_UPDATE_WRITE))
-			add_ino_entry(sbi, inode->i_ino, UPDATE_INO);
+			f2fs_add_ino_entry(sbi, inode->i_ino, UPDATE_INO);
 	}
 	if (is_inode_flag_set(inode, FI_FREE_NID)) {
-		alloc_nid_failed(sbi, inode->i_ino);
+		f2fs_alloc_nid_failed(sbi, inode->i_ino);
 		clear_inode_flag(inode, FI_FREE_NID);
 	} else {
-		f2fs_bug_on(sbi, err &&
-			!exist_written_data(sbi, inode->i_ino, ORPHAN_INO));
+		/*
+		 * If xattr nid is corrupted, we can reach out error condition,
+		 * err & !f2fs_exist_written_data(sbi, inode->i_ino, ORPHAN_INO)).
+		 * In that case, f2fs_check_nid_range() is enough to give a clue.
+		 */
 	}
 out_clear:
 	fscrypt_put_encryption_info(inode);
@@ -597,7 +622,7 @@ out_clear:
 }
 
 /* caller should call f2fs_lock_op() */
-void handle_failed_inode(struct inode *inode)
+void f2fs_handle_failed_inode(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct node_info ni;
@@ -612,7 +637,7 @@ void handle_failed_inode(struct inode *inode)
 	 * we must call this to avoid inode being remained as dirty, resulting
 	 * in a panic when flushing dirty inodes in gdirty_list.
 	 */
-	update_inode_page(inode);
+	f2fs_update_inode_page(inode);
 	f2fs_inode_synced(inode);
 
 	/* don't make bad inode, since it becomes a regular file. */
@@ -623,18 +648,18 @@ void handle_failed_inode(struct inode *inode)
 	 * so we can prevent losing this orphan when encoutering checkpoint
 	 * and following suddenly power-off.
 	 */
-	get_node_info(sbi, inode->i_ino, &ni);
+	f2fs_get_node_info(sbi, inode->i_ino, &ni);
 
 	if (ni.blk_addr != NULL_ADDR) {
-		int err = acquire_orphan_inode(sbi);
+		int err = f2fs_acquire_orphan_inode(sbi);
 		if (err) {
 			set_sbi_flag(sbi, SBI_NEED_FSCK);
 			f2fs_msg(sbi->sb, KERN_WARNING,
 				"Too many orphan inodes, run fsck to fix.");
 		} else {
-			add_orphan_inode(inode);
+			f2fs_add_orphan_inode(inode);
 		}
-		alloc_nid_done(sbi, inode->i_ino);
+		f2fs_alloc_nid_done(sbi, inode->i_ino);
 	} else {
 		set_inode_flag(inode, FI_FREE_NID);
 	}
diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
@@ -37,7 +37,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 		return ERR_PTR(-ENOMEM);
 
 	f2fs_lock_op(sbi);
-	if (!alloc_nid(sbi, &ino)) {
+	if (!f2fs_alloc_nid(sbi, &ino)) {
 		f2fs_unlock_op(sbi);
 		err = -ENOSPC;
 		goto fail;
@@ -54,6 +54,9 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 	F2FS_I(inode)->i_crtime = current_time(inode);
 	inode->i_generation = sbi->s_next_generation++;
 
+	if (S_ISDIR(inode->i_mode))
+		F2FS_I(inode)->i_current_depth = 1;
+
 	err = insert_inode_locked(inode);
 	if (err) {
 		err = -EINVAL;
@@ -61,7 +64,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 	}
 
 	if (f2fs_sb_has_project_quota(sbi->sb) &&
-		(F2FS_I(dir)->i_flags & FS_PROJINHERIT_FL))
+		(F2FS_I(dir)->i_flags & F2FS_PROJINHERIT_FL))
 		F2FS_I(inode)->i_projid = F2FS_I(dir)->i_projid;
 	else
 		F2FS_I(inode)->i_projid = make_kprojid(&init_user_ns,
@@ -116,9 +119,9 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 		f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);
 
 	if (S_ISDIR(inode->i_mode))
-		F2FS_I(inode)->i_flags |= FS_INDEX_FL;
+		F2FS_I(inode)->i_flags |= F2FS_INDEX_FL;
 
-	if (F2FS_I(inode)->i_flags & FS_PROJINHERIT_FL)
+	if (F2FS_I(inode)->i_flags & F2FS_PROJINHERIT_FL)
 		set_inode_flag(inode, FI_PROJ_INHERIT);
 
 	trace_f2fs_new_inode(inode, 0);
@@ -193,7 +196,7 @@ static inline void set_file_temperature(struct f2fs_sb_info *sbi, struct inode *
 	up_read(&sbi->sb_lock);
 }
 
-int update_extension_list(struct f2fs_sb_info *sbi, const char *name,
+int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
 							bool hot, bool set)
 {
 	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
@@ -292,7 +295,7 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
 		goto out;
 	f2fs_unlock_op(sbi);
 
-	alloc_nid_done(sbi, ino);
+	f2fs_alloc_nid_done(sbi, ino);
 
 	d_instantiate_new(dentry, inode);
@@ -302,7 +305,7 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
 	f2fs_balance_fs(sbi, true);
 	return 0;
 out:
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -397,7 +400,7 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
 		err = PTR_ERR(page);
 		goto out;
 	} else {
-		err = __f2fs_add_link(dir, &dot, NULL, dir->i_ino, S_IFDIR);
+		err = f2fs_do_add_link(dir, &dot, NULL, dir->i_ino, S_IFDIR);
 		if (err)
 			goto out;
 	}
@@ -408,7 +411,7 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
 	else if (IS_ERR(page))
 		err = PTR_ERR(page);
 	else
-		err = __f2fs_add_link(dir, &dotdot, NULL, pino, S_IFDIR);
+		err = f2fs_do_add_link(dir, &dotdot, NULL, pino, S_IFDIR);
 out:
 	if (!err)
 		clear_inode_flag(dir, FI_INLINE_DOTS);
@@ -520,7 +523,7 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
 	f2fs_balance_fs(sbi, true);
 
 	f2fs_lock_op(sbi);
-	err = acquire_orphan_inode(sbi);
+	err = f2fs_acquire_orphan_inode(sbi);
 	if (err) {
 		f2fs_unlock_op(sbi);
 		f2fs_put_page(page, 0);
@@ -585,9 +588,9 @@ static int f2fs_symlink(struct inode *dir, struct dentry *dentry,
 	f2fs_lock_op(sbi);
 	err = f2fs_add_link(dentry, inode);
 	if (err)
-		goto out_handle_failed_inode;
+		goto out_f2fs_handle_failed_inode;
 	f2fs_unlock_op(sbi);
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 
 	err = fscrypt_encrypt_symlink(inode, symname, len, &disk_link);
 	if (err)
@@ -620,8 +623,8 @@ err_out:
 	f2fs_balance_fs(sbi, true);
 	goto out_free_encrypted_link;
 
-out_handle_failed_inode:
-	handle_failed_inode(inode);
+out_f2fs_handle_failed_inode:
+	f2fs_handle_failed_inode(inode);
 out_free_encrypted_link:
 	if (disk_link.name != (unsigned char *)symname)
 		kfree(disk_link.name);
@@ -657,7 +660,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
 		goto out_fail;
 	f2fs_unlock_op(sbi);
 
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 
 	d_instantiate_new(dentry, inode);
@@ -669,7 +672,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
 
 out_fail:
 	clear_inode_flag(inode, FI_INC_LINK);
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -708,7 +711,7 @@ static int f2fs_mknod(struct inode *dir, struct dentry *dentry,
 		goto out;
 	f2fs_unlock_op(sbi);
 
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 
 	d_instantiate_new(dentry, inode);
@@ -718,7 +721,7 @@ static int f2fs_mknod(struct inode *dir, struct dentry *dentry,
 	f2fs_balance_fs(sbi, true);
 	return 0;
 out:
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -747,7 +750,7 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
 	}
 
 	f2fs_lock_op(sbi);
-	err = acquire_orphan_inode(sbi);
+	err = f2fs_acquire_orphan_inode(sbi);
 	if (err)
 		goto out;
@@ -759,8 +762,8 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
 	 * add this non-linked tmpfile to orphan list, in this way we could
 	 * remove all unused data of tmpfile after abnormal power-off.
 	 */
-	add_orphan_inode(inode);
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_add_orphan_inode(inode);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 
 	if (whiteout) {
 		f2fs_i_links_write(inode, false);
@@ -776,9 +779,9 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
 	return 0;
 
 release_out:
-	release_orphan_inode(sbi);
+	f2fs_release_orphan_inode(sbi);
 out:
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -885,7 +888,7 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 
 		f2fs_lock_op(sbi);
 
-		err = acquire_orphan_inode(sbi);
+		err = f2fs_acquire_orphan_inode(sbi);
 		if (err)
 			goto put_out_dir;
@@ -899,9 +902,9 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 		up_write(&F2FS_I(new_inode)->i_sem);
 
 		if (!new_inode->i_nlink)
-			add_orphan_inode(new_inode);
+			f2fs_add_orphan_inode(new_inode);
 		else
-			release_orphan_inode(sbi);
+			f2fs_release_orphan_inode(sbi);
 	} else {
 		f2fs_balance_fs(sbi, true);
@@ -969,8 +972,12 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 		f2fs_put_page(old_dir_page, 0);
 		f2fs_i_links_write(old_dir, false);
 	}
-	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT)
-		add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT) {
+		f2fs_add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+		if (S_ISDIR(old_inode->i_mode))
+			f2fs_add_ino_entry(sbi, old_inode->i_ino,
+							TRANS_DIR_INO);
+	}
 
 	f2fs_unlock_op(sbi);
@@ -1121,8 +1128,8 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
 	f2fs_mark_inode_dirty_sync(new_dir, false);
 
 	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT) {
-		add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO);
-		add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+		f2fs_add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO);
+		f2fs_add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
 	}
 
 	f2fs_unlock_op(sbi);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
@@ -23,13 +23,28 @@
 #include "trace.h"
 #include <trace/events/f2fs.h>
 
-#define on_build_free_nids(nmi) mutex_is_locked(&(nm_i)->build_lock)
+#define on_f2fs_build_free_nids(nmi) mutex_is_locked(&(nm_i)->build_lock)
 
 static struct kmem_cache *nat_entry_slab;
 static struct kmem_cache *free_nid_slab;
 static struct kmem_cache *nat_entry_set_slab;
 
-bool available_free_memory(struct f2fs_sb_info *sbi, int type)
+/*
+ * Check whether the given nid is within node id range.
+ */
+int f2fs_check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)
+{
+	if (unlikely(nid < F2FS_ROOT_INO(sbi) || nid >= NM_I(sbi)->max_nid)) {
+		set_sbi_flag(sbi, SBI_NEED_FSCK);
+		f2fs_msg(sbi->sb, KERN_WARNING,
+				"%s: out-of-range nid=%x, run fsck to fix.",
+				__func__, nid);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct sysinfo val;
@@ -87,18 +102,10 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type)
 
 static void clear_node_page_dirty(struct page *page)
 {
-	struct address_space *mapping = page->mapping;
-	unsigned int long flags;
-
 	if (PageDirty(page)) {
-		xa_lock_irqsave(&mapping->i_pages, flags);
-		radix_tree_tag_clear(&mapping->i_pages,
-				page_index(page),
-				PAGECACHE_TAG_DIRTY);
-		xa_unlock_irqrestore(&mapping->i_pages, flags);
-
+		f2fs_clear_radix_tree_dirty_tag(page);
 		clear_page_dirty_for_io(page);
-		dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
+		dec_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
 	}
 	ClearPageUptodate(page);
 }
@@ -106,7 +113,7 @@ static void clear_node_page_dirty(struct page *page)
 static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
 {
 	pgoff_t index = current_nat_addr(sbi, nid);
-	return get_meta_page(sbi, index);
+	return f2fs_get_meta_page(sbi, index);
 }
 
 static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
@@ -123,8 +130,8 @@ static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
 	dst_off = next_nat_addr(sbi, src_off);
 
 	/* get current nat block page with lock */
-	src_page = get_meta_page(sbi, src_off);
-	dst_page = grab_meta_page(sbi, dst_off);
+	src_page = f2fs_get_meta_page(sbi, src_off);
+	dst_page = f2fs_grab_meta_page(sbi, dst_off);
 	f2fs_bug_on(sbi, PageDirty(src_page));
 
 	src_addr = page_address(src_page);
@@ -260,7 +267,7 @@ static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i,
 							start, nr);
 }
 
-int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid)
+int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct nat_entry *e;
@@ -277,7 +284,7 @@ int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid)
 	return need;
 }
 
-bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid)
+bool f2fs_is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct nat_entry *e;
@@ -291,7 +298,7 @@ bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid)
 	return is_cp;
 }
 
-bool need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino)
+bool f2fs_need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct nat_entry *e;
@@ -364,8 +371,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
 			new_blkaddr == NULL_ADDR);
 	f2fs_bug_on(sbi, nat_get_blkaddr(e) == NEW_ADDR &&
 			new_blkaddr == NEW_ADDR);
-	f2fs_bug_on(sbi, nat_get_blkaddr(e) != NEW_ADDR &&
-			nat_get_blkaddr(e) != NULL_ADDR &&
+	f2fs_bug_on(sbi, is_valid_blkaddr(nat_get_blkaddr(e)) &&
 			new_blkaddr == NEW_ADDR);
 
 	/* increment version no as node is removed */
@@ -376,7 +382,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
 
 	/* change address */
 	nat_set_blkaddr(e, new_blkaddr);
-	if (new_blkaddr == NEW_ADDR || new_blkaddr == NULL_ADDR)
+	if (!is_valid_blkaddr(new_blkaddr))
 		set_nat_flag(e, IS_CHECKPOINTED, false);
 	__set_nat_cache_dirty(nm_i, e);
@@ -391,7 +397,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
 	up_write(&nm_i->nat_tree_lock);
 }
 
-int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
+int f2fs_try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	int nr = nr_shrink;
@@ -413,7 +419,8 @@ int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
 /*
  * This function always returns success
  */
-void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni)
+void f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
+						struct node_info *ni)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
@@ -443,7 +450,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni)
 
 	/* Check current segment summary */
 	down_read(&curseg->journal_rwsem);
-	i = lookup_journal_in_cursum(journal, NAT_JOURNAL, nid, 0);
+	i = f2fs_lookup_journal_in_cursum(journal, NAT_JOURNAL, nid, 0);
 	if (i >= 0) {
 		ne = nat_in_journal(journal, i);
 		node_info_from_raw_nat(ni, &ne);
@@ -458,7 +465,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni)
 	index = current_nat_addr(sbi, nid);
 	up_read(&nm_i->nat_tree_lock);
 
-	page = get_meta_page(sbi, index);
+	page = f2fs_get_meta_page(sbi, index);
 	nat_blk = (struct f2fs_nat_block *)page_address(page);
 	ne = nat_blk->entries[nid - start_nid];
 	node_info_from_raw_nat(ni, &ne);
@@ -471,7 +478,7 @@ cache:
 /*
  * readahead MAX_RA_NODE number of node pages.
  */
-static void ra_node_pages(struct page *parent, int start, int n)
+static void f2fs_ra_node_pages(struct page *parent, int start, int n)
 {
 	struct f2fs_sb_info *sbi = F2FS_P_SB(parent);
 	struct blk_plug plug;
@@ -485,13 +492,13 @@ static void ra_node_pages(struct page *parent, int start, int n)
 	end = min(end, NIDS_PER_BLOCK);
 	for (i = start; i < end; i++) {
 		nid = get_nid(parent, i, false);
-		ra_node_page(sbi, nid);
+		f2fs_ra_node_page(sbi, nid);
 	}
 
 	blk_finish_plug(&plug);
 }
 
-pgoff_t get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs)
+pgoff_t f2fs_get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs)
 {
 	const long direct_index = ADDRS_PER_INODE(dn->inode);
 	const long direct_blks = ADDRS_PER_BLOCK;
@@ -606,7 +613,7 @@ got:
 * f2fs_unlock_op() only if ro is not set RDONLY_NODE.
 * In the case of RDONLY_NODE, we don't need to care about mutex.
 */
-int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
+int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	struct page *npage[4];
@@ -625,7 +632,7 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 	npage[0] = dn->inode_page;
 
 	if (!npage[0]) {
-		npage[0] = get_node_page(sbi, nids[0]);
+		npage[0] = f2fs_get_node_page(sbi, nids[0]);
 		if (IS_ERR(npage[0]))
 			return PTR_ERR(npage[0]);
 	}
@@ -649,24 +656,24 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 
 		if (!nids[i] && mode == ALLOC_NODE) {
 			/* alloc new node */
-			if (!alloc_nid(sbi, &(nids[i]))) {
+			if (!f2fs_alloc_nid(sbi, &(nids[i]))) {
 				err = -ENOSPC;
 				goto release_pages;
 			}
 
 			dn->nid = nids[i];
-			npage[i] = new_node_page(dn, noffset[i]);
+			npage[i] = f2fs_new_node_page(dn, noffset[i]);
 			if (IS_ERR(npage[i])) {
-				alloc_nid_failed(sbi, nids[i]);
+				f2fs_alloc_nid_failed(sbi, nids[i]);
 				err = PTR_ERR(npage[i]);
 				goto release_pages;
 			}
 
 			set_nid(parent, offset[i - 1], nids[i], i == 1);
-			alloc_nid_done(sbi, nids[i]);
+			f2fs_alloc_nid_done(sbi, nids[i]);
 			done = true;
 		} else if (mode == LOOKUP_NODE_RA && i == level && level > 1) {
-			npage[i] = get_node_page_ra(parent, offset[i - 1]);
+			npage[i] = f2fs_get_node_page_ra(parent, offset[i - 1]);
 			if (IS_ERR(npage[i])) {
 				err = PTR_ERR(npage[i]);
 				goto release_pages;
@@ -681,7 +688,7 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 	}
 
 	if (!done) {
-		npage[i] = get_node_page(sbi, nids[i]);
+		npage[i] = f2fs_get_node_page(sbi, nids[i]);
 		if (IS_ERR(npage[i])) {
 			err = PTR_ERR(npage[i]);
 			f2fs_put_page(npage[0], 0);
@@ -720,15 +727,15 @@ static void truncate_node(struct dnode_of_data *dn)
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	struct node_info ni;
 
-	get_node_info(sbi, dn->nid, &ni);
+	f2fs_get_node_info(sbi, dn->nid, &ni);
 
 	/* Deallocate node address */
-	invalidate_blocks(sbi, ni.blk_addr);
+	f2fs_invalidate_blocks(sbi, ni.blk_addr);
 	dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino);
 	set_node_addr(sbi, &ni, NULL_ADDR, false);
 
 	if (dn->nid == dn->inode->i_ino) {
-		remove_orphan_inode(sbi, dn->nid);
+		f2fs_remove_orphan_inode(sbi, dn->nid);
 		dec_valid_inode_count(sbi);
 		f2fs_inode_synced(dn->inode);
 	}
@@ -753,7 +760,7 @@ static int truncate_dnode(struct dnode_of_data *dn)
 		return 1;
 
 	/* get direct node */
-	page = get_node_page(F2FS_I_SB(dn->inode), dn->nid);
+	page = f2fs_get_node_page(F2FS_I_SB(dn->inode), dn->nid);
 	if (IS_ERR(page) && PTR_ERR(page) == -ENOENT)
|
||||
return 1;
|
||||
else if (IS_ERR(page))
|
||||
|
@ -762,7 +769,7 @@ static int truncate_dnode(struct dnode_of_data *dn)
|
|||
/* Make dnode_of_data for parameter */
|
||||
dn->node_page = page;
|
||||
dn->ofs_in_node = 0;
|
||||
truncate_data_blocks(dn);
|
||||
f2fs_truncate_data_blocks(dn);
|
||||
truncate_node(dn);
|
||||
return 1;
|
||||
}
|
||||
|
@ -783,13 +790,13 @@ static int truncate_nodes(struct dnode_of_data *dn, unsigned int nofs,
|
|||
|
||||
trace_f2fs_truncate_nodes_enter(dn->inode, dn->nid, dn->data_blkaddr);
|
||||
|
||||
page = get_node_page(F2FS_I_SB(dn->inode), dn->nid);
|
||||
page = f2fs_get_node_page(F2FS_I_SB(dn->inode), dn->nid);
|
||||
if (IS_ERR(page)) {
|
||||
trace_f2fs_truncate_nodes_exit(dn->inode, PTR_ERR(page));
|
||||
return PTR_ERR(page);
|
||||
}
|
||||
|
||||
ra_node_pages(page, ofs, NIDS_PER_BLOCK);
|
||||
f2fs_ra_node_pages(page, ofs, NIDS_PER_BLOCK);
|
||||
|
||||
rn = F2FS_NODE(page);
|
||||
if (depth < 3) {
|
||||
|
@ -859,7 +866,7 @@ static int truncate_partial_nodes(struct dnode_of_data *dn,
|
|||
/* get indirect nodes in the path */
|
||||
for (i = 0; i < idx + 1; i++) {
|
||||
/* reference count'll be increased */
|
||||
pages[i] = get_node_page(F2FS_I_SB(dn->inode), nid[i]);
|
||||
pages[i] = f2fs_get_node_page(F2FS_I_SB(dn->inode), nid[i]);
|
||||
if (IS_ERR(pages[i])) {
|
||||
err = PTR_ERR(pages[i]);
|
||||
idx = i - 1;
|
||||
|
@ -868,7 +875,7 @@ static int truncate_partial_nodes(struct dnode_of_data *dn,
|
|||
nid[i + 1] = get_nid(pages[i], offset[i + 1], false);
|
||||
}
|
||||
|
||||
ra_node_pages(pages[idx], offset[idx + 1], NIDS_PER_BLOCK);
|
||||
f2fs_ra_node_pages(pages[idx], offset[idx + 1], NIDS_PER_BLOCK);
|
||||
|
||||
/* free direct nodes linked to a partial indirect node */
|
||||
for (i = offset[idx + 1]; i < NIDS_PER_BLOCK; i++) {
|
||||
|
@ -905,7 +912,7 @@ fail:
|
|||
/*
|
||||
* All the block addresses of data and nodes should be nullified.
|
||||
*/
|
||||
int truncate_inode_blocks(struct inode *inode, pgoff_t from)
|
||||
int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
int err = 0, cont = 1;
|
||||
|
@ -921,7 +928,7 @@ int truncate_inode_blocks(struct inode *inode, pgoff_t from)
|
|||
if (level < 0)
|
||||
return level;
|
||||
|
||||
page = get_node_page(sbi, inode->i_ino);
|
||||
page = f2fs_get_node_page(sbi, inode->i_ino);
|
||||
if (IS_ERR(page)) {
|
||||
trace_f2fs_truncate_inode_blocks_exit(inode, PTR_ERR(page));
|
||||
return PTR_ERR(page);
|
||||
|
@ -1001,7 +1008,7 @@ fail:
|
|||
}
|
||||
|
||||
/* caller must lock inode page */
|
||||
int truncate_xattr_node(struct inode *inode)
|
||||
int f2fs_truncate_xattr_node(struct inode *inode)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
nid_t nid = F2FS_I(inode)->i_xattr_nid;
|
||||
|
@ -1011,7 +1018,7 @@ int truncate_xattr_node(struct inode *inode)
|
|||
if (!nid)
|
||||
return 0;
|
||||
|
||||
npage = get_node_page(sbi, nid);
|
||||
npage = f2fs_get_node_page(sbi, nid);
|
||||
if (IS_ERR(npage))
|
||||
return PTR_ERR(npage);
|
||||
|
||||
|
@ -1026,17 +1033,17 @@ int truncate_xattr_node(struct inode *inode)
|
|||
* Caller should grab and release a rwsem by calling f2fs_lock_op() and
|
||||
* f2fs_unlock_op().
|
||||
*/
|
||||
int remove_inode_page(struct inode *inode)
|
||||
int f2fs_remove_inode_page(struct inode *inode)
|
||||
{
|
||||
struct dnode_of_data dn;
|
||||
int err;
|
||||
|
||||
set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
|
||||
err = get_dnode_of_data(&dn, 0, LOOKUP_NODE);
|
||||
err = f2fs_get_dnode_of_data(&dn, 0, LOOKUP_NODE);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
err = truncate_xattr_node(inode);
|
||||
err = f2fs_truncate_xattr_node(inode);
|
||||
if (err) {
|
||||
f2fs_put_dnode(&dn);
|
||||
return err;
|
||||
|
@ -1045,7 +1052,7 @@ int remove_inode_page(struct inode *inode)
|
|||
/* remove potential inline_data blocks */
|
||||
if (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
|
||||
S_ISLNK(inode->i_mode))
|
||||
truncate_data_blocks_range(&dn, 1);
|
||||
f2fs_truncate_data_blocks_range(&dn, 1);
|
||||
|
||||
/* 0 is possible, after f2fs_new_inode() has failed */
|
||||
f2fs_bug_on(F2FS_I_SB(inode),
|
||||
|
@ -1056,7 +1063,7 @@ int remove_inode_page(struct inode *inode)
|
|||
return 0;
|
||||
}
|
||||
|
||||
struct page *new_inode_page(struct inode *inode)
|
||||
struct page *f2fs_new_inode_page(struct inode *inode)
|
||||
{
|
||||
struct dnode_of_data dn;
|
||||
|
||||
|
@ -1064,10 +1071,10 @@ struct page *new_inode_page(struct inode *inode)
|
|||
set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
|
||||
|
||||
/* caller should f2fs_put_page(page, 1); */
|
||||
return new_node_page(&dn, 0);
|
||||
return f2fs_new_node_page(&dn, 0);
|
||||
}
|
||||
|
||||
struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
|
||||
struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
|
||||
struct node_info new_ni;
|
||||
|
@ -1085,7 +1092,7 @@ struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
|
|||
goto fail;
|
||||
|
||||
#ifdef CONFIG_F2FS_CHECK_FS
|
||||
get_node_info(sbi, dn->nid, &new_ni);
|
||||
f2fs_get_node_info(sbi, dn->nid, &new_ni);
|
||||
f2fs_bug_on(sbi, new_ni.blk_addr != NULL_ADDR);
|
||||
#endif
|
||||
new_ni.nid = dn->nid;
|
||||
|
@ -1137,7 +1144,7 @@ static int read_node_page(struct page *page, int op_flags)
|
|||
if (PageUptodate(page))
|
||||
return LOCKED_PAGE;
|
||||
|
||||
get_node_info(sbi, page->index, &ni);
|
||||
f2fs_get_node_info(sbi, page->index, &ni);
|
||||
|
||||
if (unlikely(ni.blk_addr == NULL_ADDR)) {
|
||||
ClearPageUptodate(page);
|
||||
|
@ -1151,14 +1158,15 @@ static int read_node_page(struct page *page, int op_flags)
|
|||
/*
|
||||
* Readahead a node page
|
||||
*/
|
||||
void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
|
||||
void f2fs_ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
|
||||
{
|
||||
struct page *apage;
|
||||
int err;
|
||||
|
||||
if (!nid)
|
||||
return;
|
||||
f2fs_bug_on(sbi, check_nid_range(sbi, nid));
|
||||
if (f2fs_check_nid_range(sbi, nid))
|
||||
return;
|
||||
|
||||
rcu_read_lock();
|
||||
apage = radix_tree_lookup(&NODE_MAPPING(sbi)->i_pages, nid);
|
||||
|
@ -1182,7 +1190,8 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid,
|
|||
|
||||
if (!nid)
|
||||
return ERR_PTR(-ENOENT);
|
||||
f2fs_bug_on(sbi, check_nid_range(sbi, nid));
|
||||
if (f2fs_check_nid_range(sbi, nid))
|
||||
return ERR_PTR(-EINVAL);
|
||||
repeat:
|
||||
page = f2fs_grab_cache_page(NODE_MAPPING(sbi), nid, false);
|
||||
if (!page)
|
||||
|
@ -1198,7 +1207,7 @@ repeat:
|
|||
}
|
||||
|
||||
if (parent)
|
||||
ra_node_pages(parent, start + 1, MAX_RA_NODE);
|
||||
f2fs_ra_node_pages(parent, start + 1, MAX_RA_NODE);
|
||||
|
||||
lock_page(page);
|
||||
|
||||
|
@ -1232,12 +1241,12 @@ out_err:
|
|||
return page;
|
||||
}
|
||||
|
||||
struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
|
||||
struct page *f2fs_get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
|
||||
{
|
||||
return __get_node_page(sbi, nid, NULL, 0);
|
||||
}
|
||||
|
||||
struct page *get_node_page_ra(struct page *parent, int start)
|
||||
struct page *f2fs_get_node_page_ra(struct page *parent, int start)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_P_SB(parent);
|
||||
nid_t nid = get_nid(parent, start, false);
|
||||
|
@ -1272,7 +1281,7 @@ static void flush_inline_data(struct f2fs_sb_info *sbi, nid_t ino)
|
|||
|
||||
ret = f2fs_write_inline_data(inode, page);
|
||||
inode_dec_dirty_pages(inode);
|
||||
remove_dirty_inode(inode);
|
||||
f2fs_remove_dirty_inode(inode);
|
||||
if (ret)
|
||||
set_page_dirty(page);
|
||||
page_out:
|
||||
|
@ -1359,11 +1368,8 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
|
|||
|
||||
trace_f2fs_writepage(page, NODE);
|
||||
|
||||
if (unlikely(f2fs_cp_error(sbi))) {
|
||||
dec_page_count(sbi, F2FS_DIRTY_NODES);
|
||||
unlock_page(page);
|
||||
return 0;
|
||||
}
|
||||
if (unlikely(f2fs_cp_error(sbi)))
|
||||
goto redirty_out;
|
||||
|
||||
if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
|
||||
goto redirty_out;
|
||||
|
@ -1379,7 +1385,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
|
|||
down_read(&sbi->node_write);
|
||||
}
|
||||
|
||||
get_node_info(sbi, nid, &ni);
|
||||
f2fs_get_node_info(sbi, nid, &ni);
|
||||
|
||||
/* This page is already truncated */
|
||||
if (unlikely(ni.blk_addr == NULL_ADDR)) {
|
||||
|
@ -1394,8 +1400,9 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
|
|||
fio.op_flags |= REQ_PREFLUSH | REQ_FUA;
|
||||
|
||||
set_page_writeback(page);
|
||||
ClearPageError(page);
|
||||
fio.old_blkaddr = ni.blk_addr;
|
||||
write_node_page(nid, &fio);
|
||||
f2fs_do_write_node_page(nid, &fio);
|
||||
set_node_addr(sbi, &ni, fio.new_blkaddr, is_fsync_dnode(page));
|
||||
dec_page_count(sbi, F2FS_DIRTY_NODES);
|
||||
up_read(&sbi->node_write);
|
||||
|
@ -1424,7 +1431,7 @@ redirty_out:
|
|||
return AOP_WRITEPAGE_ACTIVATE;
|
||||
}
|
||||
|
||||
void move_node_page(struct page *node_page, int gc_type)
|
||||
void f2fs_move_node_page(struct page *node_page, int gc_type)
|
||||
{
|
||||
if (gc_type == FG_GC) {
|
||||
struct writeback_control wbc = {
|
||||
|
@ -1461,7 +1468,7 @@ static int f2fs_write_node_page(struct page *page,
|
|||
return __write_node_page(page, false, NULL, wbc, false, FS_NODE_IO);
|
||||
}
|
||||
|
||||
int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
|
||||
int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
|
||||
struct writeback_control *wbc, bool atomic)
|
||||
{
|
||||
pgoff_t index;
|
||||
|
@ -1528,9 +1535,9 @@ continue_unlock:
|
|||
if (IS_INODE(page)) {
|
||||
if (is_inode_flag_set(inode,
|
||||
FI_DIRTY_INODE))
|
||||
update_inode(inode, page);
|
||||
f2fs_update_inode(inode, page);
|
||||
set_dentry_mark(page,
|
||||
need_dentry_mark(sbi, ino));
|
||||
f2fs_need_dentry_mark(sbi, ino));
|
||||
}
|
||||
/* may be written by other thread */
|
||||
if (!PageDirty(page))
|
||||
|
@ -1580,7 +1587,8 @@ out:
|
|||
return ret ? -EIO: 0;
|
||||
}
|
||||
|
||||
int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
|
||||
int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
|
||||
struct writeback_control *wbc,
|
||||
bool do_balance, enum iostat_type io_type)
|
||||
{
|
||||
pgoff_t index;
|
||||
|
@ -1588,21 +1596,28 @@ int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
|
|||
int step = 0;
|
||||
int nwritten = 0;
|
||||
int ret = 0;
|
||||
int nr_pages;
|
||||
int nr_pages, done = 0;
|
||||
|
||||
pagevec_init(&pvec);
|
||||
|
||||
next_step:
|
||||
index = 0;
|
||||
|
||||
while ((nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
|
||||
PAGECACHE_TAG_DIRTY))) {
|
||||
while (!done && (nr_pages = pagevec_lookup_tag(&pvec,
|
||||
NODE_MAPPING(sbi), &index, PAGECACHE_TAG_DIRTY))) {
|
||||
int i;
|
||||
|
||||
for (i = 0; i < nr_pages; i++) {
|
||||
struct page *page = pvec.pages[i];
|
||||
bool submitted = false;
|
||||
|
||||
/* give a priority to WB_SYNC threads */
|
||||
if (atomic_read(&sbi->wb_sync_req[NODE]) &&
|
||||
wbc->sync_mode == WB_SYNC_NONE) {
|
||||
done = 1;
|
||||
break;
|
||||
}
|
||||
|
||||
/*
|
||||
* flushing sequence with step:
|
||||
* 0. indirect nodes
|
||||
|
@ -1681,7 +1696,7 @@ continue_unlock:
|
|||
return ret;
|
||||
}
|
||||
|
||||
int wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino)
|
||||
int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino)
|
||||
{
|
||||
pgoff_t index = 0;
|
||||
struct pagevec pvec;
|
||||
|
@ -1730,14 +1745,21 @@ static int f2fs_write_node_pages(struct address_space *mapping,
|
|||
if (get_pages(sbi, F2FS_DIRTY_NODES) < nr_pages_to_skip(sbi, NODE))
|
||||
goto skip_write;
|
||||
|
||||
if (wbc->sync_mode == WB_SYNC_ALL)
|
||||
atomic_inc(&sbi->wb_sync_req[NODE]);
|
||||
else if (atomic_read(&sbi->wb_sync_req[NODE]))
|
||||
goto skip_write;
|
||||
|
||||
trace_f2fs_writepages(mapping->host, wbc, NODE);
|
||||
|
||||
diff = nr_pages_to_write(sbi, NODE, wbc);
|
||||
wbc->sync_mode = WB_SYNC_NONE;
|
||||
blk_start_plug(&plug);
|
||||
sync_node_pages(sbi, wbc, true, FS_NODE_IO);
|
||||
f2fs_sync_node_pages(sbi, wbc, true, FS_NODE_IO);
|
||||
blk_finish_plug(&plug);
|
||||
wbc->nr_to_write = max((long)0, wbc->nr_to_write - diff);
|
||||
|
||||
if (wbc->sync_mode == WB_SYNC_ALL)
|
||||
atomic_dec(&sbi->wb_sync_req[NODE]);
|
||||
return 0;
|
||||
|
||||
skip_write:
|
||||
|
@ -1753,7 +1775,7 @@ static int f2fs_set_node_page_dirty(struct page *page)
|
|||
if (!PageUptodate(page))
|
||||
SetPageUptodate(page);
|
||||
if (!PageDirty(page)) {
|
||||
f2fs_set_page_dirty_nobuffers(page);
|
||||
__set_page_dirty_nobuffers(page);
|
||||
inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
|
||||
SetPagePrivate(page);
|
||||
f2fs_trace_pid(page);
|
||||
|
@ -1883,20 +1905,20 @@ static bool add_free_nid(struct f2fs_sb_info *sbi,
|
|||
* Thread A Thread B
|
||||
* - f2fs_create
|
||||
* - f2fs_new_inode
|
||||
* - alloc_nid
|
||||
* - f2fs_alloc_nid
|
||||
* - __insert_nid_to_list(PREALLOC_NID)
|
||||
* - f2fs_balance_fs_bg
|
||||
* - build_free_nids
|
||||
* - __build_free_nids
|
||||
* - f2fs_build_free_nids
|
||||
* - __f2fs_build_free_nids
|
||||
* - scan_nat_page
|
||||
* - add_free_nid
|
||||
* - __lookup_nat_cache
|
||||
* - f2fs_add_link
|
||||
* - init_inode_metadata
|
||||
* - new_inode_page
|
||||
* - new_node_page
|
||||
* - f2fs_init_inode_metadata
|
||||
* - f2fs_new_inode_page
|
||||
* - f2fs_new_node_page
|
||||
* - set_node_addr
|
||||
* - alloc_nid_done
|
||||
* - f2fs_alloc_nid_done
|
||||
* - __remove_nid_from_list(PREALLOC_NID)
|
||||
* - __insert_nid_to_list(FREE_NID)
|
||||
*/
|
||||
|
@ -2028,7 +2050,8 @@ out:
|
|||
up_read(&nm_i->nat_tree_lock);
|
||||
}
|
||||
|
||||
static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
|
||||
static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
|
||||
bool sync, bool mount)
|
||||
{
|
||||
struct f2fs_nm_info *nm_i = NM_I(sbi);
|
||||
int i = 0;
|
||||
|
@ -2041,7 +2064,7 @@ static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
|
|||
if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
|
||||
return;
|
||||
|
||||
if (!sync && !available_free_memory(sbi, FREE_NIDS))
|
||||
if (!sync && !f2fs_available_free_memory(sbi, FREE_NIDS))
|
||||
return;
|
||||
|
||||
if (!mount) {
|
||||
|
@ -2053,7 +2076,7 @@ static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
|
|||
}
|
||||
|
||||
/* readahead nat pages to be scanned */
|
||||
ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), FREE_NID_PAGES,
|
||||
f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), FREE_NID_PAGES,
|
||||
META_NAT, true);
|
||||
|
||||
down_read(&nm_i->nat_tree_lock);
|
||||
|
@ -2083,14 +2106,14 @@ static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
|
|||
|
||||
up_read(&nm_i->nat_tree_lock);
|
||||
|
||||
ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
|
||||
f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
|
||||
nm_i->ra_nid_pages, META_NAT, false);
|
||||
}
|
||||
|
||||
void build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
|
||||
void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
|
||||
{
|
||||
mutex_lock(&NM_I(sbi)->build_lock);
|
||||
__build_free_nids(sbi, sync, mount);
|
||||
__f2fs_build_free_nids(sbi, sync, mount);
|
||||
mutex_unlock(&NM_I(sbi)->build_lock);
|
||||
}
|
||||
|
||||
|
@ -2099,7 +2122,7 @@ void build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
|
|||
* from second parameter of this function.
|
||||
* The returned nid could be used ino as well as nid when inode is created.
|
||||
*/
|
||||
bool alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid)
|
||||
bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid)
|
||||
{
|
||||
struct f2fs_nm_info *nm_i = NM_I(sbi);
|
||||
struct free_nid *i = NULL;
|
||||
|
@ -2117,8 +2140,8 @@ retry:
|
|||
return false;
|
||||
}
|
||||
|
||||
/* We should not use stale free nids created by build_free_nids */
|
||||
if (nm_i->nid_cnt[FREE_NID] && !on_build_free_nids(nm_i)) {
|
||||
/* We should not use stale free nids created by f2fs_build_free_nids */
|
||||
if (nm_i->nid_cnt[FREE_NID] && !on_f2fs_build_free_nids(nm_i)) {
|
||||
f2fs_bug_on(sbi, list_empty(&nm_i->free_nid_list));
|
||||
i = list_first_entry(&nm_i->free_nid_list,
|
||||
struct free_nid, list);
|
||||
|
@ -2135,14 +2158,14 @@ retry:
|
|||
spin_unlock(&nm_i->nid_list_lock);
|
||||
|
||||
/* Let's scan nat pages and its caches to get free nids */
|
||||
build_free_nids(sbi, true, false);
|
||||
f2fs_build_free_nids(sbi, true, false);
|
||||
goto retry;
|
||||
}
|
||||
|
||||
/*
|
||||
* alloc_nid() should be called prior to this function.
|
||||
* f2fs_alloc_nid() should be called prior to this function.
|
||||
*/
|
||||
void alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid)
|
||||
void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid)
|
||||
{
|
||||
struct f2fs_nm_info *nm_i = NM_I(sbi);
|
||||
struct free_nid *i;
|
||||
|
@ -2157,9 +2180,9 @@ void alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid)
|
|||
}
|
||||
|
||||
/*
|
||||
* alloc_nid() should be called prior to this function.
|
||||
* f2fs_alloc_nid() should be called prior to this function.
|
||||
*/
|
||||
void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
|
||||
void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
|
||||
{
|
||||
struct f2fs_nm_info *nm_i = NM_I(sbi);
|
||||
struct free_nid *i;
|
||||
|
@ -2172,7 +2195,7 @@ void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
|
|||
i = __lookup_free_nid_list(nm_i, nid);
|
||||
f2fs_bug_on(sbi, !i);
|
||||
|
||||
if (!available_free_memory(sbi, FREE_NIDS)) {
|
||||
if (!f2fs_available_free_memory(sbi, FREE_NIDS)) {
|
||||
__remove_free_nid(sbi, i, PREALLOC_NID);
|
||||
need_free = true;
|
||||
} else {
|
||||
|
@ -2189,7 +2212,7 @@ void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
|
|||
kmem_cache_free(free_nid_slab, i);
|
||||
}
|
||||
|
||||
int try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
|
||||
int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
|
||||
{
|
||||
struct f2fs_nm_info *nm_i = NM_I(sbi);
|
||||
struct free_nid *i, *next;
|
||||
|
@ -2217,14 +2240,14 @@ int try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
|
|||
return nr - nr_shrink;
|
||||
}
|
||||
|
||||
void recover_inline_xattr(struct inode *inode, struct page *page)
|
||||
void f2fs_recover_inline_xattr(struct inode *inode, struct page *page)
|
||||
{
|
||||
void *src_addr, *dst_addr;
|
||||
size_t inline_size;
|
||||
struct page *ipage;
|
||||
struct f2fs_inode *ri;
|
||||
|
||||
ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
|
||||
ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
|
||||
f2fs_bug_on(F2FS_I_SB(inode), IS_ERR(ipage));
|
||||
|
||||
ri = F2FS_INODE(page);
|
||||
|
@ -2242,11 +2265,11 @@ void recover_inline_xattr(struct inode *inode, struct page *page)
|
|||
f2fs_wait_on_page_writeback(ipage, NODE, true);
|
||||
memcpy(dst_addr, src_addr, inline_size);
|
||||
update_inode:
|
||||
update_inode(inode, ipage);
|
||||
f2fs_update_inode(inode, ipage);
|
||||
f2fs_put_page(ipage, 1);
|
||||
}
|
||||
|
||||
int recover_xattr_data(struct inode *inode, struct page *page)
|
||||
int f2fs_recover_xattr_data(struct inode *inode, struct page *page)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
nid_t prev_xnid = F2FS_I(inode)->i_xattr_nid;
|
||||
|
@ -2259,25 +2282,25 @@ int recover_xattr_data(struct inode *inode, struct page *page)
|
|||
goto recover_xnid;
|
||||
|
||||
/* 1: invalidate the previous xattr nid */
|
||||
get_node_info(sbi, prev_xnid, &ni);
|
||||
invalidate_blocks(sbi, ni.blk_addr);
|
||||
f2fs_get_node_info(sbi, prev_xnid, &ni);
|
||||
f2fs_invalidate_blocks(sbi, ni.blk_addr);
|
||||
dec_valid_node_count(sbi, inode, false);
|
||||
set_node_addr(sbi, &ni, NULL_ADDR, false);
|
||||
|
||||
recover_xnid:
|
||||
/* 2: update xattr nid in inode */
|
||||
if (!alloc_nid(sbi, &new_xnid))
|
||||
if (!f2fs_alloc_nid(sbi, &new_xnid))
|
||||
return -ENOSPC;
|
||||
|
||||
set_new_dnode(&dn, inode, NULL, NULL, new_xnid);
|
||||
xpage = new_node_page(&dn, XATTR_NODE_OFFSET);
|
||||
xpage = f2fs_new_node_page(&dn, XATTR_NODE_OFFSET);
|
||||
if (IS_ERR(xpage)) {
|
||||
alloc_nid_failed(sbi, new_xnid);
|
||||
f2fs_alloc_nid_failed(sbi, new_xnid);
|
||||
return PTR_ERR(xpage);
|
||||
}
|
||||
|
||||
alloc_nid_done(sbi, new_xnid);
|
||||
update_inode_page(inode);
|
||||
f2fs_alloc_nid_done(sbi, new_xnid);
|
||||
f2fs_update_inode_page(inode);
|
||||
|
||||
/* 3: update and set xattr node page dirty */
|
||||
memcpy(F2FS_NODE(xpage), F2FS_NODE(page), VALID_XATTR_BLOCK_SIZE);
|
||||
|
@ -2288,14 +2311,14 @@ recover_xnid:
|
|||
return 0;
|
||||
}
|
||||
|
||||
int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
|
||||
int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
|
||||
{
|
||||
struct f2fs_inode *src, *dst;
|
||||
nid_t ino = ino_of_node(page);
|
||||
struct node_info old_ni, new_ni;
|
||||
struct page *ipage;
|
||||
|
||||
get_node_info(sbi, ino, &old_ni);
|
||||
f2fs_get_node_info(sbi, ino, &old_ni);
|
||||
|
||||
if (unlikely(old_ni.blk_addr != NULL_ADDR))
|
||||
return -EINVAL;
|
||||
|
@ -2349,7 +2372,7 @@ retry:
|
|||
return 0;
|
||||
}
|
||||
|
||||
void restore_node_summary(struct f2fs_sb_info *sbi,
|
||||
void f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
|
||||
unsigned int segno, struct f2fs_summary_block *sum)
|
||||
{
|
||||
struct f2fs_node *rn;
|
||||
|
@ -2366,10 +2389,10 @@ void restore_node_summary(struct f2fs_sb_info *sbi,
|
|||
nrpages = min(last_offset - i, BIO_MAX_PAGES);
|
||||
|
||||
/* readahead node pages */
|
||||
ra_meta_pages(sbi, addr, nrpages, META_POR, true);
|
||||
f2fs_ra_meta_pages(sbi, addr, nrpages, META_POR, true);
|
||||
|
||||
for (idx = addr; idx < addr + nrpages; idx++) {
|
||||
struct page *page = get_tmp_page(sbi, idx);
|
||||
struct page *page = f2fs_get_tmp_page(sbi, idx);
|
||||
|
||||
rn = F2FS_NODE(page);
|
||||
sum_entry->nid = rn->footer.nid;
|
||||
|
@ -2511,7 +2534,7 @@ static void __flush_nat_entry_set(struct f2fs_sb_info *sbi,
|
|||
f2fs_bug_on(sbi, nat_get_blkaddr(ne) == NEW_ADDR);
|
||||
|
||||
if (to_journal) {
|
||||
offset = lookup_journal_in_cursum(journal,
|
||||
offset = f2fs_lookup_journal_in_cursum(journal,
|
||||
NAT_JOURNAL, nid, 1);
|
||||
f2fs_bug_on(sbi, offset < 0);
|
||||
raw_ne = &nat_in_journal(journal, offset);
|
||||
|
@ -2548,7 +2571,7 @@ static void __flush_nat_entry_set(struct f2fs_sb_info *sbi,
|
|||
/*
|
||||
* This function is called during the checkpointing process.
|
||||
*/
|
||||
void flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
|
||||
void f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
|
||||
{
|
||||
struct f2fs_nm_info *nm_i = NM_I(sbi);
|
||||
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
|
||||
|
@ -2611,7 +2634,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
|
|||
nat_bits_addr = __start_cp_addr(sbi) + sbi->blocks_per_seg -
|
||||
nm_i->nat_bits_blocks;
|
||||
for (i = 0; i < nm_i->nat_bits_blocks; i++) {
|
||||
struct page *page = get_meta_page(sbi, nat_bits_addr++);
|
||||
struct page *page = f2fs_get_meta_page(sbi, nat_bits_addr++);
|
||||
|
||||
memcpy(nm_i->nat_bits + (i << F2FS_BLKSIZE_BITS),
|
||||
page_address(page), F2FS_BLKSIZE);
|
||||
|
@ -2754,7 +2777,7 @@ static int init_free_nid_cache(struct f2fs_sb_info *sbi)
|
|||
return 0;
|
||||
}
|
||||
|
||||
int build_node_manager(struct f2fs_sb_info *sbi)
|
||||
int f2fs_build_node_manager(struct f2fs_sb_info *sbi)
|
||||
{
|
||||
int err;
|
||||
|
||||
|
@ -2774,11 +2797,11 @@ int build_node_manager(struct f2fs_sb_info *sbi)
|
|||
/* load free nid status from nat_bits table */
|
||||
load_free_nid_bitmap(sbi);
|
||||
|
||||
build_free_nids(sbi, true, true);
|
||||
f2fs_build_free_nids(sbi, true, true);
|
||||
return 0;
|
||||
}
|
||||
|
||||
void destroy_node_manager(struct f2fs_sb_info *sbi)
|
||||
void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi)
|
||||
{
|
||||
struct f2fs_nm_info *nm_i = NM_I(sbi);
|
||||
struct free_nid *i, *next_i;
|
||||
|
@ -2850,7 +2873,7 @@ void destroy_node_manager(struct f2fs_sb_info *sbi)
|
|||
kfree(nm_i);
|
||||
}
|
||||
|
||||
int __init create_node_manager_caches(void)
|
||||
int __init f2fs_create_node_manager_caches(void)
|
||||
{
|
||||
nat_entry_slab = f2fs_kmem_cache_create("nat_entry",
|
||||
sizeof(struct nat_entry));
|
||||
|
@ -2876,7 +2899,7 @@ fail:
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
void destroy_node_manager_caches(void)
|
||||
void f2fs_destroy_node_manager_caches(void)
|
||||
{
|
||||
kmem_cache_destroy(nat_entry_set_slab);
|
||||
kmem_cache_destroy(free_nid_slab);
|
||||
|
|
|
@ -47,7 +47,7 @@
|
|||
|
||||
static struct kmem_cache *fsync_entry_slab;
|
||||
|
||||
bool space_for_roll_forward(struct f2fs_sb_info *sbi)
|
||||
bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
|
||||
{
|
||||
s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
|
||||
|
||||
|
@ -162,7 +162,7 @@ retry:
|
|||
goto out_put;
|
||||
}
|
||||
|
||||
err = acquire_orphan_inode(F2FS_I_SB(inode));
|
||||
err = f2fs_acquire_orphan_inode(F2FS_I_SB(inode));
|
||||
if (err) {
|
||||
iput(einode);
|
||||
goto out_put;
|
||||
|
@ -173,7 +173,7 @@ retry:
|
|||
} else if (IS_ERR(page)) {
|
||||
err = PTR_ERR(page);
|
||||
} else {
|
||||
err = __f2fs_do_add_link(dir, &fname, inode,
|
||||
err = f2fs_add_dentry(dir, &fname, inode,
|
||||
inode->i_ino, inode->i_mode);
|
||||
}
|
||||
if (err == -ENOMEM)
|
||||
|
@ -204,8 +204,6 @@ static void recover_inline_flags(struct inode *inode, struct f2fs_inode *ri)
|
|||
set_inode_flag(inode, FI_DATA_EXIST);
|
||||
else
|
||||
clear_inode_flag(inode, FI_DATA_EXIST);
|
||||
if (!(ri->i_inline & F2FS_INLINE_DOTS))
|
||||
clear_inode_flag(inode, FI_INLINE_DOTS);
|
||||
}
|
||||
|
||||
static void recover_inode(struct inode *inode, struct page *page)
|
||||
|
@ -254,10 +252,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
|
|||
while (1) {
|
||||
struct fsync_inode_entry *entry;
|
||||
|
||||
if (!is_valid_blkaddr(sbi, blkaddr, META_POR))
|
||||
if (!f2fs_is_valid_meta_blkaddr(sbi, blkaddr, META_POR))
|
||||
return 0;
|
||||
|
||||
page = get_tmp_page(sbi, blkaddr);
|
||||
page = f2fs_get_tmp_page(sbi, blkaddr);
|
||||
|
||||
if (!is_recoverable_dnode(page))
|
||||
break;
|
||||
|
@ -271,7 +269,7 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
|
|||
|
||||
if (!check_only &&
|
||||
IS_INODE(page) && is_dent_dnode(page)) {
|
||||
err = recover_inode_page(sbi, page);
|
||||
err = f2fs_recover_inode_page(sbi, page);
|
||||
if (err)
|
||||
break;
|
||||
quota_inode = true;
|
||||
|
@ -312,7 +310,7 @@ next:
|
|||
blkaddr = next_blkaddr_of_node(page);
|
||||
f2fs_put_page(page, 1);
|
||||
|
||||
ra_meta_pages_cond(sbi, blkaddr);
|
||||
f2fs_ra_meta_pages_cond(sbi, blkaddr);
|
||||
}
|
||||
f2fs_put_page(page, 1);
|
||||
return err;
|
||||
|
@ -355,7 +353,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
|
|||
}
|
||||
}
|
||||
|
||||
sum_page = get_sum_page(sbi, segno);
|
||||
sum_page = f2fs_get_sum_page(sbi, segno);
|
||||
sum_node = (struct f2fs_summary_block *)page_address(sum_page);
|
||||
sum = sum_node->entries[blkoff];
|
||||
f2fs_put_page(sum_page, 1);
|
||||
|
@ -375,7 +373,7 @@ got_it:
|
|||
}
|
||||
|
||||
/* Get the node page */
|
||||
node_page = get_node_page(sbi, nid);
|
||||
node_page = f2fs_get_node_page(sbi, nid);
|
||||
 	if (IS_ERR(node_page))
 		return PTR_ERR(node_page);
 
@@ -400,7 +398,8 @@ got_it:
 		inode = dn->inode;
 	}
 
-	bidx = start_bidx_of_node(offset, inode) + le16_to_cpu(sum.ofs_in_node);
+	bidx = f2fs_start_bidx_of_node(offset, inode) +
+				le16_to_cpu(sum.ofs_in_node);
 
 	/*
 	 * if inode page is locked, unlock temporarily, but its reference
@@ -410,11 +409,11 @@ got_it:
 		unlock_page(dn->inode_page);
 
 	set_new_dnode(&tdn, inode, NULL, NULL, 0);
-	if (get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
+	if (f2fs_get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
 		goto out;
 
 	if (tdn.data_blkaddr == blkaddr)
-		truncate_data_blocks_range(&tdn, 1);
+		f2fs_truncate_data_blocks_range(&tdn, 1);
 
 	f2fs_put_dnode(&tdn);
 out:
@@ -427,7 +426,7 @@ out:
 truncate_out:
 	if (datablock_addr(tdn.inode, tdn.node_page,
 					tdn.ofs_in_node) == blkaddr)
-		truncate_data_blocks_range(&tdn, 1);
+		f2fs_truncate_data_blocks_range(&tdn, 1);
 	if (dn->inode->i_ino == nid && !dn->inode_page_locked)
 		unlock_page(dn->inode_page);
 	return 0;
@@ -443,25 +442,25 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 
 	/* step 1: recover xattr */
 	if (IS_INODE(page)) {
-		recover_inline_xattr(inode, page);
+		f2fs_recover_inline_xattr(inode, page);
 	} else if (f2fs_has_xattr_block(ofs_of_node(page))) {
-		err = recover_xattr_data(inode, page);
+		err = f2fs_recover_xattr_data(inode, page);
 		if (!err)
 			recovered++;
 		goto out;
 	}
 
 	/* step 2: recover inline data */
-	if (recover_inline_data(inode, page))
+	if (f2fs_recover_inline_data(inode, page))
 		goto out;
 
 	/* step 3: recover data indices */
-	start = start_bidx_of_node(ofs_of_node(page), inode);
+	start = f2fs_start_bidx_of_node(ofs_of_node(page), inode);
 	end = start + ADDRS_PER_PAGE(page, inode);
 
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
 retry_dn:
-	err = get_dnode_of_data(&dn, start, ALLOC_NODE);
+	err = f2fs_get_dnode_of_data(&dn, start, ALLOC_NODE);
 	if (err) {
 		if (err == -ENOMEM) {
 			congestion_wait(BLK_RW_ASYNC, HZ/50);
@@ -472,7 +471,7 @@ retry_dn:
 
 	f2fs_wait_on_page_writeback(dn.node_page, NODE, true);
 
-	get_node_info(sbi, dn.nid, &ni);
+	f2fs_get_node_info(sbi, dn.nid, &ni);
 	f2fs_bug_on(sbi, ni.ino != ino_of_node(page));
 	f2fs_bug_on(sbi, ofs_of_node(dn.node_page) != ofs_of_node(page));
 
@@ -488,7 +487,7 @@ retry_dn:
 
 		/* dest is invalid, just invalidate src block */
 		if (dest == NULL_ADDR) {
-			truncate_data_blocks_range(&dn, 1);
+			f2fs_truncate_data_blocks_range(&dn, 1);
 			continue;
 		}
 
@@ -502,19 +501,19 @@ retry_dn:
 		 * and then reserve one new block in dnode page.
 		 */
 		if (dest == NEW_ADDR) {
-			truncate_data_blocks_range(&dn, 1);
-			reserve_new_block(&dn);
+			f2fs_truncate_data_blocks_range(&dn, 1);
+			f2fs_reserve_new_block(&dn);
 			continue;
 		}
 
 		/* dest is valid block, try to recover from src to dest */
-		if (is_valid_blkaddr(sbi, dest, META_POR)) {
+		if (f2fs_is_valid_meta_blkaddr(sbi, dest, META_POR)) {
 
 			if (src == NULL_ADDR) {
-				err = reserve_new_block(&dn);
+				err = f2fs_reserve_new_block(&dn);
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 				while (err)
-					err = reserve_new_block(&dn);
+					err = f2fs_reserve_new_block(&dn);
 #endif
 				/* We should not get -ENOSPC */
 				f2fs_bug_on(sbi, err);
@@ -569,12 +568,12 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
 	while (1) {
 		struct fsync_inode_entry *entry;
 
-		if (!is_valid_blkaddr(sbi, blkaddr, META_POR))
+		if (!f2fs_is_valid_meta_blkaddr(sbi, blkaddr, META_POR))
 			break;
 
-		ra_meta_pages_cond(sbi, blkaddr);
+		f2fs_ra_meta_pages_cond(sbi, blkaddr);
 
-		page = get_tmp_page(sbi, blkaddr);
+		page = f2fs_get_tmp_page(sbi, blkaddr);
 
 		if (!is_recoverable_dnode(page)) {
 			f2fs_put_page(page, 1);
@@ -612,11 +611,11 @@ next:
 		f2fs_put_page(page, 1);
 	}
 	if (!err)
-		allocate_new_segments(sbi);
+		f2fs_allocate_new_segments(sbi);
 	return err;
 }
 
-int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
+int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 {
 	struct list_head inode_list;
 	struct list_head dir_list;
@@ -691,7 +690,7 @@ skip:
 		struct cp_control cpc = {
 			.reason = CP_RECOVERY,
 		};
-		err = write_checkpoint(sbi, &cpc);
+		err = f2fs_write_checkpoint(sbi, &cpc);
 	}
 
 	kmem_cache_destroy(fsync_entry_slab);
[diff for one file suppressed: too large to display]
@@ -85,7 +85,7 @@
 	(GET_SEGOFF_FROM_SEG0(sbi, blk_addr) & ((sbi)->blocks_per_seg - 1))
 
 #define GET_SEGNO(sbi, blk_addr)					\
-	((((blk_addr) == NULL_ADDR) || ((blk_addr) == NEW_ADDR)) ?	\
+	((!is_valid_blkaddr(blk_addr)) ?				\
 	NULL_SEGNO : GET_L2R_SEGNO(FREE_I(sbi),				\
 		GET_SEGNO_FROM_SEG0(sbi, blk_addr)))
 #define BLKS_PER_SEC(sbi)					\
@@ -215,6 +215,8 @@ struct segment_allocation {
 #define IS_DUMMY_WRITTEN_PAGE(page)			\
 		(page_private(page) == (unsigned long)DUMMY_WRITTEN_PAGE)
 
+#define MAX_SKIP_ATOMIC_COUNT			16
+
 struct inmem_pages {
 	struct list_head list;
 	struct page *page;
@@ -375,6 +377,7 @@ static inline void seg_info_to_sit_page(struct f2fs_sb_info *sbi,
 	int i;
 
 	raw_sit = (struct f2fs_sit_block *)page_address(page);
+	memset(raw_sit, 0, PAGE_SIZE);
 	for (i = 0; i < end - start; i++) {
 		rs = &raw_sit->entries[i];
 		se = get_seg_entry(sbi, start + i);
@@ -742,12 +745,23 @@ static inline void set_to_next_sit(struct sit_info *sit_i, unsigned int start)
 #endif
 }
 
-static inline unsigned long long get_mtime(struct f2fs_sb_info *sbi)
+static inline unsigned long long get_mtime(struct f2fs_sb_info *sbi,
+						bool base_time)
 {
 	struct sit_info *sit_i = SIT_I(sbi);
-	time64_t now = ktime_get_real_seconds();
+	time64_t diff, now = ktime_get_real_seconds();
 
-	return sit_i->elapsed_time + now - sit_i->mounted_time;
+	if (now >= sit_i->mounted_time)
+		return sit_i->elapsed_time + now - sit_i->mounted_time;
+
+	/* system time is set to the past */
+	if (!base_time) {
+		diff = sit_i->mounted_time - now;
+		if (sit_i->elapsed_time >= diff)
+			return sit_i->elapsed_time - diff;
+		return 0;
+	}
+	return sit_i->elapsed_time;
 }
 
 static inline void set_summary(struct f2fs_summary *sum, nid_t nid,
@@ -771,15 +785,6 @@ static inline block_t sum_blk_addr(struct f2fs_sb_info *sbi, int base, int type)
 			- (base + 1) + type;
 }
 
-static inline bool no_fggc_candidate(struct f2fs_sb_info *sbi,
-						unsigned int secno)
-{
-	if (get_valid_blocks(sbi, GET_SEG_FROM_SEC(sbi, secno), true) >
-						sbi->fggc_threshold)
-		return true;
-	return false;
-}
-
 static inline bool sec_usage_check(struct f2fs_sb_info *sbi, unsigned int secno)
 {
 	if (IS_CURSEC(sbi, secno) || (sbi->cur_victim_sec == secno))
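The reworked get_mtime() in the hunk above stops the segment mtime from underflowing when the wall clock is set backwards after mount. A minimal standalone sketch of the same arithmetic (plain C with a hypothetical `mount_clock` struct, not the kernel helper itself):

```c
#include <stdint.h>

/* Sketch of the elapsed-time logic from the reworked get_mtime():
 * elapsed_time is the time accumulated before this mount, and
 * mounted_time is the wall-clock second recorded at mount. */
struct mount_clock {
	int64_t elapsed_time;	/* seconds accumulated across mounts */
	int64_t mounted_time;	/* wall clock at mount time */
};

static int64_t mount_mtime(const struct mount_clock *c, int64_t now,
			   int base_time)
{
	/* normal case: clock moved forward since mount */
	if (now >= c->mounted_time)
		return c->elapsed_time + now - c->mounted_time;

	/* wall clock was set to the past after mount */
	if (!base_time) {
		int64_t diff = c->mounted_time - now;
		/* subtract the skew, but never go negative */
		return c->elapsed_time >= diff ? c->elapsed_time - diff : 0;
	}
	/* base_time callers want a stable floor */
	return c->elapsed_time;
}
```

Before this change the unconditional `elapsed + now - mounted_time` could wrap when `now < mounted_time`, corrupting segment age ordering for GC victim selection.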
@@ -109,11 +109,11 @@ unsigned long f2fs_shrink_scan(struct shrinker *shrink,
 
 		/* shrink clean nat cache entries */
 		if (freed < nr)
-			freed += try_to_free_nats(sbi, nr - freed);
+			freed += f2fs_try_to_free_nats(sbi, nr - freed);
 
 		/* shrink free nids cache entries */
 		if (freed < nr)
-			freed += try_to_free_nids(sbi, nr - freed);
+			freed += f2fs_try_to_free_nids(sbi, nr - freed);
 
 		spin_lock(&f2fs_list_lock);
 		p = p->next;
fs/f2fs/super.c (198 lines changed)
@@ -740,6 +740,10 @@ static int parse_options(struct super_block *sb, char *options)
 			} else if (strlen(name) == 6 &&
 					!strncmp(name, "strict", 6)) {
 				F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_STRICT;
+			} else if (strlen(name) == 9 &&
+					!strncmp(name, "nobarrier", 9)) {
+				F2FS_OPTION(sbi).fsync_mode =
+							FSYNC_MODE_NOBARRIER;
 			} else {
 				kfree(name);
 				return -EINVAL;
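The new fsync_mode=nobarrier branch above follows parse_options()'s pattern of checking strlen() before strncmp(), so a value like "strictest" cannot match "strict" by prefix. A small userspace sketch of that exact-token matching (the "posix" value is shown for completeness; enum values here are illustrative):

```c
#include <string.h>

enum fsync_mode { FSYNC_MODE_POSIX, FSYNC_MODE_STRICT, FSYNC_MODE_NOBARRIER };

/* Match the option value exactly: strncmp() alone would accept any
 * string that merely starts with the token, so the length is compared
 * first, mirroring the parse_options() hunk above. */
static int parse_fsync_mode(const char *name, enum fsync_mode *mode)
{
	if (strlen(name) == 5 && !strncmp(name, "posix", 5))
		*mode = FSYNC_MODE_POSIX;
	else if (strlen(name) == 6 && !strncmp(name, "strict", 6))
		*mode = FSYNC_MODE_STRICT;
	else if (strlen(name) == 9 && !strncmp(name, "nobarrier", 9))
		*mode = FSYNC_MODE_NOBARRIER;
	else
		return -1;	/* -EINVAL in the kernel */
	return 0;
}
```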
@@ -826,15 +830,14 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
 
 	/* Initialize f2fs-specific inode info */
 	atomic_set(&fi->dirty_pages, 0);
-	fi->i_current_depth = 1;
 	init_rwsem(&fi->i_sem);
 	INIT_LIST_HEAD(&fi->dirty_list);
 	INIT_LIST_HEAD(&fi->gdirty_list);
 	INIT_LIST_HEAD(&fi->inmem_ilist);
 	INIT_LIST_HEAD(&fi->inmem_pages);
 	mutex_init(&fi->inmem_lock);
-	init_rwsem(&fi->dio_rwsem[READ]);
-	init_rwsem(&fi->dio_rwsem[WRITE]);
+	init_rwsem(&fi->i_gc_rwsem[READ]);
+	init_rwsem(&fi->i_gc_rwsem[WRITE]);
 	init_rwsem(&fi->i_mmap_sem);
 	init_rwsem(&fi->i_xattr_sem);
 
@@ -862,7 +865,7 @@ static int f2fs_drop_inode(struct inode *inode)
 
 	/* some remained atomic pages should discarded */
 	if (f2fs_is_atomic_file(inode))
-		drop_inmem_pages(inode);
+		f2fs_drop_inmem_pages(inode);
 
 	/* should remain fi->extent_tree for writepage */
 	f2fs_destroy_extent_node(inode);
@@ -999,7 +1002,7 @@ static void f2fs_put_super(struct super_block *sb)
 		struct cp_control cpc = {
 			.reason = CP_UMOUNT,
 		};
-		write_checkpoint(sbi, &cpc);
+		f2fs_write_checkpoint(sbi, &cpc);
 	}
 
 	/* be sure to wait for any on-going discard commands */
@@ -1009,17 +1012,17 @@ static void f2fs_put_super(struct super_block *sb)
 		struct cp_control cpc = {
 			.reason = CP_UMOUNT | CP_TRIMMED,
 		};
-		write_checkpoint(sbi, &cpc);
+		f2fs_write_checkpoint(sbi, &cpc);
 	}
 
-	/* write_checkpoint can update stat informaion */
+	/* f2fs_write_checkpoint can update stat informaion */
 	f2fs_destroy_stats(sbi);
 
 	/*
 	 * normally superblock is clean, so we need to release this.
 	 * In addition, EIO will skip do checkpoint, we need this as well.
 	 */
-	release_ino_entry(sbi, true);
+	f2fs_release_ino_entry(sbi, true);
 
 	f2fs_leave_shrinker(sbi);
 	mutex_unlock(&sbi->umount_mutex);
@@ -1031,8 +1034,8 @@ static void f2fs_put_super(struct super_block *sb)
 	iput(sbi->meta_inode);
 
 	/* destroy f2fs internal modules */
-	destroy_node_manager(sbi);
-	destroy_segment_manager(sbi);
+	f2fs_destroy_node_manager(sbi);
+	f2fs_destroy_segment_manager(sbi);
 
 	kfree(sbi->ckpt);
 
@@ -1074,7 +1077,7 @@ int f2fs_sync_fs(struct super_block *sb, int sync)
 		cpc.reason = __get_cp_reason(sbi);
 
 		mutex_lock(&sbi->gc_mutex);
-		err = write_checkpoint(sbi, &cpc);
+		err = f2fs_write_checkpoint(sbi, &cpc);
 		mutex_unlock(&sbi->gc_mutex);
 	}
 	f2fs_trace_ios(NULL, 1);
@@ -1477,11 +1480,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 	 */
 	if ((*flags & SB_RDONLY) || !test_opt(sbi, BG_GC)) {
 		if (sbi->gc_thread) {
-			stop_gc_thread(sbi);
+			f2fs_stop_gc_thread(sbi);
 			need_restart_gc = true;
 		}
 	} else if (!sbi->gc_thread) {
-		err = start_gc_thread(sbi);
+		err = f2fs_start_gc_thread(sbi);
 		if (err)
 			goto restore_opts;
 		need_stop_gc = true;
@@ -1504,9 +1507,9 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 	 */
 	if ((*flags & SB_RDONLY) || !test_opt(sbi, FLUSH_MERGE)) {
 		clear_opt(sbi, FLUSH_MERGE);
-		destroy_flush_cmd_control(sbi, false);
+		f2fs_destroy_flush_cmd_control(sbi, false);
 	} else {
-		err = create_flush_cmd_control(sbi);
+		err = f2fs_create_flush_cmd_control(sbi);
 		if (err)
 			goto restore_gc;
 	}
@@ -1524,11 +1527,11 @@ skip:
 	return 0;
 restore_gc:
 	if (need_restart_gc) {
-		if (start_gc_thread(sbi))
+		if (f2fs_start_gc_thread(sbi))
 			f2fs_msg(sbi->sb, KERN_WARNING,
 				"background gc thread has stopped");
 	} else if (need_stop_gc) {
-		stop_gc_thread(sbi);
+		f2fs_stop_gc_thread(sbi);
 	}
 restore_opts:
 #ifdef CONFIG_QUOTA
@@ -1800,7 +1803,7 @@ static int f2fs_quota_on(struct super_block *sb, int type, int format_id,
 	inode = d_inode(path->dentry);
 
 	inode_lock(inode);
-	F2FS_I(inode)->i_flags |= FS_NOATIME_FL | FS_IMMUTABLE_FL;
+	F2FS_I(inode)->i_flags |= F2FS_NOATIME_FL | F2FS_IMMUTABLE_FL;
 	inode_set_flags(inode, S_NOATIME | S_IMMUTABLE,
 					S_NOATIME | S_IMMUTABLE);
 	inode_unlock(inode);
@@ -1824,7 +1827,7 @@ static int f2fs_quota_off(struct super_block *sb, int type)
 		goto out_put;
 
 	inode_lock(inode);
-	F2FS_I(inode)->i_flags &= ~(FS_NOATIME_FL | FS_IMMUTABLE_FL);
+	F2FS_I(inode)->i_flags &= ~(F2FS_NOATIME_FL | F2FS_IMMUTABLE_FL);
 	inode_set_flags(inode, 0, S_NOATIME | S_IMMUTABLE);
 	inode_unlock(inode);
 	f2fs_mark_inode_dirty_sync(inode, false);
@@ -1946,7 +1949,7 @@ static struct inode *f2fs_nfs_get_inode(struct super_block *sb,
 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
 	struct inode *inode;
 
-	if (check_nid_range(sbi, ino))
+	if (f2fs_check_nid_range(sbi, ino))
 		return ERR_PTR(-ESTALE);
 
 	/*
@@ -2129,6 +2132,8 @@ static inline bool sanity_check_area_boundary(struct f2fs_sb_info *sbi,
 static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
 				struct buffer_head *bh)
 {
+	block_t segment_count, segs_per_sec, secs_per_zone;
+	block_t total_sections, blocks_per_seg;
 	struct f2fs_super_block *raw_super = (struct f2fs_super_block *)
 					(bh->b_data + F2FS_SUPER_OFFSET);
 	struct super_block *sb = sbi->sb;
@@ -2185,6 +2190,72 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
 		return 1;
 	}
 
+	segment_count = le32_to_cpu(raw_super->segment_count);
+	segs_per_sec = le32_to_cpu(raw_super->segs_per_sec);
+	secs_per_zone = le32_to_cpu(raw_super->secs_per_zone);
+	total_sections = le32_to_cpu(raw_super->section_count);
+
+	/* blocks_per_seg should be 512, given the above check */
+	blocks_per_seg = 1 << le32_to_cpu(raw_super->log_blocks_per_seg);
+
+	if (segment_count > F2FS_MAX_SEGMENT ||
+			segment_count < F2FS_MIN_SEGMENTS) {
+		f2fs_msg(sb, KERN_INFO,
+			"Invalid segment count (%u)",
+			segment_count);
+		return 1;
+	}
+
+	if (total_sections > segment_count ||
+			total_sections < F2FS_MIN_SEGMENTS ||
+			segs_per_sec > segment_count || !segs_per_sec) {
+		f2fs_msg(sb, KERN_INFO,
+			"Invalid segment/section count (%u, %u x %u)",
+			segment_count, total_sections, segs_per_sec);
+		return 1;
+	}
+
+	if ((segment_count / segs_per_sec) < total_sections) {
+		f2fs_msg(sb, KERN_INFO,
+			"Small segment_count (%u < %u * %u)",
+			segment_count, segs_per_sec, total_sections);
+		return 1;
+	}
+
+	if (segment_count > (le32_to_cpu(raw_super->block_count) >> 9)) {
+		f2fs_msg(sb, KERN_INFO,
+			"Wrong segment_count / block_count (%u > %u)",
+			segment_count, le32_to_cpu(raw_super->block_count));
+		return 1;
+	}
+
+	if (secs_per_zone > total_sections) {
+		f2fs_msg(sb, KERN_INFO,
+			"Wrong secs_per_zone (%u > %u)",
+			secs_per_zone, total_sections);
+		return 1;
+	}
+	if (le32_to_cpu(raw_super->extension_count) > F2FS_MAX_EXTENSION ||
+			raw_super->hot_ext_count > F2FS_MAX_EXTENSION ||
+			(le32_to_cpu(raw_super->extension_count) +
+			raw_super->hot_ext_count) > F2FS_MAX_EXTENSION) {
+		f2fs_msg(sb, KERN_INFO,
+			"Corrupted extension count (%u + %u > %u)",
+			le32_to_cpu(raw_super->extension_count),
+			raw_super->hot_ext_count,
+			F2FS_MAX_EXTENSION);
+		return 1;
+	}
+
+	if (le32_to_cpu(raw_super->cp_payload) >
+				(blocks_per_seg - F2FS_CP_PACKS)) {
+		f2fs_msg(sb, KERN_INFO,
+			"Insane cp_payload (%u > %u)",
+			le32_to_cpu(raw_super->cp_payload),
+			blocks_per_seg - F2FS_CP_PACKS);
+		return 1;
+	}
+
 	/* check reserved ino info */
 	if (le32_to_cpu(raw_super->node_ino) != 1 ||
 		le32_to_cpu(raw_super->meta_ino) != 2 ||
@@ -2197,13 +2268,6 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
 		return 1;
 	}
 
-	if (le32_to_cpu(raw_super->segment_count) > F2FS_MAX_SEGMENT) {
-		f2fs_msg(sb, KERN_INFO,
-			"Invalid segment count (%u)",
-			le32_to_cpu(raw_super->segment_count));
-		return 1;
-	}
-
 	/* check CP/SIT/NAT/SSA/MAIN_AREA area boundary */
 	if (sanity_check_area_boundary(sbi, bh))
 		return 1;
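The checks added to sanity_check_raw_super() above validate the on-disk geometry fields against each other before any of them is used, and note that the cross-check divides (`segment_count / segs_per_sec`) rather than multiplying, so a crafted image cannot trigger an integer overflow. A standalone sketch of that ordering (the constants here are illustrative, not the kernel's):

```c
#include <stdint.h>

#define MIN_SEGMENTS 9		/* illustrative floor, not F2FS_MIN_SEGMENTS */
#define MAX_SEGMENT 0x200000	/* illustrative cap, not F2FS_MAX_SEGMENT */

/* Sketch of the geometry validation added to sanity_check_raw_super():
 * reject zero divisors first, bound each count, then cross-check with
 * division so segs_per_sec * total_sections is never computed. */
static int geometry_ok(uint32_t segment_count, uint32_t segs_per_sec,
		       uint32_t total_sections)
{
	if (segment_count > MAX_SEGMENT || segment_count < MIN_SEGMENTS)
		return 0;
	if (!segs_per_sec || segs_per_sec > segment_count)
		return 0;
	if (total_sections > segment_count || total_sections < MIN_SEGMENTS)
		return 0;
	/* divide, don't multiply: immune to 32-bit overflow */
	if (segment_count / segs_per_sec < total_sections)
		return 0;
	return 1;
}
```

This is the syzbot-driven hardening mentioned in the merge summary: every field read from the raw superblock is treated as attacker-controlled until it passes these checks.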
@@ -2211,7 +2275,7 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
 	return 0;
 }
 
-int sanity_check_ckpt(struct f2fs_sb_info *sbi)
+int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
 {
 	unsigned int total, fsmeta;
 	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
@@ -2292,13 +2356,15 @@ static void init_sb_info(struct f2fs_sb_info *sbi)
 	for (i = 0; i < NR_COUNT_TYPE; i++)
 		atomic_set(&sbi->nr_pages[i], 0);
 
-	atomic_set(&sbi->wb_sync_req, 0);
+	for (i = 0; i < META; i++)
+		atomic_set(&sbi->wb_sync_req[i], 0);
 
 	INIT_LIST_HEAD(&sbi->s_list);
 	mutex_init(&sbi->umount_mutex);
 	for (i = 0; i < NR_PAGE_TYPE - 1; i++)
 		for (j = HOT; j < NR_TEMP_TYPE; j++)
 			mutex_init(&sbi->wio_mutex[i][j]);
+	init_rwsem(&sbi->io_order_lock);
 	spin_lock_init(&sbi->cp_lock);
 
 	sbi->dirty_device = 0;
@@ -2759,7 +2825,7 @@ try_onemore:
 		goto free_io_dummy;
 	}
 
-	err = get_valid_checkpoint(sbi);
+	err = f2fs_get_valid_checkpoint(sbi);
 	if (err) {
 		f2fs_msg(sb, KERN_ERR, "Failed to get valid F2FS checkpoint");
 		goto free_meta_inode;
@@ -2789,18 +2855,18 @@ try_onemore:
 		spin_lock_init(&sbi->inode_lock[i]);
 	}
 
-	init_extent_cache_info(sbi);
+	f2fs_init_extent_cache_info(sbi);
 
-	init_ino_entry_info(sbi);
+	f2fs_init_ino_entry_info(sbi);
 
 	/* setup f2fs internal modules */
-	err = build_segment_manager(sbi);
+	err = f2fs_build_segment_manager(sbi);
 	if (err) {
 		f2fs_msg(sb, KERN_ERR,
 			"Failed to initialize F2FS segment manager");
 		goto free_sm;
 	}
-	err = build_node_manager(sbi);
+	err = f2fs_build_node_manager(sbi);
 	if (err) {
 		f2fs_msg(sb, KERN_ERR,
 			"Failed to initialize F2FS node manager");
@@ -2818,7 +2884,7 @@ try_onemore:
 	sbi->kbytes_written =
 		le64_to_cpu(seg_i->journal->info.kbytes_written);
 
-	build_gc_manager(sbi);
+	f2fs_build_gc_manager(sbi);
 
 	/* get an inode for node space */
 	sbi->node_inode = f2fs_iget(sb, F2FS_NODE_INO(sbi));
@@ -2870,7 +2936,7 @@ try_onemore:
 	}
 #endif
 	/* if there are nt orphan nodes free them */
-	err = recover_orphan_inodes(sbi);
+	err = f2fs_recover_orphan_inodes(sbi);
 	if (err)
 		goto free_meta;
 
@@ -2892,7 +2958,7 @@ try_onemore:
 		if (!retry)
 			goto skip_recovery;
 
-		err = recover_fsync_data(sbi, false);
+		err = f2fs_recover_fsync_data(sbi, false);
 		if (err < 0) {
 			need_fsck = true;
 			f2fs_msg(sb, KERN_ERR,
@@ -2900,7 +2966,7 @@ try_onemore:
 			goto free_meta;
 		}
 	} else {
-		err = recover_fsync_data(sbi, true);
+		err = f2fs_recover_fsync_data(sbi, true);
 
 		if (!f2fs_readonly(sb) && err > 0) {
 			err = -EINVAL;
@@ -2910,7 +2976,7 @@ try_onemore:
 		}
 	}
 skip_recovery:
-	/* recover_fsync_data() cleared this already */
+	/* f2fs_recover_fsync_data() cleared this already */
 	clear_sbi_flag(sbi, SBI_POR_DOING);
 
 	/*
@@ -2919,7 +2985,7 @@ skip_recovery:
 	 */
 	if (test_opt(sbi, BG_GC) && !f2fs_readonly(sb)) {
 		/* After POR, we can run background GC thread.*/
-		err = start_gc_thread(sbi);
+		err = f2fs_start_gc_thread(sbi);
 		if (err)
 			goto free_meta;
 	}
@@ -2950,10 +3016,10 @@ free_meta:
 #endif
 	f2fs_sync_inode_meta(sbi);
 	/*
-	 * Some dirty meta pages can be produced by recover_orphan_inodes()
+	 * Some dirty meta pages can be produced by f2fs_recover_orphan_inodes()
 	 * failed by EIO. Then, iput(node_inode) can trigger balance_fs_bg()
-	 * followed by write_checkpoint() through f2fs_write_node_pages(), which
-	 * falls into an infinite loop in sync_meta_pages().
+	 * followed by f2fs_write_checkpoint() through f2fs_write_node_pages(), which
+	 * falls into an infinite loop in f2fs_sync_meta_pages().
 	 */
 	truncate_inode_pages_final(META_MAPPING(sbi));
 #ifdef CONFIG_QUOTA
@@ -2966,13 +3032,13 @@ free_root_inode:
 free_stats:
 	f2fs_destroy_stats(sbi);
 free_node_inode:
-	release_ino_entry(sbi, true);
+	f2fs_release_ino_entry(sbi, true);
 	truncate_inode_pages_final(NODE_MAPPING(sbi));
 	iput(sbi->node_inode);
 free_nm:
-	destroy_node_manager(sbi);
+	f2fs_destroy_node_manager(sbi);
 free_sm:
-	destroy_segment_manager(sbi);
+	f2fs_destroy_segment_manager(sbi);
 free_devices:
 	destroy_device_list(sbi);
 	kfree(sbi->ckpt);
@@ -3018,8 +3084,8 @@ static void kill_f2fs_super(struct super_block *sb)
 {
 	if (sb->s_root) {
 		set_sbi_flag(F2FS_SB(sb), SBI_IS_CLOSE);
-		stop_gc_thread(F2FS_SB(sb));
-		stop_discard_thread(F2FS_SB(sb));
+		f2fs_stop_gc_thread(F2FS_SB(sb));
+		f2fs_stop_discard_thread(F2FS_SB(sb));
 	}
 	kill_block_super(sb);
 }
@@ -3057,21 +3123,27 @@ static int __init init_f2fs_fs(void)
 {
 	int err;
 
+	if (PAGE_SIZE != F2FS_BLKSIZE) {
+		printk("F2FS not supported on PAGE_SIZE(%lu) != %d\n",
+				PAGE_SIZE, F2FS_BLKSIZE);
+		return -EINVAL;
+	}
+
 	f2fs_build_trace_ios();
 
 	err = init_inodecache();
 	if (err)
 		goto fail;
-	err = create_node_manager_caches();
+	err = f2fs_create_node_manager_caches();
 	if (err)
 		goto free_inodecache;
-	err = create_segment_manager_caches();
+	err = f2fs_create_segment_manager_caches();
 	if (err)
 		goto free_node_manager_caches;
-	err = create_checkpoint_caches();
+	err = f2fs_create_checkpoint_caches();
 	if (err)
 		goto free_segment_manager_caches;
-	err = create_extent_cache();
+	err = f2fs_create_extent_cache();
 	if (err)
 		goto free_checkpoint_caches;
 	err = f2fs_init_sysfs();
@@ -3086,8 +3158,13 @@ static int __init init_f2fs_fs(void)
 	err = f2fs_create_root_stats();
 	if (err)
 		goto free_filesystem;
+	err = f2fs_init_post_read_processing();
+	if (err)
+		goto free_root_stats;
 	return 0;
 
+free_root_stats:
+	f2fs_destroy_root_stats();
 free_filesystem:
 	unregister_filesystem(&f2fs_fs_type);
 free_shrinker:
@@ -3095,13 +3172,13 @@ free_shrinker:
 free_sysfs:
 	f2fs_exit_sysfs();
 free_extent_cache:
-	destroy_extent_cache();
+	f2fs_destroy_extent_cache();
 free_checkpoint_caches:
-	destroy_checkpoint_caches();
+	f2fs_destroy_checkpoint_caches();
free_segment_manager_caches:
-	destroy_segment_manager_caches();
+	f2fs_destroy_segment_manager_caches();
 free_node_manager_caches:
-	destroy_node_manager_caches();
+	f2fs_destroy_node_manager_caches();
 free_inodecache:
 	destroy_inodecache();
 fail:
@@ -3110,14 +3187,15 @@ fail:
 
 static void __exit exit_f2fs_fs(void)
 {
+	f2fs_destroy_post_read_processing();
 	f2fs_destroy_root_stats();
 	unregister_filesystem(&f2fs_fs_type);
 	unregister_shrinker(&f2fs_shrinker_info);
 	f2fs_exit_sysfs();
-	destroy_extent_cache();
-	destroy_checkpoint_caches();
-	destroy_segment_manager_caches();
-	destroy_node_manager_caches();
+	f2fs_destroy_extent_cache();
+	f2fs_destroy_checkpoint_caches();
+	f2fs_destroy_segment_manager_caches();
+	f2fs_destroy_node_manager_caches();
 	destroy_inodecache();
 	f2fs_destroy_trace_ios();
 }
@@ -147,13 +147,13 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
 		int len = 0, i;
 
 		len += snprintf(buf + len, PAGE_SIZE - len,
-						"cold file extenstion:\n");
+						"cold file extension:\n");
 		for (i = 0; i < cold_count; i++)
 			len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
 								extlist[i]);
 
 		len += snprintf(buf + len, PAGE_SIZE - len,
-						"hot file extenstion:\n");
+						"hot file extension:\n");
 		for (i = cold_count; i < cold_count + hot_count; i++)
 			len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
 								extlist[i]);
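The show handler above accumulates output with repeated `snprintf(buf + len, PAGE_SIZE - len, ...)` calls, keeping the total within the one-page sysfs buffer. A userspace sketch of that append pattern (BUF_SIZE stands in for PAGE_SIZE; note snprintf() returns the would-be length, so code with untrusted sizes should clamp `len`, which is why the kernel also provides scnprintf):

```c
#include <stdio.h>

#define BUF_SIZE 4096	/* stands in for PAGE_SIZE */

/* Sketch of the accumulation pattern in f2fs_sbi_show(): each call
 * writes at buf + len with only the remaining space, and len advances
 * by the number of characters formatted. */
static int show_list(char *buf, const char **items, int n)
{
	int len = 0, i;

	len += snprintf(buf + len, BUF_SIZE - len, "cold file extension:\n");
	for (i = 0; i < n; i++)
		len += snprintf(buf + len, BUF_SIZE - len, "%s\n", items[i]);
	return len;
}
```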
@@ -165,7 +165,7 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
 	return snprintf(buf, PAGE_SIZE, "%u\n", *ui);
 }
 
-static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+static ssize_t __sbi_store(struct f2fs_attr *a,
 			struct f2fs_sb_info *sbi,
 			const char *buf, size_t count)
 {
@@ -201,13 +201,13 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
 
 		down_write(&sbi->sb_lock);
 
-		ret = update_extension_list(sbi, name, hot, set);
+		ret = f2fs_update_extension_list(sbi, name, hot, set);
 		if (ret)
 			goto out;
 
 		ret = f2fs_commit_super(sbi, false);
 		if (ret)
-			update_extension_list(sbi, name, hot, !set);
+			f2fs_update_extension_list(sbi, name, hot, !set);
 out:
 		up_write(&sbi->sb_lock);
 		return ret ? ret : count;
@@ -245,19 +245,56 @@ out:
 		return count;
 	}
 
+	if (!strcmp(a->attr.name, "trim_sections"))
+		return -EINVAL;
+
+	if (!strcmp(a->attr.name, "gc_urgent")) {
+		if (t >= 1) {
+			sbi->gc_mode = GC_URGENT;
+			if (sbi->gc_thread) {
+				wake_up_interruptible_all(
+					&sbi->gc_thread->gc_wait_queue_head);
+				wake_up_discard_thread(sbi, true);
+			}
+		} else {
+			sbi->gc_mode = GC_NORMAL;
+		}
+		return count;
+	}
+	if (!strcmp(a->attr.name, "gc_idle")) {
+		if (t == GC_IDLE_CB)
+			sbi->gc_mode = GC_IDLE_CB;
+		else if (t == GC_IDLE_GREEDY)
+			sbi->gc_mode = GC_IDLE_GREEDY;
+		else
+			sbi->gc_mode = GC_NORMAL;
+		return count;
+	}
+
 	*ui = t;
 
 	if (!strcmp(a->attr.name, "iostat_enable") && *ui == 0)
 		f2fs_reset_iostat(sbi);
-	if (!strcmp(a->attr.name, "gc_urgent") && t == 1 && sbi->gc_thread) {
-		sbi->gc_thread->gc_wake = 1;
-		wake_up_interruptible_all(&sbi->gc_thread->gc_wait_queue_head);
-		wake_up_discard_thread(sbi, true);
-	}
 
 	return count;
 }
 
+static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+			struct f2fs_sb_info *sbi,
+			const char *buf, size_t count)
+{
+	ssize_t ret;
+	bool gc_entry = (!strcmp(a->attr.name, "gc_urgent") ||
+					a->struct_type == GC_THREAD);
+
+	if (gc_entry)
+		down_read(&sbi->sb->s_umount);
+	ret = __sbi_store(a, sbi, buf, count);
+	if (gc_entry)
+		up_read(&sbi->sb->s_umount);
+
+	return ret;
+}
+
 static ssize_t f2fs_attr_show(struct kobject *kobj,
 			struct attribute *attr, char *buf)
 {
@@ -346,8 +383,8 @@ F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_urgent_sleep_time,
 F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_min_sleep_time, min_sleep_time);
 F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_max_sleep_time, max_sleep_time);
 F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_no_gc_sleep_time, no_gc_sleep_time);
-F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_idle, gc_idle);
-F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_urgent, gc_urgent);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_idle, gc_mode);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_urgent, gc_mode);
 F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, reclaim_segments, rec_prefree_segments);
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_small_discards, max_discards);
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, discard_granularity, discard_granularity);
@@ -252,7 +252,7 @@ static int read_inline_xattr(struct inode *inode, struct page *ipage,
 	if (ipage) {
 		inline_addr = inline_xattr_addr(inode, ipage);
 	} else {
-		page = get_node_page(sbi, inode->i_ino);
+		page = f2fs_get_node_page(sbi, inode->i_ino);
 		if (IS_ERR(page))
 			return PTR_ERR(page);
 
@@ -273,7 +273,7 @@ static int read_xattr_block(struct inode *inode, void *txattr_addr)
 	void *xattr_addr;
 
 	/* The inode already has an extended attribute block. */
-	xpage = get_node_page(sbi, xnid);
+	xpage = f2fs_get_node_page(sbi, xnid);
 	if (IS_ERR(xpage))
 		return PTR_ERR(xpage);
 
@@ -397,7 +397,7 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
 	int err = 0;
 
 	if (hsize > inline_size && !F2FS_I(inode)->i_xattr_nid)
-		if (!alloc_nid(sbi, &new_nid))
+		if (!f2fs_alloc_nid(sbi, &new_nid))
 			return -ENOSPC;
 
 	/* write to inline xattr */
@@ -405,9 +405,9 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
 	if (ipage) {
 		inline_addr = inline_xattr_addr(inode, ipage);
 	} else {
-		in_page = get_node_page(sbi, inode->i_ino);
+		in_page = f2fs_get_node_page(sbi, inode->i_ino);
 		if (IS_ERR(in_page)) {
-			alloc_nid_failed(sbi, new_nid);
+			f2fs_alloc_nid_failed(sbi, new_nid);
 			return PTR_ERR(in_page);
 		}
 		inline_addr = inline_xattr_addr(inode, in_page);
@@ -417,8 +417,8 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
 							NODE, true);
 	/* no need to use xattr node block */
 	if (hsize <= inline_size) {
-		err = truncate_xattr_node(inode);
-		alloc_nid_failed(sbi, new_nid);
+		err = f2fs_truncate_xattr_node(inode);
+		f2fs_alloc_nid_failed(sbi, new_nid);
 		if (err) {
 			f2fs_put_page(in_page, 1);
 			return err;
@@ -431,10 +431,10 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
 
 	/* write to xattr node block */
 	if (F2FS_I(inode)->i_xattr_nid) {
-		xpage = get_node_page(sbi, F2FS_I(inode)->i_xattr_nid);
+		xpage = f2fs_get_node_page(sbi, F2FS_I(inode)->i_xattr_nid);
 		if (IS_ERR(xpage)) {
 			err = PTR_ERR(xpage);
-			alloc_nid_failed(sbi, new_nid);
+			f2fs_alloc_nid_failed(sbi, new_nid);
 			goto in_page_out;
 		}
 		f2fs_bug_on(sbi, new_nid);
@@ -442,13 +442,13 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
 	} else {
 		struct dnode_of_data dn;
 		set_new_dnode(&dn, inode, NULL, NULL, new_nid);
-		xpage = new_node_page(&dn, XATTR_NODE_OFFSET);
+		xpage = f2fs_new_node_page(&dn, XATTR_NODE_OFFSET);
 		if (IS_ERR(xpage)) {
 			err = PTR_ERR(xpage);
-			alloc_nid_failed(sbi, new_nid);
+			f2fs_alloc_nid_failed(sbi, new_nid);
 			goto in_page_out;
 		}
-		alloc_nid_done(sbi, new_nid);
+		f2fs_alloc_nid_done(sbi, new_nid);
 	}
 	xattr_addr = page_address(xpage);
 
@@ -693,7 +693,7 @@ int f2fs_setxattr(struct inode *inode, int index, const char *name,
 	if (err)
 		return err;
 
-	/* this case is only from init_inode_metadata */
+	/* this case is only from f2fs_init_inode_metadata */
 	if (ipage)
 		return __f2fs_setxattr(inode, index, name, value,
 					size, ipage, flags);
@@ -25,6 +25,10 @@ static inline bool fscrypt_dummy_context_enabled(struct inode *inode)
 }
 
 /* crypto.c */
+static inline void fscrypt_enqueue_decrypt_work(struct work_struct *work)
+{
+}
+
 static inline struct fscrypt_ctx *fscrypt_get_ctx(const struct inode *inode,
 						  gfp_t gfp_flags)
 {
@@ -150,10 +154,13 @@ static inline bool fscrypt_match_name(const struct fscrypt_name *fname,
 }
 
 /* bio.c */
-static inline void fscrypt_decrypt_bio_pages(struct fscrypt_ctx *ctx,
-					     struct bio *bio)
+static inline void fscrypt_decrypt_bio(struct bio *bio)
 {
 }
 
+static inline void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
+					       struct bio *bio)
+{
+	return;
+}
+
 static inline void fscrypt_pullback_bio_page(struct page **page, bool restore)
@@ -59,6 +59,7 @@ static inline bool fscrypt_dummy_context_enabled(struct inode *inode)
 }
 
 /* crypto.c */
+extern void fscrypt_enqueue_decrypt_work(struct work_struct *);
 extern struct fscrypt_ctx *fscrypt_get_ctx(const struct inode *, gfp_t);
 extern void fscrypt_release_ctx(struct fscrypt_ctx *);
 extern struct page *fscrypt_encrypt_page(const struct inode *, struct page *,
@@ -174,7 +175,9 @@ static inline bool fscrypt_match_name(const struct fscrypt_name *fname,
 }
 
 /* bio.c */
-extern void fscrypt_decrypt_bio_pages(struct fscrypt_ctx *, struct bio *);
+extern void fscrypt_decrypt_bio(struct bio *);
+extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
+					struct bio *bio);
 extern void fscrypt_pullback_bio_page(struct page **, bool);
 extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t,
 				 unsigned int);