linux-sg2042/fs/logfs/dir.c


/*
* fs/logfs/dir.c - directory-related code
*
* As should be obvious for Linux kernel code, license is GPLv2
*
* Copyright (c) 2005-2008 Joern Engel <joern@logfs.org>
*/
#include "logfs.h"
#include <linux/slab.h>
/*
* Atomic dir operations
*
* Directory operations are by default not atomic. Dentries and Inodes are
* created/removed/altered in separate operations. Therefore we need to do
* a small amount of journaling.
*
* Create, link, mkdir, mknod and symlink all share the same function to do
* the work: __logfs_create. This function works in two atomic steps:
* 1. allocate inode (remember in journal)
* 2. allocate dentry (clear journal)
*
* We can only get interrupted between the two steps, in which case the
* inode we just created is simply stored in the anchor. On next mount,
* if we were interrupted, we delete the inode. From a user's point of
* view the operation never happened.
*
* Unlink and rmdir also share the same function: unlink. Again, this
* function works in two atomic steps:
* 1. remove dentry (remember inode in journal)
* 2. unlink inode (clear journal)
*
* And again, on the next mount, if we were interrupted, we delete the inode.
* From a user's point of view the operation succeeded.
*
* Rename is the real pain to deal with, harder than all the other methods
* combined. Depending on the circumstances we can run into three cases:
* a "target rename", where the target dentry already exists; a "local
* rename", where both parent directories are identical; or a
* "cross-directory rename" in the remaining case.
*
* Local rename is atomic, as the old dentry is simply rewritten with a new
* name.
*
* Cross-directory rename works in two steps, similar to __logfs_create and
* logfs_unlink:
* 1. Write new dentry (remember old dentry in journal)
* 2. Remove old dentry (clear journal)
*
* Here we remember a dentry instead of an inode. On next mount, if we were
* interrupted, we delete the dentry. From a user's point of view, the
* operation succeeded.
*
* Target rename works in three atomic steps:
* 1. Attach old inode to new dentry (remember old dentry and new inode)
* 2. Remove old dentry (still remember the new inode)
* 3. Remove victim inode
*
* Here we remember both an inode and a dentry. If we get interrupted
* between steps 1 and 2, we delete both the dentry and the inode. If
* we get interrupted between steps 2 and 3, we delete just the inode.
* In either case, the remaining objects are deleted on next mount. From
* a user's point of view, the operation succeeded.
*/
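/*
 * A minimal stand-alone sketch of the two-step protocol described above,
 * illustrative only and kept out of the build: all model_* names are
 * invented for this sketch; the real implementation is __logfs_create()
 * and logfs_replay_journal() below.
 */
#if 0
#include <stdint.h>
#include <stdio.h>

struct model_journal {
	uint64_t victim_ino;	/* nonzero: inode exists, dentry may not */
};

static struct model_journal journal;

/* Step 1: allocate the inode and remember it in the journal. */
static void model_create_inode(uint64_t ino)
{
	journal.victim_ino = ino;	/* must reach stable storage first */
	/* ... then write the inode itself ... */
}

/* Step 2: allocate the dentry and clear the journal. */
static void model_create_dentry(void)
{
	/* ... write the dentry ... */
	journal.victim_ino = 0;		/* the operation is now durable */
}

/* On mount: if we crashed between the two steps, undo step 1. */
static void model_replay(void)
{
	if (journal.victim_ino) {
		printf("deleting orphaned inode #%llu\n",
		       (unsigned long long)journal.victim_ino);
		journal.victim_ino = 0;	/* the create never happened */
	}
}
#endif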
static int write_dir(struct inode *dir, struct logfs_disk_dentry *dd,
loff_t pos)
{
return logfs_inode_write(dir, dd, sizeof(*dd), pos, WF_LOCK, NULL);
}
static int write_inode(struct inode *inode)
{
return __logfs_write_inode(inode, NULL, WF_LOCK);
}
static s64 dir_seek_data(struct inode *inode, s64 pos)
{
s64 new_pos = logfs_seek_data(inode, pos);
return max(pos, new_pos - 1);
}
static int beyond_eof(struct inode *inode, loff_t bix)
{
loff_t pos = bix << inode->i_sb->s_blocksize_bits;
return pos >= i_size_read(inode);
}
/*
* The prime value was chosen to be roughly 256 + 26. The r5 hash uses 11,
* so short names (len <= 9) don't even occupy the complete 32bit name
* space. A prime >256 ensures short names quickly spread the 32bit
* name space. Add about 26 for the estimated amount of information
* of each character and pick a prime nearby, preferably a bit-sparse
* one.
*/
static u32 hash_32(const char *s, int len, u32 seed)
{
u32 hash = seed;
int i;
for (i = 0; i < len; i++)
hash = hash * 293 + s[i];
return hash;
}
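/*
 * To make the numbers in the comment above concrete: an r5-style
 * multiplier of 11 keeps nine-character names below 11^9 ~= 2.4e9,
 * i.e. still inside the 32bit space, so they would cluster at its low
 * end. With 293 the hash wraps 32 bits after only four characters
 * (293^4 ~= 7.4e9), spreading even short names across the whole space.
 * 293 is also bit-sparse: 0x125.
 */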
/*
* We have to satisfy several conflicting requirements here. Small
* directories should stay fairly compact and not require too many
* indirect blocks. The number of possible locations for a given hash
* should be small to make lookup() fast. And we should try hard not
* to overflow the 32bit name space or nfs and 32bit host systems will
* be unhappy.
*
* So we use the following scheme. First we reduce the hash to 0..15
* and try a direct block. If that is occupied we reduce the hash to
* 16..255 and try an indirect block. Same for 2x and 3x indirect
* blocks. Lastly we reduce the hash to 0x800_0000 .. 0xffff_ffff,
* but use buckets containing sixteen entries instead of a single one.
*
* Using 16 entries should allow for a reasonable amount of hash
* collisions, so the 32bit name space can be packed fairly tight
* before overflowing. Oh and currently we don't overflow but return
* an error.
*
* How likely are collisions? Doing the appropriate math is beyond me
* and the Bronstein textbook. But running a test program to brute
* force collisions for a couple of days showed that on average the
* first collision occurs after 598M entries, with 290M being the
* smallest result. Obviously 21 entries could already cause a
* collision if all entries are carefully chosen.
*/
static pgoff_t hash_index(u32 hash, int round)
{
u32 i0_blocks = I0_BLOCKS;
u32 i1_blocks = I1_BLOCKS;
u32 i2_blocks = I2_BLOCKS;
u32 i3_blocks = I3_BLOCKS;
switch (round) {
case 0:
return hash % i0_blocks;
case 1:
return i0_blocks + hash % (i1_blocks - i0_blocks);
case 2:
return i1_blocks + hash % (i2_blocks - i1_blocks);
case 3:
return i2_blocks + hash % (i3_blocks - i2_blocks);
case 4 ... 19:
return i3_blocks + 16 * (hash % (((1<<31) - i3_blocks) / 16))
+ round - 4;
}
BUG();
}
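/*
 * A stand-alone sketch of the probe sequence the scheme above produces,
 * illustrative only and kept out of the build. The ex_* names and the
 * EX_I* block counts are placeholders invented for the sketch; the real
 * values are I0_BLOCKS..I3_BLOCKS from logfs.h.
 */
#if 0
#include <stdint.h>
#include <stdio.h>

enum { EX_I0 = 16, EX_I1 = 256, EX_I2 = 65536, EX_I3 = 16777216 };

static uint32_t ex_hash_32(const char *s, int len, uint32_t seed)
{
	uint32_t hash = seed;
	int i;

	for (i = 0; i < len; i++)
		hash = hash * 293 + s[i];
	return hash;
}

static uint64_t ex_hash_index(uint32_t hash, int round)
{
	switch (round) {
	case 0:		/* direct blocks */
		return hash % EX_I0;
	case 1:		/* 1x indirect */
		return EX_I0 + hash % (EX_I1 - EX_I0);
	case 2:		/* 2x indirect */
		return EX_I1 + hash % (EX_I2 - EX_I1);
	case 3:		/* 3x indirect */
		return EX_I2 + hash % (EX_I3 - EX_I2);
	default:	/* rounds 4..19 share one 16-entry bucket */
		return EX_I3 + 16ULL * (hash % (((1ULL << 31) - EX_I3) / 16))
			+ round - 4;
	}
}

int main(void)
{
	uint32_t hash = ex_hash_32("example", 7, 0);
	int round;

	for (round = 0; round < 20; round++)
		printf("round %2d -> page %llu\n", round,
		       (unsigned long long)ex_hash_index(hash, round));
	return 0;
}
#endif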
static struct page *logfs_get_dd_page(struct inode *dir, struct dentry *dentry)
{
struct qstr *name = &dentry->d_name;
struct page *page;
struct logfs_disk_dentry *dd;
u32 hash = hash_32(name->name, name->len, 0);
pgoff_t index;
int round;
if (name->len > LOGFS_MAX_NAMELEN)
return ERR_PTR(-ENAMETOOLONG);
for (round = 0; round < 20; round++) {
index = hash_index(hash, round);
if (beyond_eof(dir, index))
return NULL;
if (!logfs_exist_block(dir, index))
continue;
page = read_cache_page(dir->i_mapping, index,
(filler_t *)logfs_readpage, NULL);
if (IS_ERR(page))
return page;
dd = kmap_atomic(page);
BUG_ON(dd->namelen == 0);
if (name->len != be16_to_cpu(dd->namelen) ||
memcmp(name->name, dd->name, name->len)) {
kunmap_atomic(dd);
page_cache_release(page);
continue;
}
kunmap_atomic(dd);
return page;
}
return NULL;
}
static int logfs_remove_inode(struct inode *inode)
{
int ret;
drop_nlink(inode);
ret = write_inode(inode);
LOGFS_BUG_ON(ret, inode->i_sb);
return ret;
}
static void abort_transaction(struct inode *inode, struct logfs_transaction *ta)
{
if (logfs_inode(inode)->li_block)
logfs_inode(inode)->li_block->ta = NULL;
kfree(ta);
}
static int logfs_unlink(struct inode *dir, struct dentry *dentry)
{
struct logfs_super *super = logfs_super(dir->i_sb);
struct inode *inode = dentry->d_inode;
struct logfs_transaction *ta;
struct page *page;
pgoff_t index;
int ret;
ta = kzalloc(sizeof(*ta), GFP_KERNEL);
if (!ta)
return -ENOMEM;
ta->state = UNLINK_1;
ta->ino = inode->i_ino;
inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
page = logfs_get_dd_page(dir, dentry);
if (!page) {
kfree(ta);
return -ENOENT;
}
if (IS_ERR(page)) {
kfree(ta);
return PTR_ERR(page);
}
index = page->index;
page_cache_release(page);
mutex_lock(&super->s_dirop_mutex);
logfs_add_transaction(dir, ta);
ret = logfs_delete(dir, index, NULL);
if (!ret)
ret = write_inode(dir);
if (ret) {
abort_transaction(dir, ta);
printk(KERN_ERR"LOGFS: unable to delete inode\n");
goto out;
}
ta->state = UNLINK_2;
logfs_add_transaction(inode, ta);
ret = logfs_remove_inode(inode);
out:
mutex_unlock(&super->s_dirop_mutex);
return ret;
}
static inline int logfs_empty_dir(struct inode *dir)
{
u64 data;
data = logfs_seek_data(dir, 0) << dir->i_sb->s_blocksize_bits;
return data >= i_size_read(dir);
}
static int logfs_rmdir(struct inode *dir, struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
if (!logfs_empty_dir(inode))
return -ENOTEMPTY;
return logfs_unlink(dir, dentry);
}
/* FIXME: readdir currently has its own dir_walk code. I don't see a good
* way to combine the two copies */
static int logfs_readdir(struct file *file, struct dir_context *ctx)
{
struct inode *dir = file_inode(file);
loff_t pos;
struct page *page;
struct logfs_disk_dentry *dd;
if (ctx->pos < 0)
return -EINVAL;
if (!dir_emit_dots(file, ctx))
return 0;
pos = ctx->pos - 2;
BUG_ON(pos < 0);
for (;; pos++, ctx->pos++) {
bool full;
if (beyond_eof(dir, pos))
break;
if (!logfs_exist_block(dir, pos)) {
/* deleted dentry */
pos = dir_seek_data(dir, pos);
continue;
}
page = read_cache_page(dir->i_mapping, pos,
(filler_t *)logfs_readpage, NULL);
if (IS_ERR(page))
return PTR_ERR(page);
dd = kmap(page);
BUG_ON(dd->namelen == 0);
full = !dir_emit(ctx, (char *)dd->name,
be16_to_cpu(dd->namelen),
be64_to_cpu(dd->ino), dd->type);
kunmap(page);
page_cache_release(page);
if (full)
break;
}
return 0;
}
static void logfs_set_name(struct logfs_disk_dentry *dd, struct qstr *name)
{
dd->namelen = cpu_to_be16(name->len);
memcpy(dd->name, name->name, name->len);
}
static struct dentry *logfs_lookup(struct inode *dir, struct dentry *dentry,
unsigned int flags)
{
struct page *page;
struct logfs_disk_dentry *dd;
pgoff_t index;
u64 ino = 0;
struct inode *inode;
page = logfs_get_dd_page(dir, dentry);
if (IS_ERR(page))
return ERR_CAST(page);
if (!page) {
d_add(dentry, NULL);
return NULL;
}
index = page->index;
dd = kmap_atomic(page);
ino = be64_to_cpu(dd->ino);
kunmap_atomic(dd);
page_cache_release(page);
inode = logfs_iget(dir->i_sb, ino);
if (IS_ERR(inode))
printk(KERN_ERR"LogFS: Cannot read inode #%llx for dentry (%lx, %lx)n",
ino, dir->i_ino, index);
return d_splice_alias(inode, dentry);
}
static void grow_dir(struct inode *dir, loff_t index)
{
index = (index + 1) << dir->i_sb->s_blocksize_bits;
if (i_size_read(dir) < index)
i_size_write(dir, index);
}
static int logfs_write_dir(struct inode *dir, struct dentry *dentry,
struct inode *inode)
{
struct page *page;
struct logfs_disk_dentry *dd;
u32 hash = hash_32(dentry->d_name.name, dentry->d_name.len, 0);
pgoff_t index;
int round, err;
for (round = 0; round < 20; round++) {
index = hash_index(hash, round);
if (logfs_exist_block(dir, index))
continue;
page = find_or_create_page(dir->i_mapping, index, GFP_KERNEL);
if (!page)
return -ENOMEM;
dd = kmap_atomic(page);
memset(dd, 0, sizeof(*dd));
dd->ino = cpu_to_be64(inode->i_ino);
dd->type = logfs_type(inode);
logfs_set_name(dd, &dentry->d_name);
kunmap_atomic(dd);
err = logfs_write_buf(dir, page, WF_LOCK);
unlock_page(page);
page_cache_release(page);
if (!err)
grow_dir(dir, index);
return err;
}
/* FIXME: Is there a better return value? In most cases neither
* the filesystem nor the directory are full. But we have had
* too many collisions for this particular hash and no fallback.
*/
return -ENOSPC;
}
static int __logfs_create(struct inode *dir, struct dentry *dentry,
struct inode *inode, const char *dest, long destlen)
{
struct logfs_super *super = logfs_super(dir->i_sb);
struct logfs_inode *li = logfs_inode(inode);
struct logfs_transaction *ta;
int ret;
ta = kzalloc(sizeof(*ta), GFP_KERNEL);
if (!ta) {
drop_nlink(inode);
iput(inode);
return -ENOMEM;
}
ta->state = CREATE_1;
ta->ino = inode->i_ino;
mutex_lock(&super->s_dirop_mutex);
logfs_add_transaction(inode, ta);
if (dest) {
/* symlink */
ret = logfs_inode_write(inode, dest, destlen, 0, WF_LOCK, NULL);
if (!ret)
ret = write_inode(inode);
} else {
/* creat/mkdir/mknod */
ret = write_inode(inode);
}
if (ret) {
abort_transaction(inode, ta);
li->li_flags |= LOGFS_IF_STILLBORN;
/* FIXME: truncate symlink */
drop_nlink(inode);
iput(inode);
goto out;
}
ta->state = CREATE_2;
logfs_add_transaction(dir, ta);
ret = logfs_write_dir(dir, dentry, inode);
/* sync directory */
if (!ret)
ret = write_inode(dir);
if (ret) {
logfs_del_transaction(dir, ta);
ta->state = CREATE_2;
logfs_add_transaction(inode, ta);
logfs_remove_inode(inode);
iput(inode);
goto out;
}
d_instantiate(dentry, inode);
out:
mutex_unlock(&super->s_dirop_mutex);
return ret;
}
static int logfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
{
struct inode *inode;
/*
* FIXME: why do we have to fill in S_IFDIR, while the mode is
* correct for mknod, creat, etc.? Smells like the vfs *should*
* do it for us but for some reason fails to do so.
*/
inode = logfs_new_inode(dir, S_IFDIR | mode);
if (IS_ERR(inode))
return PTR_ERR(inode);
inode->i_op = &logfs_dir_iops;
inode->i_fop = &logfs_dir_fops;
return __logfs_create(dir, dentry, inode, NULL, 0);
}
static int logfs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
bool excl)
{
struct inode *inode;
inode = logfs_new_inode(dir, mode);
if (IS_ERR(inode))
return PTR_ERR(inode);
inode->i_op = &logfs_reg_iops;
inode->i_fop = &logfs_reg_fops;
inode->i_mapping->a_ops = &logfs_reg_aops;
return __logfs_create(dir, dentry, inode, NULL, 0);
}
static int logfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode,
dev_t rdev)
{
struct inode *inode;
if (dentry->d_name.len > LOGFS_MAX_NAMELEN)
return -ENAMETOOLONG;
inode = logfs_new_inode(dir, mode);
if (IS_ERR(inode))
return PTR_ERR(inode);
init_special_inode(inode, mode, rdev);
return __logfs_create(dir, dentry, inode, NULL, 0);
}
static int logfs_symlink(struct inode *dir, struct dentry *dentry,
const char *target)
{
struct inode *inode;
size_t destlen = strlen(target) + 1;
if (destlen > dir->i_sb->s_blocksize)
return -ENAMETOOLONG;
inode = logfs_new_inode(dir, S_IFLNK | 0777);
if (IS_ERR(inode))
return PTR_ERR(inode);
inode->i_op = &logfs_symlink_iops;
inode->i_mapping->a_ops = &logfs_reg_aops;
return __logfs_create(dir, dentry, inode, target, destlen);
}
static int logfs_link(struct dentry *old_dentry, struct inode *dir,
struct dentry *dentry)
{
struct inode *inode = old_dentry->d_inode;
inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
ihold(inode);
inc_nlink(inode);
mark_inode_dirty_sync(inode);
return __logfs_create(dir, dentry, inode, NULL, 0);
}
static int logfs_get_dd(struct inode *dir, struct dentry *dentry,
struct logfs_disk_dentry *dd, loff_t *pos)
{
struct page *page;
void *map;
page = logfs_get_dd_page(dir, dentry);
if (IS_ERR(page))
return PTR_ERR(page);
*pos = page->index;
map = kmap_atomic(page);
memcpy(dd, map, sizeof(*dd));
kunmap_atomic(map);
page_cache_release(page);
return 0;
}
static int logfs_delete_dd(struct inode *dir, loff_t pos)
{
/*
* Getting called with pos somewhere beyond eof is either a goofup
* within this file or means someone maliciously edited the
* (crc-protected) journal.
*/
BUG_ON(beyond_eof(dir, pos));
dir->i_ctime = dir->i_mtime = CURRENT_TIME;
log_dir(" Delete dentry (%lx, %llx)\n", dir->i_ino, pos);
return logfs_delete(dir, pos, NULL);
}
/*
* Cross-directory rename, target does not exist. Just a little nasty.
* Create a new dentry in the target dir, then remove the old dentry,
* all the while taking care to remember our operation in the journal.
*/
static int logfs_rename_cross(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
{
struct logfs_super *super = logfs_super(old_dir->i_sb);
struct logfs_disk_dentry dd;
struct logfs_transaction *ta;
loff_t pos;
int err;
/* 1. locate source dd */
err = logfs_get_dd(old_dir, old_dentry, &dd, &pos);
if (err)
return err;
ta = kzalloc(sizeof(*ta), GFP_KERNEL);
if (!ta)
return -ENOMEM;
ta->state = CROSS_RENAME_1;
ta->dir = old_dir->i_ino;
ta->pos = pos;
/* 2. write target dd */
mutex_lock(&super->s_dirop_mutex);
logfs_add_transaction(new_dir, ta);
err = logfs_write_dir(new_dir, new_dentry, old_dentry->d_inode);
if (!err)
err = write_inode(new_dir);
if (err) {
super->s_rename_dir = 0;
super->s_rename_pos = 0;
abort_transaction(new_dir, ta);
goto out;
}
/* 3. remove source dd */
ta->state = CROSS_RENAME_2;
logfs_add_transaction(old_dir, ta);
err = logfs_delete_dd(old_dir, pos);
if (!err)
err = write_inode(old_dir);
LOGFS_BUG_ON(err, old_dir->i_sb);
out:
mutex_unlock(&super->s_dirop_mutex);
return err;
}
static int logfs_replace_inode(struct inode *dir, struct dentry *dentry,
struct logfs_disk_dentry *dd, struct inode *inode)
{
loff_t pos;
int err;
err = logfs_get_dd(dir, dentry, dd, &pos);
if (err)
return err;
dd->ino = cpu_to_be64(inode->i_ino);
dd->type = logfs_type(inode);
err = write_dir(dir, dd, pos);
if (err)
return err;
log_dir("Replace dentry (%lx, %llx) %s -> %llx\n", dir->i_ino, pos,
dd->name, be64_to_cpu(dd->ino));
return write_inode(dir);
}
/* Target dentry exists - the worst case. We need to attach the source
* inode to the target dentry, then remove the orphaned target inode and
* source dentry.
*/
static int logfs_rename_target(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
{
struct logfs_super *super = logfs_super(old_dir->i_sb);
struct inode *old_inode = old_dentry->d_inode;
struct inode *new_inode = new_dentry->d_inode;
int isdir = S_ISDIR(old_inode->i_mode);
struct logfs_disk_dentry dd;
struct logfs_transaction *ta;
loff_t pos;
int err;
BUG_ON(isdir != S_ISDIR(new_inode->i_mode));
if (isdir) {
if (!logfs_empty_dir(new_inode))
return -ENOTEMPTY;
}
/* 1. locate source dd */
err = logfs_get_dd(old_dir, old_dentry, &dd, &pos);
if (err)
return err;
ta = kzalloc(sizeof(*ta), GFP_KERNEL);
if (!ta)
return -ENOMEM;
ta->state = TARGET_RENAME_1;
ta->dir = old_dir->i_ino;
ta->pos = pos;
ta->ino = new_inode->i_ino;
/* 2. attach source inode to target dd */
mutex_lock(&super->s_dirop_mutex);
logfs_add_transaction(new_dir, ta);
err = logfs_replace_inode(new_dir, new_dentry, &dd, old_inode);
if (err) {
super->s_rename_dir = 0;
super->s_rename_pos = 0;
super->s_victim_ino = 0;
abort_transaction(new_dir, ta);
goto out;
}
/* 3. remove source dd */
ta->state = TARGET_RENAME_2;
logfs_add_transaction(old_dir, ta);
err = logfs_delete_dd(old_dir, pos);
if (!err)
err = write_inode(old_dir);
LOGFS_BUG_ON(err, old_dir->i_sb);
/* 4. remove target inode */
ta->state = TARGET_RENAME_3;
logfs_add_transaction(new_inode, ta);
err = logfs_remove_inode(new_inode);
out:
mutex_unlock(&super->s_dirop_mutex);
return err;
}
static int logfs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
{
if (new_dentry->d_inode)
return logfs_rename_target(old_dir, old_dentry,
new_dir, new_dentry);
return logfs_rename_cross(old_dir, old_dentry, new_dir, new_dentry);
}
/* No locking done here, as this is called before .get_sb() returns. */
int logfs_replay_journal(struct super_block *sb)
{
struct logfs_super *super = logfs_super(sb);
struct inode *inode;
u64 ino, pos;
int err;
if (super->s_victim_ino) {
/* delete victim inode */
ino = super->s_victim_ino;
printk(KERN_INFO"LogFS: delete unmapped inode #%llx\n", ino);
inode = logfs_iget(sb, ino);
if (IS_ERR(inode))
goto fail;
LOGFS_BUG_ON(i_size_read(inode) > 0, sb);
super->s_victim_ino = 0;
err = logfs_remove_inode(inode);
iput(inode);
if (err) {
super->s_victim_ino = ino;
goto fail;
}
}
if (super->s_rename_dir) {
/* delete old dd from rename */
ino = super->s_rename_dir;
pos = super->s_rename_pos;
printk(KERN_INFO"LogFS: delete unbacked dentry (%llx, %llx)\n",
ino, pos);
inode = logfs_iget(sb, ino);
if (IS_ERR(inode))
goto fail;
super->s_rename_dir = 0;
super->s_rename_pos = 0;
err = logfs_delete_dd(inode, pos);
iput(inode);
if (err) {
super->s_rename_dir = ino;
super->s_rename_pos = pos;
goto fail;
}
}
return 0;
fail:
LOGFS_BUG(sb);
return -EIO;
}
const struct inode_operations logfs_symlink_iops = {
.readlink = generic_readlink,
.follow_link = page_follow_link_light,
};
const struct inode_operations logfs_dir_iops = {
.create = logfs_create,
.link = logfs_link,
.lookup = logfs_lookup,
.mkdir = logfs_mkdir,
.mknod = logfs_mknod,
.rename = logfs_rename,
.rmdir = logfs_rmdir,
.symlink = logfs_symlink,
.unlink = logfs_unlink,
};
const struct file_operations logfs_dir_fops = {
.fsync = logfs_fsync,
.unlocked_ioctl = logfs_ioctl,
.iterate = logfs_readdir,
.read = generic_read_dir,
.llseek = default_llseek,
};