Slab constructors currently take a flags parameter that is never used, and the
argument order is the opposite of other slab functions: the object
pointer is placed before the kmem_cache pointer.
Convert
ctor(void *object, struct kmem_cache *s, unsigned long flags)
to
ctor(struct kmem_cache *s, void *object)
throughout the kernel.
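For illustration, a typical constructor changes roughly like this (a sketch
only, not part of the actual diff; the JFFS2-style body is just an example):

    /* before: object pointer first, unused flags */
    static void init_once(void *object, struct kmem_cache *cachep,
                          unsigned long flags)
    {
            struct jffs2_inode_info *f = object;

            inode_init_once(&f->vfs_inode);
    }

    /* after: kmem_cache pointer first, no flags */
    static void init_once(struct kmem_cache *cachep, void *object)
    {
            struct jffs2_inode_info *f = object;

            inode_init_once(&f->vfs_inode);
    }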
[akpm@linux-foundation.org: coupla fixes]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In three places (the summary scan, the normal scan and REF_PRISTINE GC),
just truncate at the NUL, since that was the correct thing to do in the
only case where this (inexplicable) breakage has been seen.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
In OLPC trac #4184 we found a case where a corrupted node didn't
actually get obsoleted when we tried to garbage-collect it. So we wrote
out many million copies of it, in repeated attempts to obsolete it,
until the flash became full. Don't Do That.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Instead of matching resv_blocks_gcmerge, which is only about 3, match
resv_blocks_gctrigger, which includes a proportion of the total
device size.
These ought to become tunable from userspace, at some point.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
With huge amounts of free space, we weren't bothering to GC for a
while, and pathological numbers of obsolete nodes were accumulating,
seriously affecting performance on NAND flash (OLPC trac #3978).
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Fix a couple of instances in JFFS2 where the unpoint() routine is
being called with the wrong length in cases where the point() routine
truncated a request.
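The pattern being fixed looks roughly like this (a sketch only; variable
names and error handling are illustrative, and the MTD point/unpoint
signatures are abbreviated):

    int ret;
    size_t retlen;
    u_char *ptr;

    ret = c->mtd->point(c->mtd, ofs, len, &retlen, &ptr);
    if (!ret && retlen < len) {
            /* point() truncated the request: only retlen bytes are
               mapped, so unpoint() must be passed retlen, not len */
            c->mtd->unpoint(c->mtd, ptr, ofs, retlen);
            /* ... fall back to mtd->read() for the full range ... */
    }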
Signed-off-by: Andy Lowe <alowe@mvista.com>
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
I've bisected the deadlock when many small appends are done on jffs2 down to
this commit:
commit 6fe6900e1e
Author: Nick Piggin <npiggin@suse.de>
Date: Sun May 6 14:49:04 2007 -0700
mm: make read_cache_page synchronous
Ensure pages are uptodate after returning from read_cache_page, which allows
us to cut out most of the filesystem-internal PageUptodate calls.
I didn't have a great look down the call chains, but this appears to fix 7
possible use-before-uptodate bugs in hfs, 2 in hfsplus, 1 in jfs, a few in
ecryptfs, 1 in jffs2, and a possible case of cleared data being overwritten with
readpage in block2mtd -- all depending on whether the filler is async and/or
can return with a !uptodate page.
It introduced a wait to read_cache_page, as well as a
read_cache_page_async function equivalent to the old read_cache_page
without any callers.
Switching jffs2_gc_fetch_page to read_cache_page_async for the old
behavior makes the deadlocks go away, but maybe reintroduces the
use-before-uptodate problem? I don't understand the mm/fs interaction
well enough to say.
[It's fine. dwmw2.]
Signed-off-by: Jason Lunz <lunz@falooley.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
fs/jffs2/erase.c: In function 'jffs2_block_check_erase':
fs/jffs2/erase.c:355: warning: format '%08x' expects type 'unsigned int', but argument 3 has type 'long unsigned int'
and
fs/jffs2/erase.c: In function 'jffs2_erase_pending_blocks':
fs/jffs2/erase.c:404: warning: 'bad_offset' may be used uninitialized in this function
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
When POSIX ACL support was enabled, we weren't writing correct
legacy modes to the medium on inode creation, or when the ACL was set.
This meant that the permissions would be incorrect after the file system
was remounted.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Commit a491486a20 introduced a locking
problem in JFFS2 -- we up() the alloc_sem when we weren't previously
holding it. This leads to all kinds of fun behaviour later.
There was a _reason_ for the
if (1 /* alternative path needs testing */ ||
which the above-mentioned commit removed :)
Discovered and debugged by Giulio Fedel <giulio.fedel@andorsystems.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit a7a6ace140 revamped the OOB
handling but accidentally switched to 12-byte cleanmarkers, which is
incompatible with what 'flash_eraseall -j' will do. So using
flash_eraseall -j and then trying to mount the 'empty' flash will fail,
because the cleanmarkers aren't recognised.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Debugging the hardware problems in OLPC trac #1905 would be a whole lot
easier if the correct node offsets were printed for the offending nodes.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
The try_to_freeze() call was in the wrong place; we need it in the
signal-pending loop now that a pending freeze also makes
signal_pending() return true.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
jffs2_add_physical_node_ref() should never really return error -- it's
an internal debugging check which triggered. We really need to work out
why and stop it happening. But in the meantime, let's make the failure
mode a little less nasty.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Slab destructors were no longer supported after Christoph's
c59def9f22 change. They've been
BUGs for both slab and slub, and slob never supported them
either.
This rips out support for the dtor pointer from kmem_cache_create()
completely and fixes up every single callsite in the kernel (there were
about 224, not including the slab allocator definitions themselves,
or the documentation references).
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Introduce is_owner_or_cap() macro in fs.h, and convert over relevant
users to it. This is done because we want to avoid bugs in the future
where we check for only effective fsuid of the current task against a
file's owning uid, without simultaneously checking for CAP_FOWNER as
well, thus violating its semantics.
[ XFS uses special macros and structures, and in general looked ...
untouchable, so we leave it alone -- but it has been looked over. ]
The (current->fsuid != inode->i_uid) check in generic_permission() and
exec_permission_lite() is left alone, because those operations are
covered by CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH. Similarly operations
falling under the purview of CAP_CHOWN and CAP_LEASE are also left alone.
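For reference, the macro amounts to something like this (sketch of the fs.h
definition):

    #define is_owner_or_cap(inode) \
            ((current->fsuid == (inode)->i_uid) || capable(CAP_FOWNER))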
Signed-off-by: Satyam Sharma <ssatyam@cse.iitk.ac.in>
Cc: Al Viro <viro@ftp.linux.org.uk>
Acked-by: Serge E. Hallyn <serge@hallyn.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently, the freezer treats all tasks as freezable, except for the kernel
threads that explicitly set the PF_NOFREEZE flag for themselves. This
approach is problematic, since it requires every kernel thread to either
set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
care for the freezing of tasks at all.
It seems better to only require the kernel threads that want to or need to
be frozen to use some freezer-related code and to remove any
freezer-related code from the other (nonfreezable) kernel threads, which is
done in this patch.
The patch causes all kernel threads to be nonfreezable by default (ie. to
have PF_NOFREEZE set by default) and introduces the set_freezable()
function that should be called by the freezable kernel threads in order to
unset PF_NOFREEZE. It also makes all of the currently freezable kernel
threads call set_freezable(), so it shouldn't cause any (intentional)
change of behaviour to appear. Additionally, it updates documentation to
describe the freezing of tasks more accurately.
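A freezable kernel thread now opts in along these lines (illustrative
skeleton, not taken from any particular driver):

    #include <linux/kthread.h>
    #include <linux/freezer.h>

    static int my_kthread(void *data)
    {
            set_freezable();                /* undo the default PF_NOFREEZE */

            while (!kthread_should_stop()) {
                    try_to_freeze();        /* park here while tasks are frozen */
                    /* ... do the real work, then sleep ... */
            }
            return 0;
    }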
[akpm@linux-foundation.org: build fixes]
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
fs/jffs2/compr.c: In function ‘jffs2_compressors_init’:
fs/jffs2/compr.c:320: warning: implicit declaration of function ‘jffs2_lzo_init’
fs/jffs2/compr.c: In function ‘jffs2_compressors_exit’:
fs/jffs2/compr.c:346: warning: implicit declaration of function ‘jffs2_lzo_exit’
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Add a "favourlzo" compression mode to jffs2 which tries to
optimise by size but gives lzo an advantage when comparing sizes.
This means the faster lzo algorithm can be preferred when there
isn't much difference in compressed size (the exact threshold can
be changed).
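Roughly, the comparison is biased like this (a sketch; the constant name and
exact formula here are illustrative, and the threshold is meant to become
tunable):

    /* treat LZO's output as if it were FAVOUR_LZO_PERCENT per cent of
       its real size when comparing against the best other compressor */
    if (lzo_size * FAVOUR_LZO_PERCENT <= best_size * 100)
            chosen = JFFS2_COMPR_LZO;       /* LZO wins even if slightly larger */
    else
            chosen = best_compr;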
Signed-off-by: Richard Purdie <rpurdie@openedhand.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Add LZO1X compression/decompression support to jffs2.
LZO's interface doesn't entirely match that required by jffs2 so a
buffer and memcpy is unavoidable.
Signed-off-by: Richard Purdie <rpurdie@openedhand.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We've seen some evil corruption issues, where the corruption seems to be
introduced after the JFFS2 crc32 is calculated but before the NAND
controller calculates the ECC. So it's in RAM or in the PCI DMA
transfer; not on the flash. Attempt to catch it earlier by (optionally)
reading back from the flash immediately after writing it.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
They can use generic_file_splice_read() instead. Since sys_sendfile() now
prefers that, there should be no change in behaviour.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Debugging the hardware problems in OLPC trac #1905 would be a whole lot
easier if the correct node offsets were printed for the offending nodes.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We should have stopped returning 1 from read_dnode() to indicate
failure. We can just mark the damn thing obsolete immediately. But I
missed a case where we don't.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We should have stopped returning 1 from read_dnode() to indicate
failure. We can just mark the damn thing obsolete immediately. But I
missed a case where we don't.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
The try_to_freeze() call was in the wrong place; we need it in the
signal-pending loop now that a pending freeze also makes
signal_pending() return true.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
With the current design, erase_free_sem is locked every time a flash
block is being erased. For NOR flashes, ~1 second is needed to erase a
single flash block. In the worst case, erase_free_sem may be
locked for a couple of seconds when a number of blocks are being
erased (e.g. after a large file was removed). While erase_free_sem is
locked, all read/write operations for the given JFFS2 partition are locked
too -- in effect, from time to time access to the JFFS2 partition is
locked for a number of seconds. This fix makes the critical section in the
flash erasing procedure shorter -- now erase_free_sem is locked only around
the erase_completion_lock spinlock.
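Schematically (simplified; the erase itself goes through the MTD driver and
the real code in erase.c is more involved):

    /* before: erase_free_sem held across the whole ~1s erase */
    down(&c->erase_free_sem);
    /* ... issue the block erase to the MTD driver and wait ... */
    spin_lock(&c->erase_completion_lock);
    /* ... move the block to the appropriate list ... */
    spin_unlock(&c->erase_completion_lock);
    up(&c->erase_free_sem);

    /* after: erase_free_sem taken only around the list manipulation */
    /* ... issue the block erase to the MTD driver and wait ... */
    down(&c->erase_free_sem);
    spin_lock(&c->erase_completion_lock);
    /* ... move the block to the appropriate list ... */
    spin_unlock(&c->erase_completion_lock);
    up(&c->erase_free_sem);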
Originally from Radoslaw Bisewski
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
jffs2_add_physical_node_ref() should never really return error -- it's
an internal debugging check which triggered. We really need to work out
why and stop it happening. But in the meantime, let's make the failure
mode a little less nasty.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Faster and won't trash the D-cache.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
When pdflush is erasing lots of sectors, drivers calling
mtd->sync will hang until all blocks are erased. Be nicer.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
* git://git.infradead.org/mtd-2.6:
[JFFS2] Fix obsoletion of metadata nodes in jffs2_add_tn_to_tree()
[MTD] Fix error checking after get_mtd_device() in get_sb_mtd functions
[JFFS2] Fix buffer length calculations in jffs2_get_inode_nodes()
[JFFS2] Fix potential memory leak of dead xattrs on unmount.
[JFFS2] Fix BUG() caused by failing to discard xattrs on deleted files.
[MTD] generalise the handling of MTD-specific superblocks
[MTD] [MAPS] don't force uclinux mtd map to be root dev
We should keep the mdata node with the higher version number, not just the
one we happen to find last. Doh.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
If we have already read enough bytes, no need to call read_more().
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
An xattr_datum which ends up orphaned should be freed by the GC
thread. But if we umount before the GC thread is finished, or if we
mount read-only and the GC thread never runs, they might never be
freed. Clean them up during unmount, if there are any left.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
When we cannot mark nodes as obsolete, such as on NAND flash, we end up
having to delete inodes with !nlink in jffs2_build_remove_unlinked_inode().
However, jffs2_build_xattr_subsystem() runs later than this, and will
attach an xref to the dead inode. Then later when the last nodes of that
dead inode are erased we hit a BUG() in jffs2_del_ino_cache()
because we're not supposed to get there with an xattr still attached to
the inode which is being killed.
The simple fix is to refrain from attaching xattrs to inodes with zero
nlink in jffs2_build_xattr_subsystem(). It's OK to trust nlink
here because the file system isn't actually mounted yet, so there's no
chance that a zero-nlink file is still alive merely because it's open.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
SLAB_CTOR_CONSTRUCTOR is always specified. No point in checking it.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Steven French <sfrench@us.ibm.com>
Cc: Michael Halcrow <mhalcrow@us.ibm.com>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Anton Altaparmakov <aia21@cantab.net>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@ucw.cz>
Cc: David Chinner <dgc@sgi.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Generalise the handling of MTD-specific superblocks so that JFFS2 and ROMFS
can both share it.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
* git://git.infradead.org/mtd-2.6: (21 commits)
[MTD] [CHIPS] Remove MTD_OBSOLETE_CHIPS (jedec, amd_flash, sharp)
[MTD] Delete allegedly obsolete "bank_size" field of mtd_info.
[MTD] Remove unnecessary user space check from mtd.h.
[MTD] [MAPS] Remove flash maps for no longer supported 405LP boards
[MTD] [MAPS] Fix missing printk() parameter in physmap_of.c MTD driver
[MTD] [NAND] platform NAND driver: add driver
[MTD] [NAND] platform NAND driver: update header
[JFFS2] Simplify and clean up jffs2_add_tn_to_tree() some more.
[JFFS2] Remove another bogus optimisation in jffs2_add_tn_to_tree()
[JFFS2] Remove broken insert_point optimisation in jffs2_add_tn_to_tree()
[JFFS2] Remember to calculate overlap on nodes which replace older nodes
[JFFS2] Don't advance c->wbuf_ofs to next eraseblock after wbuf flush
[MTD] [NAND] at91_nand.c: CMDLINE_PARTS support
[MTD] [NAND] Tidy up handling of page number in nand_block_bad()
[MTD] block2mtd_paramline[] mustn't be __initdata
[MTD] [NAND] Support multiple chips in CAFÉ driver
[MTD] [NAND] Rename cafe.c to cafe_nand.c and remove the multi-obj magic
[MTD] [NAND] Use rslib for CAFÉ ECC
[RSLIB] Support non-canonical GF representations
[JFFS2] Remove dead file histo_mips.h
...
I have never seen a use of SLAB_DEBUG_INITIAL. It is only supported by
SLAB.
I think its purpose was to have a callback after an object has been freed,
to verify that the state is the constructor state again? The callback is
performed before each freeing of an object.
I would think that it is much easier to check the object state manually
before the free. That also places the check near the code that manipulates
the object.
Also, the SLAB_DEBUG_INITIAL callback is only performed if the kernel was
compiled with SLAB debugging on. If there were code in a constructor
handling SLAB_DEBUG_INITIAL, it would have to be conditional on
SLAB_DEBUG; otherwise it would just be dead code. But there is no such code
in the kernel. I think SLAB_DEBUG_INITIAL is too problematic to make real
use of, difficult to understand, and there are easier ways to accomplish the
same effect (i.e. add debug code before kfree).
There is a related flag SLAB_CTOR_VERIFY that is frequently checked to be
clear in fs inode caches. Remove the pointless checks (they would even be
pointless without the removal of SLAB_DEBUG_INITIAL) from the fs constructors.
This is the last slab flag that SLUB did not support. Remove the check for
unimplemented flags from SLUB.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We attempted to insert new nodes into the tree by just using
rb_replace_node to let them replace an earlier node which they
completely overlapped. However, that could place the new node into the
wrong place in the tree, since its start could be not only before the
start of the victim, but also before the start of the node _before_ the victim
in the tree (if that previous node actually ends _after_ the new node, and thus
isn't entirely overlapped and wasn't itself chosen to be the victim).
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
The original code would remember, during the first pass over the tree,
a suitable place to start the insertion from when we eventually come
to add a new node.
The optimisation was broken, and we sometimes ended up inserting a new
node in the wrong place because we started the insertion from the wrong
point.
Just ditch the optimisation and start the insertion from the root of the
tree, for now. I'll try it again when I'm feeling cleverer.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
This fixes a problem Artem found with the integck test tool -- we
weren't correctly keeping track of the 'overlap' flag in some cases,
which led to the nodes being played back in an incorrect order and file
corruption.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
After flushing the last page of an eraseblock, don't leave the
wbuf 'offset' field pointing at the start of the next physical
eraseblock. This was causing a BUG() on NOR-ECC (Sibley) flash, where
we start writing a little further in, after the cleanmarker.
Debugged by Alexander Belyakov <abelyako@googlemail.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
This patch makes JFFS2 able to work with UBI volumes via the emulated MTD
devices which are directly mapped to these volumes.
Signed-off-by: Artem Bityutskiy <dedekind@infradead.org>
It seems to be silly season lately.
(Oops, test builds are more useful if the file in question is actually
configured on. dwmw2).
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
This should never happen unless there's corruption on the medium and the
actual data nodes go missing. But the failure mode (an oops when we assume
the fragtree isn't empty and go looking for its last node) isn't useful.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
In particular, remove the bit in the LICENCE file about contacting
Red Hat for alternative arrangements. Their errant IS department broke
that arrangement a long time ago -- the policy of collecting copyright
assignments from contributors came to an end when the plug was pulled on
the servers hosting the project, without notice or reason.
We do still dual-license it for use with eCos, with the GPL+exception
licence approved by the FSF as being GPL-compatible. It's just that nobody
has the right to license it differently.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
No need to check for all-zero header since the header cannot
be zero due to other checks.
Replace the all-zero header check in readinode.c with a
check for the magic word.
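I.e., something like this (a sketch; variable names illustrative):

    if (je16_to_cpu(rd->magic) != JFFS2_MAGIC_BITMASK) {
            JFFS2_NOTICE("wrong magic bitmask 0x%04x at 0x%08x\n",
                         je16_to_cpu(rd->magic), ref_offset(ref));
            /* handle it the same way as a header CRC failure */
    }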
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We originally used to read every node and allocate a jffs2_tmp_dnode_info
structure for each, before processing them in (reverse) version order
and discarding the ones which are obsoleted by later nodes.
With huge logfiles, this behaviour caused memory problems. For example, a
file involved in OLPC trac #1292 has 1822391 nodes, and would cause the XO
machine to run out of memory during the first stage of read_inode().
Instead of just inserting nodes into a tree in version order as we find
them, we now put them into a tree in order of their offset within the
file, which allows us to immediately discard nodes which are completely
obsoleted.
We don't use a full tree with 'fragments' pointing to the real data
structure, as we do in the normal fragtree. We sort only on the start
address, and add an 'overlapped' flag to the tmp_dnode_info to indicate
that the node in question is (partially) overlapped by another.
When the scan is complete, we start at the end of the file, adding each
node to a real fragtree as before. Where the node is non-overlapped, we
just add it (it doesn't matter that it's not the latest version; there is
no overlap). When the node at the end of the tree _is_ overlapped, we sort
it and all its overlapping nodes into version order and then add them to
the fragtree in that order.
This 'early discard' reduces the peak allocation of tmp_dnode_info
structures from 1.8M to a mere 62872 (3.5%) in the degenerate case
referenced above.
This version of the patch also correctly remembers the highest node
version number seen for an inode when it's scanned.
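As a rough sketch (field names approximate and abridged; see the real
structure in the JFFS2 headers), the per-node bookkeeping becomes:

    struct jffs2_tmp_dnode_info {
            struct rb_node rb;      /* tree ordered by file offset, not version */
            struct jffs2_full_dnode *fn;
            uint32_t version;
            int overlapped;         /* (partially) covered by another node */
            /* ... */
    };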
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We should never find that the unchecked size is non-zero after we've finished
checking all inodes. If it happens, we used to BUG(), leaving the alloc_sem
held and deadlocking. Instead, just return -ENOSPC after complaining. The
GC thread will die, but read-only operation should be able to continue and
it should still be possible to unmount the file system.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
When compiling a LE-capable JFFS2 on PowerPC, wbuf.c fails to compile:
fs/jffs2/wbuf.c:973: error: braced-group within expression allowed only inside a function
fs/jffs2/wbuf.c:973: error: initializer element is not constant
fs/jffs2/wbuf.c:973: error: (near initialization for ‘oob_cleanmarker.magic’)
fs/jffs2/wbuf.c:974: error: braced-group within expression allowed only inside a function
fs/jffs2/wbuf.c:974: error: initializer element is not constant
fs/jffs2/wbuf.c:974: error: (near initialization for ‘oob_cleanmarker.nodetype’)
fs/jffs2/wbuf.c:975: error: braced-group within expression allowed only inside a function
fs/jffs2/wbuf.c:976: error: initializer element is not constant
fs/jffs2/wbuf.c:976: error: (near initialization for ‘oob_cleanmarker.totlen’)
Provide constant_cpu_to_je{16,32} functions, and use them for initialising the
offending structure.
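This allows static initialisers along the lines of (sketch; the real macros
live in jffs2.h and depend on the configured JFFS2 endianness):

    static const struct jffs2_unknown_node oob_cleanmarker = {
            .magic    = constant_cpu_to_je16(JFFS2_MAGIC_BITMASK),
            .nodetype = constant_cpu_to_je16(JFFS2_NODETYPE_CLEANMARKER),
            .totlen   = constant_cpu_to_je32(8)
    };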
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Remove excessive scanning of empty flash after a clean
marker for users of the point/unpoint method. cfi_cmdset_0001
uses point/unpoint by default iff the flash mapping is linear.
The speedup is several orders of magnitude if the FS is less than
half full.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
In read_inode() we have an optimization which prevents one
min. I/O unit (e.g. NAND page) from being read more than once.
Namely, at the beginning we do not know which node type we are reading,
so we assume we are reading a directory entry, because it
has the smallest node header. When we read it, we read up to the
next min. I/O unit boundary, just because if later we need to read more,
we already have this data.
If it turns out that the node is not a directory entry, and
we need more data which we did not read because it sits in the
next min. I/O unit, we read the whole next (or several next)
min. I/O unit(s). And if it happens that we read a data node
and we already have part of its data, we calculate a partial CRC.
So if later we need to check the data CRC, we'll only read the rest
of the data from further min. I/O units and continue the CRC check.
This code was a bit messy and buggy. The bug was that it assumed a
relatively large min. I/O unit, so that the largest node header
could overlap only one min. I/O unit boundary.
This patch cleans up the code a bit and fixes this bug.
The patch was not tested on flash with a small min. I/O unit, like
NOR-ECC, but it was tested on NAND with 512-byte pages, so
it at least does not break NAND. It was also tested with mtdram,
so it should not break NOR.
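The partial-CRC continuation itself is just the standard crc32() chaining
(illustrative variable names):

    /* CRC whatever part of the data was read along with the header */
    tn->partial_crc = crc32(0, buf, bytes_available);

    /* later, if the data CRC needs checking: read the rest and
       continue from the saved value */
    crc = crc32(tn->partial_crc, rest_buf, csize - bytes_available);
    if (crc != je32_to_cpu(rd->data_crc)) {
            /* bad data CRC: treat the node as obsolete */
    }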
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
After a write error, any data in the write buffer must
be relocated. This is handled by the jffs2_wbuf_recover
function. This function does not fix up the erase block
summary information that is collected for writing at the
end of the block, which results in an incorrect summary
(or BUG if the summary was found to be empty).
As the summary is not essential (it is an optimisation),
it may be disabled for the current erase block when this
situation arises. This patch does that.
Signed-off-by: Adrian Hunter <ext-adrian.hunter@nokia.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
If a write error occurs, the affected block is placed on the
bad_used_list. In the case that the write error occurred
when writing summary data, the block was also being placed on
the dirty_list, which caused list corruption and ultimately
a soft lockup in jffs2_mark_node_obsolete. This fixes that.
Signed-off-by: Adrian Hunter <ext-adrian.hunter@nokia.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
When the MTD driver returns write failure, the following deadlock
occurs:
We are in __jffs2_flush_wbuf(), we hold &c->wbuf_sem. Write failure.
jffs2_wbuf_recover()->jffs2_reserve_space_gc()->jffs2_do_reserve_space()
->jffs2_erase_pending_blocks()->jffs2_flash_read()
and it tries to lock &c->wbuf_sem again. Deadlock.
Reported-by: Adrian Hunter <ext-adrian.hunter@nokia.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Check the node CRC on scan before doing anything else with the node.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Delete everything related to the apparently non-existent kernel config
option JFFS2_PROC.
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Delete the obsolete source file fs/jffs2/comprtest.c.
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
A new bad eraseblock is an event important enough to be reported in
the log.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Due to a poor choice of CRC32 seed, a node header which is all zeroes
would pass the CRC32 check. Explicitly check for this case, and treat it
as we do a CRC failure.
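(With a seed of 0, crc32() over an all-zero buffer is also 0, which matches
the zero hdr_crc field.) The explicit check is roughly (a sketch):

    if (!je16_to_cpu(node->magic) && !je16_to_cpu(node->nodetype) &&
        !je32_to_cpu(node->totlen) && !je32_to_cpu(node->hdr_crc)) {
            /* all-zero header: treat it exactly like a CRC failure */
    }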
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
The garbage collection thread is strictly an optimisation. Everything it
does would also be done just-in-time in the context of something in
userspace trying to access the file system.
Sometimes, however, it's a pessimisation. Especially during early boot
when it's checksumming nodes and scanning inodes which are shortly going
to be pulled in by read_inode anyway. We end up building the rbtree of
node coverage twice for the same inode.
By switching to yield() instead of cond_resched() in the main loop, we
observe boot times on the OLPC system going down from about 100 seconds to
60.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
For the case when nand_write_page fails with -EIO for the first page in an
eraseblock, jffs2_wbuf_recover ends up producing a BUG in jffs2_block_refile,
as jeb->first_node is not yet set up (it's set up later in jffs2_wbuf_recover).
This BUG is not really a bug; it's just jffs2_wbuf_recover calling
jffs2_block_refile with the wrong second parameter.
This patch takes care of this situation.
Signed-off-by: Vitaly Wool <vwool@ru.mvista.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
fs/jffs2/wbuf.c: In function 'jffs2_check_oob_empty':
fs/jffs2/wbuf.c:993: warning: format '%d' expects type 'int', but argument 3 has type 'size_t'
fs/jffs2/wbuf.c:993: warning: format '%d' expects type 'int', but argument 4 has type 'size_t'
fs/jffs2/wbuf.c: In function 'jffs2_check_nand_cleanmarker':
fs/jffs2/wbuf.c:1036: warning: format '%d' expects type 'int', but argument 3 has type 'size_t'
fs/jffs2/wbuf.c:1036: warning: format '%d' expects type 'int', but argument 4 has type 'size_t'
fs/jffs2/wbuf.c: In function 'jffs2_write_nand_cleanmarker':
fs/jffs2/wbuf.c:1062: warning: format '%d' expects type 'int', but argument 3 has type 'size_t'
fs/jffs2/wbuf.c:1062: warning: format '%d' expects type 'int', but argument 4 has type 'size_t'
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
After Al Viro (finally) succeeded in removing the sched.h #include in module.h
recently, it makes sense again to remove other superfluous sched.h includes.
There are quite a lot of files which include it but don't actually need
anything defined in there. Presumably these includes were once needed for
macros that used to live in sched.h, but moved to other header files in the
course of cleaning it up.
To ease the pain, this time I did not fiddle with any header files and only
removed #includes from .c-files, which tend to cause less trouble.
Compile tested against 2.6.20-rc2 and 2.6.20-rc2-mm2 (with offsets) on alpha,
arm, i386, ia64, mips, powerpc, and x86_64 with allnoconfig, defconfig,
allmodconfig, and allyesconfig as well as a few randconfigs on x86_64 and all
configs in arch/arm/configs on arm. I also checked that no new warnings were
introduced by the patch (actually, some warnings are removed that were emitted
by unnecessarily included header files).
Signed-off-by: Tim Schmielau <tim@physik3.uni-rostock.de>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch is inspired by Arjan's "Patch series to mark struct
file_operations and struct inode_operations const".
Compile tested with gcc & sparse.
Signed-off-by: Josef 'Jeff' Sipek <jsipek@cs.sunysb.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Many struct inode_operations in the kernel can be "const". Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data. In addition it'll catch accidental writes at compile time to
these shared resources.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Nowadays MTD supports an MTD_OOB_AUTO option which allows users
to access the free bytes in NAND's OOB as a contiguous buffer, even
though those bytes may actually be highly discontiguous.
This patch teaches JFFS2 to use this nice feature instead of the
old MTD_OOB_PLACE option, which for example caused problems with
OneNAND. Now JFFS2 does not care how the free bytes are situated.
This may change position of the clean marker on some flashes,
but this is not a problem. JFFS2 will just re-erase the empty
eraseblocks and write the new (correct) clean marker.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
If jffs2_sum_init() fails, c->blocks is freed neither in
jffs2_do_mount_fs() nor in jffs2_do_fill_super().
Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Don't use ref->flash_offset directly in debugging code; use the ref_offset macro instead.
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Artem Bityutskiy <dedekind@infradead.org>
We observe soft lockups when doing a heavy test which creates a
directory with a lot of direntries and then deletes them. This
cycle is the reason for this. Make it nicer and add cond_resched()
inside it.
Signed-off-by: Artem Bityutskiy <dedekind@infradead.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Replace kmalloc+memset with kzalloc.
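I.e. the usual transformation (illustrative):

    /* before */
    ptr = kmalloc(size, GFP_KERNEL);
    if (ptr)
            memset(ptr, 0, size);

    /* after */
    ptr = kzalloc(size, GFP_KERNEL);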
Signed-off-by: Yan Burman <burman.yan@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Move process freezing functions from include/linux/sched.h to freezer.h, so
that modifications to the freezer or the kernel configuration don't require
recompiling just about everything.
[akpm@osdl.org: fix ueagle driver]
Signed-off-by: Nigel Cunningham <nigel@suspend2.net>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Replace all uses of kmem_cache_t with struct kmem_cache.
The patch was generated using the following script:
#!/bin/sh
#
# Replace one string by another in all the kernel sources.
#
set -e
for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
quilt add $file
sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
mv /tmp/$$ $file
quilt refresh
done
The script was run like this
sh replace kmem_cache_t "struct kmem_cache"
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
SLAB_KERNEL is an alias of GFP_KERNEL.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
get_mtd_device() returns NULL in case of any failure. Teach it to return an
error code instead. Fix all users as well.
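Callers change roughly like this (a sketch, not taken from any specific
driver):

    mtd = get_mtd_device(NULL, mtd_nr);
    if (IS_ERR(mtd))
            return PTR_ERR(mtd);    /* previously: if (!mtd) return -ENODEV; */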
Signed-off-by: Artem Bityutskiy <dedekind@infradead.org>
As was discussed between Ricard Wanderlöf, David Woodhouse, Artem
Bityutskiy and me, the current API for reading/writing OOB is confusing.
The thing that introduces confusion is the need to specify ops.len
together with ops.ooblen for reads/writes that concern only OOB not data
area. So, ops.len is overloaded: when ops.datbuf != NULL it serves to
specify the length of the data read, and when ops.datbuf == NULL, it
serves to specify the full OOB read length.
The patch inlined below is a slightly updated version of the previous
patch serving the same purpose, but with Artem's new comments taken
into account.
Artem, BTW, thanks a lot for your valuable input!
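For an OOB-only read, the ops structure now looks roughly like this (a
sketch; field values illustrative):

    struct mtd_oob_ops ops = {
            .mode   = MTD_OOB_AUTO,
            .ooblen = mtd->oobavail,        /* OOB-only: the length goes here */
            .oobbuf = oobbuf,
            .datbuf = NULL,                 /* no data area involved */
    };

    ret = mtd->read_oob(mtd, offset, &ops);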
Signed-off-by: Vitaly Wool <vwool@ru.mvista.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>