[PATCH] knfsd: stop NFSD writes from being broken into lots of little writes to filesystem
When NFSD receives a write request, the data is typically in a number of
1448 byte segments and writev is used to collect them together.
Unfortunately, generic_file_buffered_write passes these to the filesystem
one at a time, so e.g. a 32K over-write becomes a series of partial-page
writes to each page, forcing the filesystem to pre-read those pages
- wasted effort.
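To put numbers on that (illustrative arithmetic only, not part of the
patch): 32K arriving in 1448-byte segments is ceil(32768/1448) = 23
segments, and since 1448 is well under the 4096-byte page size, no single
copy covers a whole page, so each page of the over-write is assembled from
three or four partial copies:

/* Illustrative arithmetic only -- not from the patch. */
#include <stdio.h>

int main(void)
{
	unsigned int wsize = 32768;	/* size of the NFSD over-write */
	unsigned int seg   = 1448;	/* payload bytes per segment */
	unsigned int page  = 4096;	/* page size */

	/* 23 segments, each far smaller than a page */
	printf("segments: %u\n", (wsize + seg - 1) / seg);

	/* ~3 partial copies per page, each of which can force the
	 * filesystem into a read-modify-write of that page */
	printf("copies per page: ~%u\n", (page + seg - 1) / seg);
	return 0;
}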
generic_file_buffered_write handles one segment of the vector at a time
because it has to pre-fault in each segment to avoid deadlocks. When
writing from kernel-space (as nfsd does) this is not an issue, so
generic_file_buffered_write does not need to break an iovec from nfsd into
little pieces.
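The deadlock being avoided is the classic one where the source buffer is a
mapping of the very page being written. A minimal user-space illustration
(a sketch under assumptions, not from the patch: it presumes an existing
file "f" of at least one page and omits error handling):

/* Sketch: the copy source and the destination page-cache page are the
 * same page, which is why user-space writes must prefault the source
 * before the destination page is locked. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("f", O_RDWR);
	char *p = mmap(0, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	pwrite(fd, p, 4096, 0);	/* source == page being written */
	return 0;
}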
This patch avoids the splitting when get_fs() is KERNEL_DS, as it is when
the write comes from nfsd.
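For context, nfsd issues its writes under a kernel address limit via the
classic set_fs() idiom, which is what makes the get_fs() test a reliable
marker for kernel-space writes. A sketch of that idiom (abbreviated, with
a hypothetical wrapper name, not the literal fs/nfsd/vfs.c source):

#include <linux/fs.h>
#include <linux/uio.h>
#include <asm/uaccess.h>

/* nfsd_style_writev is a hypothetical name for illustration. */
static ssize_t nfsd_style_writev(struct file *file, struct kvec *vec,
				 unsigned long vlen, loff_t offset)
{
	mm_segment_t oldfs = get_fs();
	ssize_t err;

	set_fs(KERNEL_DS);	/* iovec segments point at kernel memory */
	err = vfs_writev(file, (struct iovec __user *)vec, vlen, &offset);
	set_fs(oldfs);		/* while KERNEL_DS was in force, get_fs()
				 * equalled KERNEL_DS -- the condition
				 * generic_file_buffered_write() now tests */
	return err;
}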
This issue was introduced by commit 6527c2bdf1
Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Norman Weathers <norman.r.weathers@conocophillips.com>
Cc: Vladimir V. Saveliev <vs@namesys.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 29dbb3fc80
parent 3160a711ef
 mm/filemap.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2079,21 +2079,27 @@ generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
 		/* Limit the size of the copy to the caller's write size */
 		bytes = min(bytes, count);
 
-		/*
-		 * Limit the size of the copy to that of the current segment,
-		 * because fault_in_pages_readable() doesn't know how to walk
-		 * segments.
+		/* We only need to worry about prefaulting when writes are from
+		 * user-space.  NFSd uses vfs_writev with several non-aligned
+		 * segments in the vector, and limiting to one segment a time is
+		 * a noticeable performance for re-write
 		 */
-		bytes = min(bytes, cur_iov->iov_len - iov_base);
+		if (!segment_eq(get_fs(), KERNEL_DS)) {
+			/*
+			 * Limit the size of the copy to that of the current
+			 * segment, because fault_in_pages_readable() doesn't
+			 * know how to walk segments.
+			 */
+			bytes = min(bytes, cur_iov->iov_len - iov_base);
 
-		/*
-		 * Bring in the user page that we will copy from _first_.
-		 * Otherwise there's a nasty deadlock on copying from the
-		 * same page as we're writing to, without it being marked
-		 * up-to-date.
-		 */
-		fault_in_pages_readable(buf, bytes);
+			/*
+			 * Bring in the user page that we will copy from
+			 * _first_.  Otherwise there's a nasty deadlock on
+			 * copying from the same page as we're writing to,
+			 * without it being marked up-to-date.
+			 */
+			fault_in_pages_readable(buf, bytes);
+		}
 		page = __grab_cache_page(mapping,index,&cached_page,&lru_pvec);
 		if (!page) {
 			status = -ENOMEM;