/*
 * Copyright (c) 2000-2006 Silicon Graphics, Inc.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it would be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write the Free Software Foundation,
 * Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 */
#include "xfs.h"
#include <linux/stddef.h>
#include <linux/errno.h>
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h, which
in turn includes gfp.h, making everything defined by the two files
universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability.  As this conversion
needs to touch a large number of source files, the following script was
used as the basis of the conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there, i.e. if only gfp is used,
  gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
  blocks and tries to place the new include so that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition, while adding it to an implementation .h
   or embedding .c file was more appropriate for others.  This step
   added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed,
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically
   editing them, as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored, as stuff from gfp.h was usually
   widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as a bisection point.

Given that I had only a couple of failures from the tests in step 7,
I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of the
specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-24 16:04:11 +08:00

#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/init.h>
#include <linux/vmalloc.h>
#include <linux/bio.h>
#include <linux/sysctl.h>
#include <linux/proc_fs.h>
#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/blkdev.h>
#include <linux/hash.h>
#include <linux/kthread.h>
#include <linux/migrate.h>
#include <linux/backing-dev.h>
#include <linux/freezer.h>

#include "xfs_sb.h"
xfs: Improve scalability of busy extent tracking

When we free a metadata extent, we record it in the per-AG busy
extent array so that it is not re-used before the freeing
transaction hits the disk.  This array is fixed size, so when it
overflows we make further allocation transactions synchronous
because we cannot track more freed extents until those transactions
hit the disk and are completed.  Under heavy mixed allocation and
freeing workloads with large log buffers, we can overflow this array
quite easily.

Further, the array is sparsely populated, which means that inserts
need to search for a free slot, and array searches often have to
search many more slots than are actually used to check all the
busy extents.  Quite inefficient, really.

To enable this aspect of extent freeing to scale better, we need
a structure that can grow dynamically.  While in other areas of
XFS we have used radix trees, the extents being freed are at random
locations on disk, so are better suited to being indexed by an rbtree.

So, use a per-AG rbtree indexed by block number to track busy
extents.  This incurs a memory allocation when marking an extent
busy, but should not occur too often in low memory situations.  This
should scale to an arbitrary number of extents, so should not be a
limitation for features such as in-memory aggregation of
transactions.

However, there are still situations where we can't avoid allocating
busy extents (such as allocation from the AGFL).  To minimise the
overhead of such occurrences, we need to avoid doing a synchronous
log force while holding the AGF locked to ensure that the previous
transactions are safely on disk before we use the extent.  We can do
this by marking the transaction doing the allocation as synchronous
rather than issuing a log force.

Because of the locking involved and the ordering of transactions,
the synchronous transaction provides the same guarantees as a
synchronous log force because it ensures that all the prior
transactions are already on disk when the synchronous transaction
hits the disk.  i.e. it preserves the free->allocate order of the
extent correctly in recovery.

By doing this, we avoid holding the AGF locked while log writes are
in progress, hence reducing the length of time the lock is held and
therefore we increase the rate at which we can allocate and free
from the allocation group, thereby increasing overall throughput.

The only problem with this approach is that when a metadata buffer is
marked stale (e.g. a directory block is removed), then the buffer remains
pinned and locked until the log goes to disk.  The issue here is that
if that stale buffer is reallocated in a subsequent transaction, the
attempt to lock that buffer in the transaction will hang waiting for
the log to go to disk to unlock and unpin the buffer.  Hence if
someone tries to lock a pinned, stale, locked buffer we need to
push on the log to get it unlocked ASAP.  Effectively we are trading
off a guaranteed log force for a much less common trigger for log
force to occur.

Ideally we should not reallocate busy extents.  That is a much more
complex fix to the problem, as it involves direct intervention in the
allocation btree searches in many places.  This is left to a future
set of modifications.

Finally, now that we track busy extents in allocated memory, we
don't need the descriptors in the transaction structure to point to
them.  We can replace the complex busy chunk infrastructure with a
simple linked list of busy extents.  This allows us to remove a large
chunk of code, making the overall change a net reduction in code
size.

Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
2010-05-21 10:07:08 +08:00

#include "xfs_log.h"
#include "xfs_ag.h"
#include "xfs_mount.h"
#include "xfs_trace.h"

static kmem_zone_t *xfs_buf_zone;

static struct workqueue_struct *xfslogd_workqueue;

#ifdef XFS_BUF_LOCK_TRACKING
# define XB_SET_OWNER(bp)	((bp)->b_last_holder = current->pid)
# define XB_CLEAR_OWNER(bp)	((bp)->b_last_holder = -1)
# define XB_GET_OWNER(bp)	((bp)->b_last_holder)
#else
# define XB_SET_OWNER(bp)	do { } while (0)
# define XB_CLEAR_OWNER(bp)	do { } while (0)
# define XB_GET_OWNER(bp)	do { } while (0)
#endif

#define xb_to_gfp(flags) \
	((((flags) & XBF_READ_AHEAD) ? __GFP_NORETRY : GFP_NOFS) | __GFP_NOWARN)

static inline int
xfs_buf_is_vmapped(
	struct xfs_buf	*bp)
{
	/*
	 * Return true if the buffer is vmapped.
	 *
	 * b_addr is null if the buffer is not mapped, but the code is clever
	 * enough to know it doesn't have to map a single page, so the check
	 * has to be both for b_addr and bp->b_page_count > 1.
	 */
	return bp->b_addr && bp->b_page_count > 1;
}

static inline int
xfs_buf_vmap_len(
	struct xfs_buf	*bp)
{
	return (bp->b_page_count * PAGE_SIZE) - bp->b_offset;
}

/*
 * xfs_buf_lru_add - add a buffer to the LRU.
 *
 * The LRU takes a new reference to the buffer so that it will only be freed
 * once the shrinker takes the buffer off the LRU.
 */
STATIC void
xfs_buf_lru_add(
	struct xfs_buf	*bp)
{
	struct xfs_buftarg *btp = bp->b_target;

	spin_lock(&btp->bt_lru_lock);
	if (list_empty(&bp->b_lru)) {
		atomic_inc(&bp->b_hold);
		list_add_tail(&bp->b_lru, &btp->bt_lru);
		btp->bt_lru_nr++;
		bp->b_lru_flags &= ~_XBF_LRU_DISPOSE;
	}
	spin_unlock(&btp->bt_lru_lock);
}

/*
 * xfs_buf_lru_del - remove a buffer from the LRU
 *
 * The unlocked check is safe here because it only occurs when there are not
 * b_lru_ref counts left on the inode under the pag->pag_buf_lock. It is there
 * to optimise the shrinker removing the buffer from the LRU and calling
 * xfs_buf_free(), i.e. it removes an unnecessary round trip on the
 * bt_lru_lock.
 */
STATIC void
xfs_buf_lru_del(
	struct xfs_buf	*bp)
{
	struct xfs_buftarg *btp = bp->b_target;

	if (list_empty(&bp->b_lru))
		return;

	spin_lock(&btp->bt_lru_lock);
	if (!list_empty(&bp->b_lru)) {
		list_del_init(&bp->b_lru);
		btp->bt_lru_nr--;
	}
	spin_unlock(&btp->bt_lru_lock);
}

/*
 * When we mark a buffer stale, we remove the buffer from the LRU and clear the
 * b_lru_ref count so that the buffer is freed immediately when the buffer
 * reference count falls to zero. If the buffer is already on the LRU, we need
 * to remove the reference that the LRU holds on the buffer.
 *
 * This prevents build-up of stale buffers on the LRU.
 */
void
xfs_buf_stale(
	struct xfs_buf	*bp)
{
xfs: on-stack delayed write buffer lists

Queue delwri buffers on a local on-stack list instead of a per-buftarg one,
and write back the buffers per-process instead of by waking up xfsbufd.

This is now easily doable given that we have very few places left that write
delwri buffers:

 - log recovery:
	Only done at mount time, and already forcing out the buffers
	synchronously using xfs_flush_buftarg

 - quotacheck:
	Same story.

 - dquot reclaim:
	Writes out dirty dquots on the LRU under memory pressure.  We might
	want to look into doing more of this via xfsaild, but it's already
	more optimal than the synchronous inode reclaim that writes each
	buffer synchronously.

 - xfsaild:
	This is the main beneficiary of the change.  By keeping a local list
	of buffers to write we reduce latency of writing out buffers, and
	more importantly we can remove all the delwri list promotions which
	were hitting the buffer cache hard under sustained metadata loads.

The implementation is straightforward - xfs_buf_delwri_queue now gets
a new list_head pointer that it adds the delwri buffers to, and all callers
need to eventually submit the list using xfs_buf_delwri_submit or
xfs_buf_delwri_submit_nowait.  Buffers that already are on a delwri list are
skipped in xfs_buf_delwri_queue, assuming they already are on another delwri
list.  The biggest change to pass down the buffer list was done to the AIL
pushing.  Now that we operate on buffers, the trylock, push and pushbuf log
item methods are merged into a single push routine, which tries to lock the
item and, if possible, adds the buffer that needs writeback to the buffer
list.  This leads to much simpler code than the previous split, but requires
the individual IOP_PUSH instances to unlock and reacquire the AIL around
calls to blocking routines.

Given that xfsailds now also handle writing out buffers, the conditions for
log forcing and the sleep times needed some small changes.  The most
important one is that we consider an AIL busy as long as we still have
buffers to push, and the other one is that we do increment the pushed LSN
for buffers that are under flushing at this moment, but still count them
towards the stuck items for restart purposes.  Without this we could hammer
on stuck items without ever forcing the log and not make progress under
heavy random delete workloads on fast flash storage devices.

[ Dave Chinner:
	- rebase on previous patches.
	- improved comments for XBF_DELWRI_Q handling
	- fix XBF_ASYNC handling in queue submission (test 106 failure)
	- rename delwri submit function buffer list parameters for clarity
	- xfs_efd_item_push() should return XFS_ITEM_PINNED ]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
2012-04-23 13:58:39 +08:00

	ASSERT(xfs_buf_islocked(bp));

	bp->b_flags |= XBF_STALE;
	/*
	 * Clear the delwri status so that a delwri queue walker will not
	 * flush this buffer to disk now that it is stale. The delwri queue has
	 * a reference to the buffer, so this is safe to do.
	 */
	bp->b_flags &= ~_XBF_DELWRI_Q;

	atomic_set(&bp->b_lru_ref, 0);
	if (!list_empty(&bp->b_lru)) {
		struct xfs_buftarg *btp = bp->b_target;

		spin_lock(&btp->bt_lru_lock);
		if (!list_empty(&bp->b_lru) &&
		    !(bp->b_lru_flags & _XBF_LRU_DISPOSE)) {
			list_del_init(&bp->b_lru);
			btp->bt_lru_nr--;
			atomic_dec(&bp->b_hold);
		}
		spin_unlock(&btp->bt_lru_lock);
	}
	ASSERT(atomic_read(&bp->b_hold) >= 1);
}

static int
xfs_buf_get_maps(
	struct xfs_buf		*bp,
	int			map_count)
{
	ASSERT(bp->b_maps == NULL);
	bp->b_map_count = map_count;

	if (map_count == 1) {
		bp->b_maps = &bp->b_map;
		return 0;
	}

	bp->b_maps = kmem_zalloc(map_count * sizeof(struct xfs_buf_map),
				KM_NOFS);
	if (!bp->b_maps)
		return ENOMEM;
	return 0;
}

/*
 * Frees b_maps if it was allocated.
 */
static void
xfs_buf_free_maps(
	struct xfs_buf	*bp)
{
	if (bp->b_maps != &bp->b_map) {
		kmem_free(bp->b_maps);
		bp->b_maps = NULL;
	}
}

struct xfs_buf *
_xfs_buf_alloc(
	struct xfs_buftarg	*target,
	struct xfs_buf_map	*map,
	int			nmaps,
	xfs_buf_flags_t		flags)
{
	struct xfs_buf		*bp;
	int			error;
	int			i;

	bp = kmem_zone_zalloc(xfs_buf_zone, KM_NOFS);
	if (unlikely(!bp))
		return NULL;

	/*
	 * We don't want certain flags to appear in b_flags unless they are
	 * specifically set by later operations on the buffer.
	 */
	flags &= ~(XBF_UNMAPPED | XBF_TRYLOCK | XBF_ASYNC | XBF_READ_AHEAD);

	atomic_set(&bp->b_hold, 1);
	atomic_set(&bp->b_lru_ref, 1);
	init_completion(&bp->b_iowait);
	INIT_LIST_HEAD(&bp->b_lru);
	INIT_LIST_HEAD(&bp->b_list);
	RB_CLEAR_NODE(&bp->b_rbnode);
	sema_init(&bp->b_sema, 0); /* held, no waiters */
	XB_SET_OWNER(bp);
	bp->b_target = target;
	bp->b_flags = flags;

	/*
	 * Set length and io_length to the same value initially.
	 * I/O routines should use io_length, which will be the same in
	 * most cases but may be reset (e.g. XFS recovery).
	 */
	error = xfs_buf_get_maps(bp, nmaps);
	if (error) {
		kmem_zone_free(xfs_buf_zone, bp);
		return NULL;
	}

	bp->b_bn = map[0].bm_bn;
	bp->b_length = 0;
	for (i = 0; i < nmaps; i++) {
		bp->b_maps[i].bm_bn = map[i].bm_bn;
		bp->b_maps[i].bm_len = map[i].bm_len;
		bp->b_length += map[i].bm_len;
	}
	bp->b_io_length = bp->b_length;

	atomic_set(&bp->b_pin_count, 0);
	init_waitqueue_head(&bp->b_waiters);

	XFS_STATS_INC(xb_create);
	trace_xfs_buf_init(bp, _RET_IP_);

	return bp;
}

/*
 * Allocate a page array capable of holding a specified number
 * of pages, and point the page buf at it.
 */
STATIC int
_xfs_buf_get_pages(
	xfs_buf_t		*bp,
	int			page_count,
	xfs_buf_flags_t		flags)
{
	/* Make sure that we have a page list */
	if (bp->b_pages == NULL) {
		bp->b_page_count = page_count;
		if (page_count <= XB_PAGES) {
			bp->b_pages = bp->b_page_array;
		} else {
			bp->b_pages = kmem_alloc(sizeof(struct page *) *
						 page_count, KM_NOFS);
			if (bp->b_pages == NULL)
				return -ENOMEM;
		}
		memset(bp->b_pages, 0, sizeof(struct page *) * page_count);
	}
	return 0;
}

/*
 * Frees b_pages if it was allocated.
 */
STATIC void
_xfs_buf_free_pages(
	xfs_buf_t	*bp)
{
	if (bp->b_pages != bp->b_page_array) {
		kmem_free(bp->b_pages);
		bp->b_pages = NULL;
	}
}

/*
 * Releases the specified buffer.
 *
 * The modification state of any associated pages is left unchanged.
 * The buffer must not be on any hash - use xfs_buf_rele instead for
 * hashed and refcounted buffers.
 */
void
xfs_buf_free(
	xfs_buf_t	*bp)
{
	trace_xfs_buf_free(bp, _RET_IP_);

	ASSERT(list_empty(&bp->b_lru));

	if (bp->b_flags & _XBF_PAGES) {
		uint		i;

		if (xfs_buf_is_vmapped(bp))
			vm_unmap_ram(bp->b_addr - bp->b_offset,
					bp->b_page_count);

		for (i = 0; i < bp->b_page_count; i++) {
			struct page	*page = bp->b_pages[i];

			__free_page(page);
		}
	} else if (bp->b_flags & _XBF_KMEM)
		kmem_free(bp->b_addr);
	_xfs_buf_free_pages(bp);
	xfs_buf_free_maps(bp);
	kmem_zone_free(xfs_buf_zone, bp);
}

/*
 * Allocates all the pages for the buffer in question and builds its page
 * list.
 */
STATIC int
xfs_buf_allocate_memory(
	xfs_buf_t		*bp,
	uint			flags)
{
	size_t			size;
	size_t			nbytes, offset;
	gfp_t			gfp_mask = xb_to_gfp(flags);
	unsigned short		page_count, i;
	xfs_off_t		start, end;
	int			error;

	/*
	 * for buffers that are contained within a single page, just allocate
	 * the memory from the heap - there's no need for the complexity of
	 * page arrays to keep allocation down to order 0.
	 */
	size = BBTOB(bp->b_length);
	if (size < PAGE_SIZE) {
		bp->b_addr = kmem_alloc(size, KM_NOFS);
		if (!bp->b_addr) {
			/* low memory - use alloc_page loop instead */
			goto use_alloc_page;
		}

		if (((unsigned long)(bp->b_addr + size - 1) & PAGE_MASK) !=
		    ((unsigned long)bp->b_addr & PAGE_MASK)) {
			/* b_addr spans two pages - use alloc_page instead */
			kmem_free(bp->b_addr);
			bp->b_addr = NULL;
			goto use_alloc_page;
		}
		bp->b_offset = offset_in_page(bp->b_addr);
		bp->b_pages = bp->b_page_array;
		bp->b_pages[0] = virt_to_page(bp->b_addr);
		bp->b_page_count = 1;
		bp->b_flags |= _XBF_KMEM;
		return 0;
	}

use_alloc_page:
	start = BBTOB(bp->b_map.bm_bn) >> PAGE_SHIFT;
	end = (BBTOB(bp->b_map.bm_bn + bp->b_length) + PAGE_SIZE - 1)
								>> PAGE_SHIFT;
	page_count = end - start;
	error = _xfs_buf_get_pages(bp, page_count, flags);
	if (unlikely(error))
		return error;

	offset = bp->b_offset;
	bp->b_flags |= _XBF_PAGES;

	for (i = 0; i < bp->b_page_count; i++) {
		struct page	*page;
		uint		retries = 0;
retry:
		page = alloc_page(gfp_mask);
		if (unlikely(page == NULL)) {
			if (flags & XBF_READ_AHEAD) {
				bp->b_page_count = i;
				error = ENOMEM;
				goto out_free_pages;
			}

			/*
			 * This could deadlock.
			 *
			 * But until all the XFS lowlevel code is revamped to
			 * handle buffer allocation failures we can't do much.
			 */
			if (!(++retries % 100))
				xfs_err(NULL,
		"possible memory allocation deadlock in %s (mode:0x%x)",
					__func__, gfp_mask);

			XFS_STATS_INC(xb_page_retries);
			congestion_wait(BLK_RW_ASYNC, HZ/50);
			goto retry;
		}

		XFS_STATS_INC(xb_page_found);

		nbytes = min_t(size_t, size, PAGE_SIZE - offset);
		size -= nbytes;
		bp->b_pages[i] = page;
		offset = 0;
	}
	return 0;

out_free_pages:
	for (i = 0; i < bp->b_page_count; i++)
		__free_page(bp->b_pages[i]);
	return error;
}

/*
 * Map buffer into kernel address-space if necessary.
 */
STATIC int
_xfs_buf_map_pages(
	xfs_buf_t		*bp,
	uint			flags)
{
	ASSERT(bp->b_flags & _XBF_PAGES);
	if (bp->b_page_count == 1) {
		/* A single page buffer is always mappable */
		bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset;
	} else if (flags & XBF_UNMAPPED) {
		bp->b_addr = NULL;
	} else {
		int retried = 0;

		do {
			bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
						-1, PAGE_KERNEL);
			if (bp->b_addr)
				break;
			vm_unmap_aliases();
		} while (retried++ <= 1);

		if (!bp->b_addr)
			return -ENOMEM;
		bp->b_addr += bp->b_offset;
	}

	return 0;
}
|
|
|
|
|
|
|
/*
 *	Finding and Reading Buffers
 */

/*
 *	Look up (and insert if absent) a lockable buffer for a given
 *	range of an inode.  The buffer is returned locked.  No I/O is
 *	implied by this call.
 */
xfs_buf_t *
_xfs_buf_find(
	struct xfs_buftarg	*btp,
	struct xfs_buf_map	*map,
	int			nmaps,
	xfs_buf_flags_t		flags,
	xfs_buf_t		*new_bp)
{
	size_t			numbytes;
	struct xfs_perag	*pag;
	struct rb_node		**rbp;
	struct rb_node		*parent;
	xfs_buf_t		*bp;
	xfs_daddr_t		blkno = map[0].bm_bn;
	int			numblks = 0;
	int			i;

	for (i = 0; i < nmaps; i++)
		numblks += map[i].bm_len;
	numbytes = BBTOB(numblks);

	/* Check for IOs smaller than the sector size / not sector aligned */
	ASSERT(!(numbytes < (1 << btp->bt_sshift)));
	ASSERT(!(BBTOB(blkno) & (xfs_off_t)btp->bt_smask));

	/* get tree root */
	pag = xfs_perag_get(btp->bt_mount,
			    xfs_daddr_to_agno(btp->bt_mount, blkno));

	/* walk tree */
	spin_lock(&pag->pag_buf_lock);
	rbp = &pag->pag_buf_tree.rb_node;
	parent = NULL;
	bp = NULL;
	while (*rbp) {
		parent = *rbp;
		bp = rb_entry(parent, struct xfs_buf, b_rbnode);

		if (blkno < bp->b_bn)
			rbp = &(*rbp)->rb_left;
		else if (blkno > bp->b_bn)
			rbp = &(*rbp)->rb_right;
		else {
			/*
			 * found a block number match. If the range doesn't
			 * match, the only way this is allowed is if the buffer
			 * in the cache is stale and the transaction that made
			 * it stale has not yet committed. i.e. we are
			 * reallocating a busy extent. Skip this buffer and
			 * continue searching to the right for an exact match.
			 */
			if (bp->b_length != numblks) {
				ASSERT(bp->b_flags & XBF_STALE);
				rbp = &(*rbp)->rb_right;
				continue;
			}
			atomic_inc(&bp->b_hold);
			goto found;
		}
	}

	/* No match found */
	if (new_bp) {
		rb_link_node(&new_bp->b_rbnode, parent, rbp);
		rb_insert_color(&new_bp->b_rbnode, &pag->pag_buf_tree);
		/* the buffer keeps the perag reference until it is freed */
		new_bp->b_pag = pag;
		spin_unlock(&pag->pag_buf_lock);
	} else {
		XFS_STATS_INC(xb_miss_locked);
		spin_unlock(&pag->pag_buf_lock);
		xfs_perag_put(pag);
	}
	return new_bp;

found:
	spin_unlock(&pag->pag_buf_lock);
	xfs_perag_put(pag);

	if (!xfs_buf_trylock(bp)) {
		if (flags & XBF_TRYLOCK) {
			xfs_buf_rele(bp);
			XFS_STATS_INC(xb_busy_locked);
			return NULL;
		}
		xfs_buf_lock(bp);
		XFS_STATS_INC(xb_get_locked_waited);
	}

	/*
	 * if the buffer is stale, clear all the external state associated with
	 * it. We need to keep flags such as how we allocated the buffer memory
	 * intact here.
	 */
	if (bp->b_flags & XBF_STALE) {
		ASSERT((bp->b_flags & _XBF_DELWRI_Q) == 0);
		ASSERT(bp->b_iodone == NULL);
		bp->b_flags &= _XBF_KMEM | _XBF_PAGES;
		bp->b_ops = NULL;
	}

	trace_xfs_buf_find(bp, flags, _RET_IP_);
	XFS_STATS_INC(xb_get_locked);
	return bp;
}

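The lookup-or-insert contract of _xfs_buf_find() — search the per-AG tree under the lock, take a hold on a hit, and on a miss link in the caller's speculatively allocated buffer — can be sketched in isolation. This is a hedged illustration with hypothetical names (`struct buf`, `cache_find`), using a sorted singly-linked list as a stand-in for the rbtree:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct xfs_buf, keyed by block number. */
struct buf {
	long		blkno;
	int		hold;		/* reference count */
	struct buf	*next;
};

/*
 * Find a buffer by block number in a sorted list.  On a hit, take a
 * hold and return the cached buffer.  On a miss, link in new_bp (if
 * the caller supplied one) and return it; otherwise return NULL.
 * Mirrors _xfs_buf_find()'s insert-the-loser's-allocation contract.
 */
static struct buf *
cache_find(struct buf **head, long blkno, struct buf *new_bp)
{
	struct buf	**pp = head;

	while (*pp && (*pp)->blkno < blkno)
		pp = &(*pp)->next;

	if (*pp && (*pp)->blkno == blkno) {
		(*pp)->hold++;		/* hit: take a reference */
		return *pp;
	}
	if (!new_bp)			/* miss with nothing to insert */
		return NULL;

	new_bp->blkno = blkno;		/* miss: link the new buffer in */
	new_bp->next = *pp;
	*pp = new_bp;
	return new_bp;
}
```

A second lookup for the same block number then returns the inserted buffer with its hold count bumped, just as the real code returns the winner of the insertion race.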
/*
 * Assembles a buffer covering the specified range. The code is optimised for
 * cache hits, as metadata intensive workloads will see 3 orders of magnitude
 * more hits than misses.
 */
struct xfs_buf *
xfs_buf_get_map(
	struct xfs_buftarg	*target,
	struct xfs_buf_map	*map,
	int			nmaps,
	xfs_buf_flags_t		flags)
{
	struct xfs_buf		*bp;
	struct xfs_buf		*new_bp;
	int			error = 0;

	bp = _xfs_buf_find(target, map, nmaps, flags, NULL);
	if (likely(bp))
		goto found;

	new_bp = _xfs_buf_alloc(target, map, nmaps, flags);
	if (unlikely(!new_bp))
		return NULL;

	error = xfs_buf_allocate_memory(new_bp, flags);
	if (error) {
		xfs_buf_free(new_bp);
		return NULL;
	}

	bp = _xfs_buf_find(target, map, nmaps, flags, new_bp);
	if (!bp) {
		xfs_buf_free(new_bp);
		return NULL;
	}

	if (bp != new_bp)
		xfs_buf_free(new_bp);

found:
	if (!bp->b_addr) {
		error = _xfs_buf_map_pages(bp, flags);
		if (unlikely(error)) {
			xfs_warn(target->bt_mount,
				"%s: failed to map pages\n", __func__);
			xfs_buf_relse(bp);
			return NULL;
		}
	}

	XFS_STATS_INC(xb_get);
	trace_xfs_buf_get(bp, flags, _RET_IP_);
	return bp;
}

STATIC int
_xfs_buf_read(
	xfs_buf_t		*bp,
	xfs_buf_flags_t		flags)
{
	ASSERT(!(flags & XBF_WRITE));
	ASSERT(bp->b_map.bm_bn != XFS_BUF_DADDR_NULL);

	bp->b_flags &= ~(XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD);
	bp->b_flags |= flags & (XBF_READ | XBF_ASYNC | XBF_READ_AHEAD);

	xfs_buf_iorequest(bp);
	if (flags & XBF_ASYNC)
		return 0;
	return xfs_buf_iowait(bp);
}

xfs_buf_t *
xfs_buf_read_map(
	struct xfs_buftarg	*target,
	struct xfs_buf_map	*map,
	int			nmaps,
	xfs_buf_flags_t		flags,
	const struct xfs_buf_ops *ops)
{
	struct xfs_buf		*bp;

	flags |= XBF_READ;

	bp = xfs_buf_get_map(target, map, nmaps, flags);
	if (bp) {
		trace_xfs_buf_read(bp, flags, _RET_IP_);

		if (!XFS_BUF_ISDONE(bp)) {
			XFS_STATS_INC(xb_get_read);
			bp->b_ops = ops;
			_xfs_buf_read(bp, flags);
		} else if (flags & XBF_ASYNC) {
			/*
			 * Read ahead call which is already satisfied,
			 * drop the buffer
			 */
			xfs_buf_relse(bp);
			return NULL;
		} else {
			/* We do not want read in the flags */
			bp->b_flags &= ~XBF_READ;
		}
	}

	return bp;
}

/*
 *	If we are not low on memory then do the readahead in a
 *	deadlock safe manner.
 */
void
xfs_buf_readahead_map(
	struct xfs_buftarg	*target,
	struct xfs_buf_map	*map,
	int			nmaps,
	const struct xfs_buf_ops *ops)
{
	if (bdi_read_congested(target->bt_bdi))
		return;

	xfs_buf_read_map(target, map, nmaps,
			 XBF_TRYLOCK|XBF_ASYNC|XBF_READ_AHEAD, ops);
}

/*
 * Read an uncached buffer from disk. Allocates and returns a locked
 * buffer containing the disk contents or nothing.
 */
struct xfs_buf *
xfs_buf_read_uncached(
	struct xfs_buftarg	*target,
	xfs_daddr_t		daddr,
	size_t			numblks,
	int			flags,
	const struct xfs_buf_ops *ops)
{
	struct xfs_buf		*bp;

	bp = xfs_buf_get_uncached(target, numblks, flags);
	if (!bp)
		return NULL;

	/* set up the buffer for a read IO */
	ASSERT(bp->b_map_count == 1);
	bp->b_bn = daddr;
	bp->b_maps[0].bm_bn = daddr;
	bp->b_flags |= XBF_READ;
	bp->b_ops = ops;

	xfsbdstrat(target->bt_mount, bp);
	xfs_buf_iowait(bp);
	return bp;
}

/*
 * Return a buffer allocated as an empty buffer and associated to external
 * memory via xfs_buf_associate_memory() back to its empty state.
 */
void
xfs_buf_set_empty(
	struct xfs_buf		*bp,
	size_t			numblks)
{
	if (bp->b_pages)
		_xfs_buf_free_pages(bp);

	bp->b_pages = NULL;
	bp->b_page_count = 0;
	bp->b_addr = NULL;
	bp->b_length = numblks;
	bp->b_io_length = numblks;

	ASSERT(bp->b_map_count == 1);
	bp->b_bn = XFS_BUF_DADDR_NULL;
	bp->b_maps[0].bm_bn = XFS_BUF_DADDR_NULL;
	bp->b_maps[0].bm_len = bp->b_length;
}

static inline struct page *
mem_to_page(
	void			*addr)
{
	if (!is_vmalloc_addr(addr))
		return virt_to_page(addr);
	else
		return vmalloc_to_page(addr);
}

int
xfs_buf_associate_memory(
	xfs_buf_t		*bp,
	void			*mem,
	size_t			len)
{
	int			rval;
	int			i = 0;
	unsigned long		pageaddr;
	unsigned long		offset;
	size_t			buflen;
	int			page_count;

	pageaddr = (unsigned long)mem & PAGE_MASK;
	offset = (unsigned long)mem - pageaddr;
	buflen = PAGE_ALIGN(len + offset);
	page_count = buflen >> PAGE_SHIFT;

	/* Free any previous set of page pointers */
	if (bp->b_pages)
		_xfs_buf_free_pages(bp);

	bp->b_pages = NULL;
	bp->b_addr = mem;

	rval = _xfs_buf_get_pages(bp, page_count, 0);
	if (rval)
		return rval;

	bp->b_offset = offset;

	for (i = 0; i < bp->b_page_count; i++) {
		bp->b_pages[i] = mem_to_page((void *)pageaddr);
		pageaddr += PAGE_SIZE;
	}

	bp->b_io_length = BTOBB(len);
	bp->b_length = BTOBB(buflen);

	return 0;
}

xfs_buf_t *
xfs_buf_get_uncached(
	struct xfs_buftarg	*target,
	size_t			numblks,
	int			flags)
{
	unsigned long		page_count;
	int			error, i;
	struct xfs_buf		*bp;
	DEFINE_SINGLE_BUF_MAP(map, XFS_BUF_DADDR_NULL, numblks);

	bp = _xfs_buf_alloc(target, &map, 1, 0);
	if (unlikely(bp == NULL))
		goto fail;

	page_count = PAGE_ALIGN(numblks << BBSHIFT) >> PAGE_SHIFT;
	error = _xfs_buf_get_pages(bp, page_count, 0);
	if (error)
		goto fail_free_buf;

	for (i = 0; i < page_count; i++) {
		bp->b_pages[i] = alloc_page(xb_to_gfp(flags));
		if (!bp->b_pages[i])
			goto fail_free_mem;
	}
	bp->b_flags |= _XBF_PAGES;

	error = _xfs_buf_map_pages(bp, 0);
	if (unlikely(error)) {
		xfs_warn(target->bt_mount,
			"%s: failed to map pages\n", __func__);
		goto fail_free_mem;
	}

	trace_xfs_buf_get_uncached(bp, _RET_IP_);
	return bp;

 fail_free_mem:
	while (--i >= 0)
		__free_page(bp->b_pages[i]);
	_xfs_buf_free_pages(bp);
 fail_free_buf:
	xfs_buf_free_maps(bp);
	kmem_zone_free(xfs_buf_zone, bp);
 fail:
	return NULL;
}

/*
 *	Increment reference count on buffer, to hold the buffer concurrently
 *	with another thread which may release (free) the buffer asynchronously.
 *	Must hold the buffer already to call this function.
 */
void
xfs_buf_hold(
	xfs_buf_t		*bp)
{
	trace_xfs_buf_hold(bp, _RET_IP_);
	atomic_inc(&bp->b_hold);
}

/*
 *	Releases a hold on the specified buffer.  If the hold count is 1,
 *	calls xfs_buf_free.
 */
void
xfs_buf_rele(
	xfs_buf_t		*bp)
{
	struct xfs_perag	*pag = bp->b_pag;

	trace_xfs_buf_rele(bp, _RET_IP_);

	if (!pag) {
		ASSERT(list_empty(&bp->b_lru));
		ASSERT(RB_EMPTY_NODE(&bp->b_rbnode));
		if (atomic_dec_and_test(&bp->b_hold))
			xfs_buf_free(bp);
		return;
	}

	ASSERT(!RB_EMPTY_NODE(&bp->b_rbnode));

	ASSERT(atomic_read(&bp->b_hold) > 0);
	if (atomic_dec_and_lock(&bp->b_hold, &pag->pag_buf_lock)) {
		if (!(bp->b_flags & XBF_STALE) &&
		    atomic_read(&bp->b_lru_ref)) {
			xfs_buf_lru_add(bp);
			spin_unlock(&pag->pag_buf_lock);
		} else {
			xfs_buf_lru_del(bp);
			ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
			rb_erase(&bp->b_rbnode, &pag->pag_buf_tree);
			spin_unlock(&pag->pag_buf_lock);
			xfs_perag_put(pag);
			xfs_buf_free(bp);
		}
	}
}

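For the uncached case (!pag), xfs_buf_rele() reduces to a plain last-reference-frees pattern. A minimal userspace sketch of that branch, with hypothetical names (`struct ubuf`, `ubuf_rele`; the `freed` flag stands in for xfs_buf_free()):

```c
#include <assert.h>

/* Hypothetical uncached buffer: a hold count plus a freed marker. */
struct ubuf {
	int	hold;
	int	freed;		/* stands in for xfs_buf_free() */
};

/*
 * Drop one hold; free the buffer only when the last hold goes away,
 * mirroring the atomic_dec_and_test() branch of xfs_buf_rele().
 */
static void
ubuf_rele(struct ubuf *bp)
{
	assert(bp->hold > 0);	/* releasing an unheld buffer is a bug */
	if (--bp->hold == 0)
		bp->freed = 1;
}
```

The cached path differs only in taking the per-AG lock on the final decrement so the tree erase and the free happen atomically with respect to concurrent lookups.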
/*
 *	Lock a buffer object, if it is not already locked.
 *
 *	If we come across a stale, pinned, locked buffer, we know that we are
 *	being asked to lock a buffer that has been reallocated. Because it is
 *	pinned, we know that the log has not been pushed to disk and hence it
 *	will still be locked.  Rather than continuing to have trylock attempts
 *	fail until someone else pushes the log, push it ourselves before
 *	returning.  This means that the xfsaild will not get stuck trying
 *	to push on stale inode buffers.
 */
int
xfs_buf_trylock(
	struct xfs_buf		*bp)
{
	int			locked;

	locked = down_trylock(&bp->b_sema) == 0;
	if (locked)
		XB_SET_OWNER(bp);
	else if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
		xfs_log_force(bp->b_target->bt_mount, 0);

	trace_xfs_buf_trylock(bp, _RET_IP_);
	return locked;
}

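Callers such as _xfs_buf_find() pair xfs_buf_trylock() with a blocking xfs_buf_lock() fallback: try first, and only sleep when the caller did not request trylock semantics. A hedged sketch of that caller-side pattern, with a pthread mutex standing in for bp->b_sema (`lock_buf` is an illustrative name, not XFS API):

```c
#include <pthread.h>
#include <stdbool.h>

/*
 * Try the lock first (the uncontended fast path).  On contention,
 * either give up (XBF_TRYLOCK semantics) or fall back to a blocking
 * acquire, mirroring the trylock/lock split in _xfs_buf_find().
 */
static bool
lock_buf(pthread_mutex_t *lock, bool trylock_only)
{
	if (pthread_mutex_trylock(lock) == 0)
		return true;		/* got it without sleeping */
	if (trylock_only)
		return false;		/* caller asked not to block */
	pthread_mutex_lock(lock);	/* blocking slow path */
	return true;
}
```

pthread_mutex_trylock() never blocks; on a default (non-recursive) mutex it returns EBUSY when the lock is already held, which is what makes the trylock-only probe safe.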
/*
|
2011-03-26 06:16:45 +08:00
|
|
|
* Lock a buffer object.
|
xfs: Improve scalability of busy extent tracking
When we free a metadata extent, we record it in the per-AG busy
extent array so that it is not re-used before the freeing
transaction hits the disk. This array is fixed size, so when it
overflows we make further allocation transactions synchronous
because we cannot track more freed extents until those transactions
hit the disk and are completed. Under heavy mixed allocation and
freeing workloads with large log buffers, we can overflow this array
quite easily.
Further, the array is sparsely populated, which means that inserts
need to search for a free slot, and array searches often have to
search many more slots than are actually used to check all the
busy extents. Quite inefficient, really.
To enable this aspect of extent freeing to scale better, we need
a structure that can grow dynamically. While in other areas of
XFS we have used radix trees, the extents being freed are at random
locations on disk so are better suited to being indexed by an rbtree.
So, use a per-AG rbtree indexed by block number to track busy
extents. This incurs a memory allocation when marking an extent
busy, but should not occur too often in low memory situations. This
should scale to an arbitrary number of extents so should not be a
limitation for features such as in-memory aggregation of
transactions.
However, there are still situations where we can't avoid allocating
busy extents (such as allocation from the AGFL). To minimise the
overhead of such occurrences, we need to avoid doing a synchronous
log force while holding the AGF locked to ensure that the previous
transactions are safely on disk before we use the extent. We can do
this by marking the transaction doing the allocation as synchronous
rather than issuing a log force.
Because of the locking involved and the ordering of transactions,
the synchronous transaction provides the same guarantees as a
synchronous log force because it ensures that all the prior
transactions are already on disk when the synchronous transaction
hits the disk. i.e. it preserves the free->allocate order of the
extent correctly in recovery.
By doing this, we avoid holding the AGF locked while log writes are
in progress, hence reducing the length of time the lock is held and
therefore we increase the rate at which we can allocate and free
from the allocation group, thereby increasing overall throughput.
The only problem with this approach is that when a metadata buffer is
marked stale (e.g. a directory block is removed), the buffer remains
pinned and locked until the log goes to disk. The issue here is that
if that stale buffer is reallocated in a subsequent transaction, the
attempt to lock that buffer in the transaction will hang waiting for
the log to go to disk to unlock and unpin the buffer. Hence if
someone tries to lock a pinned, stale, locked buffer we need to
push on the log to get it unlocked ASAP. Effectively we are trading
off a guaranteed log force for a much less common trigger for log
force to occur.
Ideally we should not reallocate busy extents. That is a much more
complex fix to the problem as it involves direct intervention in the
allocation btree searches in many places. This is left to a future
set of modifications.
Finally, now that we track busy extents in allocated memory, we
don't need the descriptors in the transaction structure to point to
them. We can replace the complex busy chunk infrastructure with a
simple linked list of busy extents. This allows us to remove a large
chunk of code, making the overall change a net reduction in code
size.
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
2010-05-21 10:07:08 +08:00
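The per-AG block-number index described in the commit message above can be sketched in userspace. This is a minimal sketch only: it uses a plain (unbalanced) binary search tree in place of the kernel's balanced rbtree, and the `busy_insert`/`busy_overlaps` helpers are illustrative names, not the kernel API.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for a busy extent record: one entry per
 * freed-but-not-yet-stable extent, keyed by its start block. */
struct busy_extent {
	uint64_t	bno;	/* start block of the freed extent */
	uint32_t	len;	/* length in blocks */
	struct busy_extent *left, *right;
};

/* Insert a busy extent into the per-AG tree keyed by block number.
 * (The kernel uses a balanced rbtree; a plain BST keeps this short.) */
static struct busy_extent *
busy_insert(struct busy_extent *root, uint64_t bno, uint32_t len)
{
	if (!root) {
		struct busy_extent *be = calloc(1, sizeof(*be));
		be->bno = bno;
		be->len = len;
		return be;
	}
	if (bno < root->bno)
		root->left = busy_insert(root->left, bno, len);
	else
		root->right = busy_insert(root->right, bno, len);
	return root;
}

/* An allocation of [bno, bno+len) must avoid any overlapping busy
 * extent; the tree makes this an O(height) descent. */
static int
busy_overlaps(struct busy_extent *root, uint64_t bno, uint32_t len)
{
	while (root) {
		if (bno + len <= root->bno)
			root = root->left;
		else if (bno >= root->bno + root->len)
			root = root->right;
		else
			return 1;	/* overlap: extent is still busy */
	}
	return 0;
}
```

This captures why a tree beats the old fixed-size sparse array: inserts never search for a free slot, and overlap checks touch only one root-to-leaf path.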
|
|
|
*
|
|
|
|
* If we come across a stale, pinned, locked buffer, we know that we
|
|
|
|
* are being asked to lock a buffer that has been reallocated. Because
|
|
|
|
* it is pinned, we know that the log has not been pushed to disk and
|
|
|
|
* hence it will still be locked. Rather than sleeping until someone
|
|
|
|
* else pushes the log, push it ourselves before trying to get the lock.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2006-01-11 12:39:08 +08:00
|
|
|
void
|
|
|
|
xfs_buf_lock(
|
2011-07-08 20:36:19 +08:00
|
|
|
struct xfs_buf *bp)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2009-12-15 07:14:59 +08:00
|
|
|
trace_xfs_buf_lock(bp, _RET_IP_);
|
|
|
|
|
xfs: Improve scalability of busy extent tracking
2010-05-21 10:07:08 +08:00
|
|
|
if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
|
2010-09-22 08:47:20 +08:00
|
|
|
xfs_log_force(bp->b_target->bt_mount, 0);
|
2006-01-11 12:39:08 +08:00
|
|
|
down(&bp->b_sema);
|
|
|
|
XB_SET_OWNER(bp);
|
2009-12-15 07:14:59 +08:00
|
|
|
|
|
|
|
trace_xfs_buf_lock_done(bp, _RET_IP_);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_unlock(
|
2011-07-08 20:36:19 +08:00
|
|
|
struct xfs_buf *bp)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2006-01-11 12:39:08 +08:00
|
|
|
XB_CLEAR_OWNER(bp);
|
|
|
|
up(&bp->b_sema);
|
2009-12-15 07:14:59 +08:00
|
|
|
|
|
|
|
trace_xfs_buf_unlock(bp, _RET_IP_);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
STATIC void
|
|
|
|
xfs_buf_wait_unpin(
|
|
|
|
xfs_buf_t *bp)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
DECLARE_WAITQUEUE (wait, current);
|
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
if (atomic_read(&bp->b_pin_count) == 0)
|
2005-04-17 06:20:36 +08:00
|
|
|
return;
|
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
add_wait_queue(&bp->b_waiters, &wait);
|
2005-04-17 06:20:36 +08:00
|
|
|
for (;;) {
|
|
|
|
set_current_state(TASK_UNINTERRUPTIBLE);
|
2006-01-11 12:39:08 +08:00
|
|
|
if (atomic_read(&bp->b_pin_count) == 0)
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
2011-03-10 15:52:07 +08:00
|
|
|
io_schedule();
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-01-11 12:39:08 +08:00
|
|
|
remove_wait_queue(&bp->b_waiters, &wait);
|
2005-04-17 06:20:36 +08:00
|
|
|
set_current_state(TASK_RUNNING);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Buffer Utility Routines
|
|
|
|
*/
|
|
|
|
|
|
|
|
STATIC void
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_iodone_work(
|
2006-11-22 22:57:56 +08:00
|
|
|
struct work_struct *work)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2012-11-14 14:54:40 +08:00
|
|
|
struct xfs_buf *bp =
|
2006-11-22 22:57:56 +08:00
|
|
|
container_of(work, xfs_buf_t, b_iodone_work);
|
2012-11-14 14:54:40 +08:00
|
|
|
bool read = !!(bp->b_flags & XBF_READ);
|
|
|
|
|
|
|
|
bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
|
|
|
|
if (read && bp->b_ops)
|
|
|
|
bp->b_ops->verify_read(bp);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2010-08-18 17:29:11 +08:00
|
|
|
if (bp->b_iodone)
|
2006-01-11 12:39:08 +08:00
|
|
|
(*(bp->b_iodone))(bp);
|
|
|
|
else if (bp->b_flags & XBF_ASYNC)
|
2005-04-17 06:20:36 +08:00
|
|
|
xfs_buf_relse(bp);
|
2012-11-14 14:54:40 +08:00
|
|
|
else {
|
|
|
|
ASSERT(read && bp->b_ops);
|
|
|
|
complete(&bp->b_iowait);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_ioend(
|
2012-11-14 14:54:40 +08:00
|
|
|
struct xfs_buf *bp,
|
|
|
|
int schedule)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2012-11-14 14:54:40 +08:00
|
|
|
bool read = !!(bp->b_flags & XBF_READ);
|
|
|
|
|
2009-12-15 07:14:59 +08:00
|
|
|
trace_xfs_buf_iodone(bp, _RET_IP_);
|
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
if (bp->b_error == 0)
|
|
|
|
bp->b_flags |= XBF_DONE;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-11-14 14:54:40 +08:00
|
|
|
if (bp->b_iodone || (read && bp->b_ops) || (bp->b_flags & XBF_ASYNC)) {
|
2005-04-17 06:20:36 +08:00
|
|
|
if (schedule) {
|
2006-11-22 22:57:56 +08:00
|
|
|
INIT_WORK(&bp->b_iodone_work, xfs_buf_iodone_work);
|
2006-01-11 12:39:08 +08:00
|
|
|
queue_work(xfslogd_workqueue, &bp->b_iodone_work);
|
2005-04-17 06:20:36 +08:00
|
|
|
} else {
|
2006-11-22 22:57:56 +08:00
|
|
|
xfs_buf_iodone_work(&bp->b_iodone_work);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
} else {
|
2012-11-14 14:54:40 +08:00
|
|
|
bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
|
2008-08-13 14:36:11 +08:00
|
|
|
complete(&bp->b_iowait);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_ioerror(
|
|
|
|
xfs_buf_t *bp,
|
|
|
|
int error)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
ASSERT(error >= 0 && error <= 0xffff);
|
2006-01-11 12:39:08 +08:00
|
|
|
bp->b_error = (unsigned short)error;
|
2009-12-15 07:14:59 +08:00
|
|
|
trace_xfs_buf_ioerror(bp, error, _RET_IP_);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 00:52:49 +08:00
|
|
|
void
|
|
|
|
xfs_buf_ioerror_alert(
|
|
|
|
struct xfs_buf *bp,
|
|
|
|
const char *func)
|
|
|
|
{
|
|
|
|
xfs_alert(bp->b_target->bt_mount,
|
2012-04-23 13:58:52 +08:00
|
|
|
"metadata I/O error: block 0x%llx (\"%s\") error %d numblks %d",
|
|
|
|
(__uint64_t)XFS_BUF_ADDR(bp), func, bp->b_error, bp->b_length);
|
2011-10-11 00:52:49 +08:00
|
|
|
}
|
|
|
|
|
2010-01-14 06:17:56 +08:00
|
|
|
/*
|
|
|
|
* Called when we want to stop a buffer from getting written or read.
|
2010-10-07 02:41:18 +08:00
|
|
|
* We attach the EIO error, muck with its flags, and call xfs_buf_ioend
|
2010-01-14 06:17:56 +08:00
|
|
|
* so that the proper iodone callbacks get called.
|
|
|
|
*/
|
|
|
|
STATIC int
|
|
|
|
xfs_bioerror(
|
|
|
|
xfs_buf_t *bp)
|
|
|
|
{
|
|
|
|
#ifdef XFSERRORDEBUG
|
|
|
|
ASSERT(XFS_BUF_ISREAD(bp) || bp->b_iodone);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/*
|
|
|
|
* No need to wait until the buffer is unpinned, we aren't flushing it.
|
|
|
|
*/
|
2011-07-23 07:39:51 +08:00
|
|
|
xfs_buf_ioerror(bp, EIO);
|
2010-01-14 06:17:56 +08:00
|
|
|
|
|
|
|
/*
|
2010-10-07 02:41:18 +08:00
|
|
|
* We're calling xfs_buf_ioend, so delete XBF_DONE flag.
|
2010-01-14 06:17:56 +08:00
|
|
|
*/
|
|
|
|
XFS_BUF_UNREAD(bp);
|
|
|
|
XFS_BUF_UNDONE(bp);
|
2011-10-11 00:52:46 +08:00
|
|
|
xfs_buf_stale(bp);
|
2010-01-14 06:17:56 +08:00
|
|
|
|
2010-10-07 02:41:18 +08:00
|
|
|
xfs_buf_ioend(bp, 0);
|
2010-01-14 06:17:56 +08:00
|
|
|
|
|
|
|
return EIO;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Same as xfs_bioerror, except that we are releasing the buffer
|
2010-10-07 02:41:18 +08:00
|
|
|
* here ourselves, and avoiding the xfs_buf_ioend call.
|
2010-01-14 06:17:56 +08:00
|
|
|
* This is meant for userdata errors; metadata bufs come with
|
|
|
|
* iodone functions attached, so that we can track down errors.
|
|
|
|
*/
|
|
|
|
STATIC int
|
|
|
|
xfs_bioerror_relse(
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
2011-07-23 07:39:39 +08:00
|
|
|
int64_t fl = bp->b_flags;
|
2010-01-14 06:17:56 +08:00
|
|
|
/*
|
|
|
|
* No need to wait until the buffer is unpinned.
|
|
|
|
* We aren't flushing it.
|
|
|
|
*
|
|
|
|
* chunkhold expects B_DONE to be set, whether
|
|
|
|
* we actually finish the I/O or not. We don't want to
|
|
|
|
* change that interface.
|
|
|
|
*/
|
|
|
|
XFS_BUF_UNREAD(bp);
|
|
|
|
XFS_BUF_DONE(bp);
|
2011-10-11 00:52:46 +08:00
|
|
|
xfs_buf_stale(bp);
|
2011-07-13 19:43:49 +08:00
|
|
|
bp->b_iodone = NULL;
|
2010-01-19 17:56:44 +08:00
|
|
|
if (!(fl & XBF_ASYNC)) {
|
2010-01-14 06:17:56 +08:00
|
|
|
/*
|
|
|
|
* Mark b_error and B_ERROR _both_.
|
|
|
|
* Lots of chunkcache code assumes that.
|
|
|
|
* There's no reason to mark error for
|
|
|
|
* ASYNC buffers.
|
|
|
|
*/
|
2011-07-23 07:39:51 +08:00
|
|
|
xfs_buf_ioerror(bp, EIO);
|
2011-10-11 00:52:44 +08:00
|
|
|
complete(&bp->b_iowait);
|
2010-01-14 06:17:56 +08:00
|
|
|
} else {
|
|
|
|
xfs_buf_relse(bp);
|
|
|
|
}
|
|
|
|
|
|
|
|
return EIO;
|
|
|
|
}
|
|
|
|
|
2012-07-13 14:24:10 +08:00
|
|
|
STATIC int
|
2010-01-14 06:17:56 +08:00
|
|
|
xfs_bdstrat_cb(
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
2010-09-22 08:47:20 +08:00
|
|
|
if (XFS_FORCED_SHUTDOWN(bp->b_target->bt_mount)) {
|
2010-01-14 06:17:56 +08:00
|
|
|
trace_xfs_bdstrat_shut(bp, _RET_IP_);
|
|
|
|
/*
|
|
|
|
* Metadata write that didn't get logged but
|
|
|
|
* written delayed anyway. These aren't associated
|
|
|
|
* with a transaction, and can be ignored.
|
|
|
|
*/
|
|
|
|
if (!bp->b_iodone && !XFS_BUF_ISREAD(bp))
|
|
|
|
return xfs_bioerror_relse(bp);
|
|
|
|
else
|
|
|
|
return xfs_bioerror(bp);
|
|
|
|
}
|
|
|
|
|
|
|
|
xfs_buf_iorequest(bp);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-07-13 14:24:10 +08:00
|
|
|
int
|
|
|
|
xfs_bwrite(
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
|
|
|
int error;
|
|
|
|
|
|
|
|
ASSERT(xfs_buf_islocked(bp));
|
|
|
|
|
|
|
|
bp->b_flags |= XBF_WRITE;
|
|
|
|
bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q);
|
|
|
|
|
|
|
|
xfs_bdstrat_cb(bp);
|
|
|
|
|
|
|
|
error = xfs_buf_iowait(bp);
|
|
|
|
if (error) {
|
|
|
|
xfs_force_shutdown(bp->b_target->bt_mount,
|
|
|
|
SHUTDOWN_META_IO_ERROR);
|
|
|
|
}
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
2010-01-14 06:17:56 +08:00
|
|
|
/*
|
|
|
|
* Wrapper around bdstrat so that we can stop data from going to disk in case
|
|
|
|
* we are shutting down the filesystem. Typically user data goes through this
|
|
|
|
* path; one of the exceptions is the superblock.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
xfsbdstrat(
|
|
|
|
struct xfs_mount *mp,
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
|
|
|
if (XFS_FORCED_SHUTDOWN(mp)) {
|
|
|
|
trace_xfs_bdstrat_shut(bp, _RET_IP_);
|
|
|
|
xfs_bioerror_relse(bp);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
xfs_buf_iorequest(bp);
|
|
|
|
}
|
|
|
|
|
2009-11-15 00:17:22 +08:00
|
|
|
STATIC void
|
2006-01-11 12:39:08 +08:00
|
|
|
_xfs_buf_ioend(
|
|
|
|
xfs_buf_t *bp,
|
2005-04-17 06:20:36 +08:00
|
|
|
int schedule)
|
|
|
|
{
|
2011-03-26 06:16:45 +08:00
|
|
|
	if (atomic_dec_and_test(&bp->b_io_remaining))
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_ioend(bp, schedule);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2007-10-12 14:17:47 +08:00
|
|
|
STATIC void
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_bio_end_io(
|
2005-04-17 06:20:36 +08:00
|
|
|
struct bio *bio,
|
|
|
|
int error)
|
|
|
|
{
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_t *bp = (xfs_buf_t *)bio->bi_private;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-11-12 19:09:46 +08:00
|
|
|
/*
|
|
|
|
* don't overwrite existing errors - otherwise we can lose errors on
|
|
|
|
* buffers that require multiple bios to complete.
|
|
|
|
*/
|
|
|
|
if (!bp->b_error)
|
|
|
|
xfs_buf_ioerror(bp, -error);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-11-12 19:09:46 +08:00
|
|
|
if (!bp->b_error && xfs_buf_is_vmapped(bp) && (bp->b_flags & XBF_READ))
|
2010-01-26 01:42:24 +08:00
|
|
|
invalidate_kernel_vmap_range(bp->b_addr, xfs_buf_vmap_len(bp));
|
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
_xfs_buf_ioend(bp, 1);
|
2005-04-17 06:20:36 +08:00
|
|
|
bio_put(bio);
|
|
|
|
}
|
|
|
|
|
2012-06-22 16:50:09 +08:00
|
|
|
static void
|
|
|
|
xfs_buf_ioapply_map(
|
|
|
|
struct xfs_buf *bp,
|
|
|
|
int map,
|
|
|
|
int *buf_offset,
|
|
|
|
int *count,
|
|
|
|
int rw)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2012-06-22 16:50:09 +08:00
|
|
|
int page_index;
|
|
|
|
int total_nr_pages = bp->b_page_count;
|
|
|
|
int nr_pages;
|
|
|
|
struct bio *bio;
|
|
|
|
sector_t sector = bp->b_maps[map].bm_bn;
|
|
|
|
int size;
|
|
|
|
int offset;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-06-22 16:50:09 +08:00
|
|
|
/* skip the pages in the buffer before the start offset */
|
|
|
|
page_index = 0;
|
|
|
|
offset = *buf_offset;
|
|
|
|
while (offset >= PAGE_SIZE) {
|
|
|
|
page_index++;
|
|
|
|
offset -= PAGE_SIZE;
|
2005-11-02 07:26:59 +08:00
|
|
|
}
|
|
|
|
|
2012-06-22 16:50:09 +08:00
|
|
|
/*
|
|
|
|
* Limit the IO size to the length of the current vector, and update the
|
|
|
|
* remaining IO count for the next time around.
|
|
|
|
*/
|
|
|
|
size = min_t(int, BBTOB(bp->b_maps[map].bm_len), *count);
|
|
|
|
*count -= size;
|
|
|
|
*buf_offset += size;
|
2011-07-26 23:06:44 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
next_chunk:
|
2006-01-11 12:39:08 +08:00
|
|
|
atomic_inc(&bp->b_io_remaining);
|
2005-04-17 06:20:36 +08:00
|
|
|
nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
|
|
|
|
if (nr_pages > total_nr_pages)
|
|
|
|
nr_pages = total_nr_pages;
|
|
|
|
|
|
|
|
bio = bio_alloc(GFP_NOIO, nr_pages);
|
2006-01-11 12:39:08 +08:00
|
|
|
bio->bi_bdev = bp->b_target->bt_bdev;
|
2005-04-17 06:20:36 +08:00
|
|
|
bio->bi_sector = sector;
|
2006-01-11 12:39:08 +08:00
|
|
|
bio->bi_end_io = xfs_buf_bio_end_io;
|
|
|
|
bio->bi_private = bp;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-03-26 06:16:45 +08:00
|
|
|
|
2012-06-22 16:50:09 +08:00
|
|
|
for (; size && nr_pages; nr_pages--, page_index++) {
|
2011-03-26 06:16:45 +08:00
|
|
|
int rbytes, nbytes = PAGE_SIZE - offset;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
if (nbytes > size)
|
|
|
|
nbytes = size;
|
|
|
|
|
2012-06-22 16:50:09 +08:00
|
|
|
rbytes = bio_add_page(bio, bp->b_pages[page_index], nbytes,
|
|
|
|
offset);
|
2006-01-11 12:39:08 +08:00
|
|
|
if (rbytes < nbytes)
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
|
|
|
|
|
|
|
offset = 0;
|
2012-04-23 13:58:52 +08:00
|
|
|
sector += BTOBB(nbytes);
|
2005-04-17 06:20:36 +08:00
|
|
|
size -= nbytes;
|
|
|
|
total_nr_pages--;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (likely(bio->bi_size)) {
|
2010-01-26 01:42:24 +08:00
|
|
|
if (xfs_buf_is_vmapped(bp)) {
|
|
|
|
flush_kernel_vmap_range(bp->b_addr,
|
|
|
|
xfs_buf_vmap_len(bp));
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
submit_bio(rw, bio);
|
|
|
|
if (size)
|
|
|
|
goto next_chunk;
|
|
|
|
} else {
|
2012-11-12 19:09:46 +08:00
|
|
|
/*
|
|
|
|
* This is guaranteed not to be the last io reference count
|
|
|
|
* because the caller (xfs_buf_iorequest) holds a count itself.
|
|
|
|
*/
|
|
|
|
atomic_dec(&bp->b_io_remaining);
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_ioerror(bp, EIO);
|
2010-07-20 15:52:59 +08:00
|
|
|
bio_put(bio);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2012-06-22 16:50:09 +08:00
|
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
STATIC void
|
|
|
|
_xfs_buf_ioapply(
|
|
|
|
struct xfs_buf *bp)
|
|
|
|
{
|
|
|
|
struct blk_plug plug;
|
|
|
|
int rw;
|
|
|
|
int offset;
|
|
|
|
int size;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (bp->b_flags & XBF_WRITE) {
|
|
|
|
if (bp->b_flags & XBF_SYNCIO)
|
|
|
|
rw = WRITE_SYNC;
|
|
|
|
else
|
|
|
|
rw = WRITE;
|
|
|
|
if (bp->b_flags & XBF_FUA)
|
|
|
|
rw |= REQ_FUA;
|
|
|
|
if (bp->b_flags & XBF_FLUSH)
|
|
|
|
rw |= REQ_FLUSH;
|
2012-11-14 14:54:40 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Run the write verifier callback function if it exists. If
|
|
|
|
* this function fails it will mark the buffer with an error and
|
|
|
|
* the IO should not be dispatched.
|
|
|
|
*/
|
|
|
|
if (bp->b_ops) {
|
|
|
|
bp->b_ops->verify_write(bp);
|
|
|
|
if (bp->b_error) {
|
|
|
|
xfs_force_shutdown(bp->b_target->bt_mount,
|
|
|
|
SHUTDOWN_CORRUPT_INCORE);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2012-06-22 16:50:09 +08:00
|
|
|
} else if (bp->b_flags & XBF_READ_AHEAD) {
|
|
|
|
rw = READA;
|
|
|
|
} else {
|
|
|
|
rw = READ;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* we only use the buffer cache for meta-data */
|
|
|
|
rw |= REQ_META;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Walk all the vectors issuing IO on them. Set up the initial offset
|
|
|
|
* into the buffer and the desired IO size before we start -
|
|
|
|
* xfs_buf_ioapply_map() will modify them appropriately for each
|
|
|
|
* subsequent call.
|
|
|
|
*/
|
|
|
|
offset = bp->b_offset;
|
|
|
|
size = BBTOB(bp->b_io_length);
|
|
|
|
blk_start_plug(&plug);
|
|
|
|
for (i = 0; i < bp->b_map_count; i++) {
|
|
|
|
xfs_buf_ioapply_map(bp, i, &offset, &size, rw);
|
|
|
|
if (bp->b_error)
|
|
|
|
break;
|
|
|
|
if (size <= 0)
|
|
|
|
break; /* all done */
|
|
|
|
}
|
|
|
|
blk_finish_plug(&plug);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2012-04-23 13:58:46 +08:00
|
|
|
void
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_iorequest(
|
|
|
|
xfs_buf_t *bp)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2009-12-15 07:14:59 +08:00
|
|
|
trace_xfs_buf_iorequest(bp, _RET_IP_);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
xfs: on-stack delayed write buffer lists
Queue delwri buffers on a local on-stack list instead of a per-buftarg one,
and write back the buffers per-process instead of by waking up xfsbufd.
This is now easily doable given that we have very few places left that write
delwri buffers:
- log recovery:
Only done at mount time, and already forcing out the buffers
synchronously using xfs_flush_buftarg
- quotacheck:
Same story.
- dquot reclaim:
Writes out dirty dquots on the LRU under memory pressure. We might
want to look into doing more of this via xfsaild, but it's already
more optimal than the synchronous inode reclaim that writes each
buffer synchronously.
- xfsaild:
This is the main beneficiary of the change. By keeping a local list
of buffers to write we reduce latency of writing out buffers, and
more importably we can remove all the delwri list promotions which
were hitting the buffer cache hard under sustained metadata loads.
The implementation is very straightforward - xfs_buf_delwri_queue now gets
a new list_head pointer that it adds the delwri buffers to, and all callers
need to eventually submit the list using xfs_buf_delwri_submit or
xfs_buf_delwri_submit_nowait. Buffers that already are on a delwri list are
skipped in xfs_buf_delwri_queue, assuming they already are on another delwri
list. The biggest change to pass down the buffer list was done to the AIL
pushing. Now that we operate on buffers the trylock, push and pushbuf log
item methods are merged into a single push routine, which tries to lock the
item, and if possible add the buffer that needs writeback to the buffer list.
This leads to much simpler code than the previous split but requires the
individual IOP_PUSH instances to unlock and reacquire the AIL around calls
to blocking routines.
Given that xfsailds now also handle writing out buffers, the conditions for
log forcing and the sleep times needed some small changes. The most
important one is that we consider an AIL busy as long we still have buffers
to push, and the other one is that we do increment the pushed LSN for
buffers that are under flushing at this moment, but still count them towards
the stuck items for restart purposes. Without this we could hammer on stuck
items without ever forcing the log and not make progress under heavy random
delete workloads on fast flash storage devices.
[ Dave Chinner:
- rebase on previous patches.
- improved comments for XBF_DELWRI_Q handling
- fix XBF_ASYNC handling in queue submission (test 106 failure)
- rename delwri submit function buffer list parameters for clarity
- xfs_efd_item_push() should return XFS_ITEM_PINNED ]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
2012-04-23 13:58:39 +08:00
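The local-list pattern the commit message above describes can be sketched in plain C. The miniature `list_head` and the `delwri_queue`/`delwri_submit` helpers below are illustrative stand-ins under assumed semantics, not the kernel API: a buffer joins at most one caller-owned list, and submission drains that list without touching any global per-buftarg state.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly linked list in the style of the kernel's list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }
static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}
static int list_empty(const struct list_head *h) { return h->next == h; }

struct buf {
	struct list_head b_list;
	int		 b_on_delwri;	/* stand-in for the _XBF_DELWRI_Q flag */
	int		 b_written;
};

/* Queue a buffer on the caller's on-stack list; skip it if some other
 * delwri list already owns it, as the commit message requires. */
static int delwri_queue(struct buf *bp, struct list_head *list)
{
	if (bp->b_on_delwri)
		return 0;
	bp->b_on_delwri = 1;
	list_add_tail(&bp->b_list, list);
	return 1;
}

/* Submit everything on the local list; returns how many were written. */
static int delwri_submit(struct list_head *list)
{
	int n = 0;
	while (!list_empty(list)) {
		struct list_head *first = list->next;
		struct buf *bp = (struct buf *)((char *)first -
				offsetof(struct buf, b_list));
		list->next = first->next;	/* unlink from local list */
		first->next->prev = list;
		bp->b_on_delwri = 0;
		bp->b_written = 1;		/* stand-in for the real I/O */
		n++;
	}
	return n;
}
```

Because the list lives on the submitter's stack, writeback latency no longer depends on waking a shared xfsbufd thread.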
|
|
|
ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-08-23 16:28:03 +08:00
|
|
|
if (bp->b_flags & XBF_WRITE)
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_wait_unpin(bp);
|
|
|
|
xfs_buf_hold(bp);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/* Set the count to 1 initially, this will stop an I/O
|
|
|
|
* completion callout which happens before we have started
|
2006-01-11 12:39:08 +08:00
|
|
|
* all the I/O from calling xfs_buf_ioend too early.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2006-01-11 12:39:08 +08:00
|
|
|
atomic_set(&bp->b_io_remaining, 1);
|
|
|
|
_xfs_buf_ioapply(bp);
|
2012-07-02 18:00:04 +08:00
|
|
|
_xfs_buf_ioend(bp, 1);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_rele(bp);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
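xfs_buf_iorequest relies on the classic extra-initial-reference trick: b_io_remaining starts at 1 so that a bio completion racing with submission cannot fire the ioend processing before all bios have been issued. A userspace sketch of that counting pattern, with hypothetical names and completions run synchronously for brevity:

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int io_remaining;
static int ioend_calls;

static void ioend(void) { ioend_calls++; }

/* Drop one reference; only the final drop runs the completion. */
static void bio_complete(void)
{
	if (atomic_fetch_sub(&io_remaining, 1) == 1)
		ioend();
}

static void submit_buffer(int nbios)
{
	atomic_store(&io_remaining, 1);	/* extra ref held by the submitter */
	for (int i = 0; i < nbios; i++) {
		atomic_fetch_add(&io_remaining, 1);
		bio_complete();		/* in reality completes asynchronously */
	}
	bio_complete();			/* drop the submitter's extra ref */
}
```

Without the initial count of 1, the first bio completing before the second was submitted would see the count hit zero and run ioend early.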
|
|
|
|
|
|
|
|
/*
|
2012-04-23 13:58:46 +08:00
|
|
|
* Waits for I/O to complete on the buffer supplied. It returns immediately if
|
|
|
|
* no I/O is pending or there is already a pending error on the buffer. It
|
|
|
|
* returns the I/O error code, if any, or 0 if there was no error.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
int
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_iowait(
|
|
|
|
xfs_buf_t *bp)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2009-12-15 07:14:59 +08:00
|
|
|
trace_xfs_buf_iowait(bp, _RET_IP_);
|
|
|
|
|
2012-04-23 13:58:46 +08:00
|
|
|
if (!bp->b_error)
|
|
|
|
wait_for_completion(&bp->b_iowait);
|
2009-12-15 07:14:59 +08:00
|
|
|
|
|
|
|
trace_xfs_buf_iowait_done(bp, _RET_IP_);
|
2006-01-11 12:39:08 +08:00
|
|
|
return bp->b_error;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_caddr_t
|
|
|
|
xfs_buf_offset(
|
|
|
|
xfs_buf_t *bp,
|
2005-04-17 06:20:36 +08:00
|
|
|
size_t offset)
|
|
|
|
{
|
|
|
|
struct page *page;
|
|
|
|
|
2012-04-23 13:59:07 +08:00
|
|
|
if (bp->b_addr)
|
2011-07-23 07:40:15 +08:00
|
|
|
return bp->b_addr + offset;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
offset += bp->b_offset;
|
2011-03-26 06:16:45 +08:00
|
|
|
page = bp->b_pages[offset >> PAGE_SHIFT];
|
|
|
|
return (xfs_caddr_t)page_address(page) + (offset & (PAGE_SIZE-1));
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Move data into or out of a buffer.
|
|
|
|
*/
|
|
|
|
void
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_iomove(
|
|
|
|
xfs_buf_t *bp, /* buffer to process */
|
2005-04-17 06:20:36 +08:00
|
|
|
size_t boff, /* starting buffer offset */
|
|
|
|
size_t bsize, /* length to copy */
|
2010-01-20 07:47:39 +08:00
|
|
|
void *data, /* data address */
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_rw_t mode) /* read/write/zero flag */
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2012-04-23 13:58:53 +08:00
|
|
|
size_t bend;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
bend = boff + bsize;
|
|
|
|
while (boff < bend) {
|
2012-04-23 13:58:53 +08:00
|
|
|
struct page *page;
|
|
|
|
int page_index, page_offset, csize;
|
|
|
|
|
|
|
|
page_index = (boff + bp->b_offset) >> PAGE_SHIFT;
|
|
|
|
page_offset = (boff + bp->b_offset) & ~PAGE_MASK;
|
|
|
|
page = bp->b_pages[page_index];
|
|
|
|
csize = min_t(size_t, PAGE_SIZE - page_offset,
|
|
|
|
BBTOB(bp->b_io_length) - boff);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-04-23 13:58:53 +08:00
|
|
|
ASSERT((csize + page_offset) <= PAGE_SIZE);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
switch (mode) {
|
2006-01-11 12:39:08 +08:00
|
|
|
case XBRW_ZERO:
|
2012-04-23 13:58:53 +08:00
|
|
|
memset(page_address(page) + page_offset, 0, csize);
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
2006-01-11 12:39:08 +08:00
|
|
|
case XBRW_READ:
|
2012-04-23 13:58:53 +08:00
|
|
|
memcpy(data, page_address(page) + page_offset, csize);
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
2006-01-11 12:39:08 +08:00
|
|
|
case XBRW_WRITE:
|
2012-04-23 13:58:53 +08:00
|
|
|
memcpy(page_address(page) + page_offset, data, csize);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
boff += csize;
|
|
|
|
data += csize;
|
|
|
|
}
|
|
|
|
}
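The page_index/page_offset arithmetic used by xfs_buf_iomove can be checked in isolation. This sketch assumes a fixed 4k page size (PAGE_SHIFT of 12); the `page_split` helper is illustrative, not a kernel function.

```c
#include <assert.h>
#include <stddef.h>

/* Same shift-and-mask arithmetic as xfs_buf_iomove, 4k pages. */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

/* Split a byte offset into (page index, offset within that page). */
static void page_split(size_t off, size_t *index, size_t *offset)
{
	*index = off >> PAGE_SHIFT;
	*offset = off & ~PAGE_MASK;
}
```

Note that `~PAGE_MASK` equals `PAGE_SIZE - 1`, so the mask keeps only the low PAGE_SHIFT bits.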
|
|
|
|
|
|
|
|
/*
|
2006-01-11 12:39:08 +08:00
|
|
|
* Handling of buffer targets (buftargs).
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
2010-12-02 13:30:55 +08:00
|
|
|
* Wait for any bufs with callbacks that have been submitted but have not yet
|
|
|
|
* returned. These buffers will have an elevated hold count, so wait on those
|
|
|
|
* while freeing all the buffers only held by the LRU.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
void
|
|
|
|
xfs_wait_buftarg(
|
2010-09-24 17:59:04 +08:00
|
|
|
struct xfs_buftarg *btp)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2010-12-02 13:30:55 +08:00
|
|
|
struct xfs_buf *bp;
|
|
|
|
|
|
|
|
restart:
|
|
|
|
spin_lock(&btp->bt_lru_lock);
|
|
|
|
while (!list_empty(&btp->bt_lru)) {
|
|
|
|
bp = list_first_entry(&btp->bt_lru, struct xfs_buf, b_lru);
|
|
|
|
if (atomic_read(&bp->b_hold) > 1) {
|
|
|
|
spin_unlock(&btp->bt_lru_lock);
|
2010-09-22 08:47:20 +08:00
|
|
|
delay(100);
|
2010-12-02 13:30:55 +08:00
|
|
|
goto restart;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2010-12-02 13:30:55 +08:00
|
|
|
/*
|
2011-12-05 20:00:34 +08:00
|
|
|
* clear the LRU reference count so the buffer doesn't get
|
2010-12-02 13:30:55 +08:00
|
|
|
* ignored in xfs_buf_rele().
|
|
|
|
*/
|
|
|
|
atomic_set(&bp->b_lru_ref, 0);
|
|
|
|
spin_unlock(&btp->bt_lru_lock);
|
|
|
|
xfs_buf_rele(bp);
|
|
|
|
spin_lock(&btp->bt_lru_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2010-12-02 13:30:55 +08:00
|
|
|
spin_unlock(&btp->bt_lru_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2010-11-30 14:27:57 +08:00
|
|
|
int
|
|
|
|
xfs_buftarg_shrink(
|
|
|
|
struct shrinker *shrink,
|
2011-05-25 08:12:27 +08:00
|
|
|
struct shrink_control *sc)
|
2006-01-11 12:37:58 +08:00
|
|
|
{
|
2010-11-30 14:27:57 +08:00
|
|
|
struct xfs_buftarg *btp = container_of(shrink,
|
|
|
|
struct xfs_buftarg, bt_shrinker);
|
2010-12-02 13:30:55 +08:00
|
|
|
struct xfs_buf *bp;
|
2011-05-25 08:12:27 +08:00
|
|
|
int nr_to_scan = sc->nr_to_scan;
|
2010-12-02 13:30:55 +08:00
|
|
|
LIST_HEAD(dispose);
|
|
|
|
|
|
|
|
if (!nr_to_scan)
|
|
|
|
return btp->bt_lru_nr;
|
|
|
|
|
|
|
|
spin_lock(&btp->bt_lru_lock);
|
|
|
|
while (!list_empty(&btp->bt_lru)) {
|
|
|
|
if (nr_to_scan-- <= 0)
|
|
|
|
break;
|
|
|
|
|
|
|
|
bp = list_first_entry(&btp->bt_lru, struct xfs_buf, b_lru);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Decrement the b_lru_ref count unless the value is already
|
|
|
|
* zero. If the value is already zero, we need to reclaim the
|
|
|
|
* buffer, otherwise it gets another trip through the LRU.
|
|
|
|
*/
|
|
|
|
if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
|
|
|
|
list_move_tail(&bp->b_lru, &btp->bt_lru);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* remove the buffer from the LRU now to avoid needing another
|
|
|
|
* lock round trip inside xfs_buf_rele().
|
|
|
|
*/
|
|
|
|
list_move(&bp->b_lru, &dispose);
|
|
|
|
btp->bt_lru_nr--;
|
2012-08-11 02:01:51 +08:00
|
|
|
bp->b_lru_flags |= _XBF_LRU_DISPOSE;
|
2010-11-30 14:27:57 +08:00
|
|
|
}
|
2010-12-02 13:30:55 +08:00
|
|
|
spin_unlock(&btp->bt_lru_lock);
|
|
|
|
|
|
|
|
while (!list_empty(&dispose)) {
|
|
|
|
bp = list_first_entry(&dispose, struct xfs_buf, b_lru);
|
|
|
|
list_del_init(&bp->b_lru);
|
|
|
|
xfs_buf_rele(bp);
|
|
|
|
}
|
|
|
|
|
|
|
|
return btp->bt_lru_nr;
|
2006-01-11 12:37:58 +08:00
|
|
|
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
void
|
|
|
|
xfs_free_buftarg(
|
2009-03-04 03:48:37 +08:00
|
|
|
struct xfs_mount *mp,
|
|
|
|
struct xfs_buftarg *btp)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2010-11-30 14:27:57 +08:00
|
|
|
unregister_shrinker(&btp->bt_shrinker);
|
|
|
|
|
2009-03-04 03:48:37 +08:00
|
|
|
if (mp->m_flags & XFS_MOUNT_BARRIER)
|
|
|
|
xfs_blkdev_issue_flush(btp);
|
2006-01-11 12:37:58 +08:00
|
|
|
|
2008-05-19 14:31:57 +08:00
|
|
|
kmem_free(btp);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
STATIC int
|
|
|
|
xfs_setsize_buftarg_flags(
|
|
|
|
xfs_buftarg_t *btp,
|
|
|
|
unsigned int blocksize,
|
|
|
|
unsigned int sectorsize,
|
|
|
|
int verbose)
|
|
|
|
{
|
2006-01-11 12:39:08 +08:00
|
|
|
btp->bt_bsize = blocksize;
|
|
|
|
btp->bt_sshift = ffs(sectorsize) - 1;
|
|
|
|
btp->bt_smask = sectorsize - 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-11 12:39:08 +08:00
|
|
|
if (set_blocksize(btp->bt_bdev, sectorsize)) {
|
2011-10-11 00:52:51 +08:00
|
|
|
char name[BDEVNAME_SIZE];
|
|
|
|
|
|
|
|
bdevname(btp->bt_bdev, name);
|
|
|
|
|
2011-03-07 07:00:35 +08:00
|
|
|
xfs_warn(btp->bt_mount,
|
|
|
|
"Cannot set_blocksize to %u on device %s\n",
|
2011-10-11 00:52:51 +08:00
|
|
|
sectorsize, name);
|
2005-04-17 06:20:36 +08:00
|
|
|
return EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2006-01-11 12:39:08 +08:00
|
|
|
* When allocating the initial buffer target we have not yet
|
|
|
|
 * read in the superblock, so we don't know what size sectors
|
|
|
|
 * are being used at this early stage. Play safe.
|
|
|
|
*/
|
2005-04-17 06:20:36 +08:00
|
|
|
STATIC int
|
|
|
|
xfs_setsize_buftarg_early(
|
|
|
|
xfs_buftarg_t *btp,
|
|
|
|
struct block_device *bdev)
|
|
|
|
{
|
|
|
|
return xfs_setsize_buftarg_flags(btp,
|
2011-03-26 06:16:45 +08:00
|
|
|
PAGE_SIZE, bdev_logical_block_size(bdev), 0);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
xfs_setsize_buftarg(
|
|
|
|
xfs_buftarg_t *btp,
|
|
|
|
unsigned int blocksize,
|
|
|
|
unsigned int sectorsize)
|
|
|
|
{
|
|
|
|
return xfs_setsize_buftarg_flags(btp, blocksize, sectorsize, 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
xfs_buftarg_t *
|
|
|
|
xfs_alloc_buftarg(
|
2010-09-22 08:47:20 +08:00
|
|
|
struct xfs_mount *mp,
|
2005-04-17 06:20:36 +08:00
|
|
|
struct block_device *bdev,
|
2010-03-23 06:52:55 +08:00
|
|
|
int external,
|
|
|
|
const char *fsname)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
xfs_buftarg_t *btp;
|
|
|
|
|
|
|
|
btp = kmem_zalloc(sizeof(*btp), KM_SLEEP);
|
|
|
|
|
2010-09-22 08:47:20 +08:00
|
|
|
btp->bt_mount = mp;
|
2006-01-11 12:39:08 +08:00
|
|
|
btp->bt_dev = bdev->bd_dev;
|
|
|
|
btp->bt_bdev = bdev;
|
2011-03-26 06:16:45 +08:00
|
|
|
btp->bt_bdi = blk_get_backing_dev_info(bdev);
|
|
|
|
if (!btp->bt_bdi)
|
|
|
|
goto error;
|
|
|
|
|
2010-12-02 13:30:55 +08:00
|
|
|
INIT_LIST_HEAD(&btp->bt_lru);
|
|
|
|
spin_lock_init(&btp->bt_lru_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (xfs_setsize_buftarg_early(btp, bdev))
|
|
|
|
goto error;
|
2010-11-30 14:27:57 +08:00
|
|
|
btp->bt_shrinker.shrink = xfs_buftarg_shrink;
|
|
|
|
btp->bt_shrinker.seeks = DEFAULT_SEEKS;
|
|
|
|
register_shrinker(&btp->bt_shrinker);
|
2005-04-17 06:20:36 +08:00
|
|
|
return btp;
|
|
|
|
|
|
|
|
error:
|
2008-05-19 14:31:57 +08:00
|
|
|
kmem_free(btp);
|
2005-04-17 06:20:36 +08:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
xfs: on-stack delayed write buffer lists
Queue delwri buffers on a local on-stack list instead of a per-buftarg one,
and write back the buffers per-process instead of by waking up xfsbufd.
This is now easily doable given that we have very few places left that write
delwri buffers:
- log recovery:
Only done at mount time, and already forcing out the buffers
synchronously using xfs_flush_buftarg
- quotacheck:
Same story.
- dquot reclaim:
Writes out dirty dquots on the LRU under memory pressure. We might
want to look into doing more of this via xfsaild, but it's already
more optimal than the synchronous inode reclaim that writes each
buffer synchronously.
- xfsaild:
This is the main beneficiary of the change. By keeping a local list
of buffers to write we reduce latency of writing out buffers, and
more importantly we can remove all the delwri list promotions which
were hitting the buffer cache hard under sustained metadata loads.
The implementation is very straightforward - xfs_buf_delwri_queue now gets
a new list_head pointer that it adds the delwri buffers to, and all callers
need to eventually submit the list using xfs_buf_delwri_submit or
xfs_buf_delwri_submit_nowait. Buffers that are already on a delwri list are
skipped in xfs_buf_delwri_queue, since they are already on another delwri
list. The biggest change needed to pass down the buffer list was to the AIL
pushing. Now that we operate on buffers the trylock, push and pushbuf log
item methods are merged into a single push routine, which tries to lock the
item, and if possible add the buffer that needs writeback to the buffer list.
This leads to much simpler code than the previous split but requires the
individual IOP_PUSH instances to unlock and reacquire the AIL around calls
to blocking routines.
Given that xfsailds now also handle writing out buffers, the conditions for
log forcing and the sleep times needed some small changes. The most
important one is that we consider an AIL busy as long as we still have buffers
to push, and the other one is that we do increment the pushed LSN for
buffers that are under flushing at this moment, but still count them towards
the stuck items for restart purposes. Without this we could hammer on stuck
items without ever forcing the log and not make progress under heavy random
delete workloads on fast flash storage devices.
[ Dave Chinner:
- rebase on previous patches.
- improved comments for XBF_DELWRI_Q handling
- fix XBF_ASYNC handling in queue submission (test 106 failure)
- rename delwri submit function buffer list parameters for clarity
- xfs_efd_item_push() should return XFS_ITEM_PINNED ]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
2012-04-23 13:58:39 +08:00
|
|
|
* Add a buffer to the delayed write list.
|
|
|
|
*
|
|
|
|
 * This queues a buffer for writeout if it hasn't already been queued. Note that
|
|
|
|
* neither this routine nor the buffer list submission functions perform
|
|
|
|
* any internal synchronization. It is expected that the lists are thread-local
|
|
|
|
* to the callers.
|
|
|
|
*
|
|
|
|
 * Returns true if we queued up the buffer, or false if it was already
|
|
|
|
 * on the buffer list.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2012-04-23 13:58:39 +08:00
|
|
|
bool
|
2006-01-11 12:39:08 +08:00
|
|
|
xfs_buf_delwri_queue(
|
2012-04-23 13:58:39 +08:00
|
|
|
struct xfs_buf *bp,
|
|
|
|
struct list_head *list)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2012-04-23 13:58:39 +08:00
|
|
|
ASSERT(xfs_buf_islocked(bp));
|
2011-08-23 16:28:05 +08:00
|
|
|
ASSERT(!(bp->b_flags & XBF_READ));
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-04-23 13:58:39 +08:00
|
|
|
/*
|
|
|
|
 * If the buffer is already marked delwri, it is already queued up
|
|
|
|
 * by someone else for immediate writeout. Just ignore it in that
|
|
|
|
* case.
|
|
|
|
*/
|
|
|
|
if (bp->b_flags & _XBF_DELWRI_Q) {
|
|
|
|
trace_xfs_buf_delwri_queued(bp, _RET_IP_);
|
|
|
|
return false;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2012-04-23 13:58:39 +08:00
|
|
|
trace_xfs_buf_delwri_queue(bp, _RET_IP_);
|
2010-02-02 07:13:42 +08:00
|
|
|
|
|
|
|
/*
|
2012-04-23 13:58:39 +08:00
|
|
|
* If a buffer gets written out synchronously or marked stale while it
|
|
|
|
* is on a delwri list we lazily remove it. To do this, the other party
|
|
|
|
* clears the _XBF_DELWRI_Q flag but otherwise leaves the buffer alone.
|
|
|
|
* It remains referenced and on the list. In a rare corner case it
|
|
|
|
 * might get re-added to a delwri list after the synchronous writeout, in
|
|
|
|
 * which case we just need to re-add the flag here.
|
2010-02-02 07:13:42 +08:00
|
|
|
*/
|
2012-04-23 13:58:39 +08:00
|
|
|
bp->b_flags |= _XBF_DELWRI_Q;
|
|
|
|
if (list_empty(&bp->b_list)) {
|
|
|
|
atomic_inc(&bp->b_hold);
|
|
|
|
list_add_tail(&bp->b_list, list);
|
2007-02-10 15:32:29 +08:00
|
|
|
}
|
|
|
|
|
2012-04-23 13:58:39 +08:00
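The caller-side pattern the commit message describes - queue delwri buffers on a local on-stack list, then submit that list - can be modeled in plain userspace C. This is a minimal sketch, not the kernel code: `struct buf`, `delwri_queue`, `delwri_submit`, and `submitted_demo` are hypothetical stand-ins for the kernel structures and APIs named in the message.

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal intrusive doubly linked list, modeled on the kernel's list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* Illustrative buffer: a queued flag (stand-in for _XBF_DELWRI_Q) plus
 * the intrusive list linkage, like xfs_buf's b_list. */
struct buf {
	bool		delwri_q;
	bool		written;
	struct list_head b_list;
};

/* Queue a buffer on the caller's local list; buffers already on a delwri
 * list are skipped, mirroring xfs_buf_delwri_queue's behavior. */
static bool delwri_queue(struct buf *bp, struct list_head *list)
{
	if (bp->delwri_q)
		return false;
	bp->delwri_q = true;
	list_add_tail(&bp->b_list, list);
	return true;
}

/* "Write" every queued buffer and return how many were submitted. */
static int delwri_submit(struct list_head *list)
{
	int count = 0;

	for (struct list_head *p = list->next; p != list; p = p->next) {
		struct buf *bp =
			(struct buf *)((char *)p - offsetof(struct buf, b_list));
		bp->delwri_q = false;
		bp->written = true;
		count++;
	}
	list_init(list);
	return count;
}

int submitted_demo(void)
{
	/* The on-stack list: no shared per-buftarg state, no xfsbufd wakeup. */
	struct list_head buffer_list;
	struct buf a = {0}, b = {0};

	list_init(&buffer_list);
	delwri_queue(&a, &buffer_list);
	delwri_queue(&b, &buffer_list);
	delwri_queue(&a, &buffer_list);	/* duplicate: skipped */
	return delwri_submit(&buffer_list);
}
```

Because the list head lives on the submitting process's stack, writeback latency no longer depends on waking a separate daemon, which is the point of the change.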
	return true;
}

/*
 * Compare function is more complex than it needs to be because
 * the return value is only 32 bits and we are doing comparisons
 * on 64 bit values
 */
static int
xfs_buf_cmp(
	void			*priv,
	struct list_head	*a,
	struct list_head	*b)
{
	struct xfs_buf	*ap = container_of(a, struct xfs_buf, b_list);
	struct xfs_buf	*bp = container_of(b, struct xfs_buf, b_list);
	xfs_daddr_t		diff;

	diff = ap->b_map.bm_bn - bp->b_map.bm_bn;
	if (diff < 0)
		return -1;
	if (diff > 0)
		return 1;
	return 0;
}
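The comment above xfs_buf_cmp points at a real pitfall: naively narrowing a 64-bit difference to the 32-bit `int` return value can report the wrong sign, or claim equality for unequal values. A standalone userspace illustration (hypothetical values, not XFS code; `cmp_truncating` and `cmp_sign` are illustrative names):

```c
#include <stdint.h>

/* Buggy: truncating a 64-bit difference to a 32-bit return value.
 * Any difference whose low 32 bits are zero compares as "equal". */
int cmp_truncating(int64_t a, int64_t b)
{
	return (int)(a - b);
}

/* Correct: test the sign of the full 64-bit difference before
 * narrowing, which is the structure xfs_buf_cmp uses above. */
int cmp_sign(int64_t a, int64_t b)
{
	int64_t diff = a - b;

	if (diff < 0)
		return -1;
	if (diff > 0)
		return 1;
	return 0;
}
```

With `a = 1LL << 32` and `b = 0`, the difference is exactly 2^32: its low 32 bits are all zero, so the truncating version returns 0 even though `a > b`, while the sign-testing version correctly returns 1.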
static int
__xfs_buf_delwri_submit(
	struct list_head	*buffer_list,
	struct list_head	*io_list,
	bool			wait)
{
	struct blk_plug		plug;
	struct xfs_buf		*bp, *n;
	int			pinned = 0;

	list_for_each_entry_safe(bp, n, buffer_list, b_list) {
		if (!wait) {
			if (xfs_buf_ispinned(bp)) {
				pinned++;
				continue;
			}
			if (!xfs_buf_trylock(bp))
				continue;
		} else {
			xfs_buf_lock(bp);
		}
		/*
		 * Someone else might have written the buffer synchronously or
		 * marked it stale in the meantime.  In that case only the
		 * _XBF_DELWRI_Q flag got cleared, and we have to drop the
		 * reference and remove it from the list here.
		 */
		if (!(bp->b_flags & _XBF_DELWRI_Q)) {
			list_del_init(&bp->b_list);
			xfs_buf_relse(bp);
			continue;
		}
		list_move_tail(&bp->b_list, io_list);
		trace_xfs_buf_delwri_split(bp, _RET_IP_);
	}
	list_sort(NULL, io_list, xfs_buf_cmp);
	blk_start_plug(&plug);
	list_for_each_entry_safe(bp, n, io_list, b_list) {
		bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC);
		bp->b_flags |= XBF_WRITE;
		if (!wait) {
			bp->b_flags |= XBF_ASYNC;
			list_del_init(&bp->b_list);
		}
		xfs_bdstrat_cb(bp);
	}
	blk_finish_plug(&plug);
	return pinned;
}

/*
 * Write out a buffer list asynchronously.
 *
 * This will take the @buffer_list, write all non-locked and non-pinned buffers
 * out and not wait for I/O completion on any of the buffers.  This interface
 * is only safely usable for callers that can track I/O completion by higher
 * level means, e.g. AIL pushing as the @buffer_list is consumed in this
 * function.
 */
int
items without ever forcing the log and not make progress under heavy random
delete workloads on fast flash storage devices.
[ Dave Chinner:
- rebase on previous patches.
- improved comments for XBF_DELWRI_Q handling
- fix XBF_ASYNC handling in queue submission (test 106 failure)
- rename delwri submit function buffer list parameters for clarity
- xfs_efd_item_push() should return XFS_ITEM_PINNED ]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
2012-04-23 13:58:39 +08:00
|
|
|
xfs_buf_delwri_submit_nowait(
	struct list_head	*buffer_list)
{
	LIST_HEAD		(io_list);

	return __xfs_buf_delwri_submit(buffer_list, &io_list, false);
}
/*
 * Write out a buffer list synchronously.
 *
 * This will take the @buffer_list, write all buffers out and wait for I/O
 * completion on all of the buffers.  @buffer_list is consumed by the function,
 * so callers must have some other way of tracking buffers if they require such
 * functionality.
 */
int
xfs_buf_delwri_submit(
	struct list_head	*buffer_list)
{
	LIST_HEAD		(io_list);
	int			error = 0, error2;
	struct xfs_buf		*bp;
	__xfs_buf_delwri_submit(buffer_list, &io_list, true);

	/* Wait for IO to complete. */
	while (!list_empty(&io_list)) {
		bp = list_first_entry(&io_list, struct xfs_buf, b_list);

		list_del_init(&bp->b_list);
		error2 = xfs_buf_iowait(bp);
		xfs_buf_relse(bp);
		if (!error)
			error = error2;
	}

	return error;
}

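The error-aggregation idiom in the completion loop above is worth isolating: every buffer is waited on and released, but only the first failure is reported. A hedged userspace sketch, where `wait_one()` is a hypothetical stand-in for `xfs_buf_iowait()`:

```c
#include <assert.h>

/* Pretend to wait for I/O on one buffer; returns 0 or a negative errno. */
static int wait_one(int result)
{
	return result;
}

/*
 * Walk every completion result, exactly like the while loop over io_list:
 * keep the first non-zero error, but still consume all remaining entries
 * so no buffer is leaked just because an earlier one failed.
 */
static int wait_all(const int *results, int n)
{
	int error = 0, error2;
	int i;

	for (i = 0; i < n; i++) {
		error2 = wait_one(results[i]);
		if (!error)
			error = error2;
	}
	return error;
}
```

Returning early on the first error would skip the `xfs_buf_iowait()`/`xfs_buf_relse()` pair for the remaining buffers, so the loop must run to completion regardless of failures.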
int __init
xfs_buf_init(void)
{
	xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
						KM_ZONE_HWALIGN, NULL);
	if (!xfs_buf_zone)
		goto out;

	xfslogd_workqueue = alloc_workqueue("xfslogd",
					WQ_MEM_RECLAIM | WQ_HIGHPRI, 1);
	if (!xfslogd_workqueue)
		goto out_free_buf_zone;

	return 0;

 out_free_buf_zone:
	kmem_zone_destroy(xfs_buf_zone);
 out:
	return -ENOMEM;
}

void
xfs_buf_terminate(void)
{
	destroy_workqueue(xfslogd_workqueue);
	kmem_zone_destroy(xfs_buf_zone);
}