ocfs2: Always try for maximum bits with new local alloc windows
What we were doing before was to ask for the current window size as the maximum allocation. This had the effect of limiting the amount of allocation we could get for the local alloc during times when the window size was shrunk due to fragmentation. In some cases, that could actually *increase* fragmentation by artificially limiting the number of bits we can accept. So while we still want to ask for a minimum number of bits equal to the window size, there is no reason to limit the number of bits the local alloc will accept. Hence, always allow the maximum number of local alloc bits.

Signed-off-by: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
commit b22b63ebaf
parent fcefd25ac8
@@ -984,8 +984,7 @@ static int ocfs2_local_alloc_reserve_for_window(struct ocfs2_super *osb,
 	}
 
 retry_enospc:
-	(*ac)->ac_bits_wanted = osb->local_alloc_bits;
-
+	(*ac)->ac_bits_wanted = osb->local_alloc_default_bits;
 	status = ocfs2_reserve_cluster_bitmap_bits(osb, *ac);
 	if (status == -ENOSPC) {
 		if (ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_ENOSPC) ==
@@ -1061,6 +1060,7 @@ retry_enospc:
 			OCFS2_LA_DISABLED)
 			goto bail;
 
+		ac->ac_bits_wanted = osb->local_alloc_default_bits;
 		status = ocfs2_claim_clusters(osb, handle, ac,
 					      osb->local_alloc_bits,
 					      &cluster_off,
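Two quantities are at play in the hunks above: ac_bits_wanted, the upper bound the allocation context will accept, and the osb->local_alloc_bits value passed to ocfs2_claim_clusters(), which acts as the minimum the window needs. The following is a minimal, self-contained sketch of that policy; the struct and function names are hypothetical and are not the kernel's, they only model the "ask for at least the current window, accept up to the default window" behaviour described in the commit message.

/*
 * Hypothetical sketch only -- these types and helpers do not exist in
 * ocfs2.  They model the policy of the patch: request at least the
 * current (possibly shrunken) window size, but accept up to the
 * default (maximum) window size.
 */
#include <stdio.h>

struct la_request {
	unsigned int min_bits;	/* smallest allocation the window needs */
	unsigned int max_bits;	/* largest allocation it will accept */
};

static void la_build_request(struct la_request *req,
			     unsigned int local_alloc_bits,
			     unsigned int local_alloc_default_bits)
{
	req->min_bits = local_alloc_bits;
	/*
	 * Before the patch, max_bits was in effect also local_alloc_bits,
	 * so a shrunken window capped how many bits could be taken and
	 * could worsen fragmentation.  Now the cap is the default size.
	 */
	req->max_bits = local_alloc_default_bits;
}

int main(void)
{
	struct la_request req;

	/* Window shrunk to 64 bits; the default window is 256 bits. */
	la_build_request(&req, 64, 256);
	printf("ask for at least %u bits, accept up to %u\n",
	       req.min_bits, req.max_bits);
	return 0;
}

The key point is the asymmetry: the minimum tracks the current (possibly shrunken) window, while the maximum always stays at the default window size.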