hugetlb: restore interleaving of bootmem huge pages
I noticed that alloc_bootmem_huge_page() will only advance to the next
node on failure to allocate a huge page, potentially filling a node with
huge pages while other nodes go unused. I asked about this on linux-mm
and linux-numa, cc'ing the usual huge page suspects.
Mel Gorman responded:
I strongly suspect that the same node being used until allocation
failure instead of round-robin is an oversight and not deliberate
at all. It appears to be a side-effect of a fix made way back in
commit 63b4613c3f ("hugetlb: fix hugepage allocation with memoryless
nodes"). Prior to that patch
it looked like allocations would always round-robin even when
allocation was successful.
This patch, factored out of my "hugetlb mempolicy" series, moves the
advance of the hstate's next node to allocate from up before the test
for success of the attempted allocation.
Note that alloc_bootmem_huge_page() is only used for order > MAX_ORDER
huge pages.
I'll post a separate patch for mainline/stable, as the above-mentioned
"balance freeing" series renamed the "next node to alloc" function.
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Andy Whitcroft <apw@canonical.com>
Reviewed-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 41a25e7e67
commit 57dd28fb05
@@ -1031,6 +1031,7 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
 				NODE_DATA(h->next_nid_to_alloc),
 				huge_page_size(h), huge_page_size(h), 0);
+		hstate_next_node_to_alloc(h);
 		if (addr) {
 			/*
 			 * Use the beginning of the huge page to store the
@@ -1040,7 +1041,6 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
 			m = addr;
 			goto found;
 		}
-		hstate_next_node_to_alloc(h);
 		nr_nodes--;
 	}
 	return 0;