hugetlbfs: fix issue of preallocation of gigantic pages can't work
Preallocation of gigantic pages can't work because of commit b5389086ad ("hugetlbfs: extend the definition of hugepages parameter to support node allocation"). When nid is NUMA_NO_NODE (-1), alloc_bootmem_huge_page() always returns without doing any allocation. Fix this by adding an additional check.

Link: https://lkml.kernel.org/r/20211129133803.15653-1-yaozhenguo1@gmail.com
Fixes: b5389086ad ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
Signed-off-by: Zhenguo Yao <yaozhenguo1@gmail.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Tested-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent a7ebf564de
commit 4178158ef8
@@ -2973,7 +2973,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	struct huge_bootmem_page *m = NULL; /* initialize for clang */
 	int nr_nodes, node;
 
-	if (nid >= nr_online_nodes)
+	if (nid != NUMA_NO_NODE && nid >= nr_online_nodes)
 		return 0;
 	/* do node specific alloc */
 	if (nid != NUMA_NO_NODE) {
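Why does the old check fire for NUMA_NO_NODE at all? The commit message doesn't spell it out, but the usual arithmetic conversions explain it if nr_online_nodes is an unsigned type (as it is in kernels of this era): comparing the signed nid == -1 against it converts -1 to UINT_MAX, so the early return is always taken. The standalone sketch below is not kernel code; it mirrors the two checks with a made-up nr_online_nodes value purely to show the difference in behaviour.

#include <stdio.h>

#define NUMA_NO_NODE (-1)

/* Assumption for illustration: nr_online_nodes is unsigned, as in this
 * kernel series; the value 2 is arbitrary. */
static unsigned int nr_online_nodes = 2;

/* Old check: -1 is converted to UINT_MAX by the usual arithmetic
 * conversions, so the caller bails out even for NUMA_NO_NODE. */
static int old_check_rejects(int nid)
{
	return nid >= nr_online_nodes;
}

/* New check: NUMA_NO_NODE is excluded before the unsigned comparison. */
static int new_check_rejects(int nid)
{
	return nid != NUMA_NO_NODE && nid >= nr_online_nodes;
}

int main(void)
{
	printf("old check rejects NUMA_NO_NODE: %d\n",
	       old_check_rejects(NUMA_NO_NODE));   /* 1: allocation never happens */
	printf("new check rejects NUMA_NO_NODE: %d\n",
	       new_check_rejects(NUMA_NO_NODE));   /* 0: allocation proceeds */
	printf("new check rejects nid=8:        %d\n",
	       new_check_rejects(8));              /* 1: out-of-range nid still rejected */
	return 0;
}

With the fixed check, a node-agnostic request (nid == NUMA_NO_NODE) falls through to the round-robin allocation path, while explicit out-of-range node IDs are still rejected.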