[PATCH] hugepage: Fix hugepage logic in free_pgtables() harder
Turns out the hugepage logic in free_pgtables() was doubly broken. The loop coalescing multiple normal page VMAs into one call to free_pgd_range() had an off-by-one error, which could mean it would coalesce one hugepage VMA into the same bundle (checking 'vma' not 'next' in the loop). I transferred this bug into the new is_vm_hugetlb_page() based version. Here's the fix.

This one didn't bite on powerpc previously for the same reason the is_hugepage_only_range() problem didn't: powerpc's hugetlb_free_pgd_range() is identical to free_pgd_range(). It didn't bite on ia64 because the hugepage region is distant enough from any other region that the separated PMD_SIZE distance test would always prevent coalescing the two together.

No libhugetlbfs testsuite regressions (ppc64, POWER5).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
commit 4866920b93 (parent 9da61aef0f)
@@ -285,7 +285,7 @@ void free_pgtables(struct mmu_gather **tlb, struct vm_area_struct *vma,
 		 * Optimization: gather nearby vmas into one call down
 		 */
 		while (next && next->vm_start <= vma->vm_end + PMD_SIZE
-		       && !is_vm_hugetlb_page(vma)) {
+		       && !is_vm_hugetlb_page(next)) {
			vma = next;
			next = vma->vm_next;
			anon_vma_unlink(vma);