x86/ldt: Use tlb_gather_mmu_fullmm() when freeing LDT page-tables
free_ldt_pgtables() uses the MMU gather API for batching TLB flushes over the call to free_pgd_range(). However, tlb_gather_mmu() expects to operate on user addresses and so passing LDT_{BASE,END}_ADDR will confuse the range-setting logic in __tlb_adjust_range(), causing the gather to identify a range starting at TASK_SIZE. Such a large range will be converted into a 'fullmm' flush by the low-level invalidation code, so change the caller to invoke tlb_gather_mmu_fullmm() directly.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Yu Zhao <yuzhao@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20210127235347.1402-7-will@kernel.org
This commit is contained in:
parent c7bd8010a3
commit 8cf55f24ce
@@ -398,7 +398,13 @@ static void free_ldt_pgtables(struct mm_struct *mm)
 	if (!boot_cpu_has(X86_FEATURE_PTI))
 		return;
 
-	tlb_gather_mmu(&tlb, mm);
+	/*
+	 * Although free_pgd_range() is intended for freeing user
+	 * page-tables, it also works out for kernel mappings on x86.
+	 * We use tlb_gather_mmu_fullmm() to avoid confusing the
+	 * range-tracking logic in __tlb_adjust_range().
+	 */
+	tlb_gather_mmu_fullmm(&tlb, mm);
 	free_pgd_range(&tlb, start, end, start, end);
 	tlb_finish_mmu(&tlb);
 #endif