x86/mm: Recompute physical address for every page of per-CPU CEA mapping
Recompute the physical address for each per-CPU page in the CPU entry area; a recent commit inadvertently modified cea_map_percpu_pages() such that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-2-seanjc@google.com
parent e996365ee7
commit 80d72a8f76
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 					early_pfn_to_nid(PFN_DOWN(pa)));
 
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, pa, prot);
+		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
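The following is a minimal userspace sketch (not kernel code) that models the bug the diff above fixes: computing the physical address once before the loop maps every page to the frame of the first page, whereas recomputing it per iteration gives each page its own frame. The names fake_phys() and NR_PAGES are hypothetical stand-ins invented for this illustration; fake_phys() plays the role of per_cpu_ptr_to_phys() under the simplifying assumption of an identity mapping.

/*
 * Illustrative userspace sketch only. fake_phys() is a hypothetical
 * stand-in for per_cpu_ptr_to_phys(); it identity-maps the pointer so
 * the buggy vs. fixed patterns can be compared in plain C.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL
#define NR_PAGES  4

static uintptr_t fake_phys(const void *ptr)
{
	return (uintptr_t)ptr;
}

int main(void)
{
	static char percpu_area[NR_PAGES * PAGE_SIZE];
	char *ptr = percpu_area;
	uintptr_t pa = fake_phys(ptr);	/* computed once, before the loop */

	for (int page = 0; page < NR_PAGES; page++, ptr += PAGE_SIZE) {
		/* Buggy pattern: 'pa' never advances, every entry hits page 0. */
		uintptr_t buggy = pa;
		/* Fixed pattern: recompute the address for the current page. */
		uintptr_t fixed = fake_phys(ptr);

		printf("page %d: buggy=%#lx fixed=%#lx\n",
		       page, (unsigned long)buggy, (unsigned long)fixed);
	}
	return 0;
}

Running this prints the same "buggy" address for all four pages while the "fixed" address advances by PAGE_SIZE each iteration, which mirrors why the one-character-looking change in cea_map_percpu_pages() matters.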