From a8bd38dbc57c2fe074df2c9e549b9c2ad3183c83 Mon Sep 17 00:00:00 2001
From: Nikhil V
Date: Thu, 13 Jul 2023 12:37:57 +0530
Subject: [PATCH] arm64: mm: Make hibernation aware of KFENCE

In the restore path, swsusp_arch_suspend_exit uses copy_page() to
overwrite memory. However, with features like KFENCE enabled, some
pages may have been marked as not valid in the linear map, due to
which accesses to them are reported as invalid.

Consider a situation where page 'P' was part of the hibernation image.
When the resume kernel tries to restore the pages, the same page 'P'
is already in use in the resume kernel and is KFENCE protected, due to
which its mapping has been removed from the linear map. Since restoring
pages happens with the resume kernel page tables, we would end up
accessing 'P' during the copy, which results in a kernel page fault.

The proposed fix solves this issue by marking the PTE as valid for such
KFENCE protected pages.

Co-developed-by: Pavankumar Kondeti
Signed-off-by: Pavankumar Kondeti
Signed-off-by: Nikhil V
Link: https://lore.kernel.org/r/20230713070757.4093-1-quic_nprakash@quicinc.com
Signed-off-by: Will Deacon
---
 arch/arm64/mm/trans_pgd.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 4ea2eefbc053..e9ad391fc8ea 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -24,6 +24,7 @@
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
 #include <asm/trans_pgd.h>
+#include <linux/kfence.h>
 
 static void *trans_alloc(struct trans_pgd_info *info)
 {
@@ -41,7 +42,8 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 * the temporary mappings we use during restore.
 		 */
 		set_pte(dst_ptep, pte_mkwrite(pte));
-	} else if (debug_pagealloc_enabled() && !pte_none(pte)) {
+	} else if ((debug_pagealloc_enabled() ||
+		    is_kfence_address((void *)addr)) && !pte_none(pte)) {
 		/*
 		 * debug_pagealloc will removed the PTE_VALID bit if
 		 * the page isn't in use by the resume kernel. It may have
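
For reference, the sketch below shows approximately how _copy_pte() in
arch/arm64/mm/trans_pgd.c reads once the second hunk is applied. Only the
two-line "} else if (...)" condition comes from this patch; the rest of the
function body (including the BUG_ON() and the final set_pte()) is
reconstructed from the surrounding context and may differ slightly from the
tree the patch applies to.

static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
{
	pte_t pte = READ_ONCE(*src_ptep);

	if (pte_valid(pte)) {
		/*
		 * Resume will overwrite areas that may be marked read only
		 * (code, rodata). Clear the RDONLY bit from the temporary
		 * mappings we use during restore.
		 */
		set_pte(dst_ptep, pte_mkwrite(pte));
	} else if ((debug_pagealloc_enabled() ||
		    is_kfence_address((void *)addr)) && !pte_none(pte)) {
		/*
		 * The page is not mapped in the resume kernel's linear map,
		 * either because debug_pagealloc removed it or because it is
		 * a KFENCE protected page. It may still have been in use by
		 * the suspended kernel, so make the entry valid in the
		 * temporary copy so the restore can write through it.
		 */
		BUG_ON(!pfn_valid(pte_pfn(pte)));

		set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte)));
	}
}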