mm/autonuma: don't use set_pte_at when updating protnone ptes

Architectures like ppc64 use the privilege access bit to mark a pte
non-accessible.  This implies that the kernel can do a copy_to_user to
an address marked for a numa fault, and also that there can be a
parallel hardware update for the pte.  set_pte_at() cannot be used in
such scenarios.  Hence switch the pte update to a ptep_get_and_clear()
and set_pte_at() combination.

[akpm@linux-foundation.org: remove unwanted ppc change, per Aneesh]
Link: http://lkml.kernel.org/r/1486400776-28114-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author: Aneesh Kumar K.V
Date:   2017-02-24 14:59:13 -08:00 (committed by Linus Torvalds)
Commit: cee216a696 (parent 3f472cc978)
1 changed file with 9 additions and 9 deletions

@@ -3400,32 +3400,32 @@ static int do_numa_page(struct vm_fault *vmf)
 	int last_cpupid;
 	int target_nid;
 	bool migrated = false;
-	pte_t pte = vmf->orig_pte;
-	bool was_writable = pte_write(pte);
+	pte_t pte;
+	bool was_writable = pte_write(vmf->orig_pte);
 	int flags = 0;
 
 	/*
 	 * The "pte" at this point cannot be used safely without
 	 * validation through pte_unmap_same(). It's of NUMA type but
 	 * the pfn may be screwed if the read is non atomic.
-	 *
-	 * We can safely just do a "set_pte_at()", because the old
-	 * page table entry is not accessible, so there would be no
-	 * concurrent hardware modifications to the PTE.
 	 */
 	vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
 	spin_lock(vmf->ptl);
-	if (unlikely(!pte_same(*vmf->pte, pte))) {
+	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
 	}
 
-	/* Make it present again */
+	/*
+	 * Make it present again, Depending on how arch implementes non
+	 * accessible ptes, some can allow access by kernel mode.
+	 */
+	pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
 	pte = pte_modify(pte, vma->vm_page_prot);
 	pte = pte_mkyoung(pte);
 	if (was_writable)
 		pte = pte_mkwrite(pte);
-	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
+	ptep_modify_prot_commit(vma->vm_mm, vmf->address, vmf->pte, pte);
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 	page = vm_normal_page(vma, vmf->address, pte);