KVM: arm/arm64: Ensure only THP is candidate for adjustment

PageTransCompoundMap() returns true for both THP and hugetlbfs
hugepages. This incorrectly causes stage 2 faults on unsupported
hugepage sizes (e.g., a 64K hugepage with 4K pages) to be treated as
THP faults.

Tighten the check to filter out hugetlbfs pages. This also ensures
that all unsupported hugepage sizes are consistently mapped as
PTE-level entries at stage 2.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: stable@vger.kernel.org # v4.13+
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Author: Punit Agrawal, 2018-10-01 16:54:35 +01:00 (committed by Marc Zyngier)
parent f0725345e3
commit fd2ef35828
1 changed file with 7 additions and 1 deletion


@@ -1231,8 +1231,14 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
 {
 	kvm_pfn_t pfn = *pfnp;
 	gfn_t gfn = *ipap >> PAGE_SHIFT;
+	struct page *page = pfn_to_page(pfn);
 
-	if (PageTransCompoundMap(pfn_to_page(pfn))) {
+	/*
+	 * PageTransCompoundMap() returns true for THP and
+	 * hugetlbfs. Make sure the adjustment is done only for THP
+	 * pages.
+	 */
+	if (!PageHuge(page) && PageTransCompoundMap(page)) {
 		unsigned long mask;
 		/*
 		 * The address we faulted on is backed by a transparent huge