KVM: MMU: large page update_pte issue with non-PAE 32-bit guests (resend)

kvm_mmu_pte_write() does not handle 32-bit non-PAE large page backed
guests properly. It will instantiate two 2MB sptes pointing to the same
physical 2MB page when a guest large pte update is trapped.

Instead of duplicating code to handle this, disallow directory level
updates to happen through kvm_mmu_pte_write(), so the two 2MB sptes
emulating one guest 4MB pte can be correctly created by the page fault
handling path.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

commit 3094538739 (parent 6597ca09e6)
Author:    Marcelo Tosatti <mtosatti@redhat.com>
Date:      2008-06-11 20:32:40 -03:00
Committer: Avi Kivity

1 file changed, 7 insertions(+), 5 deletions(-)
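
Why two sptes per guest large pte: a non-PAE 32-bit guest large pte (PSE)
maps 4MB, while the shadow tables back large mappings with 2MB sptes (the
changelog's "two 2MB sptes emulating one guest 4MB pte"), so each guest
4MB pte has to be covered by two sptes at consecutive 2MB boundaries.  A
minimal userspace sketch of that arithmetic; shadow_halves and
SHADOW_2M_PAGES_4K are made-up illustrative names, not kernel symbols:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative constant, not a kernel symbol: a 2MB spte spans 512 4K
 * frames (a non-PAE 4MB guest pte spans 1024 of them).
 */
#define SHADOW_2M_PAGES_4K 512u

/*
 * For the base gfn mapped by a guest non-PAE large pte, compute the base
 * gfns of the two 2MB shadow sptes that must back it.
 */
static void shadow_halves(uint64_t guest_large_gfn, uint64_t out[2])
{
	out[0] = guest_large_gfn;                      /* low 2MB half */
	out[1] = guest_large_gfn + SHADOW_2M_PAGES_4K; /* high 2MB half */
}

int main(void)
{
	uint64_t halves[2];

	/* Guest large pte mapping guest-physical 0xc00000 (gfn 0xc00). */
	shadow_halves(0xc00, halves);

	/* Prints 0xc00 and 0xe00: two distinct 2MB-backed halves. */
	printf("spte gfns: %#llx and %#llx\n",
	       (unsigned long long)halves[0], (unsigned long long)halves[1]);
	return 0;
}

For gfn 0xc00 the halves land at gfns 0xc00 and 0xe00 (guest-physical
0xc00000 and 0xe00000); per the changelog, the pre-patch update path would
instead leave both shadow entries pointing at the same physical 2MB page.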


@@ -1581,11 +1581,13 @@ static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
 				  u64 *spte,
 				  const void *new)
 {
-	if ((sp->role.level != PT_PAGE_TABLE_LEVEL)
-	    && !vcpu->arch.update_pte.largepage) {
-		++vcpu->kvm->stat.mmu_pde_zapped;
-		return;
-	}
+	if (sp->role.level != PT_PAGE_TABLE_LEVEL) {
+		if (!vcpu->arch.update_pte.largepage ||
+		    sp->role.glevels == PT32_ROOT_LEVEL) {
+			++vcpu->kvm->stat.mmu_pde_zapped;
+			return;
+		}
+	}
 
 	++vcpu->kvm->stat.mmu_pte_updated;
 	if (sp->role.glevels == PT32_ROOT_LEVEL)
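
Read as a predicate, the new gate lets a trapped guest pte write be applied
to the shadow entry in place only when that entry is a last-level pte, or
when it is a large-page update for a guest with more than two paging levels;
every other case bumps mmu_pde_zapped and returns, leaving the page fault
path to rebuild the mapping (splitting a guest 4MB pte into two 2MB sptes
where needed).  A standalone restatement of that condition; the function and
parameter names are made up, and the level values mirror the kernel's PT_*
constants:

#include <stdbool.h>

/* Levels as used by the shadow MMU: 1 = last-level pte, 2 = pde. */
enum { PT_PAGE_TABLE_LEVEL = 1 };
/* Guest root levels: 2 = 32-bit non-PAE, 3 = PAE, 4 = long mode. */
enum { PT32_ROOT_LEVEL = 2, PT32E_ROOT_LEVEL = 3, PT64_ROOT_LEVEL = 4 };

/*
 * Sketch of the decision mmu_pte_write_new_pte() makes after this patch:
 * true means the write may be applied to the spte in place, false means
 * the entry stays zapped and the next fault recreates it.
 */
static bool update_spte_in_place(int spte_level, int guest_root_level,
				 bool largepage_update)
{
	if (spte_level == PT_PAGE_TABLE_LEVEL)
		return true;            /* ordinary 4K pte: update in place */
	if (!largepage_update)
		return false;           /* plain pde write: zap */
	/* 32-bit non-PAE guests are the case this patch excludes. */
	return guest_root_level != PT32_ROOT_LEVEL;
}

The tradeoff is the one the changelog describes: an occasional zap-and-refault
for a non-PAE large-page write (visible as mmu_pde_zapped) is cheaper than
duplicating the 4MB-to-two-2MB splitting logic inside kvm_mmu_pte_write().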