KVM: x86/mmu: WARN on an invalid root_hpa
WARN on the existing invalid root_hpa checks in __direct_map() and FNAME(fetch). The "legitimate" path that invalidated root_hpa in the middle of a page fault is long since gone, i.e. it should no longer be possible to invalidate root_hpa in the middle of a page fault[*].

The root_hpa checks were added by two related commits

  989c6b34f6 ("KVM: MMU: handle invalid root_hpa at __direct_map")
  37f6a4e237 ("KVM: x86: handle invalid root_hpa everywhere")

to fix a bug where nested_vmx_vmexit() could be called *in the middle* of a page fault. At the time, vmx_interrupt_allowed(), which was and still is used by kvm_can_do_async_pf() via ->interrupt_allowed(), directly invoked nested_vmx_vmexit() to switch from L2 to L1 to emulate a VM-Exit on a pending interrupt. Emulating the nested VM-Exit resulted in root_hpa being invalidated by kvm_mmu_reset_context() without explicitly terminating the page fault.

Now that root_hpa is checked for validity by kvm_mmu_page_fault(), WARN on an invalid root_hpa to detect any flows that reset the MMU while handling a page fault. The broken vmx_interrupt_allowed() behavior has long since been fixed, and resetting the MMU during a page fault should not be considered legal behavior.

[*] It's actually technically possible in FNAME(page_fault)() because it calls inject_page_fault() when the guest translation is invalid, but in that case the page fault handling is immediately terminated.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
commit 0c7a98e34d
parent ddce620821
@@ -3391,7 +3391,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	gfn_t base_gfn = gfn;
 
-	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
+	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
 		return RET_PF_RETRY;
 
 	if (likely(max_level > PT_PAGE_TABLE_LEVEL))
@@ -637,7 +637,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	if (FNAME(gpte_changed)(vcpu, gw, top_level))
 		goto out_gpte_changed;
 
-	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
+	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
 		goto out_gpte_changed;
 
 	for (shadow_walk_init(&it, vcpu, addr);