KVM: x86: Manually calculate reserved bits when loading PDPTRS

Manually generate the PDPTR reserved bit mask when explicitly loading
PDPTRs. The reserved bits that are being tracked by the MMU reflect the
current paging mode, which is unlikely to be PAE paging in the vast
majority of flows that use load_pdptrs(), e.g. CR0 and CR4 emulation,
__set_sregs(), etc... This can cause KVM to incorrectly signal a bad
PDPTR, or more likely, miss a reserved bit check and subsequently fail
a VM-Enter due to a bad VMCS.GUEST_PDPTR.

Add a one-off helper to generate the reserved bits instead of sharing
code across the MMU's calculations and the PDPTR emulation. The PDPTR
reserved bits are basically set in stone, and pushing a helper into the
MMU's calculation adds unnecessary complexity without improving
readability.

Opportunistically fix/update the comment for load_pdptrs().

Note, the buggy commit also introduced a deliberate functional change,
"Also remove bit 5-6 from rsvd_bits_mask per latest SDM.", which was
effectively (and correctly) reverted by commit cd9ae5fe47 ("KVM: x86:
Fix page-tables reserved bits"). A bit of SDM archaeology shows that
the SDM from late 2008 had a bug (likely a copy+paste error) where it
listed bits 6:5 as AVL and A for PDPTEs used for 4k entries but
reserved for 2mb entries. I.e. the SDM contradicted itself, and bits
6:5 are and always have been reserved.

Fixes: 20c466b561 ("KVM: Use rsvd_bits_mask in load_pdptrs()")
Cc: stable@vger.kernel.org
Cc: Nadav Amit <nadav.amit@gmail.com>
Reported-by: Doug Reiland <doug.reiland@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
@@ -674,8 +674,14 @@ static int kvm_read_nested_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 				       data, offset, len, access);
 }
 
+static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
+{
+	return rsvd_bits(cpuid_maxphyaddr(vcpu), 63) | rsvd_bits(5, 8) |
+	       rsvd_bits(1, 2);
+}
+
 /*
- * Load the pae pdptrs.  Return true is they are all valid.
+ * Load the pae pdptrs.  Return 1 if they are all valid, 0 otherwise.
  */
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 {
@@ -694,8 +700,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 	}
 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
 		if ((pdpte[i] & PT_PRESENT_MASK) &&
-		    (pdpte[i] &
-		     vcpu->arch.mmu->guest_rsvd_check.rsvd_bits_mask[0][2])) {
+		    (pdpte[i] & pdptr_rsvd_bits(vcpu))) {
 			ret = 0;
 			goto out;
 		}