KVM: arm64: Fix handling of merging tables into a block entry
When dirty logging is enabled, we collapse block entries into tables
as necessary. If dirty logging gets canceled, we can end up merging
tables back into block entries.

When this happens, we must not only free the non-huge page-table
pages but also invalidate all the TLB entries that can potentially
cover the block. Otherwise, we end up with multiple possible
translations for the same physical page, which can legitimately
result in a TLB conflict.

To address this, replace the bogus invalidation by IPA with a full
VM invalidation. Although this is pretty heavy-handed, it happens
very infrequently and saves a bunch of invalidations by IPA.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
[maz: fixup commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201201201034.116760-3-wangyanan55@huawei.com
commit 3a0b870e34 (parent 5c646b7e1d)
@@ -502,7 +502,13 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 		return 0;
 
 	kvm_set_invalid_pte(ptep);
-	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
+
+	/*
+	 * Invalidate the whole stage-2, as we may have numerous leaf
+	 * entries below us which would otherwise need invalidating
+	 * individually.
+	 */
+	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
 	data->anchor = ptep;
 	return 0;
 }
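For context, below is a sketch of what stage2_map_walk_table_pre() looks like after this change. Only the lines inside the hunk above come from the diff; the data->anchor short-circuit and the kvm_block_mapping_supported() early-return check are assumptions about the unchanged surrounding code.

/*
 * Sketch of the pre-order table walker after this change. The hunk above
 * supplies everything from kvm_set_invalid_pte() downwards; the two
 * early-return checks are assumptions about the unchanged surrounding code.
 */
static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
				     kvm_pte_t *ptep,
				     struct stage2_map_data *data)
{
	/* Assumed: already collapsing a higher-level table, nothing to do. */
	if (data->anchor)
		return 0;

	/* Assumed: bail out unless a block mapping can replace this table. */
	if (!kvm_block_mapping_supported(addr, end, data->phys, level))
		return 0;

	/* Invalidate the table entry before touching the TLB. */
	kvm_set_invalid_pte(ptep);

	/*
	 * Invalidate the whole stage-2, as we may have numerous leaf
	 * entries below us which would otherwise need invalidating
	 * individually.
	 */
	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);

	/* Remember this entry so the post-order walker installs the block. */
	data->anchor = ptep;
	return 0;
}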