ARM:

 - Provide a virtual cache topology to the guest to avoid inconsistencies with migration on heterogeneous systems. Non-secure software has no practical need to traverse the caches by set/way in the first place.
 - Add support for taking stage-2 access faults in parallel. This was an accidental omission in the original parallel faults implementation, but should provide a marginal improvement to machines w/o FEAT_HAFDBS (such as hardware from the fruit company).
 - A preamble to adding support for nested virtualization to KVM, including vEL2 register state, rudimentary nested exception handling and masking unsupported features for nested guests.
 - Fixes to the PSCI relay that avoid an unexpected host SVE trap when resuming a CPU when running pKVM.
 - VGIC maintenance interrupt support for the AIC
 - Improvements to the arch timer emulation, primarily aimed at reducing the trap overhead of running nested.
 - Add CONFIG_USERFAULTFD to the KVM selftests config fragment in the interest of CI systems.
 - Avoid VM-wide stop-the-world operations when a vCPU accesses its own redistributor.
 - Serialize when toggling CPACR_EL1.SMEN to avoid unexpected exceptions in the host.
 - Aesthetic and comment/kerneldoc fixes
 - Drop the vestiges of the old Columbia mailing list and add [Oliver] as co-maintainer

 This also drags in arm64's 'for-next/sme2' branch, because both it and the PSCI relay changes touch the EL2 initialization code.

RISC-V:

 - Fix wrong usage of PGDIR_SIZE instead of PUD_SIZE
 - Correctly place the guest in S-mode after redirecting a trap to the guest
 - Redirect illegal instruction traps to guest
 - SBI PMU support for guest

s390:

 - Two patches sorting out confusion between virtual and physical addresses, which currently are the same on s390.
 - A new ioctl that performs cmpxchg on guest memory
 - A few fixes

x86:

 - Change tdp_mmu to a read-only parameter
 - Separate TDP and shadow MMU page fault paths
 - Enable Hyper-V invariant TSC control
 - Fix a variety of APICv and AVIC bugs, some of them real-world, some of them affecting architecturally legal but unlikely to happen in practice
 - Mark APIC timer as expired if it's in one-shot mode and the count underflows while the vCPU task was being migrated
 - Advertise support for Intel's new fast REP string features
 - Fix a double-shootdown issue in the emergency reboot code
 - Ensure GIF=1 and disable SVM during an emergency reboot, i.e. give SVM similar treatment to VMX
 - Update Xen's TSC info CPUID sub-leaves as appropriate
 - Add support for Hyper-V's extended hypercalls, where "support" at this point is just forwarding the hypercalls to userspace
 - Clean up the kvm->lock vs. kvm->srcu sequences when updating the PMU and MSR filters
 - One-off fixes and cleanups
 - Fix and clean up the range-based TLB flushing code, used when KVM is running on Hyper-V
 - Add support for filtering PMU events using a mask. If userspace wants to restrict heavily what events the guest can use, it can now do so without needing an absurd number of filter entries
 - Clean up KVM's handling of "PMU MSRs to save", especially when vPMU support is disabled
 - Add PEBS support for Intel Sapphire Rapids
 - Fix a mostly benign overflow bug in SEV's send|receive_update_data()
 - Move several SVM-specific flags into vcpu_svm

x86 Intel:

 - Handle NMI VM-Exits before leaving the noinstr region
 - A few trivial cleanups in the VM-Enter flows
 - Stop enabling VMFUNC for L1 purely to document that KVM doesn't support EPTP switching (or any other VM function) for L1
 - Fix a crash when using eVMCS's enlightened MSR bitmaps

Generic:

 - Clean up the hardware enable and initialization flow, which was scattered around multiple arch-specific hooks. Instead, just let the arch code call into generic code. Both x86 and ARM should benefit from not having to fight common KVM code's notion of how to do initialization.
 - Account allocations in generic kvm_arch_alloc_vm()
 - Fix a memory leak if coalesced MMIO unregistration fails

selftests:

 - On x86, cache the CPU vendor (AMD vs. Intel) and use the info to emit the correct hypercall instruction instead of relying on KVM to patch in VMMCALL
 - Use TAP interface for kvm_binary_stats_test and tsc_msrs_test

-----BEGIN PGP SIGNATURE-----

iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmP2YA0UHHBib256aW5p
QHJlZGhhdC5jb20ACgkQv/vSX3jHroPg/Qf+J6nT+TkIa+8Ei+fN1oMTDp4YuIOx
mXvJ9mRK9sQ+tAUVwvDz3qN/fK5mjsYbRHIDlVc5p2Q3bCrVGDDqXPFfCcLx1u+O
9U9xjkO4JxD2LS9pc70FYOyzVNeJ8VMGOBbC2b0lkdYZ4KnUc6e/WWFKJs96bK+H
duo+RIVyaMthnvbTwSv1K3qQb61n6lSJXplywS8KWFK6NZAmBiEFDAWGRYQE9lLs
VcVcG0iDJNL/BQJ5InKCcvXVGskcCm9erDszPo7w4Bypa4S9AMS42DHUaRZrBJwV
/WqdH7ckIz7+OSV0W1j+bKTHAFVTCjXYOM7wQykgjawjICzMSnnG9Gpskw==
=goe1
-----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm updates from Paolo Bonzini:
 "ARM:

   - Provide a virtual cache topology to the guest to avoid inconsistencies with migration on heterogeneous systems. Non-secure software has no practical need to traverse the caches by set/way in the first place
   - Add support for taking stage-2 access faults in parallel. This was an accidental omission in the original parallel faults implementation, but should provide a marginal improvement to machines w/o FEAT_HAFDBS (such as hardware from the fruit company)
   - A preamble to adding support for nested virtualization to KVM, including vEL2 register state, rudimentary nested exception handling and masking unsupported features for nested guests
   - Fixes to the PSCI relay that avoid an unexpected host SVE trap when resuming a CPU when running pKVM
   - VGIC maintenance interrupt support for the AIC
   - Improvements to the arch timer emulation, primarily aimed at reducing the trap overhead of running nested
   - Add CONFIG_USERFAULTFD to the KVM selftests config fragment in the interest of CI systems
   - Avoid VM-wide stop-the-world operations when a vCPU accesses its own redistributor
   - Serialize when toggling CPACR_EL1.SMEN to avoid unexpected exceptions in the host
   - Aesthetic and comment/kerneldoc fixes
   - Drop the vestiges of the old Columbia mailing list and add [Oliver] as co-maintainer

  RISC-V:

   - Fix wrong usage of PGDIR_SIZE instead of PUD_SIZE
   - Correctly place the guest in S-mode after redirecting a trap to the guest
   - Redirect illegal instruction traps to guest
   - SBI PMU support for guest

  s390:

   - Sort out confusion between virtual and physical addresses, which currently are the same on s390
   - A new ioctl that performs cmpxchg on guest memory
   - A few fixes

  x86:

   - Change tdp_mmu to a read-only parameter
   - Separate TDP and shadow MMU page fault paths
   - Enable Hyper-V invariant TSC control
   - Fix a variety of APICv and AVIC bugs, some of them real-world, some of them affecting architecturally legal but unlikely to happen in practice
   - Mark APIC timer as expired if it's in one-shot mode and the count underflows while the vCPU task was being migrated
   - Advertise support for Intel's new fast REP string features
   - Fix a double-shootdown issue in the emergency reboot code
   - Ensure GIF=1 and disable SVM during an emergency reboot, i.e. give SVM similar treatment to VMX
   - Update Xen's TSC info CPUID sub-leaves as appropriate
   - Add support for Hyper-V's extended hypercalls, where "support" at this point is just forwarding the hypercalls to userspace
   - Clean up the kvm->lock vs. kvm->srcu sequences when updating the PMU and MSR filters
   - One-off fixes and cleanups
   - Fix and clean up the range-based TLB flushing code, used when KVM is running on Hyper-V
   - Add support for filtering PMU events using a mask. If userspace wants to restrict heavily what events the guest can use, it can now do so without needing an absurd number of filter entries
   - Clean up KVM's handling of "PMU MSRs to save", especially when vPMU support is disabled
   - Add PEBS support for Intel Sapphire Rapids
   - Fix a mostly benign overflow bug in SEV's send|receive_update_data()
   - Move several SVM-specific flags into vcpu_svm

  x86 Intel:

   - Handle NMI VM-Exits before leaving the noinstr region
   - A few trivial cleanups in the VM-Enter flows
   - Stop enabling VMFUNC for L1 purely to document that KVM doesn't support EPTP switching (or any other VM function) for L1
   - Fix a crash when using eVMCS's enlightened MSR bitmaps

  Generic:

   - Clean up the hardware enable and initialization flow, which was scattered around multiple arch-specific hooks. Instead, just let the arch code call into generic code. Both x86 and ARM should benefit from not having to fight common KVM code's notion of how to do initialization
   - Account allocations in generic kvm_arch_alloc_vm()
   - Fix a memory leak if coalesced MMIO unregistration fails

  selftests:

   - On x86, cache the CPU vendor (AMD vs. Intel) and use the info to emit the correct hypercall instruction instead of relying on KVM to patch in VMMCALL
   - Use TAP interface for kvm_binary_stats_test and tsc_msrs_test"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (325 commits)
  KVM: SVM: hyper-v: placate modpost section mismatch error
  KVM: x86/mmu: Make tdp_mmu_allowed static
  KVM: arm64: nv: Use reg_to_encoding() to get sysreg ID
  KVM: arm64: nv: Only toggle cache for virtual EL2 when SCTLR_EL2 changes
  KVM: arm64: nv: Filter out unsupported features from ID regs
  KVM: arm64: nv: Emulate EL12 register accesses from the virtual EL2
  KVM: arm64: nv: Allow a sysreg to be hidden from userspace only
  KVM: arm64: nv: Emulate PSTATE.M for a guest hypervisor
  KVM: arm64: nv: Add accessors for SPSR_EL1, ELR_EL1 and VBAR_EL1 from virtual EL2
  KVM: arm64: nv: Handle SMCs taken from virtual EL2
  KVM: arm64: nv: Handle trapped ERET from virtual EL2
  KVM: arm64: nv: Inject HVC exceptions to the virtual EL2
  KVM: arm64: nv: Support virtual EL2 exceptions
  KVM: arm64: nv: Handle HCR_EL2.NV system register traps
  KVM: arm64: nv: Add nested virt VCPU primitives for vEL2 VCPU state
  KVM: arm64: nv: Add EL2 system registers to vcpu context
  KVM: arm64: nv: Allow userspace to set PSR_MODE_EL2x
  KVM: arm64: nv: Reset VCPU to EL2 registers if VCPU nested virt is set
  KVM: arm64: nv: Introduce nested virtualization VCPU feature
  KVM: arm64: Use the S2 MMU context to iterate over S2 table
  ...
commit 49d5759268
@ -2536,9 +2536,14 @@
|
|||
protected: nVHE-based mode with support for guests whose
|
||||
state is kept private from the host.
|
||||
|
||||
nested: VHE-based mode with support for nested
|
||||
virtualization. Requires at least ARMv8.3
|
||||
hardware.
|
||||
|
||||
Defaults to VHE/nVHE based on hardware support. Setting
|
||||
mode to "protected" will disable kexec and hibernation
|
||||
for the host.
|
||||
for the host. "nested" is experimental and should be
|
||||
used with extreme caution.
|
||||
|
||||
kvm-arm.vgic_v3_group0_trap=
|
||||
[KVM,ARM] Trap guest accesses to GICv3 group-0
|
||||
|
|
|
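As a rough sketch of how the new "nested" mode is consumed (assuming a kernel booted with kvm-arm.mode=nested on FEAT_NV-capable hardware and an already-created VM and vCPU; this is illustrative code, not part of the patch), a VMM opts a vCPU into virtual EL2 with the KVM_ARM_VCPU_HAS_EL2 feature bit added later in this series::

	#include <linux/kvm.h>
	#include <string.h>
	#include <sys/ioctl.h>

	/* Initialize a vCPU so that the guest runs with a virtual EL2. */
	static int vcpu_init_nested(int vm_fd, int vcpu_fd)
	{
		struct kvm_vcpu_init init;

		memset(&init, 0, sizeof(init));
		/* Let KVM pick the preferred target for this host CPU. */
		if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
			return -1;

		/*
		 * Request nested virtualization; KVM rejects this unless
		 * kvm-arm.mode=nested was set and the CPUs support it.
		 */
		init.features[0] |= 1u << KVM_ARM_VCPU_HAS_EL2;

		return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
	}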
@ -3736,7 +3736,7 @@ The fields in each entry are defined as follows:
|
|||
:Parameters: struct kvm_s390_mem_op (in)
|
||||
:Returns: = 0 on success,
|
||||
< 0 on generic error (e.g. -EFAULT or -ENOMEM),
|
||||
> 0 if an exception occurred while walking the page tables
|
||||
16 bit program exception code if the access causes such an exception
|
||||
|
||||
Read or write data from/to the VM's memory.
|
||||
The KVM_CAP_S390_MEM_OP_EXTENSION capability specifies what functionality is
|
||||
|
@ -3754,6 +3754,8 @@ Parameters are specified via the following structure::
|
|||
struct {
|
||||
__u8 ar; /* the access register number */
|
||||
__u8 key; /* access key, ignored if flag unset */
|
||||
__u8 pad1[6]; /* ignored */
|
||||
__u64 old_addr; /* ignored if flag unset */
|
||||
};
|
||||
__u32 sida_offset; /* offset into the sida */
|
||||
__u8 reserved[32]; /* ignored */
|
||||
|
@ -3781,6 +3783,7 @@ Possible operations are:
|
|||
* ``KVM_S390_MEMOP_ABSOLUTE_WRITE``
|
||||
* ``KVM_S390_MEMOP_SIDA_READ``
|
||||
* ``KVM_S390_MEMOP_SIDA_WRITE``
|
||||
* ``KVM_S390_MEMOP_ABSOLUTE_CMPXCHG``
|
||||
|
||||
Logical read/write:
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
|
@ -3829,7 +3832,7 @@ the checks required for storage key protection as one operation (as opposed to
|
|||
user space getting the storage keys, performing the checks, and accessing
|
||||
memory thereafter, which could lead to a delay between check and access).
|
||||
Absolute accesses are permitted for the VM ioctl if KVM_CAP_S390_MEM_OP_EXTENSION
|
||||
is > 0.
|
||||
has the KVM_S390_MEMOP_EXTENSION_CAP_BASE bit set.
|
||||
Currently absolute accesses are not permitted for VCPU ioctls.
|
||||
Absolute accesses are permitted for non-protected guests only.
|
||||
|
||||
|
@ -3837,7 +3840,26 @@ Supported flags:
|
|||
* ``KVM_S390_MEMOP_F_CHECK_ONLY``
|
||||
* ``KVM_S390_MEMOP_F_SKEY_PROTECTION``
|
||||
|
||||
The semantics of the flags are as for logical accesses.
|
||||
The semantics of the flags common with logical accesses are as for logical
|
||||
accesses.
|
||||
|
||||
Absolute cmpxchg:
|
||||
^^^^^^^^^^^^^^^^^
|
||||
|
||||
Perform cmpxchg on absolute guest memory. Intended for use with the
|
||||
KVM_S390_MEMOP_F_SKEY_PROTECTION flag.
|
||||
Instead of doing an unconditional write, the access occurs only if the target
|
||||
location contains the value pointed to by "old_addr".
|
||||
This is performed as an atomic cmpxchg with the length specified by the "size"
|
||||
parameter. "size" must be a power of two up to and including 16.
|
||||
If the exchange did not take place because the target value doesn't match the
|
||||
old value, the value "old_addr" points to is replaced by the target value.
|
||||
User space can tell if an exchange took place by checking if this replacement
|
||||
occurred. The cmpxchg op is permitted for the VM ioctl if
|
||||
KVM_CAP_S390_MEM_OP_EXTENSION has flag KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG set.
|
||||
|
||||
Supported flags:
|
||||
* ``KVM_S390_MEMOP_F_SKEY_PROTECTION``
|
||||
|
||||
SIDA read/write:
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
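The absolute cmpxchg operation documented above lends itself to a short userspace sketch (illustrative only, not part of this patch; vm_fd is assumed to be an open VM file descriptor and the KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG capability bit is assumed to have been checked already)::

	#include <linux/kvm.h>
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>

	/* Returns 1 if the exchange took place, 0 if not, -1 on any failure. */
	static int guest_abs_cmpxchg8(int vm_fd, uint64_t abs_addr,
				      uint64_t old_val, uint64_t new_val)
	{
		struct kvm_s390_mem_op op;
		uint64_t expected = old_val;

		memset(&op, 0, sizeof(op));
		op.gaddr    = abs_addr;                       /* absolute guest address */
		op.size     = sizeof(uint64_t);               /* power of two, <= 16 */
		op.op       = KVM_S390_MEMOP_ABSOLUTE_CMPXCHG;
		op.buf      = (uint64_t)(uintptr_t)&new_val;  /* value to store */
		op.old_addr = (uint64_t)(uintptr_t)&expected; /* expected old value */
		/* op.flags |= KVM_S390_MEMOP_F_SKEY_PROTECTION; optional, with op.key set */

		/* Treat any non-zero return (error or program exception) as failure. */
		if (ioctl(vm_fd, KVM_S390_MEM_OP, &op))
			return -1;

		/* On a miss, "expected" has been replaced with the target's value. */
		return expected == old_val;
	}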
@ -4457,6 +4479,18 @@ not holding a previously reported uncorrected error).
|
|||
:Parameters: struct kvm_s390_cmma_log (in, out)
|
||||
:Returns: 0 on success, a negative value on error
|
||||
|
||||
Errors:
|
||||
|
||||
====== =============================================================
|
||||
ENOMEM not enough memory can be allocated to complete the task
|
||||
ENXIO if CMMA is not enabled
|
||||
EINVAL if KVM_S390_CMMA_PEEK is not set but migration mode was not enabled
|
||||
EINVAL if KVM_S390_CMMA_PEEK is not set but dirty tracking has been
|
||||
disabled (and thus migration mode was automatically disabled)
|
||||
EFAULT if the userspace address is invalid or if no page table is
|
||||
present for the addresses (e.g. when using hugepages).
|
||||
====== =============================================================
|
||||
|
||||
This ioctl is used to get the values of the CMMA bits on the s390
|
||||
architecture. It is meant to be used in two scenarios:
|
||||
|
||||
|
@ -4537,12 +4571,6 @@ mask is unused.
|
|||
|
||||
values points to the userspace buffer where the result will be stored.
|
||||
|
||||
This ioctl can fail with -ENOMEM if not enough memory can be allocated to
|
||||
complete the task, with -ENXIO if CMMA is not enabled, with -EINVAL if
|
||||
KVM_S390_CMMA_PEEK is not set but migration mode was not enabled, with
|
||||
-EFAULT if the userspace address is invalid or if no page table is
|
||||
present for the addresses (e.g. when using hugepages).
|
||||
|
||||
4.108 KVM_S390_SET_CMMA_BITS
|
||||
----------------------------
|
||||
|
||||
|
@ -5005,6 +5033,15 @@ using this ioctl.
|
|||
:Parameters: struct kvm_pmu_event_filter (in)
|
||||
:Returns: 0 on success, -1 on error
|
||||
|
||||
Errors:
|
||||
|
||||
====== ============================================================
|
||||
EFAULT args[0] cannot be accessed
|
||||
EINVAL args[0] contains invalid data in the filter or filter events
|
||||
E2BIG nevents is too large
|
||||
EBUSY not enough memory to allocate the filter
|
||||
====== ============================================================
|
||||
|
||||
::
|
||||
|
||||
struct kvm_pmu_event_filter {
|
||||
|
@ -5016,14 +5053,69 @@ using this ioctl.
|
|||
__u64 events[0];
|
||||
};
|
||||
|
||||
This ioctl restricts the set of PMU events that the guest can program.
|
||||
The argument holds a list of events which will be allowed or denied.
|
||||
The eventsel+umask of each event the guest attempts to program is compared
|
||||
against the events field to determine whether the guest should have access.
|
||||
The events field only controls general purpose counters; fixed purpose
|
||||
counters are controlled by the fixed_counter_bitmap.
|
||||
This ioctl restricts the set of PMU events the guest can program by limiting
|
||||
which event select and unit mask combinations are permitted.
|
||||
|
||||
No flags are defined yet, the field must be zero.
|
||||
The argument holds a list of filter events which will be allowed or denied.
|
||||
|
||||
Filter events only control general purpose counters; fixed purpose counters
|
||||
are controlled by the fixed_counter_bitmap.
|
||||
|
||||
Valid values for 'flags'::
|
||||
|
||||
``0``
|
||||
|
||||
To use this mode, clear the 'flags' field.
|
||||
|
||||
In this mode each event will contain an event select + unit mask.
|
||||
|
||||
When the guest attempts to program the PMU the guest's event select +
|
||||
unit mask is compared against the filter events to determine whether the
|
||||
guest should have access.
|
||||
|
||||
``KVM_PMU_EVENT_FLAG_MASKED_EVENTS``
|
||||
:Capability: KVM_CAP_PMU_EVENT_MASKED_EVENTS
|
||||
|
||||
In this mode each filter event will contain an event select, mask, match, and
|
||||
exclude value. To encode a masked event use::
|
||||
|
||||
KVM_PMU_ENCODE_MASKED_ENTRY()
|
||||
|
||||
An encoded event will follow this layout::
|
||||
|
||||
Bits Description
|
||||
---- -----------
|
||||
7:0 event select (low bits)
|
||||
15:8 umask match
|
||||
31:16 unused
|
||||
35:32 event select (high bits)
|
||||
36:54 unused
|
||||
55 exclude bit
|
||||
63:56 umask mask
|
||||
|
||||
When the guest attempts to program the PMU, these steps are followed in
|
||||
determining if the guest should have access:
|
||||
|
||||
1. Match the event select from the guest against the filter events.
|
||||
2. If a match is found, match the guest's unit mask to the mask and match
|
||||
values of the included filter events.
|
||||
I.e. (unit mask & mask) == match && !exclude.
|
||||
3. If a match is found, match the guest's unit mask to the mask and match
|
||||
values of the excluded filter events.
|
||||
I.e. (unit mask & mask) == match && exclude.
|
||||
4.
|
||||
a. If an included match is found and an excluded match is not found, filter
|
||||
the event.
|
||||
b. For everything else, do not filter the event.
|
||||
5.
|
||||
a. If the event is filtered and it's an allow list, allow the guest to
|
||||
program the event.
|
||||
b. If the event is filtered and it's a deny list, do not allow the guest to
|
||||
program the event.
|
||||
|
||||
When setting a new pmu event filter, -EINVAL will be returned if any of the
|
||||
unused fields are set or if any of the high bits (35:32) in the event
|
||||
select are set when called on Intel.
|
||||
|
||||
Valid values for 'action'::
|
||||
|
||||
|
|
|
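For reference, the masked-event layout in the table above can be packed by hand as in the following sketch (illustrative only; the KVM_PMU_ENCODE_MASKED_ENTRY() macro mentioned above is the intended way to build these entries)::

	#include <stdint.h>

	/*
	 * Pack one masked filter event per the documented layout:
	 *   7:0   event select (low bits)
	 *   15:8  umask match
	 *   35:32 event select (high bits)
	 *   55    exclude bit
	 *   63:56 umask mask
	 */
	static uint64_t encode_masked_event(uint16_t event_select, uint8_t umask_mask,
					    uint8_t umask_match, int exclude)
	{
		return ((uint64_t)event_select & 0xffULL) |
		       ((uint64_t)umask_match << 8) |
		       ((((uint64_t)event_select >> 8) & 0xfULL) << 32) |
		       ((uint64_t)!!exclude << 55) |
		       ((uint64_t)umask_mask << 56);
	}

With KVM_PMU_EVENT_FLAG_MASKED_EVENTS set in 'flags', such encoded values are what go into the events[] array of struct kvm_pmu_event_filter shown earlier.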
@ -302,6 +302,10 @@ Allows userspace to start migration mode, needed for PGSTE migration.
|
|||
Setting this attribute when migration mode is already active will have
|
||||
no effects.
|
||||
|
||||
Dirty tracking must be enabled on all memslots, else -EINVAL is returned. When
|
||||
dirty tracking is disabled on any memslot, migration mode is automatically
|
||||
stopped.
|
||||
|
||||
:Parameters: none
|
||||
:Returns: -ENOMEM if there is not enough free memory to start migration mode;
|
||||
-EINVAL if the state of the VM is invalid (e.g. no memory defined);
|
||||
|
|
|
@ -9,6 +9,8 @@ KVM Lock Overview
|
|||
|
||||
The acquisition orders for mutexes are as follows:
|
||||
|
||||
- cpus_read_lock() is taken outside kvm_lock
|
||||
|
||||
- kvm->lock is taken outside vcpu->mutex
|
||||
|
||||
- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock
|
||||
|
@ -226,15 +228,10 @@ time it will be set using the Dirty tracking mechanism described above.
|
|||
:Type: mutex
|
||||
:Arch: any
|
||||
:Protects: - vm_list
|
||||
|
||||
``kvm_count_lock``
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
||||
:Type: raw_spinlock_t
|
||||
:Arch: any
|
||||
:Protects: - hardware virtualization enable/disable
|
||||
:Comment: 'raw' because hardware enabling/disabling must be atomic /wrt
|
||||
migration.
|
||||
- kvm_usage_count
|
||||
- hardware virtualization enable/disable
|
||||
:Comment: KVM also disables CPU hotplug via cpus_read_lock() during
|
||||
enable/disable.
|
||||
|
||||
``kvm->mn_invalidate_lock``
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
@ -292,3 +289,13 @@ time it will be set using the Dirty tracking mechanism described above.
|
|||
wakeup notification event since external interrupts from the
|
||||
assigned devices happens, we will find the vCPU on the list to
|
||||
wakeup.
|
||||
|
||||
``vendor_module_lock``
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
:Type: mutex
|
||||
:Arch: x86
|
||||
:Protects: loading a vendor module (kvm_amd or kvm_intel)
|
||||
:Comment: Exists because using kvm_lock leads to deadlock. cpu_hotplug_lock is
|
||||
taken outside of kvm_lock, e.g. in KVM's CPU online/offline callbacks, and
|
||||
many operations need to take cpu_hotplug_lock when loading a vendor module,
|
||||
e.g. updating static calls.
|
||||
|
|
|
@ -37,3 +37,14 @@ Nested virtualization features
|
|||
------------------------------
|
||||
|
||||
TBD
|
||||
|
||||
x2APIC
|
||||
------
|
||||
When KVM_X2APIC_API_USE_32BIT_IDS is enabled, KVM activates a hack/quirk that
|
||||
allows sending events to a single vCPU using its x2APIC ID even if the target
|
||||
vCPU has legacy xAPIC enabled, e.g. to bring up hotplugged vCPUs via INIT-SIPI
|
||||
on VMs with > 255 vCPUs. A side effect of the quirk is that, if multiple vCPUs
|
||||
have the same physical APIC ID, KVM will deliver events targeting that APIC ID
|
||||
only to the vCPU with the lowest vCPU ID. If KVM_X2APIC_API_USE_32BIT_IDS is
|
||||
not enabled, KVM follows x86 architecture when processing interrupts (all vCPUs
|
||||
matching the target APIC ID receive the interrupt).
|
||||
|
|
|
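The quirk described above is keyed off KVM_X2APIC_API_USE_32BIT_IDS, which a VMM enables once per VM; a minimal sketch (illustrative only, assuming an open VM file descriptor vm_fd) looks like::

	#include <linux/kvm.h>
	#include <string.h>
	#include <sys/ioctl.h>

	/* Opt in to 32-bit x2APIC destination IDs (and the hotplug quirk above). */
	static int enable_x2apic_32bit_ids(int vm_fd)
	{
		struct kvm_enable_cap cap;

		memset(&cap, 0, sizeof(cap));
		cap.cap     = KVM_CAP_X2APIC_API;
		cap.args[0] = KVM_X2APIC_API_USE_32BIT_IDS;

		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}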
@ -11269,13 +11269,12 @@ F: virt/kvm/*
|
|||
|
||||
KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
|
||||
M: Marc Zyngier <maz@kernel.org>
|
||||
M: Oliver Upton <oliver.upton@linux.dev>
|
||||
R: James Morse <james.morse@arm.com>
|
||||
R: Suzuki K Poulose <suzuki.poulose@arm.com>
|
||||
R: Oliver Upton <oliver.upton@linux.dev>
|
||||
R: Zenghui Yu <yuzenghui@huawei.com>
|
||||
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
|
||||
L: kvmarm@lists.linux.dev
|
||||
L: kvmarm@lists.cs.columbia.edu (deprecated, moderated for non-subscribers)
|
||||
S: Maintained
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git
|
||||
F: arch/arm64/include/asm/kvm*
|
||||
|
|
|
@ -16,6 +16,15 @@
|
|||
#define CLIDR_LOC(clidr) (((clidr) >> CLIDR_LOC_SHIFT) & 0x7)
|
||||
#define CLIDR_LOUIS(clidr) (((clidr) >> CLIDR_LOUIS_SHIFT) & 0x7)
|
||||
|
||||
/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
|
||||
#define CLIDR_CTYPE_SHIFT(level) (3 * (level - 1))
|
||||
#define CLIDR_CTYPE_MASK(level) (7 << CLIDR_CTYPE_SHIFT(level))
|
||||
#define CLIDR_CTYPE(clidr, level) \
|
||||
(((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
|
||||
|
||||
/* Ttypen, bits [2(n - 1) + 34 : 2(n - 1) + 33], for n = 1 to 7 */
|
||||
#define CLIDR_TTYPE_SHIFT(level) (2 * ((level) - 1) + CLIDR_EL1_Ttypen_SHIFT)
|
||||
|
||||
/*
|
||||
* Memory returned by kmalloc() may be used for DMA, so we must make
|
||||
* sure that all such allocations are cache aligned. Otherwise,
|
||||
|
|
|
@ -196,4 +196,103 @@
|
|||
__init_el2_nvhe_prepare_eret
|
||||
.endm
|
||||
|
||||
#ifndef __KVM_NVHE_HYPERVISOR__
|
||||
// This will clobber tmp1 and tmp2, and expect tmp1 to contain
|
||||
// the id register value as read from the HW
|
||||
.macro __check_override idreg, fld, width, pass, fail, tmp1, tmp2
|
||||
ubfx \tmp1, \tmp1, #\fld, #\width
|
||||
cbz \tmp1, \fail
|
||||
|
||||
adr_l \tmp1, \idreg\()_override
|
||||
ldr \tmp2, [\tmp1, FTR_OVR_VAL_OFFSET]
|
||||
ldr \tmp1, [\tmp1, FTR_OVR_MASK_OFFSET]
|
||||
ubfx \tmp2, \tmp2, #\fld, #\width
|
||||
ubfx \tmp1, \tmp1, #\fld, #\width
|
||||
cmp \tmp1, xzr
|
||||
and \tmp2, \tmp2, \tmp1
|
||||
csinv \tmp2, \tmp2, xzr, ne
|
||||
cbnz \tmp2, \pass
|
||||
b \fail
|
||||
.endm
|
||||
|
||||
// This will clobber tmp1 and tmp2
|
||||
.macro check_override idreg, fld, pass, fail, tmp1, tmp2
|
||||
mrs \tmp1, \idreg\()_el1
|
||||
__check_override \idreg \fld 4 \pass \fail \tmp1 \tmp2
|
||||
.endm
|
||||
#else
|
||||
// This will clobber tmp
|
||||
.macro __check_override idreg, fld, width, pass, fail, tmp, ignore
|
||||
ldr_l \tmp, \idreg\()_el1_sys_val
|
||||
ubfx \tmp, \tmp, #\fld, #\width
|
||||
cbnz \tmp, \pass
|
||||
b \fail
|
||||
.endm
|
||||
|
||||
.macro check_override idreg, fld, pass, fail, tmp, ignore
|
||||
__check_override \idreg \fld 4 \pass \fail \tmp \ignore
|
||||
.endm
|
||||
#endif
|
||||
|
||||
.macro finalise_el2_state
|
||||
check_override id_aa64pfr0, ID_AA64PFR0_EL1_SVE_SHIFT, .Linit_sve_\@, .Lskip_sve_\@, x1, x2
|
||||
|
||||
.Linit_sve_\@: /* SVE register access */
|
||||
mrs x0, cptr_el2 // Disable SVE traps
|
||||
bic x0, x0, #CPTR_EL2_TZ
|
||||
msr cptr_el2, x0
|
||||
isb
|
||||
mov x1, #ZCR_ELx_LEN_MASK // SVE: Enable full vector
|
||||
msr_s SYS_ZCR_EL2, x1 // length for EL1.
|
||||
|
||||
.Lskip_sve_\@:
|
||||
check_override id_aa64pfr1, ID_AA64PFR1_EL1_SME_SHIFT, .Linit_sme_\@, .Lskip_sme_\@, x1, x2
|
||||
|
||||
.Linit_sme_\@: /* SME register access and priority mapping */
|
||||
mrs x0, cptr_el2 // Disable SME traps
|
||||
bic x0, x0, #CPTR_EL2_TSM
|
||||
msr cptr_el2, x0
|
||||
isb
|
||||
|
||||
mrs x1, sctlr_el2
|
||||
orr x1, x1, #SCTLR_ELx_ENTP2 // Disable TPIDR2 traps
|
||||
msr sctlr_el2, x1
|
||||
isb
|
||||
|
||||
mov x0, #0 // SMCR controls
|
||||
|
||||
// Full FP in SM?
|
||||
mrs_s x1, SYS_ID_AA64SMFR0_EL1
|
||||
__check_override id_aa64smfr0, ID_AA64SMFR0_EL1_FA64_SHIFT, 1, .Linit_sme_fa64_\@, .Lskip_sme_fa64_\@, x1, x2
|
||||
|
||||
.Linit_sme_fa64_\@:
|
||||
orr x0, x0, SMCR_ELx_FA64_MASK
|
||||
.Lskip_sme_fa64_\@:
|
||||
|
||||
// ZT0 available?
|
||||
mrs_s x1, SYS_ID_AA64SMFR0_EL1
|
||||
__check_override id_aa64smfr0, ID_AA64SMFR0_EL1_SMEver_SHIFT, 4, .Linit_sme_zt0_\@, .Lskip_sme_zt0_\@, x1, x2
|
||||
.Linit_sme_zt0_\@:
|
||||
orr x0, x0, SMCR_ELx_EZT0_MASK
|
||||
.Lskip_sme_zt0_\@:
|
||||
|
||||
orr x0, x0, #SMCR_ELx_LEN_MASK // Enable full SME vector
|
||||
msr_s SYS_SMCR_EL2, x0 // length for EL1.
|
||||
|
||||
mrs_s x1, SYS_SMIDR_EL1 // Priority mapping supported?
|
||||
ubfx x1, x1, #SMIDR_EL1_SMPS_SHIFT, #1
|
||||
cbz x1, .Lskip_sme_\@
|
||||
|
||||
msr_s SYS_SMPRIMAP_EL2, xzr // Make all priorities equal
|
||||
|
||||
mrs x1, id_aa64mmfr1_el1 // HCRX_EL2 present?
|
||||
ubfx x1, x1, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
|
||||
cbz x1, .Lskip_sme_\@
|
||||
|
||||
mrs_s x1, SYS_HCRX_EL2
|
||||
orr x1, x1, #HCRX_EL2_SMPME_MASK // Enable priority mapping
|
||||
msr_s SYS_HCRX_EL2, x1
|
||||
.Lskip_sme_\@:
|
||||
.endm
|
||||
|
||||
#endif /* __ARM_KVM_INIT_H__ */
|
||||
|
|
|
@ -272,6 +272,10 @@
|
|||
(((e) & ESR_ELx_SYS64_ISS_OP2_MASK) >> \
|
||||
ESR_ELx_SYS64_ISS_OP2_SHIFT))
|
||||
|
||||
/* ISS field definitions for ERET/ERETAA/ERETAB trapping */
|
||||
#define ESR_ELx_ERET_ISS_ERET 0x2
|
||||
#define ESR_ELx_ERET_ISS_ERETA 0x1
|
||||
|
||||
/*
|
||||
* ISS field definitions for floating-point exception traps
|
||||
* (FP_EXC_32/FP_EXC_64).
|
||||
|
|
|
@ -81,11 +81,12 @@
|
|||
* SWIO: Turn set/way invalidates into set/way clean+invalidate
|
||||
* PTW: Take a stage2 fault if a stage1 walk steps in device memory
|
||||
* TID3: Trap EL1 reads of group 3 ID registers
|
||||
* TID2: Trap CTR_EL0, CCSIDR2_EL1, CLIDR_EL1, and CSSELR_EL1
|
||||
*/
|
||||
#define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
|
||||
HCR_BSU_IS | HCR_FB | HCR_TACR | \
|
||||
HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
|
||||
HCR_FMO | HCR_IMO | HCR_PTW | HCR_TID3 )
|
||||
HCR_FMO | HCR_IMO | HCR_PTW | HCR_TID3 | HCR_TID2)
|
||||
#define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
|
||||
#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK | HCR_ATA)
|
||||
#define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
|
||||
|
@ -344,10 +345,26 @@
|
|||
ECN(SP_ALIGN), ECN(FP_EXC32), ECN(FP_EXC64), ECN(SERROR), \
|
||||
ECN(BREAKPT_LOW), ECN(BREAKPT_CUR), ECN(SOFTSTP_LOW), \
|
||||
ECN(SOFTSTP_CUR), ECN(WATCHPT_LOW), ECN(WATCHPT_CUR), \
|
||||
ECN(BKPT32), ECN(VECTOR32), ECN(BRK64)
|
||||
ECN(BKPT32), ECN(VECTOR32), ECN(BRK64), ECN(ERET)
|
||||
|
||||
#define CPACR_EL1_TTA (1 << 28)
|
||||
#define CPACR_EL1_DEFAULT (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |\
|
||||
CPACR_EL1_ZEN_EL1EN)
|
||||
|
||||
#define kvm_mode_names \
|
||||
{ PSR_MODE_EL0t, "EL0t" }, \
|
||||
{ PSR_MODE_EL1t, "EL1t" }, \
|
||||
{ PSR_MODE_EL1h, "EL1h" }, \
|
||||
{ PSR_MODE_EL2t, "EL2t" }, \
|
||||
{ PSR_MODE_EL2h, "EL2h" }, \
|
||||
{ PSR_MODE_EL3t, "EL3t" }, \
|
||||
{ PSR_MODE_EL3h, "EL3h" }, \
|
||||
{ PSR_AA32_MODE_USR, "32-bit USR" }, \
|
||||
{ PSR_AA32_MODE_FIQ, "32-bit FIQ" }, \
|
||||
{ PSR_AA32_MODE_IRQ, "32-bit IRQ" }, \
|
||||
{ PSR_AA32_MODE_SVC, "32-bit SVC" }, \
|
||||
{ PSR_AA32_MODE_ABT, "32-bit ABT" }, \
|
||||
{ PSR_AA32_MODE_HYP, "32-bit HYP" }, \
|
||||
{ PSR_AA32_MODE_UND, "32-bit UND" }, \
|
||||
{ PSR_AA32_MODE_SYS, "32-bit SYS" }
|
||||
|
||||
#endif /* __ARM64_KVM_ARM_H__ */
|
||||
|
|
|
@ -33,6 +33,12 @@ enum exception_type {
|
|||
except_type_serror = 0x180,
|
||||
};
|
||||
|
||||
#define kvm_exception_type_names \
|
||||
{ except_type_sync, "SYNC" }, \
|
||||
{ except_type_irq, "IRQ" }, \
|
||||
{ except_type_fiq, "FIQ" }, \
|
||||
{ except_type_serror, "SERROR" }
|
||||
|
||||
bool kvm_condition_valid32(const struct kvm_vcpu *vcpu);
|
||||
void kvm_skip_instr32(struct kvm_vcpu *vcpu);
|
||||
|
||||
|
@ -44,6 +50,10 @@ void kvm_inject_size_fault(struct kvm_vcpu *vcpu);
|
|||
|
||||
void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
|
||||
|
||||
void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
|
||||
int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2);
|
||||
int kvm_inject_nested_irq(struct kvm_vcpu *vcpu);
|
||||
|
||||
#if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
|
||||
static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
@ -88,10 +98,6 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
|
|||
if (vcpu_el1_is_32bit(vcpu))
|
||||
vcpu->arch.hcr_el2 &= ~HCR_RW;
|
||||
|
||||
if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
|
||||
vcpu_el1_is_32bit(vcpu))
|
||||
vcpu->arch.hcr_el2 |= HCR_TID2;
|
||||
|
||||
if (kvm_has_mte(vcpu->kvm))
|
||||
vcpu->arch.hcr_el2 |= HCR_ATA;
|
||||
}
|
||||
|
@ -183,6 +189,62 @@ static __always_inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
|
|||
vcpu_gp_regs(vcpu)->regs[reg_num] = val;
|
||||
}
|
||||
|
||||
static inline bool vcpu_is_el2_ctxt(const struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
switch (ctxt->regs.pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) {
|
||||
case PSR_MODE_EL2h:
|
||||
case PSR_MODE_EL2t:
|
||||
return true;
|
||||
default:
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
static inline bool vcpu_is_el2(const struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return vcpu_is_el2_ctxt(&vcpu->arch.ctxt);
|
||||
}
|
||||
|
||||
static inline bool __vcpu_el2_e2h_is_set(const struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
return ctxt_sys_reg(ctxt, HCR_EL2) & HCR_E2H;
|
||||
}
|
||||
|
||||
static inline bool vcpu_el2_e2h_is_set(const struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return __vcpu_el2_e2h_is_set(&vcpu->arch.ctxt);
|
||||
}
|
||||
|
||||
static inline bool __vcpu_el2_tge_is_set(const struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
return ctxt_sys_reg(ctxt, HCR_EL2) & HCR_TGE;
|
||||
}
|
||||
|
||||
static inline bool vcpu_el2_tge_is_set(const struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return __vcpu_el2_tge_is_set(&vcpu->arch.ctxt);
|
||||
}
|
||||
|
||||
static inline bool __is_hyp_ctxt(const struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
/*
|
||||
* We are in a hypervisor context if the vcpu mode is EL2 or
|
||||
* E2H and TGE bits are set. The latter means we are in the user space
|
||||
* of the VHE kernel. ARMv8.1 ARM describes this as 'InHost'
|
||||
*
|
||||
* Note that the HCR_EL2.{E2H,TGE}={0,1} isn't really handled in the
|
||||
* rest of the KVM code, and will result in a misbehaving guest.
|
||||
*/
|
||||
return vcpu_is_el2_ctxt(ctxt) ||
|
||||
(__vcpu_el2_e2h_is_set(ctxt) && __vcpu_el2_tge_is_set(ctxt)) ||
|
||||
__vcpu_el2_tge_is_set(ctxt);
|
||||
}
|
||||
|
||||
static inline bool is_hyp_ctxt(const struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return __is_hyp_ctxt(&vcpu->arch.ctxt);
|
||||
}
|
||||
|
||||
/*
|
||||
* The layout of SPSR for an AArch32 state is different when observed from an
|
||||
* AArch64 SPSR_ELx or an AArch32 SPSR_*. This function generates the AArch32
|
||||
|
|
|
@ -60,14 +60,19 @@
|
|||
enum kvm_mode {
|
||||
KVM_MODE_DEFAULT,
|
||||
KVM_MODE_PROTECTED,
|
||||
KVM_MODE_NV,
|
||||
KVM_MODE_NONE,
|
||||
};
|
||||
#ifdef CONFIG_KVM
|
||||
enum kvm_mode kvm_get_mode(void);
|
||||
#else
|
||||
static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
|
||||
#endif
|
||||
|
||||
DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
|
||||
|
||||
extern unsigned int kvm_sve_max_vl;
|
||||
int kvm_arm_init_sve(void);
|
||||
extern unsigned int __ro_after_init kvm_sve_max_vl;
|
||||
int __init kvm_arm_init_sve(void);
|
||||
|
||||
u32 __attribute_const__ kvm_target_cpu(void);
|
||||
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
|
||||
|
@ -252,6 +257,7 @@ struct kvm_vcpu_fault_info {
|
|||
enum vcpu_sysreg {
|
||||
__INVALID_SYSREG__, /* 0 is reserved as an invalid value */
|
||||
MPIDR_EL1, /* MultiProcessor Affinity Register */
|
||||
CLIDR_EL1, /* Cache Level ID Register */
|
||||
CSSELR_EL1, /* Cache Size Selection Register */
|
||||
SCTLR_EL1, /* System Control Register */
|
||||
ACTLR_EL1, /* Auxiliary Control Register */
|
||||
|
@ -320,12 +326,43 @@ enum vcpu_sysreg {
|
|||
TFSR_EL1, /* Tag Fault Status Register (EL1) */
|
||||
TFSRE0_EL1, /* Tag Fault Status Register (EL0) */
|
||||
|
||||
/* 32bit specific registers. Keep them at the end of the range */
|
||||
/* 32bit specific registers. */
|
||||
DACR32_EL2, /* Domain Access Control Register */
|
||||
IFSR32_EL2, /* Instruction Fault Status Register */
|
||||
FPEXC32_EL2, /* Floating-Point Exception Control Register */
|
||||
DBGVCR32_EL2, /* Debug Vector Catch Register */
|
||||
|
||||
/* EL2 registers */
|
||||
VPIDR_EL2, /* Virtualization Processor ID Register */
|
||||
VMPIDR_EL2, /* Virtualization Multiprocessor ID Register */
|
||||
SCTLR_EL2, /* System Control Register (EL2) */
|
||||
ACTLR_EL2, /* Auxiliary Control Register (EL2) */
|
||||
HCR_EL2, /* Hypervisor Configuration Register */
|
||||
MDCR_EL2, /* Monitor Debug Configuration Register (EL2) */
|
||||
CPTR_EL2, /* Architectural Feature Trap Register (EL2) */
|
||||
HSTR_EL2, /* Hypervisor System Trap Register */
|
||||
HACR_EL2, /* Hypervisor Auxiliary Control Register */
|
||||
TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */
|
||||
TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */
|
||||
TCR_EL2, /* Translation Control Register (EL2) */
|
||||
VTTBR_EL2, /* Virtualization Translation Table Base Register */
|
||||
VTCR_EL2, /* Virtualization Translation Control Register */
|
||||
SPSR_EL2, /* EL2 saved program status register */
|
||||
ELR_EL2, /* EL2 exception link register */
|
||||
AFSR0_EL2, /* Auxiliary Fault Status Register 0 (EL2) */
|
||||
AFSR1_EL2, /* Auxiliary Fault Status Register 1 (EL2) */
|
||||
ESR_EL2, /* Exception Syndrome Register (EL2) */
|
||||
FAR_EL2, /* Fault Address Register (EL2) */
|
||||
HPFAR_EL2, /* Hypervisor IPA Fault Address Register */
|
||||
MAIR_EL2, /* Memory Attribute Indirection Register (EL2) */
|
||||
AMAIR_EL2, /* Auxiliary Memory Attribute Indirection Register (EL2) */
|
||||
VBAR_EL2, /* Vector Base Address Register (EL2) */
|
||||
RVBAR_EL2, /* Reset Vector Base Address Register */
|
||||
CONTEXTIDR_EL2, /* Context ID Register (EL2) */
|
||||
TPIDR_EL2, /* EL2 Software Thread ID Register */
|
||||
CNTHCTL_EL2, /* Counter-timer Hypervisor Control register */
|
||||
SP_EL2, /* EL2 Stack Pointer */
|
||||
|
||||
NR_SYS_REGS /* Nothing after this line! */
|
||||
};
|
||||
|
||||
|
@ -501,6 +538,9 @@ struct kvm_vcpu_arch {
|
|||
u64 last_steal;
|
||||
gpa_t base;
|
||||
} steal;
|
||||
|
||||
/* Per-vcpu CCSIDR override or NULL */
|
||||
u32 *ccsidr;
|
||||
};
|
||||
|
||||
/*
|
||||
|
@ -598,7 +638,7 @@ struct kvm_vcpu_arch {
|
|||
#define EXCEPT_AA64_EL1_IRQ __vcpu_except_flags(1)
|
||||
#define EXCEPT_AA64_EL1_FIQ __vcpu_except_flags(2)
|
||||
#define EXCEPT_AA64_EL1_SERR __vcpu_except_flags(3)
|
||||
/* For AArch64 with NV (one day): */
|
||||
/* For AArch64 with NV: */
|
||||
#define EXCEPT_AA64_EL2_SYNC __vcpu_except_flags(4)
|
||||
#define EXCEPT_AA64_EL2_IRQ __vcpu_except_flags(5)
|
||||
#define EXCEPT_AA64_EL2_FIQ __vcpu_except_flags(6)
|
||||
|
@ -609,6 +649,8 @@ struct kvm_vcpu_arch {
|
|||
#define DEBUG_STATE_SAVE_SPE __vcpu_single_flag(iflags, BIT(5))
|
||||
/* Save TRBE context if active */
|
||||
#define DEBUG_STATE_SAVE_TRBE __vcpu_single_flag(iflags, BIT(6))
|
||||
/* vcpu running in HYP context */
|
||||
#define VCPU_HYP_CONTEXT __vcpu_single_flag(iflags, BIT(7))
|
||||
|
||||
/* SVE enabled for host EL0 */
|
||||
#define HOST_SVE_ENABLED __vcpu_single_flag(sflags, BIT(0))
|
||||
|
@ -705,7 +747,6 @@ static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val)
|
|||
return false;
|
||||
|
||||
switch (reg) {
|
||||
case CSSELR_EL1: *val = read_sysreg_s(SYS_CSSELR_EL1); break;
|
||||
case SCTLR_EL1: *val = read_sysreg_s(SYS_SCTLR_EL12); break;
|
||||
case CPACR_EL1: *val = read_sysreg_s(SYS_CPACR_EL12); break;
|
||||
case TTBR0_EL1: *val = read_sysreg_s(SYS_TTBR0_EL12); break;
|
||||
|
@ -750,7 +791,6 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
|
|||
return false;
|
||||
|
||||
switch (reg) {
|
||||
case CSSELR_EL1: write_sysreg_s(val, SYS_CSSELR_EL1); break;
|
||||
case SCTLR_EL1: write_sysreg_s(val, SYS_SCTLR_EL12); break;
|
||||
case CPACR_EL1: write_sysreg_s(val, SYS_CPACR_EL12); break;
|
||||
case TTBR0_EL1: write_sysreg_s(val, SYS_TTBR0_EL12); break;
|
||||
|
@ -877,7 +917,7 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
|
|||
|
||||
void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
|
||||
|
||||
int kvm_sys_reg_table_init(void);
|
||||
int __init kvm_sys_reg_table_init(void);
|
||||
|
||||
/* MMIO helpers */
|
||||
void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
|
||||
|
@ -908,20 +948,20 @@ int kvm_arm_pvtime_get_attr(struct kvm_vcpu *vcpu,
|
|||
int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
|
||||
struct kvm_device_attr *attr);
|
||||
|
||||
extern unsigned int kvm_arm_vmid_bits;
|
||||
int kvm_arm_vmid_alloc_init(void);
|
||||
void kvm_arm_vmid_alloc_free(void);
|
||||
extern unsigned int __ro_after_init kvm_arm_vmid_bits;
|
||||
int __init kvm_arm_vmid_alloc_init(void);
|
||||
void __init kvm_arm_vmid_alloc_free(void);
|
||||
void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
|
||||
void kvm_arm_vmid_clear_active(void);
|
||||
|
||||
static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
|
||||
{
|
||||
vcpu_arch->steal.base = GPA_INVALID;
|
||||
vcpu_arch->steal.base = INVALID_GPA;
|
||||
}
|
||||
|
||||
static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
|
||||
{
|
||||
return (vcpu_arch->steal.base != GPA_INVALID);
|
||||
return (vcpu_arch->steal.base != INVALID_GPA);
|
||||
}
|
||||
|
||||
void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
|
||||
|
@ -943,7 +983,6 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
|
|||
|
||||
void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
|
||||
|
||||
static inline void kvm_arch_hardware_unsetup(void) {}
|
||||
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
|
||||
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
|
||||
|
||||
|
@ -994,7 +1033,7 @@ static inline void kvm_clr_pmu_events(u32 clr) {}
|
|||
void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
|
||||
void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu);
|
||||
|
||||
int kvm_set_ipa_limit(void);
|
||||
int __init kvm_set_ipa_limit(void);
|
||||
|
||||
#define __KVM_HAVE_ARCH_VM_ALLOC
|
||||
struct kvm *kvm_arch_alloc_vm(void);
|
||||
|
|
|
@ -122,6 +122,7 @@ extern u64 kvm_nvhe_sym(id_aa64isar2_el1_sys_val);
|
|||
extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
|
||||
extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
|
||||
extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
|
||||
extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val);
|
||||
|
||||
extern unsigned long kvm_nvhe_sym(__icache_flags);
|
||||
extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
|
||||
|
|
|
@ -115,6 +115,7 @@ alternative_cb_end
|
|||
#include <asm/cache.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/mmu_context.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_host.h>
|
||||
|
||||
void kvm_update_va_mask(struct alt_instr *alt,
|
||||
|
@ -163,7 +164,7 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
|
|||
void __iomem **haddr);
|
||||
int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
|
||||
void **haddr);
|
||||
void free_hyp_pgds(void);
|
||||
void __init free_hyp_pgds(void);
|
||||
|
||||
void stage2_unmap_vm(struct kvm *kvm);
|
||||
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
|
||||
|
@ -175,7 +176,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
|
|||
|
||||
phys_addr_t kvm_mmu_get_httbr(void);
|
||||
phys_addr_t kvm_get_idmap_vector(void);
|
||||
int kvm_mmu_init(u32 *hyp_va_bits);
|
||||
int __init kvm_mmu_init(u32 *hyp_va_bits);
|
||||
|
||||
static inline void *__kvm_vector_slot2addr(void *base,
|
||||
enum arm64_hyp_spectre_vector slot)
|
||||
|
@ -192,7 +193,15 @@ struct kvm;
|
|||
|
||||
static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
|
||||
u64 cache_bits = SCTLR_ELx_M | SCTLR_ELx_C;
|
||||
int reg;
|
||||
|
||||
if (vcpu_is_el2(vcpu))
|
||||
reg = SCTLR_EL2;
|
||||
else
|
||||
reg = SCTLR_EL1;
|
||||
|
||||
return (vcpu_read_sys_reg(vcpu, reg) & cache_bits) == cache_bits;
|
||||
}
|
||||
|
||||
static inline void __clean_dcache_guest_page(void *va, size_t size)
|
||||
|
|
|
@ -0,0 +1,20 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef __ARM64_KVM_NESTED_H
|
||||
#define __ARM64_KVM_NESTED_H
|
||||
|
||||
#include <linux/kvm_host.h>
|
||||
|
||||
static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return (!__is_defined(__KVM_NVHE_HYPERVISOR__) &&
|
||||
cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) &&
|
||||
test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features));
|
||||
}
|
||||
|
||||
struct sys_reg_params;
|
||||
struct sys_reg_desc;
|
||||
|
||||
void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
|
||||
const struct sys_reg_desc *r);
|
||||
|
||||
#endif /* __ARM64_KVM_NESTED_H */
|
|
@ -71,6 +71,11 @@ static inline kvm_pte_t kvm_phys_to_pte(u64 pa)
|
|||
return pte;
|
||||
}
|
||||
|
||||
static inline kvm_pfn_t kvm_pte_to_pfn(kvm_pte_t pte)
|
||||
{
|
||||
return __phys_to_pfn(kvm_pte_to_phys(pte));
|
||||
}
|
||||
|
||||
static inline u64 kvm_granule_shift(u32 level)
|
||||
{
|
||||
/* Assumes KVM_PGTABLE_MAX_LEVELS is 4 */
|
||||
|
@ -188,12 +193,15 @@ typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end,
|
|||
* children.
|
||||
* @KVM_PGTABLE_WALK_SHARED: Indicates the page-tables may be shared
|
||||
* with other software walkers.
|
||||
* @KVM_PGTABLE_WALK_HANDLE_FAULT: Indicates the page-table walk was
|
||||
* invoked from a fault handler.
|
||||
*/
|
||||
enum kvm_pgtable_walk_flags {
|
||||
KVM_PGTABLE_WALK_LEAF = BIT(0),
|
||||
KVM_PGTABLE_WALK_TABLE_PRE = BIT(1),
|
||||
KVM_PGTABLE_WALK_TABLE_POST = BIT(2),
|
||||
KVM_PGTABLE_WALK_SHARED = BIT(3),
|
||||
KVM_PGTABLE_WALK_HANDLE_FAULT = BIT(4),
|
||||
};
|
||||
|
||||
struct kvm_pgtable_visit_ctx {
|
||||
|
|
|
@ -325,7 +325,6 @@
|
|||
|
||||
#define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0)
|
||||
|
||||
#define SYS_CCSIDR_EL1 sys_reg(3, 1, 0, 0, 0)
|
||||
#define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7)
|
||||
|
||||
#define SYS_RNDR_EL0 sys_reg(3, 3, 2, 4, 0)
|
||||
|
@ -411,23 +410,51 @@
|
|||
|
||||
#define SYS_PMCCFILTR_EL0 sys_reg(3, 3, 14, 15, 7)
|
||||
|
||||
#define SYS_VPIDR_EL2 sys_reg(3, 4, 0, 0, 0)
|
||||
#define SYS_VMPIDR_EL2 sys_reg(3, 4, 0, 0, 5)
|
||||
|
||||
#define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0)
|
||||
#define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1)
|
||||
#define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0)
|
||||
#define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1)
|
||||
#define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2)
|
||||
#define SYS_HSTR_EL2 sys_reg(3, 4, 1, 1, 3)
|
||||
#define SYS_HFGRTR_EL2 sys_reg(3, 4, 1, 1, 4)
|
||||
#define SYS_HFGWTR_EL2 sys_reg(3, 4, 1, 1, 5)
|
||||
#define SYS_HFGITR_EL2 sys_reg(3, 4, 1, 1, 6)
|
||||
#define SYS_HACR_EL2 sys_reg(3, 4, 1, 1, 7)
|
||||
|
||||
#define SYS_TTBR0_EL2 sys_reg(3, 4, 2, 0, 0)
|
||||
#define SYS_TTBR1_EL2 sys_reg(3, 4, 2, 0, 1)
|
||||
#define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2)
|
||||
#define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0)
|
||||
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
|
||||
|
||||
#define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1)
|
||||
#define SYS_HDFGRTR_EL2 sys_reg(3, 4, 3, 1, 4)
|
||||
#define SYS_HDFGWTR_EL2 sys_reg(3, 4, 3, 1, 5)
|
||||
#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
|
||||
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
|
||||
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
|
||||
#define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0)
|
||||
#define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1)
|
||||
#define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0)
|
||||
#define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1)
|
||||
#define SYS_ESR_EL2 sys_reg(3, 4, 5, 2, 0)
|
||||
#define SYS_VSESR_EL2 sys_reg(3, 4, 5, 2, 3)
|
||||
#define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0)
|
||||
#define SYS_TFSR_EL2 sys_reg(3, 4, 5, 6, 0)
|
||||
|
||||
#define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1)
|
||||
#define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0)
|
||||
#define SYS_HPFAR_EL2 sys_reg(3, 4, 6, 0, 4)
|
||||
|
||||
#define SYS_MAIR_EL2 sys_reg(3, 4, 10, 2, 0)
|
||||
#define SYS_AMAIR_EL2 sys_reg(3, 4, 10, 3, 0)
|
||||
|
||||
#define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0)
|
||||
#define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1)
|
||||
#define SYS_RMR_EL2 sys_reg(3, 4, 12, 0, 2)
|
||||
#define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1)
|
||||
#define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x)
|
||||
#define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0)
|
||||
#define SYS_ICH_AP0R1_EL2 __SYS__AP0Rx_EL2(1)
|
||||
|
@ -469,6 +496,12 @@
|
|||
#define SYS_ICH_LR14_EL2 __SYS__LR8_EL2(6)
|
||||
#define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7)
|
||||
|
||||
#define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1)
|
||||
#define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2)
|
||||
|
||||
#define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3)
|
||||
#define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0)
|
||||
|
||||
/* VHE encodings for architectural EL0/1 system registers */
|
||||
#define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0)
|
||||
#define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0)
|
||||
|
@ -491,6 +524,8 @@
|
|||
#define SYS_CNTV_CTL_EL02 sys_reg(3, 5, 14, 3, 1)
|
||||
#define SYS_CNTV_CVAL_EL02 sys_reg(3, 5, 14, 3, 2)
|
||||
|
||||
#define SYS_SP_EL2 sys_reg(3, 6, 4, 1, 0)
|
||||
|
||||
/* Common SCTLR_ELx flags. */
|
||||
#define SCTLR_ELx_ENTP2 (BIT(60))
|
||||
#define SCTLR_ELx_DSSBS (BIT(44))
|
||||
|
|
|
@ -109,6 +109,7 @@ struct kvm_regs {
|
|||
#define KVM_ARM_VCPU_SVE 4 /* enable SVE for this CPU */
|
||||
#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */
|
||||
#define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */
|
||||
#define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */
|
||||
|
||||
struct kvm_vcpu_init {
|
||||
__u32 target;
|
||||
|
|
|
@ -11,11 +11,6 @@
|
|||
#include <linux/of.h>
|
||||
|
||||
#define MAX_CACHE_LEVEL 7 /* Max 7 level supported */
|
||||
/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
|
||||
#define CLIDR_CTYPE_SHIFT(level) (3 * (level - 1))
|
||||
#define CLIDR_CTYPE_MASK(level) (7 << CLIDR_CTYPE_SHIFT(level))
|
||||
#define CLIDR_CTYPE(clidr, level) \
|
||||
(((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
|
||||
|
||||
int cache_line_size(void)
|
||||
{
|
||||
|
|
|
@ -1967,6 +1967,20 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
|
|||
write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
|
||||
}
|
||||
|
||||
static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
|
||||
int scope)
|
||||
{
|
||||
if (kvm_get_mode() != KVM_MODE_NV)
|
||||
return false;
|
||||
|
||||
if (!has_cpuid_feature(cap, scope)) {
|
||||
pr_warn("unavailable: %s\n", cap->desc);
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_ARM64_PAN
|
||||
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
|
||||
{
|
||||
|
@ -2262,6 +2276,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
|
|||
.matches = runs_at_el2,
|
||||
.cpu_enable = cpu_copy_el2regs,
|
||||
},
|
||||
{
|
||||
.desc = "Nested Virtualization Support",
|
||||
.capability = ARM64_HAS_NESTED_VIRT,
|
||||
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
|
||||
.matches = has_nested_virt_support,
|
||||
.sys_reg = SYS_ID_AA64MMFR2_EL1,
|
||||
.sign = FTR_UNSIGNED,
|
||||
.field_pos = ID_AA64MMFR2_EL1_NV_SHIFT,
|
||||
.field_width = 4,
|
||||
.min_field_value = ID_AA64MMFR2_EL1_NV_IMP,
|
||||
},
|
||||
{
|
||||
.capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
|
||||
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
|
||||
|
|
|
@ -16,30 +16,6 @@
|
|||
#include <asm/ptrace.h>
|
||||
#include <asm/virt.h>
|
||||
|
||||
// Warning, hardcoded register allocation
|
||||
// This will clobber x1 and x2, and expect x1 to contain
|
||||
// the id register value as read from the HW
|
||||
.macro __check_override idreg, fld, width, pass, fail
|
||||
ubfx x1, x1, #\fld, #\width
|
||||
cbz x1, \fail
|
||||
|
||||
adr_l x1, \idreg\()_override
|
||||
ldr x2, [x1, FTR_OVR_VAL_OFFSET]
|
||||
ldr x1, [x1, FTR_OVR_MASK_OFFSET]
|
||||
ubfx x2, x2, #\fld, #\width
|
||||
ubfx x1, x1, #\fld, #\width
|
||||
cmp x1, xzr
|
||||
and x2, x2, x1
|
||||
csinv x2, x2, xzr, ne
|
||||
cbnz x2, \pass
|
||||
b \fail
|
||||
.endm
|
||||
|
||||
.macro check_override idreg, fld, pass, fail
|
||||
mrs x1, \idreg\()_el1
|
||||
__check_override \idreg \fld 4 \pass \fail
|
||||
.endm
|
||||
|
||||
.text
|
||||
.pushsection .hyp.text, "ax"
|
||||
|
||||
|
@ -98,65 +74,7 @@ SYM_CODE_START_LOCAL(elx_sync)
|
|||
SYM_CODE_END(elx_sync)
|
||||
|
||||
SYM_CODE_START_LOCAL(__finalise_el2)
|
||||
check_override id_aa64pfr0 ID_AA64PFR0_EL1_SVE_SHIFT .Linit_sve .Lskip_sve
|
||||
|
||||
.Linit_sve: /* SVE register access */
|
||||
mrs x0, cptr_el2 // Disable SVE traps
|
||||
bic x0, x0, #CPTR_EL2_TZ
|
||||
msr cptr_el2, x0
|
||||
isb
|
||||
mov x1, #ZCR_ELx_LEN_MASK // SVE: Enable full vector
|
||||
msr_s SYS_ZCR_EL2, x1 // length for EL1.
|
||||
|
||||
.Lskip_sve:
|
||||
check_override id_aa64pfr1 ID_AA64PFR1_EL1_SME_SHIFT .Linit_sme .Lskip_sme
|
||||
|
||||
.Linit_sme: /* SME register access and priority mapping */
|
||||
mrs x0, cptr_el2 // Disable SME traps
|
||||
bic x0, x0, #CPTR_EL2_TSM
|
||||
msr cptr_el2, x0
|
||||
isb
|
||||
|
||||
mrs x1, sctlr_el2
|
||||
orr x1, x1, #SCTLR_ELx_ENTP2 // Disable TPIDR2 traps
|
||||
msr sctlr_el2, x1
|
||||
isb
|
||||
|
||||
mov x0, #0 // SMCR controls
|
||||
|
||||
// Full FP in SM?
|
||||
mrs_s x1, SYS_ID_AA64SMFR0_EL1
|
||||
__check_override id_aa64smfr0 ID_AA64SMFR0_EL1_FA64_SHIFT 1 .Linit_sme_fa64 .Lskip_sme_fa64
|
||||
|
||||
.Linit_sme_fa64:
|
||||
orr x0, x0, SMCR_ELx_FA64_MASK
|
||||
.Lskip_sme_fa64:
|
||||
|
||||
// ZT0 available?
|
||||
mrs_s x1, SYS_ID_AA64SMFR0_EL1
|
||||
__check_override id_aa64smfr0 ID_AA64SMFR0_EL1_SMEver_SHIFT 4 .Linit_sme_zt0 .Lskip_sme_zt0
|
||||
.Linit_sme_zt0:
|
||||
orr x0, x0, SMCR_ELx_EZT0_MASK
|
||||
.Lskip_sme_zt0:
|
||||
|
||||
orr x0, x0, #SMCR_ELx_LEN_MASK // Enable full SME vector
|
||||
msr_s SYS_SMCR_EL2, x0 // length for EL1.
|
||||
|
||||
mrs_s x1, SYS_SMIDR_EL1 // Priority mapping supported?
|
||||
ubfx x1, x1, #SMIDR_EL1_SMPS_SHIFT, #1
|
||||
cbz x1, .Lskip_sme
|
||||
|
||||
msr_s SYS_SMPRIMAP_EL2, xzr // Make all priorities equal
|
||||
|
||||
mrs x1, id_aa64mmfr1_el1 // HCRX_EL2 present?
|
||||
ubfx x1, x1, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
|
||||
cbz x1, .Lskip_sme
|
||||
|
||||
mrs_s x1, SYS_HCRX_EL2
|
||||
orr x1, x1, #HCRX_EL2_SMPME_MASK // Enable priority mapping
|
||||
msr_s SYS_HCRX_EL2, x1
|
||||
|
||||
.Lskip_sme:
|
||||
finalise_el2_state
|
||||
|
||||
// nVHE? No way! Give me the real thing!
|
||||
// Sanity check: MMU *must* be off
|
||||
|
@ -164,7 +82,7 @@ SYM_CODE_START_LOCAL(__finalise_el2)
|
|||
tbnz x1, #0, 1f
|
||||
|
||||
// Needs to be VHE capable, obviously
|
||||
check_override id_aa64mmfr1 ID_AA64MMFR1_EL1_VH_SHIFT 2f 1f
|
||||
check_override id_aa64mmfr1 ID_AA64MMFR1_EL1_VH_SHIFT 2f 1f x1 x2
|
||||
|
||||
1: mov_q x0, HVC_STUB_ERR
|
||||
eret
|
||||
|
|
|
@ -21,6 +21,7 @@ if VIRTUALIZATION
|
|||
menuconfig KVM
|
||||
bool "Kernel-based Virtual Machine (KVM) support"
|
||||
depends on HAVE_KVM
|
||||
select KVM_GENERIC_HARDWARE_ENABLING
|
||||
select MMU_NOTIFIER
|
||||
select PREEMPT_NOTIFIERS
|
||||
select HAVE_KVM_CPU_RELAX_INTERCEPT
|
||||
|
|
|
@ -14,7 +14,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
|
|||
inject_fault.o va_layout.o handle_exit.o \
|
||||
guest.o debug.o reset.o sys_regs.o stacktrace.o \
|
||||
vgic-sys-reg-v3.o fpsimd.o pkvm.o \
|
||||
arch_timer.o trng.o vmid.o \
|
||||
arch_timer.o trng.o vmid.o emulate-nested.o nested.o \
|
||||
vgic/vgic.o vgic/vgic-init.o \
|
||||
vgic/vgic-irqfd.o vgic/vgic-v2.o \
|
||||
vgic/vgic-v3.o vgic/vgic-v4.o \
|
||||
|
|
|
@@ -428,14 +428,17 @@ static void timer_emulate(struct arch_timer_context *ctx)
* scheduled for the future. If the timer cannot fire at all,
* then we also don't need a soft timer.
*/
if (!kvm_timer_irq_can_fire(ctx)) {
soft_timer_cancel(&ctx->hrtimer);
if (should_fire || !kvm_timer_irq_can_fire(ctx))
return;
}

soft_timer_start(&ctx->hrtimer, kvm_timer_compute_delta(ctx));
}

static void set_cntvoff(u64 cntvoff)
{
kvm_call_hyp(__kvm_timer_set_cntvoff, cntvoff);
}

static void timer_save_state(struct arch_timer_context *ctx)
{
struct arch_timer_cpu *timer = vcpu_timer(ctx->vcpu);
@@ -459,6 +462,22 @@ static void timer_save_state(struct arch_timer_context *ctx)
write_sysreg_el0(0, SYS_CNTV_CTL);
isb();

/*
* The kernel may decide to run userspace after
* calling vcpu_put, so we reset cntvoff to 0 to
* ensure a consistent read between user accesses to
* the virtual counter and kernel access to the
* physical counter of non-VHE case.
*
* For VHE, the virtual counter uses a fixed virtual
* offset of zero, so no need to zero CNTVOFF_EL2
* register, but this is actually useful when switching
* between EL1/vEL2 with NV.
*
* Do it unconditionally, as this is either unavoidable
* or dirt cheap.
*/
set_cntvoff(0);
break;
case TIMER_PTIMER:
timer_set_ctl(ctx, read_sysreg_el0(SYS_CNTP_CTL));
@@ -532,6 +551,7 @@ static void timer_restore_state(struct arch_timer_context *ctx)

switch (index) {
case TIMER_VTIMER:
set_cntvoff(timer_get_offset(ctx));
write_sysreg_el0(timer_get_cval(ctx), SYS_CNTV_CVAL);
isb();
write_sysreg_el0(timer_get_ctl(ctx), SYS_CNTV_CTL);
@@ -552,11 +572,6 @@ out:
local_irq_restore(flags);
}

static void set_cntvoff(u64 cntvoff)
{
kvm_call_hyp(__kvm_timer_set_cntvoff, cntvoff);
}

static inline void set_timer_irq_phys_active(struct arch_timer_context *ctx, bool active)
{
int r;
@@ -631,8 +646,6 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
kvm_timer_vcpu_load_nogic(vcpu);
}

set_cntvoff(timer_get_offset(map.direct_vtimer));

kvm_timer_unblocking(vcpu);

timer_restore_state(map.direct_vtimer);
@@ -688,15 +701,6 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)

if (kvm_vcpu_is_blocking(vcpu))
kvm_timer_blocking(vcpu);

/*
* The kernel may decide to run userspace after calling vcpu_put, so
* we reset cntvoff to 0 to ensure a consistent read between user
* accesses to the virtual counter and kernel access to the physical
* counter of non-VHE case. For VHE, the virtual counter uses a fixed
* virtual offset of zero, so no need to zero CNTVOFF_EL2 register.
*/
set_cntvoff(0);
}

/*
@@ -811,10 +815,18 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
ptimer->host_timer_irq_flags = host_ptimer_irq_flags;
}

static void kvm_timer_init_interrupt(void *info)
void kvm_timer_cpu_up(void)
{
enable_percpu_irq(host_vtimer_irq, host_vtimer_irq_flags);
enable_percpu_irq(host_ptimer_irq, host_ptimer_irq_flags);
if (host_ptimer_irq)
enable_percpu_irq(host_ptimer_irq, host_ptimer_irq_flags);
}

void kvm_timer_cpu_down(void)
{
disable_percpu_irq(host_vtimer_irq);
if (host_ptimer_irq)
disable_percpu_irq(host_ptimer_irq);
}

int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
@@ -926,14 +938,22 @@ u64 kvm_arm_timer_read_sysreg(struct kvm_vcpu *vcpu,
enum kvm_arch_timers tmr,
enum kvm_arch_timer_regs treg)
{
struct arch_timer_context *timer;
struct timer_map map;
u64 val;

get_timer_map(vcpu, &map);
timer = vcpu_get_timer(vcpu, tmr);

if (timer == map.emul_ptimer)
return kvm_arm_timer_read(vcpu, timer, treg);

preempt_disable();
kvm_timer_vcpu_put(vcpu);
timer_save_state(timer);

val = kvm_arm_timer_read(vcpu, vcpu_get_timer(vcpu, tmr), treg);
val = kvm_arm_timer_read(vcpu, timer, treg);

kvm_timer_vcpu_load(vcpu);
timer_restore_state(timer);
preempt_enable();

return val;
@@ -967,25 +987,22 @@ void kvm_arm_timer_write_sysreg(struct kvm_vcpu *vcpu,
enum kvm_arch_timer_regs treg,
u64 val)
{
preempt_disable();
kvm_timer_vcpu_put(vcpu);
struct arch_timer_context *timer;
struct timer_map map;

kvm_arm_timer_write(vcpu, vcpu_get_timer(vcpu, tmr), treg, val);

kvm_timer_vcpu_load(vcpu);
preempt_enable();
}

static int kvm_timer_starting_cpu(unsigned int cpu)
{
kvm_timer_init_interrupt(NULL);
return 0;
}

static int kvm_timer_dying_cpu(unsigned int cpu)
{
disable_percpu_irq(host_vtimer_irq);
return 0;
get_timer_map(vcpu, &map);
timer = vcpu_get_timer(vcpu, tmr);
if (timer == map.emul_ptimer) {
soft_timer_cancel(&timer->hrtimer);
kvm_arm_timer_write(vcpu, timer, treg, val);
timer_emulate(timer);
} else {
preempt_disable();
timer_save_state(timer);
kvm_arm_timer_write(vcpu, timer, treg, val);
timer_restore_state(timer);
preempt_enable();
}
}

static int timer_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu)
@@ -1117,7 +1134,7 @@ static int kvm_irq_init(struct arch_timer_kvm_info *info)
return 0;
}

int kvm_timer_hyp_init(bool has_gic)
int __init kvm_timer_hyp_init(bool has_gic)
{
struct arch_timer_kvm_info *info;
int err;
@@ -1185,9 +1202,6 @@ int kvm_timer_hyp_init(bool has_gic)
goto out_free_irq;
}

cpuhp_setup_state(CPUHP_AP_KVM_ARM_TIMER_STARTING,
"kvm/arm/timer:starting", kvm_timer_starting_cpu,
kvm_timer_dying_cpu);
return 0;
out_free_irq:
free_percpu_irq(host_vtimer_irq, kvm_get_running_vcpus());

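The CNTVOFF handling above leans on the architectural relationship CNTVCT_EL0 = CNTPCT_EL0 - CNTVOFF_EL2: zeroing the offset on vcpu_put keeps a userspace read of the virtual counter consistent with the kernel's use of the physical counter. A one-function sketch of that relationship, purely for illustration:

#include <stdint.h>

/* Virtual counter value for a given physical count and offset. */
static uint64_t virt_counter(uint64_t cntpct, uint64_t cntvoff)
{
	return cntpct - cntvoff;	/* CNTVCT_EL0 = CNTPCT_EL0 - CNTVOFF_EL2 */
}
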
@@ -63,16 +63,6 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
}

int kvm_arch_hardware_setup(void *opaque)
{
return 0;
}

int kvm_arch_check_processor_compat(void *opaque)
{
return 0;
}

int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
struct kvm_enable_cap *cap)
{
@@ -146,7 +136,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
if (ret)
goto err_unshare_kvm;

if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL)) {
if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL_ACCOUNT)) {
ret = -ENOMEM;
goto err_unshare_kvm;
}
@@ -1539,7 +1529,7 @@ static int kvm_init_vector_slots(void)
return 0;
}

static void cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
{
struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
unsigned long tcr;
@@ -1682,7 +1672,15 @@ static void _kvm_arch_hardware_enable(void *discard)

int kvm_arch_hardware_enable(void)
{
int was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);

_kvm_arch_hardware_enable(NULL);

if (!was_enabled) {
kvm_vgic_cpu_up();
kvm_timer_cpu_up();
}

return 0;
}

@@ -1696,6 +1694,11 @@ static void _kvm_arch_hardware_disable(void *discard)

void kvm_arch_hardware_disable(void)
{
if (__this_cpu_read(kvm_arm_hardware_enabled)) {
kvm_timer_cpu_down();
kvm_vgic_cpu_down();
}

if (!is_protected_kvm_enabled())
_kvm_arch_hardware_disable(NULL);
}
@@ -1738,26 +1741,26 @@ static struct notifier_block hyp_init_cpu_pm_nb = {
.notifier_call = hyp_init_cpu_pm_notifier,
};

static void hyp_cpu_pm_init(void)
static void __init hyp_cpu_pm_init(void)
{
if (!is_protected_kvm_enabled())
cpu_pm_register_notifier(&hyp_init_cpu_pm_nb);
}
static void hyp_cpu_pm_exit(void)
static void __init hyp_cpu_pm_exit(void)
{
if (!is_protected_kvm_enabled())
cpu_pm_unregister_notifier(&hyp_init_cpu_pm_nb);
}
#else
static inline void hyp_cpu_pm_init(void)
static inline void __init hyp_cpu_pm_init(void)
{
}
static inline void hyp_cpu_pm_exit(void)
static inline void __init hyp_cpu_pm_exit(void)
{
}
#endif

static void init_cpu_logical_map(void)
static void __init init_cpu_logical_map(void)
{
unsigned int cpu;

@@ -1774,7 +1777,7 @@ static void init_cpu_logical_map(void)
#define init_psci_0_1_impl_state(config, what) \
config.psci_0_1_ ## what ## _implemented = psci_ops.what

static bool init_psci_relay(void)
static bool __init init_psci_relay(void)
{
/*
* If PSCI has not been initialized, protected KVM cannot install
@@ -1797,7 +1800,7 @@ static bool init_psci_relay(void)
return true;
}

static int init_subsystems(void)
static int __init init_subsystems(void)
{
int err = 0;

@@ -1838,13 +1841,22 @@ static int init_subsystems(void)
kvm_register_perf_callbacks(NULL);

out:
if (err)
hyp_cpu_pm_exit();

if (err || !is_protected_kvm_enabled())
on_each_cpu(_kvm_arch_hardware_disable, NULL, 1);

return err;
}

static void teardown_hyp_mode(void)
static void __init teardown_subsystems(void)
{
kvm_unregister_perf_callbacks();
hyp_cpu_pm_exit();
}

static void __init teardown_hyp_mode(void)
{
int cpu;

@@ -1855,7 +1867,7 @@ static void teardown_hyp_mode(void)
}
}

static int do_pkvm_init(u32 hyp_va_bits)
static int __init do_pkvm_init(u32 hyp_va_bits)
{
void *per_cpu_base = kvm_ksym_ref(kvm_nvhe_sym(kvm_arm_hyp_percpu_base));
int ret;
@@ -1887,11 +1899,12 @@ static void kvm_hyp_init_symbols(void)
kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
kvm_nvhe_sym(id_aa64smfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64SMFR0_EL1);
kvm_nvhe_sym(__icache_flags) = __icache_flags;
kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
}

static int kvm_hyp_init_protection(u32 hyp_va_bits)
static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
{
void *addr = phys_to_virt(hyp_mem_base);
int ret;
@@ -1909,10 +1922,8 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
return 0;
}

/**
* Inits Hyp-mode on all online CPUs
*/
static int init_hyp_mode(void)
/* Inits Hyp-mode on all online CPUs */
static int __init init_hyp_mode(void)
{
u32 hyp_va_bits;
int cpu;
@@ -2094,7 +2105,7 @@ out_err:
return err;
}

static void _kvm_host_prot_finalize(void *arg)
static void __init _kvm_host_prot_finalize(void *arg)
{
int *err = arg;

@@ -2102,7 +2113,7 @@ static void _kvm_host_prot_finalize(void *arg)
WRITE_ONCE(*err, -EINVAL);
}

static int pkvm_drop_host_privileges(void)
static int __init pkvm_drop_host_privileges(void)
{
int ret = 0;

@@ -2115,7 +2126,7 @@ static int pkvm_drop_host_privileges(void)
return ret;
}

static int finalize_hyp_mode(void)
static int __init finalize_hyp_mode(void)
{
if (!is_protected_kvm_enabled())
return 0;
@@ -2187,10 +2198,8 @@ void kvm_arch_irq_bypass_start(struct irq_bypass_consumer *cons)
kvm_arm_resume_guest(irqfd->kvm);
}

/**
* Initialize Hyp-mode and memory mappings on all CPUs.
*/
int kvm_arch_init(void *opaque)
/* Initialize Hyp-mode and memory mappings on all CPUs */
static __init int kvm_arm_init(void)
{
int err;
bool in_hyp_mode;
@@ -2241,7 +2250,7 @@ int kvm_arch_init(void *opaque)
err = kvm_init_vector_slots();
if (err) {
kvm_err("Cannot initialise vector slots\n");
goto out_err;
goto out_hyp;
}

err = init_subsystems();
@@ -2252,7 +2261,7 @@ int kvm_arch_init(void *opaque)
err = finalize_hyp_mode();
if (err) {
kvm_err("Failed to finalize Hyp protection\n");
goto out_hyp;
goto out_subs;
}
}

@@ -2264,10 +2273,19 @@ int kvm_arch_init(void *opaque)
kvm_info("Hyp mode initialized successfully\n");
}

/*
* FIXME: Do something reasonable if kvm_init() fails after pKVM
* hypervisor protection is finalized.
*/
err = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
if (err)
goto out_subs;

return 0;

out_subs:
teardown_subsystems();
out_hyp:
hyp_cpu_pm_exit();
if (!in_hyp_mode)
teardown_hyp_mode();
out_err:
@@ -2275,12 +2293,6 @@ out_err:
return err;
}

/* NOP: Compiling as a module not supported */
void kvm_arch_exit(void)
{
kvm_unregister_perf_callbacks();
}

static int __init early_kvm_mode_cfg(char *arg)
{
if (!arg)
@@ -2310,6 +2322,11 @@ static int __init early_kvm_mode_cfg(char *arg)
return 0;
}

if (strcmp(arg, "nested") == 0 && !WARN_ON(!is_kernel_in_hyp_mode())) {
kvm_mode = KVM_MODE_NV;
return 0;
}

return -EINVAL;
}
early_param("kvm-arm.mode", early_kvm_mode_cfg);
@@ -2319,10 +2336,4 @@ enum kvm_mode kvm_get_mode(void)
return kvm_mode;
}

static int arm_init(void)
{
int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
return rc;
}

module_init(arm_init);
module_init(kvm_arm_init);

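The kvm_arch_hardware_enable()/kvm_arch_hardware_disable() hunks above bracket the vGIC and timer per-CPU bring-up with an "already enabled" flag so the enable path stays idempotent. A reduced, stand-alone sketch of that pattern; the array-based flag and the *_cpu_up()/*_cpu_down() helpers here are stand-ins for the real per-CPU variable and callbacks, not kernel symbols:

#include <stdbool.h>

#define NR_CPUS 8

static bool hw_enabled[NR_CPUS];

static void vgic_cpu_up(void)    { /* enable the VGIC maintenance IRQ */ }
static void timer_cpu_up(void)   { /* enable the per-CPU timer IRQs */ }
static void timer_cpu_down(void) { /* disable the per-CPU timer IRQs */ }
static void vgic_cpu_down(void)  { /* disable the VGIC maintenance IRQ */ }

static int hardware_enable(int cpu)
{
	bool was_enabled = hw_enabled[cpu];

	hw_enabled[cpu] = true;
	if (!was_enabled) {	/* only on the disabled -> enabled transition */
		vgic_cpu_up();
		timer_cpu_up();
	}
	return 0;
}

static void hardware_disable(int cpu)
{
	if (hw_enabled[cpu]) {	/* tear down only what was brought up */
		timer_cpu_down();
		vgic_cpu_down();
		hw_enabled[cpu] = false;
	}
}
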
@ -0,0 +1,203 @@
|
|||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* Copyright (C) 2016 - Linaro and Columbia University
|
||||
* Author: Jintack Lim <jintack.lim@linaro.org>
|
||||
*/
|
||||
|
||||
#include <linux/kvm.h>
|
||||
#include <linux/kvm_host.h>
|
||||
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
|
||||
#include "hyp/include/hyp/adjust_pc.h"
|
||||
|
||||
#include "trace.h"
|
||||
|
||||
static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
|
||||
{
|
||||
u64 mode = spsr & PSR_MODE_MASK;
|
||||
|
||||
/*
|
||||
* Possible causes for an Illegal Exception Return from EL2:
|
||||
* - trying to return to EL3
|
||||
* - trying to return to an illegal M value
|
||||
* - trying to return to a 32bit EL
|
||||
* - trying to return to EL1 with HCR_EL2.TGE set
|
||||
*/
|
||||
if (mode == PSR_MODE_EL3t || mode == PSR_MODE_EL3h ||
|
||||
mode == 0b00001 || (mode & BIT(1)) ||
|
||||
(spsr & PSR_MODE32_BIT) ||
|
||||
(vcpu_el2_tge_is_set(vcpu) && (mode == PSR_MODE_EL1t ||
|
||||
mode == PSR_MODE_EL1h))) {
|
||||
/*
|
||||
* The guest is playing with our nerves. Preserve EL, SP,
|
||||
* masks, flags from the existing PSTATE, and set IL.
|
||||
* The HW will then generate an Illegal State Exception
|
||||
* immediately after ERET.
|
||||
*/
|
||||
spsr = *vcpu_cpsr(vcpu);
|
||||
|
||||
spsr &= (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT |
|
||||
PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT |
|
||||
PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
spsr |= PSR_IL_BIT;
|
||||
}
|
||||
|
||||
return spsr;
|
||||
}
|
||||
|
||||
void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u64 spsr, elr, mode;
|
||||
bool direct_eret;
|
||||
|
||||
/*
|
||||
* Going through the whole put/load motions is a waste of time
|
||||
* if this is a VHE guest hypervisor returning to its own
|
||||
* userspace, or the hypervisor performing a local exception
|
||||
* return. No need to save/restore registers, no need to
|
||||
* switch S2 MMU. Just do the canonical ERET.
|
||||
*/
|
||||
spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2);
|
||||
spsr = kvm_check_illegal_exception_return(vcpu, spsr);
|
||||
|
||||
mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
|
||||
direct_eret = (mode == PSR_MODE_EL0t &&
|
||||
vcpu_el2_e2h_is_set(vcpu) &&
|
||||
vcpu_el2_tge_is_set(vcpu));
|
||||
direct_eret |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
|
||||
|
||||
if (direct_eret) {
|
||||
*vcpu_pc(vcpu) = vcpu_read_sys_reg(vcpu, ELR_EL2);
|
||||
*vcpu_cpsr(vcpu) = spsr;
|
||||
trace_kvm_nested_eret(vcpu, *vcpu_pc(vcpu), spsr);
|
||||
return;
|
||||
}
|
||||
|
||||
preempt_disable();
|
||||
kvm_arch_vcpu_put(vcpu);
|
||||
|
||||
elr = __vcpu_sys_reg(vcpu, ELR_EL2);
|
||||
|
||||
trace_kvm_nested_eret(vcpu, elr, spsr);
|
||||
|
||||
/*
|
||||
* Note that the current exception level is always the virtual EL2,
|
||||
* since we set HCR_EL2.NV bit only when entering the virtual EL2.
|
||||
*/
|
||||
*vcpu_pc(vcpu) = elr;
|
||||
*vcpu_cpsr(vcpu) = spsr;
|
||||
|
||||
kvm_arch_vcpu_load(vcpu, smp_processor_id());
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
static void kvm_inject_el2_exception(struct kvm_vcpu *vcpu, u64 esr_el2,
|
||||
enum exception_type type)
|
||||
{
|
||||
trace_kvm_inject_nested_exception(vcpu, esr_el2, type);
|
||||
|
||||
switch (type) {
|
||||
case except_type_sync:
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL2_SYNC);
|
||||
vcpu_write_sys_reg(vcpu, esr_el2, ESR_EL2);
|
||||
break;
|
||||
case except_type_irq:
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL2_IRQ);
|
||||
break;
|
||||
default:
|
||||
WARN_ONCE(1, "Unsupported EL2 exception injection %d\n", type);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Emulate taking an exception to EL2.
|
||||
* See ARM ARM J8.1.2 AArch64.TakeException()
|
||||
*/
|
||||
static int kvm_inject_nested(struct kvm_vcpu *vcpu, u64 esr_el2,
|
||||
enum exception_type type)
|
||||
{
|
||||
u64 pstate, mode;
|
||||
bool direct_inject;
|
||||
|
||||
if (!vcpu_has_nv(vcpu)) {
|
||||
kvm_err("Unexpected call to %s for the non-nesting configuration\n",
|
||||
__func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/*
|
||||
* As for ERET, we can avoid doing too much on the injection path by
|
||||
* checking that we either took the exception from a VHE host
|
||||
* userspace or from vEL2. In these cases, there is no change in
|
||||
* translation regime (or anything else), so let's do as little as
|
||||
* possible.
|
||||
*/
|
||||
pstate = *vcpu_cpsr(vcpu);
|
||||
mode = pstate & (PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
|
||||
direct_inject = (mode == PSR_MODE_EL0t &&
|
||||
vcpu_el2_e2h_is_set(vcpu) &&
|
||||
vcpu_el2_tge_is_set(vcpu));
|
||||
direct_inject |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
|
||||
|
||||
if (direct_inject) {
|
||||
kvm_inject_el2_exception(vcpu, esr_el2, type);
|
||||
return 1;
|
||||
}
|
||||
|
||||
preempt_disable();
|
||||
|
||||
/*
|
||||
* We may have an exception or PC update in the EL0/EL1 context.
|
||||
* Commit it before entering EL2.
|
||||
*/
|
||||
__kvm_adjust_pc(vcpu);
|
||||
|
||||
kvm_arch_vcpu_put(vcpu);
|
||||
|
||||
kvm_inject_el2_exception(vcpu, esr_el2, type);
|
||||
|
||||
/*
|
||||
* A hard requirement is that a switch between EL1 and EL2
|
||||
* contexts has to happen between a put/load, so that we can
|
||||
* pick the correct timer and interrupt configuration, among
|
||||
* other things.
|
||||
*
|
||||
* Make sure the exception actually took place before we load
|
||||
* the new context.
|
||||
*/
|
||||
__kvm_adjust_pc(vcpu);
|
||||
|
||||
kvm_arch_vcpu_load(vcpu, smp_processor_id());
|
||||
preempt_enable();
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2)
|
||||
{
|
||||
return kvm_inject_nested(vcpu, esr_el2, except_type_sync);
|
||||
}
|
||||
|
||||
int kvm_inject_nested_irq(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
/*
|
||||
* Do not inject an irq if the:
|
||||
* - Current exception level is EL2, and
|
||||
* - virtual HCR_EL2.TGE == 0
|
||||
* - virtual HCR_EL2.IMO == 0
|
||||
*
|
||||
* See Table D1-17 "Physical interrupt target and masking when EL3 is
|
||||
* not implemented and EL2 is implemented" in ARM DDI 0487C.a.
|
||||
*/
|
||||
|
||||
if (vcpu_is_el2(vcpu) && !vcpu_el2_tge_is_set(vcpu) &&
|
||||
!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_IMO))
|
||||
return 1;
|
||||
|
||||
/* esr_el2 value doesn't matter for exits due to irqs. */
|
||||
return kvm_inject_nested(vcpu, 0, except_type_irq);
|
||||
}
|
|
@ -184,6 +184,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
|
|||
sysreg_clear_set(CPACR_EL1,
|
||||
CPACR_EL1_SMEN_EL0EN,
|
||||
CPACR_EL1_SMEN_EL1EN);
|
||||
isb();
|
||||
}
|
||||
|
||||
if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) {
|
||||
|
|
|
@ -24,6 +24,7 @@
|
|||
#include <asm/fpsimd.h>
|
||||
#include <asm/kvm.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
#include <asm/sigcontext.h>
|
||||
|
||||
#include "trace.h"
|
||||
|
@ -253,6 +254,11 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
|
|||
if (!vcpu_el1_is_32bit(vcpu))
|
||||
return -EINVAL;
|
||||
break;
|
||||
case PSR_MODE_EL2h:
|
||||
case PSR_MODE_EL2t:
|
||||
if (!vcpu_has_nv(vcpu))
|
||||
return -EINVAL;
|
||||
fallthrough;
|
||||
case PSR_MODE_EL0t:
|
||||
case PSR_MODE_EL1t:
|
||||
case PSR_MODE_EL1h:
|
||||
|
|
|
@ -16,6 +16,7 @@
|
|||
#include <asm/kvm_asm.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_mmu.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
#include <asm/debug-monitors.h>
|
||||
#include <asm/stacktrace/nvhe.h>
|
||||
#include <asm/traps.h>
|
||||
|
@ -41,6 +42,16 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
|
|||
kvm_vcpu_hvc_get_imm(vcpu));
|
||||
vcpu->stat.hvc_exit_stat++;
|
||||
|
||||
/* Forward hvc instructions to the virtual EL2 if the guest has EL2. */
|
||||
if (vcpu_has_nv(vcpu)) {
|
||||
if (vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_HCD)
|
||||
kvm_inject_undefined(vcpu);
|
||||
else
|
||||
kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
ret = kvm_hvc_call_handler(vcpu);
|
||||
if (ret < 0) {
|
||||
vcpu_set_reg(vcpu, 0, ~0UL);
|
||||
|
@ -52,6 +63,8 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
|
|||
|
||||
static int handle_smc(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* "If an SMC instruction executed at Non-secure EL1 is
|
||||
* trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
|
||||
|
@ -59,10 +72,30 @@ static int handle_smc(struct kvm_vcpu *vcpu)
|
|||
*
|
||||
* We need to advance the PC after the trap, as it would
|
||||
* otherwise return to the same address...
|
||||
*
|
||||
* Only handle SMCs from the virtual EL2 with an immediate of zero and
|
||||
* skip it otherwise.
|
||||
*/
|
||||
vcpu_set_reg(vcpu, 0, ~0UL);
|
||||
if (!vcpu_is_el2(vcpu) || kvm_vcpu_hvc_get_imm(vcpu)) {
|
||||
vcpu_set_reg(vcpu, 0, ~0UL);
|
||||
kvm_incr_pc(vcpu);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/*
|
||||
* If imm is zero then it is likely an SMCCC call.
|
||||
*
|
||||
* Note that on ARMv8.3, even if EL3 is not implemented, SMC executed
|
||||
* at Non-secure EL1 is trapped to EL2 if HCR_EL2.TSC==1, rather than
|
||||
* being treated as UNDEFINED.
|
||||
*/
|
||||
ret = kvm_hvc_call_handler(vcpu);
|
||||
if (ret < 0)
|
||||
vcpu_set_reg(vcpu, 0, ~0UL);
|
||||
|
||||
kvm_incr_pc(vcpu);
|
||||
return 1;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -196,6 +229,15 @@ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
|
|||
return 1;
|
||||
}
|
||||
|
||||
static int kvm_handle_eret(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_ERET_ISS_ERET)
|
||||
return kvm_handle_ptrauth(vcpu);
|
||||
|
||||
kvm_emulate_nested_eret(vcpu);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static exit_handle_fn arm_exit_handlers[] = {
|
||||
[0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
|
||||
[ESR_ELx_EC_WFx] = kvm_handle_wfx,
|
||||
|
@ -211,6 +253,7 @@ static exit_handle_fn arm_exit_handlers[] = {
|
|||
[ESR_ELx_EC_SMC64] = handle_smc,
|
||||
[ESR_ELx_EC_SYS64] = kvm_handle_sys_reg,
|
||||
[ESR_ELx_EC_SVE] = handle_sve,
|
||||
[ESR_ELx_EC_ERET] = kvm_handle_eret,
|
||||
[ESR_ELx_EC_IABT_LOW] = kvm_handle_guest_abort,
|
||||
[ESR_ELx_EC_DABT_LOW] = kvm_handle_guest_abort,
|
||||
[ESR_ELx_EC_SOFTSTP_LOW]= kvm_handle_guest_debug,
|
||||
|
|
|
@ -14,6 +14,7 @@
|
|||
#include <linux/kvm_host.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_mmu.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
|
||||
#if !defined (__KVM_NVHE_HYPERVISOR__) && !defined (__KVM_VHE_HYPERVISOR__)
|
||||
#error Hypervisor code only!
|
||||
|
@ -23,7 +24,9 @@ static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
|
|||
{
|
||||
u64 val;
|
||||
|
||||
if (__vcpu_read_sys_reg_from_cpu(reg, &val))
|
||||
if (unlikely(vcpu_has_nv(vcpu)))
|
||||
return vcpu_read_sys_reg(vcpu, reg);
|
||||
else if (__vcpu_read_sys_reg_from_cpu(reg, &val))
|
||||
return val;
|
||||
|
||||
return __vcpu_sys_reg(vcpu, reg);
|
||||
|
@ -31,18 +34,25 @@ static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
|
|||
|
||||
static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
|
||||
{
|
||||
if (__vcpu_write_sys_reg_to_cpu(val, reg))
|
||||
return;
|
||||
|
||||
__vcpu_sys_reg(vcpu, reg) = val;
|
||||
if (unlikely(vcpu_has_nv(vcpu)))
|
||||
vcpu_write_sys_reg(vcpu, val, reg);
|
||||
else if (!__vcpu_write_sys_reg_to_cpu(val, reg))
|
||||
__vcpu_sys_reg(vcpu, reg) = val;
|
||||
}
|
||||
|
||||
static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
|
||||
static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long target_mode,
|
||||
u64 val)
|
||||
{
|
||||
if (has_vhe())
|
||||
if (unlikely(vcpu_has_nv(vcpu))) {
|
||||
if (target_mode == PSR_MODE_EL1h)
|
||||
vcpu_write_sys_reg(vcpu, val, SPSR_EL1);
|
||||
else
|
||||
vcpu_write_sys_reg(vcpu, val, SPSR_EL2);
|
||||
} else if (has_vhe()) {
|
||||
write_sysreg_el1(val, SYS_SPSR);
|
||||
else
|
||||
} else {
|
||||
__vcpu_sys_reg(vcpu, SPSR_EL1) = val;
|
||||
}
|
||||
}
|
||||
|
||||
static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)
|
||||
|
@ -101,6 +111,11 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
|
|||
sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
|
||||
__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
|
||||
break;
|
||||
case PSR_MODE_EL2h:
|
||||
vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL2);
|
||||
sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL2);
|
||||
__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL2);
|
||||
break;
|
||||
default:
|
||||
/* Don't do that */
|
||||
BUG();
|
||||
|
@ -153,7 +168,7 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
|
|||
new |= target_mode;
|
||||
|
||||
*vcpu_cpsr(vcpu) = new;
|
||||
__vcpu_write_spsr(vcpu, old);
|
||||
__vcpu_write_spsr(vcpu, target_mode, old);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -323,11 +338,20 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
|
|||
case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC):
|
||||
enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync);
|
||||
break;
|
||||
|
||||
case unpack_vcpu_flag(EXCEPT_AA64_EL2_SYNC):
|
||||
enter_exception64(vcpu, PSR_MODE_EL2h, except_type_sync);
|
||||
break;
|
||||
|
||||
case unpack_vcpu_flag(EXCEPT_AA64_EL2_IRQ):
|
||||
enter_exception64(vcpu, PSR_MODE_EL2h, except_type_irq);
|
||||
break;
|
||||
|
||||
default:
|
||||
/*
|
||||
* Only EL1_SYNC makes sense so far, EL2_{SYNC,IRQ}
|
||||
* will be implemented at some point. Everything
|
||||
* else gets silently ignored.
|
||||
* Only EL1_SYNC and EL2_{SYNC,IRQ} makes
|
||||
* sense so far. Everything else gets silently
|
||||
* ignored.
|
||||
*/
|
||||
break;
|
||||
}
|
||||
|
|
|
@ -39,7 +39,6 @@ static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
|
|||
|
||||
static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
ctxt_sys_reg(ctxt, CSSELR_EL1) = read_sysreg(csselr_el1);
|
||||
ctxt_sys_reg(ctxt, SCTLR_EL1) = read_sysreg_el1(SYS_SCTLR);
|
||||
ctxt_sys_reg(ctxt, CPACR_EL1) = read_sysreg_el1(SYS_CPACR);
|
||||
ctxt_sys_reg(ctxt, TTBR0_EL1) = read_sysreg_el1(SYS_TTBR0);
|
||||
|
@ -95,7 +94,6 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
|
|||
static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
write_sysreg(ctxt_sys_reg(ctxt, MPIDR_EL1), vmpidr_el2);
|
||||
write_sysreg(ctxt_sys_reg(ctxt, CSSELR_EL1), csselr_el1);
|
||||
|
||||
if (has_vhe() ||
|
||||
!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
|
||||
|
@ -156,9 +154,26 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
|
|||
write_sysreg_el1(ctxt_sys_reg(ctxt, SPSR_EL1), SYS_SPSR);
|
||||
}
|
||||
|
||||
/* Read the VCPU state's PSTATE, but translate (v)EL2 to EL1. */
|
||||
static inline u64 to_hw_pstate(const struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
u64 mode = ctxt->regs.pstate & (PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
|
||||
switch (mode) {
|
||||
case PSR_MODE_EL2t:
|
||||
mode = PSR_MODE_EL1t;
|
||||
break;
|
||||
case PSR_MODE_EL2h:
|
||||
mode = PSR_MODE_EL1h;
|
||||
break;
|
||||
}
|
||||
|
||||
return (ctxt->regs.pstate & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
|
||||
}
|
||||
|
||||
static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
u64 pstate = ctxt->regs.pstate;
|
||||
u64 pstate = to_hw_pstate(ctxt);
|
||||
u64 mode = pstate & PSR_AA32_MODE_MASK;
|
||||
|
||||
/*
|
||||
|
|
|
@ -183,6 +183,7 @@ SYM_CODE_START_LOCAL(__kvm_hyp_init_cpu)
|
|||
|
||||
/* Initialize EL2 CPU state to sane values. */
|
||||
init_el2_state // Clobbers x0..x2
|
||||
finalise_el2_state
|
||||
|
||||
/* Enable MMU, set vectors and stack. */
|
||||
mov x0, x28
|
||||
|
|
|
@ -26,6 +26,7 @@ u64 id_aa64isar2_el1_sys_val;
|
|||
u64 id_aa64mmfr0_el1_sys_val;
|
||||
u64 id_aa64mmfr1_el1_sys_val;
|
||||
u64 id_aa64mmfr2_el1_sys_val;
|
||||
u64 id_aa64smfr0_el1_sys_val;
|
||||
|
||||
/*
|
||||
* Inject an unknown/undefined exception to an AArch64 guest while most of its
|
||||
|
|
|
@ -168,6 +168,25 @@ static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data,
|
|||
return walker->cb(ctx, visit);
|
||||
}
|
||||
|
||||
static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
|
||||
int r)
|
||||
{
|
||||
/*
|
||||
* Visitor callbacks return EAGAIN when the conditions that led to a
|
||||
* fault are no longer reflected in the page tables due to a race to
|
||||
* update a PTE. In the context of a fault handler this is interpreted
|
||||
* as a signal to retry guest execution.
|
||||
*
|
||||
* Ignore the return code altogether for walkers outside a fault handler
|
||||
* (e.g. write protecting a range of memory) and chug along with the
|
||||
* page table walk.
|
||||
*/
|
||||
if (r == -EAGAIN)
|
||||
return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
|
||||
|
||||
return !r;
|
||||
}
|
||||
|
||||
static int __kvm_pgtable_walk(struct kvm_pgtable_walk_data *data,
|
||||
struct kvm_pgtable_mm_ops *mm_ops, kvm_pteref_t pgtable, u32 level);
|
||||
|
||||
|
@ -200,7 +219,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
|
|||
table = kvm_pte_table(ctx.old, level);
|
||||
}
|
||||
|
||||
if (ret)
|
||||
if (!kvm_pgtable_walk_continue(data->walker, ret))
|
||||
goto out;
|
||||
|
||||
if (!table) {
|
||||
|
@ -211,13 +230,16 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
|
|||
|
||||
childp = (kvm_pteref_t)kvm_pte_follow(ctx.old, mm_ops);
|
||||
ret = __kvm_pgtable_walk(data, mm_ops, childp, level + 1);
|
||||
if (ret)
|
||||
if (!kvm_pgtable_walk_continue(data->walker, ret))
|
||||
goto out;
|
||||
|
||||
if (ctx.flags & KVM_PGTABLE_WALK_TABLE_POST)
|
||||
ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_TABLE_POST);
|
||||
|
||||
out:
|
||||
if (kvm_pgtable_walk_continue(data->walker, ret))
|
||||
return 0;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -584,12 +606,14 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
|
|||
lvls = 2;
|
||||
vtcr |= VTCR_EL2_LVLS_TO_SL0(lvls);
|
||||
|
||||
#ifdef CONFIG_ARM64_HW_AFDBM
|
||||
/*
|
||||
* Enable the Hardware Access Flag management, unconditionally
|
||||
* on all CPUs. The features is RES0 on CPUs without the support
|
||||
* and must be ignored by the CPUs.
|
||||
*/
|
||||
vtcr |= VTCR_EL2_HA;
|
||||
#endif /* CONFIG_ARM64_HW_AFDBM */
|
||||
|
||||
/* Set the vmid bits */
|
||||
vtcr |= (get_vmid_bits(mmfr1) == 16) ?
|
||||
|
@ -1026,7 +1050,7 @@ static int stage2_attr_walker(const struct kvm_pgtable_visit_ctx *ctx,
|
|||
struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
|
||||
|
||||
if (!kvm_pte_valid(ctx->old))
|
||||
return 0;
|
||||
return -EAGAIN;
|
||||
|
||||
data->level = ctx->level;
|
||||
data->pte = pte;
|
||||
|
@ -1094,9 +1118,15 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
|
|||
kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
|
||||
{
|
||||
kvm_pte_t pte = 0;
|
||||
stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
|
||||
&pte, NULL, 0);
|
||||
dsb(ishst);
|
||||
int ret;
|
||||
|
||||
ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
|
||||
&pte, NULL,
|
||||
KVM_PGTABLE_WALK_HANDLE_FAULT |
|
||||
KVM_PGTABLE_WALK_SHARED);
|
||||
if (!ret)
|
||||
dsb(ishst);
|
||||
|
||||
return pte;
|
||||
}
|
||||
|
||||
|
@ -1141,6 +1171,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
|
|||
clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
|
||||
|
||||
ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
|
||||
KVM_PGTABLE_WALK_HANDLE_FAULT |
|
||||
KVM_PGTABLE_WALK_SHARED);
|
||||
if (!ret)
|
||||
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, pgt->mmu, addr, level);
|
||||
|
|
|
@ -40,7 +40,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
|
|||
___activate_traps(vcpu);
|
||||
|
||||
val = read_sysreg(cpacr_el1);
|
||||
val |= CPACR_EL1_TTA;
|
||||
val |= CPACR_ELx_TTA;
|
||||
val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN |
|
||||
CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN);
|
||||
|
||||
|
@ -120,6 +120,25 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
|
|||
|
||||
static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
|
||||
{
|
||||
/*
|
||||
* If we were in HYP context on entry, adjust the PSTATE view
|
||||
* so that the usual helpers work correctly.
|
||||
*/
|
||||
if (unlikely(vcpu_get_flag(vcpu, VCPU_HYP_CONTEXT))) {
|
||||
u64 mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
|
||||
switch (mode) {
|
||||
case PSR_MODE_EL1t:
|
||||
mode = PSR_MODE_EL2t;
|
||||
break;
|
||||
case PSR_MODE_EL1h:
|
||||
mode = PSR_MODE_EL2h;
|
||||
break;
|
||||
}
|
||||
|
||||
*vcpu_cpsr(vcpu) &= ~(PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
*vcpu_cpsr(vcpu) |= mode;
|
||||
}
|
||||
}
|
||||
|
||||
/* Switch to the guest for VHE systems running in EL2 */
|
||||
|
@ -154,6 +173,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
|
|||
sysreg_restore_guest_state_vhe(guest_ctxt);
|
||||
__debug_switch_to_guest(vcpu);
|
||||
|
||||
if (is_hyp_ctxt(vcpu))
|
||||
vcpu_set_flag(vcpu, VCPU_HYP_CONTEXT);
|
||||
else
|
||||
vcpu_clear_flag(vcpu, VCPU_HYP_CONTEXT);
|
||||
|
||||
do {
|
||||
/* Jump in the fire! */
|
||||
exit_code = __guest_enter(vcpu);
|
||||
|
|
|
@ -198,7 +198,7 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
|
|||
break;
|
||||
case ARM_SMCCC_HV_PV_TIME_ST:
|
||||
gpa = kvm_init_stolen_time(vcpu);
|
||||
if (gpa != GPA_INVALID)
|
||||
if (gpa != INVALID_GPA)
|
||||
val[0] = gpa;
|
||||
break;
|
||||
case ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID:
|
||||
|
|
|
@ -12,17 +12,55 @@
|
|||
|
||||
#include <linux/kvm_host.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
#include <asm/esr.h>
|
||||
|
||||
static void pend_sync_exception(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
/* If not nesting, EL1 is the only possible exception target */
|
||||
if (likely(!vcpu_has_nv(vcpu))) {
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* With NV, we need to pick between EL1 and EL2. Note that we
|
||||
* never deal with a nesting exception here, hence never
|
||||
* changing context, and the exception itself can be delayed
|
||||
* until the next entry.
|
||||
*/
|
||||
switch(*vcpu_cpsr(vcpu) & PSR_MODE_MASK) {
|
||||
case PSR_MODE_EL2h:
|
||||
case PSR_MODE_EL2t:
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL2_SYNC);
|
||||
break;
|
||||
case PSR_MODE_EL1h:
|
||||
case PSR_MODE_EL1t:
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
|
||||
break;
|
||||
case PSR_MODE_EL0t:
|
||||
if (vcpu_el2_tge_is_set(vcpu))
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL2_SYNC);
|
||||
else
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
|
||||
break;
|
||||
default:
|
||||
BUG();
|
||||
}
|
||||
}
|
||||
|
||||
static bool match_target_el(struct kvm_vcpu *vcpu, unsigned long target)
|
||||
{
|
||||
return (vcpu_get_flag(vcpu, EXCEPT_MASK) == target);
|
||||
}
|
||||
|
||||
static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
|
||||
{
|
||||
unsigned long cpsr = *vcpu_cpsr(vcpu);
|
||||
bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
|
||||
u64 esr = 0;
|
||||
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
|
||||
|
||||
vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
|
||||
pend_sync_exception(vcpu);
|
||||
|
||||
/*
|
||||
* Build an {i,d}abort, depending on the level and the
|
||||
|
@ -43,14 +81,22 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
|
|||
if (!is_iabt)
|
||||
esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
|
||||
|
||||
vcpu_write_sys_reg(vcpu, esr | ESR_ELx_FSC_EXTABT, ESR_EL1);
|
||||
esr |= ESR_ELx_FSC_EXTABT;
|
||||
|
||||
if (match_target_el(vcpu, unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC))) {
|
||||
vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
|
||||
vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
|
||||
} else {
|
||||
vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
|
||||
vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
|
||||
}
|
||||
}
|
||||
|
||||
static void inject_undef64(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u64 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
|
||||
|
||||
kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
|
||||
pend_sync_exception(vcpu);
|
||||
|
||||
/*
|
||||
* Build an unknown exception, depending on the instruction
|
||||
|
@ -59,7 +105,10 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
|
|||
if (kvm_vcpu_trap_il_is32bit(vcpu))
|
||||
esr |= ESR_ELx_IL;
|
||||
|
||||
vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
|
||||
if (match_target_el(vcpu, unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC)))
|
||||
vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
|
||||
else
|
||||
vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
|
||||
}
|
||||
|
||||
#define DFSR_FSC_EXTABT_LPAE 0x10
|
||||
|
|
|
@ -25,11 +25,11 @@
|
|||
static struct kvm_pgtable *hyp_pgtable;
|
||||
static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
|
||||
|
||||
static unsigned long hyp_idmap_start;
|
||||
static unsigned long hyp_idmap_end;
|
||||
static phys_addr_t hyp_idmap_vector;
|
||||
static unsigned long __ro_after_init hyp_idmap_start;
|
||||
static unsigned long __ro_after_init hyp_idmap_end;
|
||||
static phys_addr_t __ro_after_init hyp_idmap_vector;
|
||||
|
||||
static unsigned long io_map_base;
|
||||
static unsigned long __ro_after_init io_map_base;
|
||||
|
||||
static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
|
||||
{
|
||||
|
@ -46,16 +46,17 @@ static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
|
|||
* long will also starve other vCPUs. We have to also make sure that the page
|
||||
* tables are not freed while we released the lock.
|
||||
*/
|
||||
static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
|
||||
static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
|
||||
phys_addr_t end,
|
||||
int (*fn)(struct kvm_pgtable *, u64, u64),
|
||||
bool resched)
|
||||
{
|
||||
struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
|
||||
int ret;
|
||||
u64 next;
|
||||
|
||||
do {
|
||||
struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
|
||||
struct kvm_pgtable *pgt = mmu->pgt;
|
||||
if (!pgt)
|
||||
return -EINVAL;
|
||||
|
||||
|
@ -71,8 +72,8 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
|
|||
return ret;
|
||||
}
|
||||
|
||||
#define stage2_apply_range_resched(kvm, addr, end, fn) \
|
||||
stage2_apply_range(kvm, addr, end, fn, true)
|
||||
#define stage2_apply_range_resched(mmu, addr, end, fn) \
|
||||
stage2_apply_range(mmu, addr, end, fn, true)
|
||||
|
||||
static bool memslot_is_logging(struct kvm_memory_slot *memslot)
|
||||
{
|
||||
|
@ -235,7 +236,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
|
|||
|
||||
lockdep_assert_held_write(&kvm->mmu_lock);
|
||||
WARN_ON(size & ~PAGE_MASK);
|
||||
WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap,
|
||||
WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
|
||||
may_block));
|
||||
}
|
||||
|
||||
|
@ -250,7 +251,7 @@ static void stage2_flush_memslot(struct kvm *kvm,
|
|||
phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
|
||||
phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
|
||||
|
||||
stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_flush);
|
||||
stage2_apply_range_resched(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_flush);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -280,7 +281,7 @@ static void stage2_flush_vm(struct kvm *kvm)
|
|||
/**
|
||||
* free_hyp_pgds - free Hyp-mode page tables
|
||||
*/
|
||||
void free_hyp_pgds(void)
|
||||
void __init free_hyp_pgds(void)
|
||||
{
|
||||
mutex_lock(&kvm_hyp_pgd_mutex);
|
||||
if (hyp_pgtable) {
|
||||
|
@ -934,8 +935,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
|
|||
*/
|
||||
static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
|
||||
{
|
||||
struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
|
||||
stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect);
|
||||
stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1383,7 +1383,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
else
|
||||
ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
|
||||
__pfn_to_phys(pfn), prot,
|
||||
memcache, KVM_PGTABLE_WALK_SHARED);
|
||||
memcache,
|
||||
KVM_PGTABLE_WALK_HANDLE_FAULT |
|
||||
KVM_PGTABLE_WALK_SHARED);
|
||||
|
||||
/* Mark the page dirty only if the fault is handled successfully */
|
||||
if (writable && !ret) {
|
||||
|
@ -1401,20 +1403,18 @@ out_unlock:
|
|||
/* Resolve the access fault by making the page young again. */
|
||||
static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
|
||||
{
|
||||
pte_t pte;
|
||||
kvm_pte_t kpte;
|
||||
kvm_pte_t pte;
|
||||
struct kvm_s2_mmu *mmu;
|
||||
|
||||
trace_kvm_access_fault(fault_ipa);
|
||||
|
||||
write_lock(&vcpu->kvm->mmu_lock);
|
||||
read_lock(&vcpu->kvm->mmu_lock);
|
||||
mmu = vcpu->arch.hw_mmu;
|
||||
kpte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
|
||||
write_unlock(&vcpu->kvm->mmu_lock);
|
||||
pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
|
||||
read_unlock(&vcpu->kvm->mmu_lock);
|
||||
|
||||
pte = __pte(kpte);
|
||||
if (pte_valid(pte))
|
||||
kvm_set_pfn_accessed(pte_pfn(pte));
|
||||
if (kvm_pte_valid(pte))
|
||||
kvm_set_pfn_accessed(kvm_pte_to_pfn(pte));
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1668,7 +1668,7 @@ static struct kvm_pgtable_mm_ops kvm_hyp_mm_ops = {
|
|||
.virt_to_phys = kvm_host_pa,
|
||||
};
|
||||
|
||||
int kvm_mmu_init(u32 *hyp_va_bits)
|
||||
int __init kvm_mmu_init(u32 *hyp_va_bits)
|
||||
{
|
||||
int err;
|
||||
u32 idmap_bits;
|
||||
|
|
|
@ -0,0 +1,161 @@
|
|||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* Copyright (C) 2017 - Columbia University and Linaro Ltd.
|
||||
* Author: Jintack Lim <jintack.lim@linaro.org>
|
||||
*/
|
||||
|
||||
#include <linux/kvm.h>
|
||||
#include <linux/kvm_host.h>
|
||||
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
#include <asm/sysreg.h>
|
||||
|
||||
#include "sys_regs.h"
|
||||
|
||||
/* Protection against the sysreg repainting madness... */
|
||||
#define NV_FTR(r, f) ID_AA64##r##_EL1_##f
|
||||
|
||||
/*
|
||||
* Our emulated CPU doesn't support all the possible features. For the
|
||||
* sake of simplicity (and probably mental sanity), wipe out a number
|
||||
* of feature bits we don't intend to support for the time being.
|
||||
* This list should get updated as new features get added to the NV
|
||||
* support, and new extension to the architecture.
|
||||
*/
|
||||
void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
|
||||
const struct sys_reg_desc *r)
|
||||
{
|
||||
u32 id = reg_to_encoding(r);
|
||||
u64 val, tmp;
|
||||
|
||||
val = p->regval;
|
||||
|
||||
switch (id) {
|
||||
case SYS_ID_AA64ISAR0_EL1:
|
||||
/* Support everything but TME, O.S. and Range TLBIs */
|
||||
val &= ~(NV_FTR(ISAR0, TLB) |
|
||||
NV_FTR(ISAR0, TME));
|
||||
break;
|
||||
|
||||
case SYS_ID_AA64ISAR1_EL1:
|
||||
/* Support everything but PtrAuth and Spec Invalidation */
|
||||
val &= ~(GENMASK_ULL(63, 56) |
|
||||
NV_FTR(ISAR1, SPECRES) |
|
||||
NV_FTR(ISAR1, GPI) |
|
||||
NV_FTR(ISAR1, GPA) |
|
||||
NV_FTR(ISAR1, API) |
|
||||
NV_FTR(ISAR1, APA));
|
||||
break;
|
||||
|
||||
case SYS_ID_AA64PFR0_EL1:
|
||||
/* No AMU, MPAM, S-EL2, RAS or SVE */
|
||||
val &= ~(GENMASK_ULL(55, 52) |
|
||||
NV_FTR(PFR0, AMU) |
|
||||
NV_FTR(PFR0, MPAM) |
|
||||
NV_FTR(PFR0, SEL2) |
|
||||
NV_FTR(PFR0, RAS) |
|
||||
NV_FTR(PFR0, SVE) |
|
||||
NV_FTR(PFR0, EL3) |
|
||||
NV_FTR(PFR0, EL2) |
|
||||
NV_FTR(PFR0, EL1));
|
||||
/* 64bit EL1/EL2/EL3 only */
|
||||
val |= FIELD_PREP(NV_FTR(PFR0, EL1), 0b0001);
|
||||
val |= FIELD_PREP(NV_FTR(PFR0, EL2), 0b0001);
|
||||
val |= FIELD_PREP(NV_FTR(PFR0, EL3), 0b0001);
|
||||
break;
|
||||
|
||||
case SYS_ID_AA64PFR1_EL1:
|
||||
/* Only support SSBS */
|
||||
val &= NV_FTR(PFR1, SSBS);
|
||||
break;
|
||||
|
||||
case SYS_ID_AA64MMFR0_EL1:
|
||||
/* Hide ECV, FGT, ExS, Secure Memory */
|
||||
val &= ~(GENMASK_ULL(63, 43) |
|
||||
NV_FTR(MMFR0, TGRAN4_2) |
|
||||
NV_FTR(MMFR0, TGRAN16_2) |
|
||||
NV_FTR(MMFR0, TGRAN64_2) |
|
||||
NV_FTR(MMFR0, SNSMEM));
|
||||
|
||||
/* Disallow unsupported S2 page sizes */
|
||||
switch (PAGE_SIZE) {
|
||||
case SZ_64K:
|
||||
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN16_2), 0b0001);
|
||||
fallthrough;
|
||||
case SZ_16K:
|
||||
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN4_2), 0b0001);
|
||||
fallthrough;
|
||||
case SZ_4K:
|
||||
/* Support everything */
|
||||
break;
|
||||
}
|
||||
/*
|
||||
* Since we can't support a guest S2 page size smaller than
|
||||
* the host's own page size (due to KVM only populating its
|
||||
* own S2 using the kernel's page size), advertise the
|
||||
* limitation using FEAT_GTG.
|
||||
*/
|
||||
switch (PAGE_SIZE) {
|
||||
case SZ_4K:
|
||||
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN4_2), 0b0010);
|
||||
fallthrough;
|
||||
case SZ_16K:
|
||||
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN16_2), 0b0010);
|
||||
fallthrough;
|
||||
case SZ_64K:
|
||||
val |= FIELD_PREP(NV_FTR(MMFR0, TGRAN64_2), 0b0010);
|
||||
break;
|
||||
}
|
||||
/* Cap PARange to 48bits */
|
||||
tmp = FIELD_GET(NV_FTR(MMFR0, PARANGE), val);
|
||||
if (tmp > 0b0101) {
|
||||
val &= ~NV_FTR(MMFR0, PARANGE);
|
||||
val |= FIELD_PREP(NV_FTR(MMFR0, PARANGE), 0b0101);
|
||||
}
|
||||
break;
|
||||
|
||||
case SYS_ID_AA64MMFR1_EL1:
|
||||
val &= (NV_FTR(MMFR1, PAN) |
|
||||
NV_FTR(MMFR1, LO) |
|
||||
NV_FTR(MMFR1, HPDS) |
|
||||
NV_FTR(MMFR1, VH) |
|
||||
NV_FTR(MMFR1, VMIDBits));
|
||||
break;
|
||||
|
||||
case SYS_ID_AA64MMFR2_EL1:
|
||||
val &= ~(NV_FTR(MMFR2, EVT) |
|
||||
NV_FTR(MMFR2, BBM) |
|
||||
NV_FTR(MMFR2, TTL) |
|
||||
GENMASK_ULL(47, 44) |
|
||||
NV_FTR(MMFR2, ST) |
|
||||
NV_FTR(MMFR2, CCIDX) |
|
||||
NV_FTR(MMFR2, VARange));
|
||||
|
||||
/* Force TTL support */
|
||||
val |= FIELD_PREP(NV_FTR(MMFR2, TTL), 0b0001);
|
||||
break;
|
||||
|
||||
case SYS_ID_AA64DFR0_EL1:
|
||||
/* Only limited support for PMU, Debug, BPs and WPs */
|
||||
val &= (NV_FTR(DFR0, PMUVer) |
|
||||
NV_FTR(DFR0, WRPs) |
|
||||
NV_FTR(DFR0, BRPs) |
|
||||
NV_FTR(DFR0, DebugVer));
|
||||
|
||||
/* Cap Debug to ARMv8.1 */
|
||||
tmp = FIELD_GET(NV_FTR(DFR0, DebugVer), val);
|
||||
if (tmp > 0b0111) {
|
||||
val &= ~NV_FTR(DFR0, DebugVer);
|
||||
val |= FIELD_PREP(NV_FTR(DFR0, DebugVer), 0b0111);
|
||||
}
|
||||
break;
|
||||
|
||||
default:
|
||||
/* Unknown register, just wipe it clean */
|
||||
val = 0;
|
||||
break;
|
||||
}
|
||||
|
||||
p->regval = val;
|
||||
}
|
|
@ -19,7 +19,7 @@ void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
|
|||
u64 steal = 0;
|
||||
int idx;
|
||||
|
||||
if (base == GPA_INVALID)
|
||||
if (base == INVALID_GPA)
|
||||
return;
|
||||
|
||||
idx = srcu_read_lock(&kvm->srcu);
|
||||
|
@ -40,7 +40,7 @@ long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
|
|||
switch (feature) {
|
||||
case ARM_SMCCC_HV_PV_TIME_FEATURES:
|
||||
case ARM_SMCCC_HV_PV_TIME_ST:
|
||||
if (vcpu->arch.steal.base != GPA_INVALID)
|
||||
if (vcpu->arch.steal.base != INVALID_GPA)
|
||||
val = SMCCC_RET_SUCCESS;
|
||||
break;
|
||||
}
|
||||
|
@ -54,7 +54,7 @@ gpa_t kvm_init_stolen_time(struct kvm_vcpu *vcpu)
|
|||
struct kvm *kvm = vcpu->kvm;
|
||||
u64 base = vcpu->arch.steal.base;
|
||||
|
||||
if (base == GPA_INVALID)
|
||||
if (base == INVALID_GPA)
|
||||
return base;
|
||||
|
||||
/*
|
||||
|
@ -89,7 +89,7 @@ int kvm_arm_pvtime_set_attr(struct kvm_vcpu *vcpu,
|
|||
return -EFAULT;
|
||||
if (!IS_ALIGNED(ipa, 64))
|
||||
return -EINVAL;
|
||||
if (vcpu->arch.steal.base != GPA_INVALID)
|
||||
if (vcpu->arch.steal.base != INVALID_GPA)
|
||||
return -EEXIST;
|
||||
|
||||
/* Check the address is in a valid memslot */
|
||||
|
|
|
@ -27,10 +27,11 @@
|
|||
#include <asm/kvm_asm.h>
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_mmu.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
#include <asm/virt.h>
|
||||
|
||||
/* Maximum phys_shift supported for any VM on this host */
|
||||
static u32 kvm_ipa_limit;
|
||||
static u32 __ro_after_init kvm_ipa_limit;
|
||||
|
||||
/*
|
||||
* ARMv8 Reset Values
|
||||
|
@ -38,12 +39,15 @@ static u32 kvm_ipa_limit;
|
|||
#define VCPU_RESET_PSTATE_EL1 (PSR_MODE_EL1h | PSR_A_BIT | PSR_I_BIT | \
|
||||
PSR_F_BIT | PSR_D_BIT)
|
||||
|
||||
#define VCPU_RESET_PSTATE_EL2 (PSR_MODE_EL2h | PSR_A_BIT | PSR_I_BIT | \
|
||||
PSR_F_BIT | PSR_D_BIT)
|
||||
|
||||
#define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \
|
||||
PSR_AA32_I_BIT | PSR_AA32_F_BIT)
|
||||
|
||||
unsigned int kvm_sve_max_vl;
|
||||
unsigned int __ro_after_init kvm_sve_max_vl;
|
||||
|
||||
int kvm_arm_init_sve(void)
|
||||
int __init kvm_arm_init_sve(void)
|
||||
{
|
||||
if (system_supports_sve()) {
|
||||
kvm_sve_max_vl = sve_max_virtualisable_vl();
|
||||
|
@ -157,6 +161,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
|
|||
if (sve_state)
|
||||
kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
|
||||
kfree(sve_state);
|
||||
kfree(vcpu->arch.ccsidr);
|
||||
}
|
||||
|
||||
static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
|
||||
|
@ -220,6 +225,10 @@ static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
|
|||
if (kvm_has_mte(kvm) && is32bit)
|
||||
return -EINVAL;
|
||||
|
||||
/* NV is incompatible with AArch32 */
|
||||
if (vcpu_has_nv(vcpu) && is32bit)
|
||||
return -EINVAL;
|
||||
|
||||
if (is32bit)
|
||||
set_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
|
||||
|
||||
|
@ -272,6 +281,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
|
|||
if (loaded)
|
||||
kvm_arch_vcpu_put(vcpu);
|
||||
|
||||
/* Disallow NV+SVE for the time being */
|
||||
if (vcpu_has_nv(vcpu) && vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!kvm_arm_vcpu_sve_finalized(vcpu)) {
|
||||
if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) {
|
||||
ret = kvm_vcpu_enable_sve(vcpu);
|
||||
|
@ -294,6 +309,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
|
|||
default:
|
||||
if (vcpu_el1_is_32bit(vcpu)) {
|
||||
pstate = VCPU_RESET_PSTATE_SVC;
|
||||
} else if (vcpu_has_nv(vcpu)) {
|
||||
pstate = VCPU_RESET_PSTATE_EL2;
|
||||
} else {
|
||||
pstate = VCPU_RESET_PSTATE_EL1;
|
||||
}
|
||||
|
@ -352,7 +369,7 @@ u32 get_kvm_ipa_limit(void)
|
|||
return kvm_ipa_limit;
|
||||
}
|
||||
|
||||
int kvm_set_ipa_limit(void)
|
||||
int __init kvm_set_ipa_limit(void)
|
||||
{
|
||||
unsigned int parange;
|
||||
u64 mmfr0;
|
||||
|
|
|
@ -11,6 +11,7 @@
|
|||
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/bsearch.h>
|
||||
#include <linux/cacheinfo.h>
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/printk.h>
|
||||
|
@ -24,6 +25,7 @@
|
|||
#include <asm/kvm_emulate.h>
|
||||
#include <asm/kvm_hyp.h>
|
||||
#include <asm/kvm_mmu.h>
|
||||
#include <asm/kvm_nested.h>
|
||||
#include <asm/perf_event.h>
|
||||
#include <asm/sysreg.h>
|
||||
|
||||
|
@ -78,28 +80,112 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
|
|||
__vcpu_write_sys_reg_to_cpu(val, reg))
|
||||
return;
|
||||
|
||||
__vcpu_sys_reg(vcpu, reg) = val;
|
||||
__vcpu_sys_reg(vcpu, reg) = val;
|
||||
}
|
||||
|
||||
/* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
|
||||
static u32 cache_levels;
|
||||
|
||||
/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
|
||||
#define CSSELR_MAX 14
|
||||
|
||||
/* Which cache CCSIDR represents depends on CSSELR value. */
|
||||
static u32 get_ccsidr(u32 csselr)
|
||||
/*
|
||||
* Returns the minimum line size for the selected cache, expressed as
|
||||
* Log2(bytes).
|
||||
*/
|
||||
static u8 get_min_cache_line_size(bool icache)
|
||||
{
|
||||
u32 ccsidr;
|
||||
u64 ctr = read_sanitised_ftr_reg(SYS_CTR_EL0);
|
||||
u8 field;
|
||||
|
||||
/* Make sure noone else changes CSSELR during this! */
|
||||
local_irq_disable();
|
||||
write_sysreg(csselr, csselr_el1);
|
||||
isb();
|
||||
ccsidr = read_sysreg(ccsidr_el1);
|
||||
local_irq_enable();
|
||||
if (icache)
|
||||
field = SYS_FIELD_GET(CTR_EL0, IminLine, ctr);
|
||||
else
|
||||
field = SYS_FIELD_GET(CTR_EL0, DminLine, ctr);
|
||||
|
||||
return ccsidr;
|
||||
/*
|
||||
* Cache line size is represented as Log2(words) in CTR_EL0.
|
||||
* Log2(bytes) can be derived with the following:
|
||||
*
|
||||
* Log2(words) + 2 = Log2(bytes / 4) + 2
|
||||
* = Log2(bytes) - 2 + 2
|
||||
* = Log2(bytes)
|
||||
*/
|
||||
return field + 2;
|
||||
}
|
||||
|
||||
/* Which cache CCSIDR represents depends on CSSELR value. */
|
||||
static u32 get_ccsidr(struct kvm_vcpu *vcpu, u32 csselr)
|
||||
{
|
||||
u8 line_size;
|
||||
|
||||
if (vcpu->arch.ccsidr)
|
||||
return vcpu->arch.ccsidr[csselr];
|
||||
|
||||
line_size = get_min_cache_line_size(csselr & CSSELR_EL1_InD);
|
||||
|
||||
/*
|
||||
* Fabricate a CCSIDR value as the overriding value does not exist.
|
||||
* The real CCSIDR value will not be used as it can vary by the
|
||||
* physical CPU which the vcpu currently resides in.
|
||||
*
|
||||
* The line size is determined with get_min_cache_line_size(), which
|
||||
* should be valid for all CPUs even if they have different cache
|
||||
* configuration.
|
||||
*
|
||||
* The associativity bits are cleared, meaning the geometry of all data
|
||||
* and unified caches (which are guaranteed to be PIPT and thus
|
||||
* non-aliasing) are 1 set and 1 way.
|
||||
* Guests should not be doing cache operations by set/way at all, and
|
||||
* for this reason, we trap them and attempt to infer the intent, so
|
||||
* that we can flush the entire guest's address space at the appropriate
|
||||
* time. The exposed geometry minimizes the number of the traps.
|
||||
* [If guests should attempt to infer aliasing properties from the
|
||||
* geometry (which is not permitted by the architecture), they would
|
||||
* only do so for virtually indexed caches.]
|
||||
*
|
||||
* We don't check if the cache level exists as it is allowed to return
|
||||
* an UNKNOWN value if not.
|
||||
*/
|
||||
return SYS_FIELD_PREP(CCSIDR_EL1, LineSize, line_size - 4);
|
||||
}
|
||||
|
||||
static int set_ccsidr(struct kvm_vcpu *vcpu, u32 csselr, u32 val)
|
||||
{
|
||||
u8 line_size = FIELD_GET(CCSIDR_EL1_LineSize, val) + 4;
|
||||
u32 *ccsidr = vcpu->arch.ccsidr;
|
||||
u32 i;
|
||||
|
||||
if ((val & CCSIDR_EL1_RES0) ||
|
||||
line_size < get_min_cache_line_size(csselr & CSSELR_EL1_InD))
|
||||
return -EINVAL;
|
||||
|
||||
if (!ccsidr) {
|
||||
if (val == get_ccsidr(vcpu, csselr))
|
||||
return 0;
|
||||
|
||||
ccsidr = kmalloc_array(CSSELR_MAX, sizeof(u32), GFP_KERNEL_ACCOUNT);
|
||||
if (!ccsidr)
|
||||
return -ENOMEM;
|
||||
|
||||
for (i = 0; i < CSSELR_MAX; i++)
|
||||
ccsidr[i] = get_ccsidr(vcpu, i);
|
||||
|
||||
vcpu->arch.ccsidr = ccsidr;
|
||||
}
|
||||
|
||||
ccsidr[csselr] = val;
|
||||
|
||||
return 0;
|
||||
}

static bool access_rw(struct kvm_vcpu *vcpu,
		      struct sys_reg_params *p,
		      const struct sys_reg_desc *r)
{
	if (p->is_write)
		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
	else
		p->regval = vcpu_read_sys_reg(vcpu, r->reg);

	return true;
}

/*
@@ -260,6 +346,14 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
	return read_zero(vcpu, p);
}

static bool trap_undef(struct kvm_vcpu *vcpu,
		       struct sys_reg_params *p,
		       const struct sys_reg_desc *r)
{
	kvm_inject_undefined(vcpu);
	return false;
}

/*
 * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
 * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -370,12 +464,9 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
			    struct sys_reg_params *p,
			    const struct sys_reg_desc *r)
{
	if (p->is_write) {
		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
	access_rw(vcpu, p, r);
	if (p->is_write)
		vcpu_set_flag(vcpu, DEBUG_DIRTY);
	} else {
		p->regval = vcpu_read_sys_reg(vcpu, r->reg);
	}

	trace_trap_reg(__func__, r->reg, p->is_write, p->regval);

@@ -1049,7 +1140,9 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
		treg = TIMER_REG_CVAL;
		break;
	default:
		BUG();
		print_sys_reg_msg(p, "%s", "Unhandled trapped timer register");
		kvm_inject_undefined(vcpu);
		return false;
	}

	if (p->is_write)
@@ -1155,6 +1248,12 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon),
				  pmuver_to_perfmon(vcpu_pmuver(vcpu)));
		break;
	case SYS_ID_AA64MMFR2_EL1:
		val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
		break;
	case SYS_ID_MMFR4_EL1:
		val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
		break;
	}

	return val;
@@ -1205,6 +1304,9 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
		return write_to_read_only(vcpu, p, r);

	p->regval = read_id_reg(vcpu, r);
	if (vcpu_has_nv(vcpu))
		access_nested_id_reg(vcpu, p, r);

	return true;
}
@@ -1385,10 +1487,78 @@ static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
	if (p->is_write)
		return write_to_read_only(vcpu, p, r);

	p->regval = read_sysreg(clidr_el1);
	p->regval = __vcpu_sys_reg(vcpu, r->reg);
	return true;
}

/*
 * Fabricate a CLIDR_EL1 value instead of using the real value, which can vary
 * by the physical CPU which the vcpu currently resides in.
 */
static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
	u64 ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0);
	u64 clidr;
	u8 loc;

	if ((ctr_el0 & CTR_EL0_IDC)) {
		/*
		 * Data cache clean to the PoU is not required so LoUU and LoUIS
		 * will not be set and a unified cache, which will be marked as
		 * LoC, will be added.
		 *
		 * If not DIC, let the unified cache L2 so that an instruction
		 * cache can be added as L1 later.
		 */
		loc = (ctr_el0 & CTR_EL0_DIC) ? 1 : 2;
		clidr = CACHE_TYPE_UNIFIED << CLIDR_CTYPE_SHIFT(loc);
	} else {
		/*
		 * Data cache clean to the PoU is required so let L1 have a data
		 * cache and mark it as LoUU and LoUIS. As L1 has a data cache,
		 * it can be marked as LoC too.
		 */
		loc = 1;
		clidr = 1 << CLIDR_LOUU_SHIFT;
		clidr |= 1 << CLIDR_LOUIS_SHIFT;
		clidr |= CACHE_TYPE_DATA << CLIDR_CTYPE_SHIFT(1);
	}

	/*
	 * Instruction cache invalidation to the PoU is required so let L1 have
	 * an instruction cache. If L1 already has a data cache, it will be
	 * CACHE_TYPE_SEPARATE.
	 */
	if (!(ctr_el0 & CTR_EL0_DIC))
		clidr |= CACHE_TYPE_INST << CLIDR_CTYPE_SHIFT(1);

	clidr |= loc << CLIDR_LOC_SHIFT;

	/*
	 * Add tag cache unified to data cache. Allocation tags and data are
	 * unified in a cache line so that it looks valid even if there is only
	 * one cache line.
	 */
	if (kvm_has_mte(vcpu->kvm))
		clidr |= 2 << CLIDR_TTYPE_SHIFT(loc);

	__vcpu_sys_reg(vcpu, r->reg) = clidr;
}
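
(Illustration, not text from the patch: on a host with both CTR_EL0.IDC and CTR_EL0.DIC set, loc is 1 and the fabricated CLIDR_EL1 describes a single unified L1 cache with LoC = 1 and LoUU/LoUIS left at 0. With neither bit set, L1 is reported as separate I and D caches and LoUU, LoUIS and LoC all point at level 1.)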

static int set_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
		     u64 val)
{
	u64 ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0);
	u64 idc = !CLIDR_LOC(val) || (!CLIDR_LOUIS(val) && !CLIDR_LOUU(val));

	if ((val & CLIDR_EL1_RES0) || (!(ctr_el0 & CTR_EL0_IDC) && idc))
		return -EINVAL;

	__vcpu_sys_reg(vcpu, rd->reg) = val;

	return 0;
}

static bool access_csselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			  const struct sys_reg_desc *r)
{
@@ -1410,22 +1580,10 @@ static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
		return write_to_read_only(vcpu, p, r);

	csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
	p->regval = get_ccsidr(csselr);
	csselr &= CSSELR_EL1_Level | CSSELR_EL1_InD;
	if (csselr < CSSELR_MAX)
		p->regval = get_ccsidr(vcpu, csselr);

	/*
	 * Guests should not be doing cache operations by set/way at all, and
	 * for this reason, we trap them and attempt to infer the intent, so
	 * that we can flush the entire guest's address space at the appropriate
	 * time.
	 * To prevent this trapping from causing performance problems, let's
	 * expose the geometry of all data and unified caches (which are
	 * guaranteed to be PIPT and thus non-aliasing) as 1 set and 1 way.
	 * [If guests should attempt to infer aliasing properties from the
	 * geometry (which is not permitted by the architecture), they would
	 * only do so for virtually indexed caches.]
	 */
	if (!(csselr & 1)) // data or unified cache
		p->regval &= ~GENMASK(27, 3);
	return true;
}
@ -1446,6 +1604,44 @@ static unsigned int mte_visibility(const struct kvm_vcpu *vcpu,
|
|||
.visibility = mte_visibility, \
|
||||
}
|
||||
|
||||
static unsigned int el2_visibility(const struct kvm_vcpu *vcpu,
|
||||
const struct sys_reg_desc *rd)
|
||||
{
|
||||
if (vcpu_has_nv(vcpu))
|
||||
return 0;
|
||||
|
||||
return REG_HIDDEN;
|
||||
}
|
||||
|
||||
#define EL2_REG(name, acc, rst, v) { \
|
||||
SYS_DESC(SYS_##name), \
|
||||
.access = acc, \
|
||||
.reset = rst, \
|
||||
.reg = name, \
|
||||
.visibility = el2_visibility, \
|
||||
.val = v, \
|
||||
}
|
||||
|
||||
/*
|
||||
* EL{0,1}2 registers are the EL2 view on an EL0 or EL1 register when
|
||||
* HCR_EL2.E2H==1, and only in the sysreg table for convenience of
|
||||
* handling traps. Given that, they are always hidden from userspace.
|
||||
*/
|
||||
static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
|
||||
const struct sys_reg_desc *rd)
|
||||
{
|
||||
return REG_HIDDEN_USER;
|
||||
}
|
||||
|
||||
#define EL12_REG(name, acc, rst, v) { \
|
||||
SYS_DESC(SYS_##name##_EL12), \
|
||||
.access = acc, \
|
||||
.reset = rst, \
|
||||
.reg = name##_EL1, \
|
||||
.val = v, \
|
||||
.visibility = elx2_visibility, \
|
||||
}
|
||||
|
||||
/* sys_reg_desc initialiser for known cpufeature ID registers */
|
||||
#define ID_SANITISED(name) { \
|
||||
SYS_DESC(SYS_##name), \
|
||||
|
@ -1490,6 +1686,42 @@ static unsigned int mte_visibility(const struct kvm_vcpu *vcpu,
|
|||
.visibility = raz_visibility, \
|
||||
}
|
||||
|
||||
static bool access_sp_el1(struct kvm_vcpu *vcpu,
|
||||
struct sys_reg_params *p,
|
||||
const struct sys_reg_desc *r)
|
||||
{
|
||||
if (p->is_write)
|
||||
__vcpu_sys_reg(vcpu, SP_EL1) = p->regval;
|
||||
else
|
||||
p->regval = __vcpu_sys_reg(vcpu, SP_EL1);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool access_elr(struct kvm_vcpu *vcpu,
|
||||
struct sys_reg_params *p,
|
||||
const struct sys_reg_desc *r)
|
||||
{
|
||||
if (p->is_write)
|
||||
vcpu_write_sys_reg(vcpu, p->regval, ELR_EL1);
|
||||
else
|
||||
p->regval = vcpu_read_sys_reg(vcpu, ELR_EL1);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool access_spsr(struct kvm_vcpu *vcpu,
|
||||
struct sys_reg_params *p,
|
||||
const struct sys_reg_desc *r)
|
||||
{
|
||||
if (p->is_write)
|
||||
__vcpu_sys_reg(vcpu, SPSR_EL1) = p->regval;
|
||||
else
|
||||
p->regval = __vcpu_sys_reg(vcpu, SPSR_EL1);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* Architected system registers.
|
||||
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
|
||||
|
@ -1646,6 +1878,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
|
|||
PTRAUTH_KEY(APDB),
|
||||
PTRAUTH_KEY(APGA),
|
||||
|
||||
{ SYS_DESC(SYS_SPSR_EL1), access_spsr},
|
||||
{ SYS_DESC(SYS_ELR_EL1), access_elr},
|
||||
|
||||
{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
|
||||
{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
|
||||
{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
|
||||
|
@ -1693,7 +1928,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
|
|||
{ SYS_DESC(SYS_LORC_EL1), trap_loregion },
|
||||
{ SYS_DESC(SYS_LORID_EL1), trap_loregion },
|
||||
|
||||
{ SYS_DESC(SYS_VBAR_EL1), NULL, reset_val, VBAR_EL1, 0 },
|
||||
{ SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
|
||||
{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
|
||||
|
||||
{ SYS_DESC(SYS_ICC_IAR0_EL1), write_to_read_only },
|
||||
|
@ -1717,7 +1952,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
|
|||
{ SYS_DESC(SYS_CNTKCTL_EL1), NULL, reset_val, CNTKCTL_EL1, 0},
|
||||
|
||||
{ SYS_DESC(SYS_CCSIDR_EL1), access_ccsidr },
|
||||
{ SYS_DESC(SYS_CLIDR_EL1), access_clidr },
|
||||
{ SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1,
|
||||
.set_user = set_clidr },
|
||||
{ SYS_DESC(SYS_CCSIDR2_EL1), undef_access },
|
||||
{ SYS_DESC(SYS_SMIDR_EL1), undef_access },
|
||||
{ SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
|
||||
{ SYS_DESC(SYS_CTR_EL0), access_ctr },
|
||||
|
@ -1913,9 +2150,67 @@ static const struct sys_reg_desc sys_reg_descs[] = {
|
|||
{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
|
||||
.reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
|
||||
|
||||
EL2_REG(VPIDR_EL2, access_rw, reset_unknown, 0),
|
||||
EL2_REG(VMPIDR_EL2, access_rw, reset_unknown, 0),
|
||||
EL2_REG(SCTLR_EL2, access_rw, reset_val, SCTLR_EL2_RES1),
|
||||
EL2_REG(ACTLR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(HCR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(MDCR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(CPTR_EL2, access_rw, reset_val, CPTR_EL2_DEFAULT ),
|
||||
EL2_REG(HSTR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(HACR_EL2, access_rw, reset_val, 0),
|
||||
|
||||
EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(TTBR1_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1),
|
||||
EL2_REG(VTTBR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(VTCR_EL2, access_rw, reset_val, 0),
|
||||
|
||||
{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
|
||||
EL2_REG(SPSR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(ELR_EL2, access_rw, reset_val, 0),
|
||||
{ SYS_DESC(SYS_SP_EL1), access_sp_el1},
|
||||
|
||||
{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
|
||||
EL2_REG(AFSR0_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(AFSR1_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(ESR_EL2, access_rw, reset_val, 0),
|
||||
{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 },
|
||||
|
||||
EL2_REG(FAR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(HPFAR_EL2, access_rw, reset_val, 0),
|
||||
|
||||
EL2_REG(MAIR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(AMAIR_EL2, access_rw, reset_val, 0),
|
||||
|
||||
EL2_REG(VBAR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(RVBAR_EL2, access_rw, reset_val, 0),
|
||||
{ SYS_DESC(SYS_RMR_EL2), trap_undef },
|
||||
|
||||
EL2_REG(CONTEXTIDR_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(TPIDR_EL2, access_rw, reset_val, 0),
|
||||
|
||||
EL2_REG(CNTVOFF_EL2, access_rw, reset_val, 0),
|
||||
EL2_REG(CNTHCTL_EL2, access_rw, reset_val, 0),
|
||||
|
||||
EL12_REG(SCTLR, access_vm_reg, reset_val, 0x00C50078),
|
||||
EL12_REG(CPACR, access_rw, reset_val, 0),
|
||||
EL12_REG(TTBR0, access_vm_reg, reset_unknown, 0),
|
||||
EL12_REG(TTBR1, access_vm_reg, reset_unknown, 0),
|
||||
EL12_REG(TCR, access_vm_reg, reset_val, 0),
|
||||
{ SYS_DESC(SYS_SPSR_EL12), access_spsr},
|
||||
{ SYS_DESC(SYS_ELR_EL12), access_elr},
|
||||
EL12_REG(AFSR0, access_vm_reg, reset_unknown, 0),
|
||||
EL12_REG(AFSR1, access_vm_reg, reset_unknown, 0),
|
||||
EL12_REG(ESR, access_vm_reg, reset_unknown, 0),
|
||||
EL12_REG(FAR, access_vm_reg, reset_unknown, 0),
|
||||
EL12_REG(MAIR, access_vm_reg, reset_unknown, 0),
|
||||
EL12_REG(AMAIR, access_vm_reg, reset_amair_el1, 0),
|
||||
EL12_REG(VBAR, access_rw, reset_val, 0),
|
||||
EL12_REG(CONTEXTIDR, access_vm_reg, reset_val, 0),
|
||||
EL12_REG(CNTKCTL, access_rw, reset_val, 0),
|
||||
|
||||
EL2_REG(SP_EL2, NULL, reset_unknown, 0),
|
||||
};
|
||||
|
||||
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
|
||||
|
@ -2219,6 +2514,10 @@ static const struct sys_reg_desc cp15_regs[] = {
|
|||
|
||||
{ Op1(1), CRn( 0), CRm( 0), Op2(0), access_ccsidr },
|
||||
{ Op1(1), CRn( 0), CRm( 0), Op2(1), access_clidr },
|
||||
|
||||
/* CCSIDR2 */
|
||||
{ Op1(1), CRn( 0), CRm( 0), Op2(2), undef_access },
|
||||
|
||||
{ Op1(2), CRn( 0), CRm( 0), Op2(0), access_csselr, NULL, CSSELR_EL1 },
|
||||
};
|
||||
|
||||
|
@ -2724,7 +3023,6 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id,
|
|||
|
||||
FUNCTION_INVARIANT(midr_el1)
|
||||
FUNCTION_INVARIANT(revidr_el1)
|
||||
FUNCTION_INVARIANT(clidr_el1)
|
||||
FUNCTION_INVARIANT(aidr_el1)
|
||||
|
||||
static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
|
||||
|
@ -2733,10 +3031,9 @@ static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
|
|||
}
|
||||
|
||||
/* ->val is filled in by kvm_sys_reg_table_init() */
|
||||
static struct sys_reg_desc invariant_sys_regs[] = {
|
||||
static struct sys_reg_desc invariant_sys_regs[] __ro_after_init = {
|
||||
{ SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
|
||||
{ SYS_DESC(SYS_REVIDR_EL1), NULL, get_revidr_el1 },
|
||||
{ SYS_DESC(SYS_CLIDR_EL1), NULL, get_clidr_el1 },
|
||||
{ SYS_DESC(SYS_AIDR_EL1), NULL, get_aidr_el1 },
|
||||
{ SYS_DESC(SYS_CTR_EL0), NULL, get_ctr_el0 },
|
||||
};
|
||||
|
@ -2773,33 +3070,7 @@ static int set_invariant_sys_reg(u64 id, u64 __user *uaddr)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static bool is_valid_cache(u32 val)
|
||||
{
|
||||
u32 level, ctype;
|
||||
|
||||
if (val >= CSSELR_MAX)
|
||||
return false;
|
||||
|
||||
/* Bottom bit is Instruction or Data bit. Next 3 bits are level. */
|
||||
level = (val >> 1);
|
||||
ctype = (cache_levels >> (level * 3)) & 7;
|
||||
|
||||
switch (ctype) {
|
||||
case 0: /* No cache */
|
||||
return false;
|
||||
case 1: /* Instruction cache only */
|
||||
return (val & 1);
|
||||
case 2: /* Data cache only */
|
||||
case 4: /* Unified cache */
|
||||
return !(val & 1);
|
||||
case 3: /* Separate instruction and data caches */
|
||||
return true;
|
||||
default: /* Reserved: we can't know instruction or data. */
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
static int demux_c15_get(u64 id, void __user *uaddr)
|
||||
static int demux_c15_get(struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
|
||||
{
|
||||
u32 val;
|
||||
u32 __user *uval = uaddr;
|
||||
|
@ -2815,16 +3086,16 @@ static int demux_c15_get(u64 id, void __user *uaddr)
|
|||
return -ENOENT;
|
||||
val = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
|
||||
>> KVM_REG_ARM_DEMUX_VAL_SHIFT;
|
||||
if (!is_valid_cache(val))
|
||||
if (val >= CSSELR_MAX)
|
||||
return -ENOENT;
|
||||
|
||||
return put_user(get_ccsidr(val), uval);
|
||||
return put_user(get_ccsidr(vcpu, val), uval);
|
||||
default:
|
||||
return -ENOENT;
|
||||
}
|
||||
}
|
||||
|
||||
static int demux_c15_set(u64 id, void __user *uaddr)
|
||||
static int demux_c15_set(struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
|
||||
{
|
||||
u32 val, newval;
|
||||
u32 __user *uval = uaddr;
|
||||
|
@ -2840,16 +3111,13 @@ static int demux_c15_set(u64 id, void __user *uaddr)
|
|||
return -ENOENT;
|
||||
val = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
|
||||
>> KVM_REG_ARM_DEMUX_VAL_SHIFT;
|
||||
if (!is_valid_cache(val))
|
||||
if (val >= CSSELR_MAX)
|
||||
return -ENOENT;
|
||||
|
||||
if (get_user(newval, uval))
|
||||
return -EFAULT;
|
||||
|
||||
/* This is also invariant: you can't change it. */
|
||||
if (newval != get_ccsidr(val))
|
||||
return -EINVAL;
|
||||
return 0;
|
||||
return set_ccsidr(vcpu, val, newval);
|
||||
default:
|
||||
return -ENOENT;
|
||||
}
|
||||
|
@ -2864,7 +3132,7 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
|
|||
int ret;
|
||||
|
||||
r = id_to_sys_reg_desc(vcpu, reg->id, table, num);
|
||||
if (!r)
|
||||
if (!r || sysreg_hidden_user(vcpu, r))
|
||||
return -ENOENT;
|
||||
|
||||
if (r->get_user) {
|
||||
|
@ -2886,7 +3154,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
|
|||
int err;
|
||||
|
||||
if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
|
||||
return demux_c15_get(reg->id, uaddr);
|
||||
return demux_c15_get(vcpu, reg->id, uaddr);
|
||||
|
||||
err = get_invariant_sys_reg(reg->id, uaddr);
|
||||
if (err != -ENOENT)
|
||||
|
@ -2908,7 +3176,7 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
|
|||
return -EFAULT;
|
||||
|
||||
r = id_to_sys_reg_desc(vcpu, reg->id, table, num);
|
||||
if (!r)
|
||||
if (!r || sysreg_hidden_user(vcpu, r))
|
||||
return -ENOENT;
|
||||
|
||||
if (sysreg_user_write_ignore(vcpu, r))
|
||||
|
@ -2930,7 +3198,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
|
|||
int err;
|
||||
|
||||
if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
|
||||
return demux_c15_set(reg->id, uaddr);
|
||||
return demux_c15_set(vcpu, reg->id, uaddr);
|
||||
|
||||
err = set_invariant_sys_reg(reg->id, uaddr);
|
||||
if (err != -ENOENT)
|
||||
|
@ -2942,13 +3210,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
|
|||
|
||||
static unsigned int num_demux_regs(void)
|
||||
{
|
||||
unsigned int i, count = 0;
|
||||
|
||||
for (i = 0; i < CSSELR_MAX; i++)
|
||||
if (is_valid_cache(i))
|
||||
count++;
|
||||
|
||||
return count;
|
||||
return CSSELR_MAX;
|
||||
}
|
||||
|
||||
static int write_demux_regids(u64 __user *uindices)
|
||||
|
@ -2958,8 +3220,6 @@ static int write_demux_regids(u64 __user *uindices)
|
|||
|
||||
val |= KVM_REG_ARM_DEMUX_ID_CCSIDR;
|
||||
for (i = 0; i < CSSELR_MAX; i++) {
|
||||
if (!is_valid_cache(i))
|
||||
continue;
|
||||
if (put_user(val | i, uindices))
|
||||
return -EFAULT;
|
||||
uindices++;
|
||||
|
@ -3002,7 +3262,7 @@ static int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
|
|||
if (!(rd->reg || rd->get_user))
|
||||
return 0;
|
||||
|
||||
if (sysreg_hidden(vcpu, rd))
|
||||
if (sysreg_hidden_user(vcpu, rd))
|
||||
return 0;
|
||||
|
||||
if (!copy_reg_to_user(rd, uind))
|
||||
|
@ -3057,11 +3317,10 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
|
|||
return write_demux_regids(uindices);
|
||||
}
|
||||
|
||||
int kvm_sys_reg_table_init(void)
|
||||
int __init kvm_sys_reg_table_init(void)
|
||||
{
|
||||
bool valid = true;
|
||||
unsigned int i;
|
||||
struct sys_reg_desc clidr;
|
||||
|
||||
/* Make sure tables are unique and in order. */
|
||||
valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
|
||||
|
@ -3078,23 +3337,5 @@ int kvm_sys_reg_table_init(void)
|
|||
for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
|
||||
invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
|
||||
|
||||
/*
|
||||
* CLIDR format is awkward, so clean it up. See ARM B4.1.20:
|
||||
*
|
||||
* If software reads the Cache Type fields from Ctype1
|
||||
* upwards, once it has seen a value of 0b000, no caches
|
||||
* exist at further-out levels of the hierarchy. So, for
|
||||
* example, if Ctype3 is the first Cache Type field with a
|
||||
* value of 0b000, the values of Ctype4 to Ctype7 must be
|
||||
* ignored.
|
||||
*/
|
||||
get_clidr_el1(NULL, &clidr); /* Ugly... */
|
||||
cache_levels = clidr.val;
|
||||
for (i = 0; i < 7; i++)
|
||||
if (((cache_levels >> (i*3)) & 7) == 0)
|
||||
break;
|
||||
/* Clear all higher bits. */
|
||||
cache_levels &= (1 << (i*3))-1;
|
||||
|
||||
return 0;
|
||||
}
@@ -85,8 +85,9 @@ struct sys_reg_desc {
};

#define REG_HIDDEN		(1 << 0) /* hidden from userspace and guest */
#define REG_RAZ			(1 << 1) /* RAZ from userspace and guest */
#define REG_USER_WI		(1 << 2) /* WI from userspace only */
#define REG_HIDDEN_USER		(1 << 1) /* hidden from userspace only */
#define REG_RAZ			(1 << 2) /* RAZ from userspace and guest */
#define REG_USER_WI		(1 << 3) /* WI from userspace only */

static __printf(2, 3)
inline void print_sys_reg_msg(const struct sys_reg_params *p,

@@ -152,6 +153,15 @@ static inline bool sysreg_hidden(const struct kvm_vcpu *vcpu,
	return sysreg_visibility(vcpu, r) & REG_HIDDEN;
}

static inline bool sysreg_hidden_user(const struct kvm_vcpu *vcpu,
				      const struct sys_reg_desc *r)
{
	if (likely(!r->visibility))
		return false;

	return r->visibility(vcpu, r) & (REG_HIDDEN | REG_HIDDEN_USER);
}
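
(Illustration only, not part of the patch; the callback and register are made up. With the new flag, a visibility hook can keep a register out of the KVM_{GET,SET}_ONE_REG interface while still letting the guest reach it:)

	static unsigned int demo_visibility(const struct kvm_vcpu *vcpu,
					    const struct sys_reg_desc *rd)
	{
		/* Invisible to userspace, handled normally for the guest. */
		return REG_HIDDEN_USER;
	}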

static inline bool sysreg_visible_as_raz(const struct kvm_vcpu *vcpu,
					 const struct sys_reg_desc *r)
{
|
|
@ -2,6 +2,7 @@
|
|||
#if !defined(_TRACE_ARM_ARM64_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
|
||||
#define _TRACE_ARM_ARM64_KVM_H
|
||||
|
||||
#include <asm/kvm_emulate.h>
|
||||
#include <kvm/arm_arch_timer.h>
|
||||
#include <linux/tracepoint.h>
|
||||
|
||||
|
@ -301,6 +302,64 @@ TRACE_EVENT(kvm_timer_emulate,
|
|||
__entry->timer_idx, __entry->should_fire)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvm_nested_eret,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu, unsigned long elr_el2,
|
||||
unsigned long spsr_el2),
|
||||
TP_ARGS(vcpu, elr_el2, spsr_el2),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(struct kvm_vcpu *, vcpu)
|
||||
__field(unsigned long, elr_el2)
|
||||
__field(unsigned long, spsr_el2)
|
||||
__field(unsigned long, target_mode)
|
||||
__field(unsigned long, hcr_el2)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu = vcpu;
|
||||
__entry->elr_el2 = elr_el2;
|
||||
__entry->spsr_el2 = spsr_el2;
|
||||
__entry->target_mode = spsr_el2 & (PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
__entry->hcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
|
||||
),
|
||||
|
||||
TP_printk("elr_el2: 0x%lx spsr_el2: 0x%08lx (M: %s) hcr_el2: %lx",
|
||||
__entry->elr_el2, __entry->spsr_el2,
|
||||
__print_symbolic(__entry->target_mode, kvm_mode_names),
|
||||
__entry->hcr_el2)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvm_inject_nested_exception,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu, u64 esr_el2, int type),
|
||||
TP_ARGS(vcpu, esr_el2, type),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(struct kvm_vcpu *, vcpu)
|
||||
__field(unsigned long, esr_el2)
|
||||
__field(int, type)
|
||||
__field(unsigned long, spsr_el2)
|
||||
__field(unsigned long, pc)
|
||||
__field(unsigned long, source_mode)
|
||||
__field(unsigned long, hcr_el2)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu = vcpu;
|
||||
__entry->esr_el2 = esr_el2;
|
||||
__entry->type = type;
|
||||
__entry->spsr_el2 = *vcpu_cpsr(vcpu);
|
||||
__entry->pc = *vcpu_pc(vcpu);
|
||||
__entry->source_mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
|
||||
__entry->hcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
|
||||
),
|
||||
|
||||
TP_printk("%s: esr_el2 0x%lx elr_el2: 0x%lx spsr_el2: 0x%08lx (M: %s) hcr_el2: %lx",
|
||||
__print_symbolic(__entry->type, kvm_exception_type_names),
|
||||
__entry->esr_el2, __entry->pc, __entry->spsr_el2,
|
||||
__print_symbolic(__entry->source_mode, kvm_mode_names),
|
||||
__entry->hcr_el2)
|
||||
);
|
||||
|
||||
#endif /* _TRACE_ARM_ARM64_KVM_H */
|
||||
|
||||
#undef TRACE_INCLUDE_PATH
|
||||
|
|
|
@ -465,17 +465,15 @@ out:
|
|||
|
||||
/* GENERIC PROBE */
|
||||
|
||||
static int vgic_init_cpu_starting(unsigned int cpu)
|
||||
void kvm_vgic_cpu_up(void)
|
||||
{
|
||||
enable_percpu_irq(kvm_vgic_global_state.maint_irq, 0);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
static int vgic_init_cpu_dying(unsigned int cpu)
|
||||
void kvm_vgic_cpu_down(void)
|
||||
{
|
||||
disable_percpu_irq(kvm_vgic_global_state.maint_irq);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static irqreturn_t vgic_maintenance_handler(int irq, void *data)
|
||||
|
@ -572,7 +570,7 @@ int kvm_vgic_hyp_init(void)
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (!has_mask)
|
||||
if (!has_mask && !kvm_vgic_global_state.maint_irq)
|
||||
return 0;
|
||||
|
||||
ret = request_percpu_irq(kvm_vgic_global_state.maint_irq,
|
||||
|
@ -584,19 +582,6 @@ int kvm_vgic_hyp_init(void)
|
|||
return ret;
|
||||
}
|
||||
|
||||
ret = cpuhp_setup_state(CPUHP_AP_KVM_ARM_VGIC_INIT_STARTING,
|
||||
"kvm/arm/vgic:starting",
|
||||
vgic_init_cpu_starting, vgic_init_cpu_dying);
|
||||
if (ret) {
|
||||
kvm_err("Cannot register vgic CPU notifier\n");
|
||||
goto out_free_irq;
|
||||
}
|
||||
|
||||
kvm_info("vgic interrupt IRQ%d\n", kvm_vgic_global_state.maint_irq);
|
||||
return 0;
|
||||
|
||||
out_free_irq:
|
||||
free_percpu_irq(kvm_vgic_global_state.maint_irq,
|
||||
kvm_get_running_vcpus());
|
||||
return ret;
|
||||
}
|
||||
|
|
|
@@ -473,9 +473,10 @@ int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
 * active state can be overwritten when the VCPU's state is synced coming back
 * from the guest.
 *
 * For shared interrupts as well as GICv3 private interrupts, we have to
 * stop all the VCPUs because interrupts can be migrated while we don't hold
 * the IRQ locks and we don't want to be chasing moving targets.
 * For shared interrupts as well as GICv3 private interrupts accessed from the
 * non-owning CPU, we have to stop all the VCPUs because interrupts can be
 * migrated while we don't hold the IRQ locks and we don't want to be chasing
 * moving targets.
 *
 * For GICv2 private interrupts we don't have to do anything because
 * userspace accesses to the VGIC state already require all VCPUs to be
@@ -484,7 +485,8 @@ int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
 */
static void vgic_access_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
{
	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
	if ((vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 &&
	     vcpu != kvm_get_running_vcpu()) ||
	    intid >= VGIC_NR_PRIVATE_IRQS)
		kvm_arm_halt_guest(vcpu->kvm);
}
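
(Illustration, not text from the patch: with this change, a vCPU poking the active state of one of its own private interrupts, i.e. intid < VGIC_NR_PRIVATE_IRQS while it is the running vCPU, no longer forces a VM-wide halt; userspace or another vCPU touching someone else's GICv3 private interrupt, or any shared interrupt, still does.)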

@@ -492,7 +494,8 @@ static void vgic_access_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
/* See vgic_access_active_prepare */
static void vgic_access_active_finish(struct kvm_vcpu *vcpu, u32 intid)
{
	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
	if ((vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 &&
	     vcpu != kvm_get_running_vcpu()) ||
	    intid >= VGIC_NR_PRIVATE_IRQS)
		kvm_arm_resume_guest(vcpu->kvm);
}
|
||||
|
|
|
@ -3,6 +3,7 @@
|
|||
#include <linux/irqchip/arm-gic-v3.h>
|
||||
#include <linux/irq.h>
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/kstrtox.h>
|
||||
#include <linux/kvm.h>
|
||||
#include <linux/kvm_host.h>
|
||||
#include <kvm/arm_vgic.h>
|
||||
|
@ -584,25 +585,25 @@ DEFINE_STATIC_KEY_FALSE(vgic_v3_cpuif_trap);
|
|||
|
||||
static int __init early_group0_trap_cfg(char *buf)
|
||||
{
|
||||
return strtobool(buf, &group0_trap);
|
||||
return kstrtobool(buf, &group0_trap);
|
||||
}
|
||||
early_param("kvm-arm.vgic_v3_group0_trap", early_group0_trap_cfg);
|
||||
|
||||
static int __init early_group1_trap_cfg(char *buf)
|
||||
{
|
||||
return strtobool(buf, &group1_trap);
|
||||
return kstrtobool(buf, &group1_trap);
|
||||
}
|
||||
early_param("kvm-arm.vgic_v3_group1_trap", early_group1_trap_cfg);
|
||||
|
||||
static int __init early_common_trap_cfg(char *buf)
|
||||
{
|
||||
return strtobool(buf, &common_trap);
|
||||
return kstrtobool(buf, &common_trap);
|
||||
}
|
||||
early_param("kvm-arm.vgic_v3_common_trap", early_common_trap_cfg);
|
||||
|
||||
static int __init early_gicv4_enable(char *buf)
|
||||
{
|
||||
return strtobool(buf, &gicv4_enable);
|
||||
return kstrtobool(buf, &gicv4_enable);
|
||||
}
|
||||
early_param("kvm-arm.vgic_v4_enable", early_gicv4_enable);
|
||||
|
||||
|
|
|
@ -16,7 +16,7 @@
|
|||
#include <asm/kvm_asm.h>
|
||||
#include <asm/kvm_mmu.h>
|
||||
|
||||
unsigned int kvm_arm_vmid_bits;
|
||||
unsigned int __ro_after_init kvm_arm_vmid_bits;
|
||||
static DEFINE_RAW_SPINLOCK(cpu_vmid_lock);
|
||||
|
||||
static atomic64_t vmid_generation;
|
||||
|
@ -172,7 +172,7 @@ void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
|
|||
/*
|
||||
* Initialize the VMID allocator
|
||||
*/
|
||||
int kvm_arm_vmid_alloc_init(void)
|
||||
int __init kvm_arm_vmid_alloc_init(void)
|
||||
{
|
||||
kvm_arm_vmid_bits = kvm_get_vmid_bits();
|
||||
|
||||
|
@ -190,7 +190,7 @@ int kvm_arm_vmid_alloc_init(void)
|
|||
return 0;
|
||||
}
|
||||
|
||||
void kvm_arm_vmid_alloc_free(void)
|
||||
void __init kvm_arm_vmid_alloc_free(void)
|
||||
{
|
||||
kfree(vmid_map);
|
||||
}
|
||||
|
|
|
@ -33,6 +33,7 @@ HAS_GIC_PRIO_MASKING
|
|||
HAS_GIC_PRIO_RELAXED_SYNC
|
||||
HAS_LDAPR
|
||||
HAS_LSE_ATOMICS
|
||||
HAS_NESTED_VIRT
|
||||
HAS_NO_FPSIMD
|
||||
HAS_NO_HW_PREFETCH
|
||||
HAS_PAN
|
||||
|
|
|
@ -103,6 +103,7 @@ END {
|
|||
|
||||
res0 = "UL(0)"
|
||||
res1 = "UL(0)"
|
||||
unkn = "UL(0)"
|
||||
|
||||
next_bit = 63
|
||||
|
||||
|
@ -117,11 +118,13 @@ END {
|
|||
|
||||
define(reg "_RES0", "(" res0 ")")
|
||||
define(reg "_RES1", "(" res1 ")")
|
||||
define(reg "_UNKN", "(" unkn ")")
|
||||
print ""
|
||||
|
||||
reg = null
|
||||
res0 = null
|
||||
res1 = null
|
||||
unkn = null
|
||||
|
||||
next
|
||||
}
|
||||
|
@ -139,6 +142,7 @@ END {
|
|||
|
||||
res0 = "UL(0)"
|
||||
res1 = "UL(0)"
|
||||
unkn = "UL(0)"
|
||||
|
||||
define("REG_" reg, "S" op0 "_" op1 "_C" crn "_C" crm "_" op2)
|
||||
define("SYS_" reg, "sys_reg(" op0 ", " op1 ", " crn ", " crm ", " op2 ")")
|
||||
|
@ -166,7 +170,9 @@ END {
|
|||
define(reg "_RES0", "(" res0 ")")
|
||||
if (res1 != null)
|
||||
define(reg "_RES1", "(" res1 ")")
|
||||
if (res0 != null || res1 != null)
|
||||
if (unkn != null)
|
||||
define(reg "_UNKN", "(" unkn ")")
|
||||
if (res0 != null || res1 != null || unkn != null)
|
||||
print ""
|
||||
|
||||
reg = null
|
||||
|
@ -177,6 +183,7 @@ END {
|
|||
op2 = null
|
||||
res0 = null
|
||||
res1 = null
|
||||
unkn = null
|
||||
|
||||
next
|
||||
}
|
||||
|
@ -195,6 +202,7 @@ END {
|
|||
next_bit = 0
|
||||
res0 = null
|
||||
res1 = null
|
||||
unkn = null
|
||||
|
||||
next
|
||||
}
|
||||
|
@ -220,6 +228,16 @@ END {
|
|||
next
|
||||
}
|
||||
|
||||
/^Unkn/ && (block == "Sysreg" || block == "SysregFields") {
|
||||
expect_fields(2)
|
||||
parse_bitdef(reg, "UNKN", $2)
|
||||
field = "UNKN_" msb "_" lsb
|
||||
|
||||
unkn = unkn " | GENMASK_ULL(" msb ", " lsb ")"
|
||||
|
||||
next
|
||||
}
|
||||
|
||||
/^Field/ && (block == "Sysreg" || block == "SysregFields") {
|
||||
expect_fields(3)
|
||||
field = $3
|
||||
|
|
|
@ -15,6 +15,8 @@
|
|||
|
||||
# Res1 <msb>[:<lsb>]
|
||||
|
||||
# Unkn <msb>[:<lsb>]
|
||||
|
||||
# Field <msb>[:<lsb>] <name>
|
||||
|
||||
# Enum <msb>[:<lsb>] <name>
|
||||
|
@ -1779,6 +1781,16 @@ Sysreg SCXTNUM_EL1 3 0 13 0 7
|
|||
Field 63:0 SoftwareContextNumber
|
||||
EndSysreg
|
||||
|
||||
# The bit layout for CCSIDR_EL1 depends on whether FEAT_CCIDX is implemented.
|
||||
# The following is for case when FEAT_CCIDX is not implemented.
|
||||
Sysreg CCSIDR_EL1 3 1 0 0 0
|
||||
Res0 63:32
|
||||
Unkn 31:28
|
||||
Field 27:13 NumSets
|
||||
Field 12:3 Associativity
|
||||
Field 2:0 LineSize
|
||||
EndSysreg
|
||||
|
||||
Sysreg CLIDR_EL1 3 1 0 0 1
|
||||
Res0 63:47
|
||||
Field 46:33 Ttypen
|
||||
|
@ -1795,6 +1807,11 @@ Field 5:3 Ctype2
|
|||
Field 2:0 Ctype1
|
||||
EndSysreg
|
||||
|
||||
Sysreg CCSIDR2_EL1 3 1 0 0 2
|
||||
Res0 63:24
|
||||
Field 23:0 NumSets
|
||||
EndSysreg
|
||||
|
||||
Sysreg GMID_EL1 3 1 0 0 4
|
||||
Res0 63:4
|
||||
Field 3:0 BS
|
||||
|
|
|
@ -758,7 +758,7 @@ struct kvm_mips_callbacks {
|
|||
void (*vcpu_reenter)(struct kvm_vcpu *vcpu);
|
||||
};
|
||||
extern struct kvm_mips_callbacks *kvm_mips_callbacks;
|
||||
int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
|
||||
int kvm_mips_emulation_init(void);
|
||||
|
||||
/* Debug: dump vcpu state */
|
||||
int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
|
||||
|
@ -888,7 +888,6 @@ extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
|
|||
extern int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
|
||||
struct kvm_mips_interrupt *irq);
|
||||
|
||||
static inline void kvm_arch_hardware_unsetup(void) {}
|
||||
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
|
||||
static inline void kvm_arch_free_memslot(struct kvm *kvm,
|
||||
struct kvm_memory_slot *slot) {}
|
||||
|
|
|
@ -28,6 +28,7 @@ config KVM
|
|||
select MMU_NOTIFIER
|
||||
select SRCU
|
||||
select INTERVAL_TREE
|
||||
select KVM_GENERIC_HARDWARE_ENABLING
|
||||
help
|
||||
Support for hosting Guest kernels.
|
||||
|
||||
|
|
|
@ -17,4 +17,4 @@ kvm-$(CONFIG_CPU_LOONGSON64) += loongson_ipi.o
|
|||
|
||||
kvm-y += vz.o
|
||||
obj-$(CONFIG_KVM) += kvm.o
|
||||
obj-y += callback.o tlb.o
|
||||
obj-y += tlb.o
|
||||
|
|
|
@ -1,14 +0,0 @@
|
|||
/*
|
||||
* This file is subject to the terms and conditions of the GNU General Public
|
||||
* License. See the file "COPYING" in the main directory of this archive
|
||||
* for more details.
|
||||
*
|
||||
* Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved.
|
||||
* Authors: Yann Le Du <ledu@kymasys.com>
|
||||
*/
|
||||
|
||||
#include <linux/export.h>
|
||||
#include <linux/kvm_host.h>
|
||||
|
||||
struct kvm_mips_callbacks *kvm_mips_callbacks;
|
||||
EXPORT_SYMBOL_GPL(kvm_mips_callbacks);
|
|
@ -135,16 +135,6 @@ void kvm_arch_hardware_disable(void)
|
|||
kvm_mips_callbacks->hardware_disable();
|
||||
}
|
||||
|
||||
int kvm_arch_hardware_setup(void *opaque)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_arch_check_processor_compat(void *opaque)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
extern void kvm_init_loongson_ipi(struct kvm *kvm);
|
||||
|
||||
int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
|
||||
|
@ -1015,21 +1005,6 @@ long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
|
|||
return r;
|
||||
}
|
||||
|
||||
int kvm_arch_init(void *opaque)
|
||||
{
|
||||
if (kvm_mips_callbacks) {
|
||||
kvm_err("kvm: module already exists\n");
|
||||
return -EEXIST;
|
||||
}
|
||||
|
||||
return kvm_mips_emulation_init(&kvm_mips_callbacks);
|
||||
}
|
||||
|
||||
void kvm_arch_exit(void)
|
||||
{
|
||||
kvm_mips_callbacks = NULL;
|
||||
}
|
||||
|
||||
int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
|
||||
struct kvm_sregs *sregs)
|
||||
{
|
||||
|
@ -1646,16 +1621,21 @@ static int __init kvm_mips_init(void)
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
|
||||
|
||||
ret = kvm_mips_emulation_init();
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
||||
if (boot_cpu_type() == CPU_LOONGSON64)
|
||||
kvm_priority_to_irq = kvm_loongson3_priority_to_irq;
|
||||
|
||||
register_die_notifier(&kvm_mips_csr_die_notifier);
|
||||
|
||||
ret = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
|
||||
if (ret) {
|
||||
unregister_die_notifier(&kvm_mips_csr_die_notifier);
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -3304,7 +3304,10 @@ static struct kvm_mips_callbacks kvm_vz_callbacks = {
|
|||
.vcpu_reenter = kvm_vz_vcpu_reenter,
|
||||
};
|
||||
|
||||
int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks)
|
||||
/* FIXME: Get rid of the callbacks now that trap-and-emulate is gone. */
|
||||
struct kvm_mips_callbacks *kvm_mips_callbacks = &kvm_vz_callbacks;
|
||||
|
||||
int kvm_mips_emulation_init(void)
|
||||
{
|
||||
if (!cpu_has_vz)
|
||||
return -ENODEV;
|
||||
|
@ -3318,7 +3321,5 @@ int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks)
|
|||
return -ENODEV;
|
||||
|
||||
pr_info("Starting KVM with MIPS VZ extensions\n");
|
||||
|
||||
*install_callbacks = &kvm_vz_callbacks;
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -876,13 +876,10 @@ struct kvm_vcpu_arch {
|
|||
#define __KVM_HAVE_ARCH_WQP
|
||||
#define __KVM_HAVE_CREATE_DEVICE
|
||||
|
||||
static inline void kvm_arch_hardware_disable(void) {}
|
||||
static inline void kvm_arch_hardware_unsetup(void) {}
|
||||
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
|
||||
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
|
||||
static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
|
||||
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
|
||||
static inline void kvm_arch_exit(void) {}
|
||||
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
|
||||
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
|
||||
|
||||
|
|
|
@ -118,7 +118,6 @@ extern int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr,
|
|||
extern int kvmppc_core_vcpu_create(struct kvm_vcpu *vcpu);
|
||||
extern void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu);
|
||||
extern int kvmppc_core_vcpu_setup(struct kvm_vcpu *vcpu);
|
||||
extern int kvmppc_core_check_processor_compat(void);
|
||||
extern int kvmppc_core_vcpu_translate(struct kvm_vcpu *vcpu,
|
||||
struct kvm_translation *tr);
|
||||
|
||||
|
|
|
@ -999,16 +999,6 @@ int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(kvmppc_h_logical_ci_store);
|
||||
|
||||
int kvmppc_core_check_processor_compat(void)
|
||||
{
|
||||
/*
|
||||
* We always return 0 for book3s. We check
|
||||
* for compatibility while loading the HV
|
||||
* or PR module
|
||||
*/
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hcall)
|
||||
{
|
||||
return kvm->arch.kvm_ops->hcall_implemented(hcall);
|
||||
|
@ -1062,7 +1052,7 @@ static int kvmppc_book3s_init(void)
|
|||
{
|
||||
int r;
|
||||
|
||||
r = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
|
||||
r = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
|
||||
if (r)
|
||||
return r;
|
||||
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
|
||||
|
|
|
@ -1210,7 +1210,7 @@ int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigned int exit_nr)
|
|||
|
||||
/*
|
||||
* On cores with Vector category, KVM is loaded only if CONFIG_ALTIVEC,
|
||||
* see kvmppc_core_check_processor_compat().
|
||||
* see kvmppc_e500mc_check_processor_compat().
|
||||
*/
|
||||
#ifdef CONFIG_ALTIVEC
|
||||
case BOOKE_INTERRUPT_ALTIVEC_UNAVAIL:
|
||||
|
|
|
@ -314,7 +314,7 @@ static void kvmppc_core_vcpu_put_e500(struct kvm_vcpu *vcpu)
|
|||
kvmppc_booke_vcpu_put(vcpu);
|
||||
}
|
||||
|
||||
int kvmppc_core_check_processor_compat(void)
|
||||
static int kvmppc_e500_check_processor_compat(void)
|
||||
{
|
||||
int r;
|
||||
|
||||
|
@ -507,7 +507,7 @@ static int __init kvmppc_e500_init(void)
|
|||
unsigned long handler_len;
|
||||
unsigned long max_ivor = 0;
|
||||
|
||||
r = kvmppc_core_check_processor_compat();
|
||||
r = kvmppc_e500_check_processor_compat();
|
||||
if (r)
|
||||
goto err_out;
|
||||
|
||||
|
@ -531,7 +531,7 @@ static int __init kvmppc_e500_init(void)
|
|||
flush_icache_range(kvmppc_booke_handlers, kvmppc_booke_handlers +
|
||||
ivor[max_ivor] + handler_len);
|
||||
|
||||
r = kvm_init(NULL, sizeof(struct kvmppc_vcpu_e500), 0, THIS_MODULE);
|
||||
r = kvm_init(sizeof(struct kvmppc_vcpu_e500), 0, THIS_MODULE);
|
||||
if (r)
|
||||
goto err_out;
|
||||
kvm_ops_e500.owner = THIS_MODULE;
|
||||
|
|
|
@ -168,7 +168,7 @@ static void kvmppc_core_vcpu_put_e500mc(struct kvm_vcpu *vcpu)
|
|||
kvmppc_booke_vcpu_put(vcpu);
|
||||
}
|
||||
|
||||
int kvmppc_core_check_processor_compat(void)
|
||||
int kvmppc_e500mc_check_processor_compat(void)
|
||||
{
|
||||
int r;
|
||||
|
||||
|
@ -388,6 +388,10 @@ static int __init kvmppc_e500mc_init(void)
|
|||
{
|
||||
int r;
|
||||
|
||||
r = kvmppc_e500mc_check_processor_compat();
|
||||
if (r)
|
||||
goto err_out;
|
||||
|
||||
r = kvmppc_booke_init();
|
||||
if (r)
|
||||
goto err_out;
|
||||
|
@ -400,7 +404,7 @@ static int __init kvmppc_e500mc_init(void)
|
|||
*/
|
||||
kvmppc_init_lpid(KVMPPC_NR_LPIDS/threads_per_core);
|
||||
|
||||
r = kvm_init(NULL, sizeof(struct kvmppc_vcpu_e500), 0, THIS_MODULE);
|
||||
r = kvm_init(sizeof(struct kvmppc_vcpu_e500), 0, THIS_MODULE);
|
||||
if (r)
|
||||
goto err_out;
|
||||
kvm_ops_e500mc.owner = THIS_MODULE;
|
||||
|
|
|
@ -435,21 +435,6 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(kvmppc_ld);
|
||||
|
||||
int kvm_arch_hardware_enable(void)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_arch_hardware_setup(void *opaque)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_arch_check_processor_compat(void *opaque)
|
||||
{
|
||||
return kvmppc_core_check_processor_compat();
|
||||
}
|
||||
|
||||
int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
|
||||
{
|
||||
struct kvmppc_ops *kvm_ops = NULL;
|
||||
|
@ -2544,11 +2529,6 @@ void kvmppc_init_lpid(unsigned long nr_lpids_param)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(kvmppc_init_lpid);
|
||||
|
||||
int kvm_arch_init(void *opaque)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_ppc_instr);
|
||||
|
||||
void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu, struct dentry *debugfs_dentry)
|
||||
|
|
|
@ -18,6 +18,7 @@
|
|||
#include <asm/kvm_vcpu_insn.h>
|
||||
#include <asm/kvm_vcpu_sbi.h>
|
||||
#include <asm/kvm_vcpu_timer.h>
|
||||
#include <asm/kvm_vcpu_pmu.h>
|
||||
|
||||
#define KVM_MAX_VCPUS 1024
|
||||
|
||||
|
@ -228,9 +229,11 @@ struct kvm_vcpu_arch {
|
|||
|
||||
/* Don't run the VCPU (blocked) */
|
||||
bool pause;
|
||||
|
||||
/* Performance monitoring context */
|
||||
struct kvm_pmu pmu_context;
|
||||
};
|
||||
|
||||
static inline void kvm_arch_hardware_unsetup(void) {}
|
||||
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
|
||||
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
|
||||
|
||||
|
@ -297,11 +300,11 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
|
|||
int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
|
||||
void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
|
||||
void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
|
||||
void kvm_riscv_gstage_mode_detect(void);
|
||||
unsigned long kvm_riscv_gstage_mode(void);
|
||||
void __init kvm_riscv_gstage_mode_detect(void);
|
||||
unsigned long __init kvm_riscv_gstage_mode(void);
|
||||
int kvm_riscv_gstage_gpa_bits(void);
|
||||
|
||||
void kvm_riscv_gstage_vmid_detect(void);
|
||||
void __init kvm_riscv_gstage_vmid_detect(void);
|
||||
unsigned long kvm_riscv_gstage_vmid_bits(void);
|
||||
int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
|
||||
bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
|
||||
|
|
|
@ -0,0 +1,107 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0-only */
|
||||
/*
|
||||
* Copyright (c) 2023 Rivos Inc
|
||||
*
|
||||
* Authors:
|
||||
* Atish Patra <atishp@rivosinc.com>
|
||||
*/
|
||||
|
||||
#ifndef __KVM_VCPU_RISCV_PMU_H
|
||||
#define __KVM_VCPU_RISCV_PMU_H
|
||||
|
||||
#include <linux/perf/riscv_pmu.h>
|
||||
#include <asm/sbi.h>
|
||||
|
||||
#ifdef CONFIG_RISCV_PMU_SBI
|
||||
#define RISCV_KVM_MAX_FW_CTRS 32
|
||||
#define RISCV_KVM_MAX_HW_CTRS 32
|
||||
#define RISCV_KVM_MAX_COUNTERS (RISCV_KVM_MAX_HW_CTRS + RISCV_KVM_MAX_FW_CTRS)
|
||||
static_assert(RISCV_KVM_MAX_COUNTERS <= 64);
|
||||
|
||||
struct kvm_fw_event {
|
||||
/* Current value of the event */
|
||||
unsigned long value;
|
||||
|
||||
/* Event monitoring status */
|
||||
bool started;
|
||||
};
|
||||
|
||||
/* Per virtual pmu counter data */
|
||||
struct kvm_pmc {
|
||||
u8 idx;
|
||||
struct perf_event *perf_event;
|
||||
u64 counter_val;
|
||||
union sbi_pmu_ctr_info cinfo;
|
||||
/* Event monitoring status */
|
||||
bool started;
|
||||
/* Monitoring event ID */
|
||||
unsigned long event_idx;
|
||||
};
|
||||
|
||||
/* PMU data structure per vcpu */
|
||||
struct kvm_pmu {
|
||||
struct kvm_pmc pmc[RISCV_KVM_MAX_COUNTERS];
|
||||
struct kvm_fw_event fw_event[RISCV_KVM_MAX_FW_CTRS];
|
||||
/* Number of the virtual firmware counters available */
|
||||
int num_fw_ctrs;
|
||||
/* Number of the virtual hardware counters available */
|
||||
int num_hw_ctrs;
|
||||
/* A flag to indicate that pmu initialization is done */
|
||||
bool init_done;
|
||||
/* Bit map of all the virtual counter used */
|
||||
DECLARE_BITMAP(pmc_in_use, RISCV_KVM_MAX_COUNTERS);
|
||||
};
|
||||
|
||||
#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context)
|
||||
#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu_context))
|
||||
|
||||
#if defined(CONFIG_32BIT)
|
||||
#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
|
||||
{.base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \
|
||||
{.base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
|
||||
#else
|
||||
#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
|
||||
{.base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
|
||||
#endif
|
||||
|
||||
int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid);
|
||||
int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
|
||||
unsigned long *val, unsigned long new_val,
|
||||
unsigned long wr_mask);
|
||||
|
||||
int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata);
|
||||
int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
|
||||
struct kvm_vcpu_sbi_return *retdata);
|
||||
int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
|
||||
unsigned long ctr_mask, unsigned long flags, u64 ival,
|
||||
struct kvm_vcpu_sbi_return *retdata);
|
||||
int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
|
||||
unsigned long ctr_mask, unsigned long flags,
|
||||
struct kvm_vcpu_sbi_return *retdata);
|
||||
int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base,
|
||||
unsigned long ctr_mask, unsigned long flags,
|
||||
unsigned long eidx, u64 evtdata,
|
||||
struct kvm_vcpu_sbi_return *retdata);
|
||||
int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
|
||||
struct kvm_vcpu_sbi_return *retdata);
|
||||
void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu);
|
||||
void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu);
|
||||
void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
|
||||
|
||||
#else
|
||||
struct kvm_pmu {
|
||||
};
|
||||
|
||||
#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
|
||||
{.base = 0, .count = 0, .func = NULL },
|
||||
|
||||
static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {}
|
||||
static inline int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {}
|
||||
static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {}
|
||||
#endif /* CONFIG_RISCV_PMU_SBI */
|
||||
#endif /* !__KVM_VCPU_RISCV_PMU_H */
|
|
@ -18,6 +18,13 @@ struct kvm_vcpu_sbi_context {
|
|||
int return_handled;
|
||||
};
|
||||
|
||||
struct kvm_vcpu_sbi_return {
|
||||
unsigned long out_val;
|
||||
unsigned long err_val;
|
||||
struct kvm_cpu_trap *utrap;
|
||||
bool uexit;
|
||||
};
|
||||
|
||||
struct kvm_vcpu_sbi_extension {
|
||||
unsigned long extid_start;
|
||||
unsigned long extid_end;
|
||||
|
@ -27,8 +34,10 @@ struct kvm_vcpu_sbi_extension {
|
|||
* specific error codes.
|
||||
*/
|
||||
int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
unsigned long *out_val, struct kvm_cpu_trap *utrap,
|
||||
bool *exit);
|
||||
struct kvm_vcpu_sbi_return *retdata);
|
||||
|
||||
/* Extension specific probe function */
|
||||
unsigned long (*probe)(struct kvm_vcpu *vcpu);
|
||||
};
|
||||
|
||||
void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run);
|
||||
|
|
|
@ -169,9 +169,9 @@ enum sbi_pmu_fw_generic_events_t {
|
|||
SBI_PMU_FW_ILLEGAL_INSN = 4,
|
||||
SBI_PMU_FW_SET_TIMER = 5,
|
||||
SBI_PMU_FW_IPI_SENT = 6,
|
||||
SBI_PMU_FW_IPI_RECVD = 7,
|
||||
SBI_PMU_FW_IPI_RCVD = 7,
|
||||
SBI_PMU_FW_FENCE_I_SENT = 8,
|
||||
SBI_PMU_FW_FENCE_I_RECVD = 9,
|
||||
SBI_PMU_FW_FENCE_I_RCVD = 9,
|
||||
SBI_PMU_FW_SFENCE_VMA_SENT = 10,
|
||||
SBI_PMU_FW_SFENCE_VMA_RCVD = 11,
|
||||
SBI_PMU_FW_SFENCE_VMA_ASID_SENT = 12,
|
||||
|
@ -215,6 +215,9 @@ enum sbi_pmu_ctr_type {
|
|||
#define SBI_PMU_EVENT_CACHE_OP_ID_CODE_MASK 0x06
|
||||
#define SBI_PMU_EVENT_CACHE_RESULT_ID_CODE_MASK 0x01
|
||||
|
||||
#define SBI_PMU_EVENT_CACHE_ID_SHIFT 3
|
||||
#define SBI_PMU_EVENT_CACHE_OP_SHIFT 1
|
||||
|
||||
#define SBI_PMU_EVENT_IDX_INVALID 0xFFFFFFFF
|
||||
|
||||
/* Flags defined for config matching function */
|
||||
|
|
|
@ -20,6 +20,7 @@ if VIRTUALIZATION
|
|||
config KVM
|
||||
tristate "Kernel-based Virtual Machine (KVM) support (EXPERIMENTAL)"
|
||||
depends on RISCV_SBI && MMU
|
||||
select KVM_GENERIC_HARDWARE_ENABLING
|
||||
select MMU_NOTIFIER
|
||||
select PREEMPT_NOTIFIERS
|
||||
select KVM_MMIO
|
||||
|
|
|
@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o
|
|||
kvm-y += vcpu_sbi_replace.o
|
||||
kvm-y += vcpu_sbi_hsm.o
|
||||
kvm-y += vcpu_timer.o
|
||||
kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o
|
||||
|
|
|
@ -20,16 +20,6 @@ long kvm_arch_dev_ioctl(struct file *filp,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
int kvm_arch_check_processor_compat(void *opaque)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_arch_hardware_setup(void *opaque)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_arch_hardware_enable(void)
|
||||
{
|
||||
unsigned long hideleg, hedeleg;
|
||||
|
@ -49,7 +39,8 @@ int kvm_arch_hardware_enable(void)
|
|||
hideleg |= (1UL << IRQ_VS_EXT);
|
||||
csr_write(CSR_HIDELEG, hideleg);
|
||||
|
||||
csr_write(CSR_HCOUNTEREN, -1UL);
|
||||
/* VS should access only the time counter directly. Everything else should trap */
|
||||
csr_write(CSR_HCOUNTEREN, 0x02);
|
||||
|
||||
csr_write(CSR_HVIP, 0);
|
||||
|
||||
|
@ -70,7 +61,7 @@ void kvm_arch_hardware_disable(void)
|
|||
csr_write(CSR_HIDELEG, 0);
|
||||
}
|
||||
|
||||
int kvm_arch_init(void *opaque)
|
||||
static int __init riscv_kvm_init(void)
|
||||
{
|
||||
const char *str;
|
||||
|
||||
|
@ -115,16 +106,7 @@ int kvm_arch_init(void *opaque)
|
|||
|
||||
kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void kvm_arch_exit(void)
|
||||
{
|
||||
}
|
||||
|
||||
static int __init riscv_kvm_init(void)
|
||||
{
|
||||
return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
|
||||
return kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
|
||||
}
|
||||
module_init(riscv_kvm_init);
|
||||
|
||||
|
|
|
@ -20,12 +20,12 @@
|
|||
#include <asm/pgtable.h>
|
||||
|
||||
#ifdef CONFIG_64BIT
|
||||
static unsigned long gstage_mode = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT);
|
||||
static unsigned long gstage_pgd_levels = 3;
|
||||
static unsigned long gstage_mode __ro_after_init = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT);
|
||||
static unsigned long gstage_pgd_levels __ro_after_init = 3;
|
||||
#define gstage_index_bits 9
|
||||
#else
|
||||
static unsigned long gstage_mode = (HGATP_MODE_SV32X4 << HGATP_MODE_SHIFT);
|
||||
static unsigned long gstage_pgd_levels = 2;
|
||||
static unsigned long gstage_mode __ro_after_init = (HGATP_MODE_SV32X4 << HGATP_MODE_SHIFT);
|
||||
static unsigned long gstage_pgd_levels __ro_after_init = 2;
|
||||
#define gstage_index_bits 10
|
||||
#endif
|
||||
|
||||
|
@ -585,7 +585,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
|
|||
if (!kvm->arch.pgd)
|
||||
return false;
|
||||
|
||||
WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PGDIR_SIZE);
|
||||
WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
|
||||
|
||||
if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
|
||||
&ptep, &ptep_level))
|
||||
|
@ -603,7 +603,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
|
|||
if (!kvm->arch.pgd)
|
||||
return false;
|
||||
|
||||
WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PGDIR_SIZE);
|
||||
WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
|
||||
|
||||
if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
|
||||
&ptep, &ptep_level))
|
||||
|
@ -645,12 +645,12 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
|
|||
if (logging || (vma->vm_flags & VM_PFNMAP))
|
||||
vma_pagesize = PAGE_SIZE;
|
||||
|
||||
if (vma_pagesize == PMD_SIZE || vma_pagesize == PGDIR_SIZE)
|
||||
if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
|
||||
gfn = (gpa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
|
||||
|
||||
mmap_read_unlock(current->mm);
|
||||
|
||||
if (vma_pagesize != PGDIR_SIZE &&
|
||||
if (vma_pagesize != PUD_SIZE &&
|
||||
vma_pagesize != PMD_SIZE &&
|
||||
vma_pagesize != PAGE_SIZE) {
|
||||
kvm_err("Invalid VMA page size 0x%lx\n", vma_pagesize);
|
||||
|
@ -758,7 +758,7 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
|
|||
kvm_riscv_local_hfence_gvma_all();
|
||||
}
|
||||
|
||||
void kvm_riscv_gstage_mode_detect(void)
|
||||
void __init kvm_riscv_gstage_mode_detect(void)
|
||||
{
|
||||
#ifdef CONFIG_64BIT
|
||||
/* Try Sv57x4 G-stage mode */
|
||||
|
@ -782,7 +782,7 @@ skip_sv48x4_test:
|
|||
#endif
|
||||
}
|
||||
|
||||
unsigned long kvm_riscv_gstage_mode(void)
|
||||
unsigned long __init kvm_riscv_gstage_mode(void)
|
||||
{
|
||||
return gstage_mode >> HGATP_MODE_SHIFT;
|
||||
}
|
||||
|
|
|
@ -180,6 +180,7 @@ void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
|
|||
|
||||
void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_RCVD);
|
||||
local_flush_icache_all();
|
||||
}
|
||||
|
||||
|
@ -263,15 +264,18 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
|
|||
d.addr, d.size, d.order);
|
||||
break;
|
||||
case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
|
||||
kvm_riscv_local_hfence_vvma_asid_gva(
|
||||
READ_ONCE(v->vmid), d.asid,
|
||||
d.addr, d.size, d.order);
|
||||
break;
|
||||
case KVM_RISCV_HFENCE_VVMA_ASID_ALL:
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
|
||||
kvm_riscv_local_hfence_vvma_asid_all(
|
||||
READ_ONCE(v->vmid), d.asid);
|
||||
break;
|
||||
case KVM_RISCV_HFENCE_VVMA_GVA:
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
|
||||
kvm_riscv_local_hfence_vvma_gva(
|
||||
READ_ONCE(v->vmid),
|
||||
d.addr, d.size, d.order);
|
||||
|
|
|
@ -138,6 +138,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
|
|||
WRITE_ONCE(vcpu->arch.irqs_pending, 0);
|
||||
WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
|
||||
|
||||
kvm_riscv_vcpu_pmu_reset(vcpu);
|
||||
|
||||
vcpu->arch.hfence_head = 0;
|
||||
vcpu->arch.hfence_tail = 0;
|
||||
memset(vcpu->arch.hfence_queue, 0, sizeof(vcpu->arch.hfence_queue));
|
||||
|
@ -194,6 +196,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
|
|||
/* Setup VCPU timer */
|
||||
kvm_riscv_vcpu_timer_init(vcpu);
|
||||
|
||||
/* setup performance monitoring */
|
||||
kvm_riscv_vcpu_pmu_init(vcpu);
|
||||
|
||||
/* Reset VCPU */
|
||||
kvm_riscv_reset_vcpu(vcpu);
|
||||
|
||||
|
@ -216,6 +221,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
|
|||
/* Cleanup VCPU timer */
|
||||
kvm_riscv_vcpu_timer_deinit(vcpu);
|
||||
|
||||
kvm_riscv_vcpu_pmu_deinit(vcpu);
|
||||
|
||||
/* Free unused pages pre-allocated for G-stage page table mappings */
|
||||
kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
|
||||
}
|
||||
|
|
|
@ -160,6 +160,9 @@ void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
|
|||
|
||||
/* Set Guest PC to Guest exception vector */
|
||||
vcpu->arch.guest_context.sepc = csr_read(CSR_VSTVEC);
|
||||
|
||||
/* Set Guest privilege mode to supervisor */
|
||||
vcpu->arch.guest_context.sstatus |= SR_SPP;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@@ -179,6 +182,12 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		ret = -EFAULT;
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	switch (trap->scause) {
+	case EXC_INST_ILLEGAL:
+		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) {
+			kvm_riscv_vcpu_trap_redirect(vcpu, trap);
+			ret = 1;
+		}
+		break;
 	case EXC_VIRTUAL_INST_FAULT:
 		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
 			ret = kvm_riscv_vcpu_virtual_insn(vcpu, run, trap);
@@ -213,7 +213,9 @@ struct csr_func {
 		    unsigned long wr_mask);
 };
 
-static const struct csr_func csr_funcs[] = { };
+static const struct csr_func csr_funcs[] = {
+	KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
+};
 
 /**
  * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space
@ -0,0 +1,633 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Copyright (c) 2023 Rivos Inc
|
||||
*
|
||||
* Authors:
|
||||
* Atish Patra <atishp@rivosinc.com>
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) "riscv-kvm-pmu: " fmt
|
||||
#include <linux/errno.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/perf/riscv_pmu.h>
|
||||
#include <asm/csr.h>
|
||||
#include <asm/kvm_vcpu_sbi.h>
|
||||
#include <asm/kvm_vcpu_pmu.h>
|
||||
#include <linux/bitops.h>
|
||||
|
||||
#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
|
||||
#define get_event_type(x) (((x) & SBI_PMU_EVENT_IDX_TYPE_MASK) >> 16)
|
||||
#define get_event_code(x) ((x) & SBI_PMU_EVENT_IDX_CODE_MASK)
|
||||
|
||||
static enum perf_hw_id hw_event_perf_map[SBI_PMU_HW_GENERAL_MAX] = {
|
||||
[SBI_PMU_HW_CPU_CYCLES] = PERF_COUNT_HW_CPU_CYCLES,
|
||||
[SBI_PMU_HW_INSTRUCTIONS] = PERF_COUNT_HW_INSTRUCTIONS,
|
||||
[SBI_PMU_HW_CACHE_REFERENCES] = PERF_COUNT_HW_CACHE_REFERENCES,
|
||||
[SBI_PMU_HW_CACHE_MISSES] = PERF_COUNT_HW_CACHE_MISSES,
|
||||
[SBI_PMU_HW_BRANCH_INSTRUCTIONS] = PERF_COUNT_HW_BRANCH_INSTRUCTIONS,
|
||||
[SBI_PMU_HW_BRANCH_MISSES] = PERF_COUNT_HW_BRANCH_MISSES,
|
||||
[SBI_PMU_HW_BUS_CYCLES] = PERF_COUNT_HW_BUS_CYCLES,
|
||||
[SBI_PMU_HW_STALLED_CYCLES_FRONTEND] = PERF_COUNT_HW_STALLED_CYCLES_FRONTEND,
|
||||
[SBI_PMU_HW_STALLED_CYCLES_BACKEND] = PERF_COUNT_HW_STALLED_CYCLES_BACKEND,
|
||||
[SBI_PMU_HW_REF_CPU_CYCLES] = PERF_COUNT_HW_REF_CPU_CYCLES,
|
||||
};
|
||||
|
||||
static u64 kvm_pmu_get_sample_period(struct kvm_pmc *pmc)
|
||||
{
|
||||
u64 counter_val_mask = GENMASK(pmc->cinfo.width, 0);
|
||||
u64 sample_period;
|
||||
|
||||
if (!pmc->counter_val)
|
||||
sample_period = counter_val_mask + 1;
|
||||
else
|
||||
sample_period = (-pmc->counter_val) & counter_val_mask;
|
||||
|
||||
return sample_period;
|
||||
}
|
||||
|
||||
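The sample period computed above is the two's-complement distance from the current guest counter value to the point where the (width+1)-bit counter overflows, so the host perf event fires exactly when the virtual counter would wrap. A minimal user-space sketch of the same arithmetic (illustrative only; the 48-bit width and the counter value are made-up numbers):

/*
 * Illustrative only: the overflow distance computed by
 * kvm_pmu_get_sample_period(), with a made-up 48-bit counter.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned int width = 47;				/* assumed hpmcounter width */
	uint64_t mask = ((uint64_t)1 << (width + 1)) - 1;	/* GENMASK(width, 0) */
	uint64_t counter_val = 0xFFFFFFFFFF00ULL;		/* hypothetical guest value */
	uint64_t period = counter_val ? (-counter_val) & mask : mask + 1;

	/* perf fires an overflow after 'period' increments, i.e. when the guest counter wraps */
	printf("sample period = %llu\n", (unsigned long long)period);
	return 0;
}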
static u32 kvm_pmu_get_perf_event_type(unsigned long eidx)
|
||||
{
|
||||
enum sbi_pmu_event_type etype = get_event_type(eidx);
|
||||
u32 type = PERF_TYPE_MAX;
|
||||
|
||||
switch (etype) {
|
||||
case SBI_PMU_EVENT_TYPE_HW:
|
||||
type = PERF_TYPE_HARDWARE;
|
||||
break;
|
||||
case SBI_PMU_EVENT_TYPE_CACHE:
|
||||
type = PERF_TYPE_HW_CACHE;
|
||||
break;
|
||||
case SBI_PMU_EVENT_TYPE_RAW:
|
||||
case SBI_PMU_EVENT_TYPE_FW:
|
||||
type = PERF_TYPE_RAW;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
return type;
|
||||
}
|
||||
|
||||
static bool kvm_pmu_is_fw_event(unsigned long eidx)
|
||||
{
|
||||
return get_event_type(eidx) == SBI_PMU_EVENT_TYPE_FW;
|
||||
}
|
||||
|
||||
static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
|
||||
{
|
||||
if (pmc->perf_event) {
|
||||
perf_event_disable(pmc->perf_event);
|
||||
perf_event_release_kernel(pmc->perf_event);
|
||||
pmc->perf_event = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static u64 kvm_pmu_get_perf_event_hw_config(u32 sbi_event_code)
|
||||
{
|
||||
return hw_event_perf_map[sbi_event_code];
|
||||
}
|
||||
|
||||
static u64 kvm_pmu_get_perf_event_cache_config(u32 sbi_event_code)
|
||||
{
|
||||
u64 config = U64_MAX;
|
||||
unsigned int cache_type, cache_op, cache_result;
|
||||
|
||||
/* All the cache event masks lie within 0xFF. No separate masking is necessary */
|
||||
cache_type = (sbi_event_code & SBI_PMU_EVENT_CACHE_ID_CODE_MASK) >>
|
||||
SBI_PMU_EVENT_CACHE_ID_SHIFT;
|
||||
cache_op = (sbi_event_code & SBI_PMU_EVENT_CACHE_OP_ID_CODE_MASK) >>
|
||||
SBI_PMU_EVENT_CACHE_OP_SHIFT;
|
||||
cache_result = sbi_event_code & SBI_PMU_EVENT_CACHE_RESULT_ID_CODE_MASK;
|
||||
|
||||
if (cache_type >= PERF_COUNT_HW_CACHE_MAX ||
|
||||
cache_op >= PERF_COUNT_HW_CACHE_OP_MAX ||
|
||||
cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
|
||||
return config;
|
||||
|
||||
config = cache_type | (cache_op << 8) | (cache_result << 16);
|
||||
|
||||
return config;
|
||||
}
|
||||
|
||||
static u64 kvm_pmu_get_perf_event_config(unsigned long eidx, uint64_t evt_data)
|
||||
{
|
||||
enum sbi_pmu_event_type etype = get_event_type(eidx);
|
||||
u32 ecode = get_event_code(eidx);
|
||||
u64 config = U64_MAX;
|
||||
|
||||
switch (etype) {
|
||||
case SBI_PMU_EVENT_TYPE_HW:
|
||||
if (ecode < SBI_PMU_HW_GENERAL_MAX)
|
||||
config = kvm_pmu_get_perf_event_hw_config(ecode);
|
||||
break;
|
||||
case SBI_PMU_EVENT_TYPE_CACHE:
|
||||
config = kvm_pmu_get_perf_event_cache_config(ecode);
|
||||
break;
|
||||
case SBI_PMU_EVENT_TYPE_RAW:
|
||||
config = evt_data & RISCV_PMU_RAW_EVENT_MASK;
|
||||
break;
|
||||
case SBI_PMU_EVENT_TYPE_FW:
|
||||
if (ecode < SBI_PMU_FW_MAX)
|
||||
config = (1ULL << 63) | ecode;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
return config;
|
||||
}
|
||||
|
||||
static int kvm_pmu_get_fixed_pmc_index(unsigned long eidx)
|
||||
{
|
||||
u32 etype = kvm_pmu_get_perf_event_type(eidx);
|
||||
u32 ecode = get_event_code(eidx);
|
||||
|
||||
if (etype != SBI_PMU_EVENT_TYPE_HW)
|
||||
return -EINVAL;
|
||||
|
||||
if (ecode == SBI_PMU_HW_CPU_CYCLES)
|
||||
return 0;
|
||||
else if (ecode == SBI_PMU_HW_INSTRUCTIONS)
|
||||
return 2;
|
||||
else
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static int kvm_pmu_get_programmable_pmc_index(struct kvm_pmu *kvpmu, unsigned long eidx,
|
||||
unsigned long cbase, unsigned long cmask)
|
||||
{
|
||||
int ctr_idx = -1;
|
||||
int i, pmc_idx;
|
||||
int min, max;
|
||||
|
||||
if (kvm_pmu_is_fw_event(eidx)) {
|
||||
/* Firmware counters are mapped 1:1 starting from num_hw_ctrs for simplicity */
|
||||
min = kvpmu->num_hw_ctrs;
|
||||
max = min + kvpmu->num_fw_ctrs;
|
||||
} else {
|
||||
/* First 3 counters are reserved for fixed counters */
|
||||
min = 3;
|
||||
max = kvpmu->num_hw_ctrs;
|
||||
}
|
||||
|
||||
for_each_set_bit(i, &cmask, BITS_PER_LONG) {
|
||||
pmc_idx = i + cbase;
|
||||
if ((pmc_idx >= min && pmc_idx < max) &&
|
||||
!test_bit(pmc_idx, kvpmu->pmc_in_use)) {
|
||||
ctr_idx = pmc_idx;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return ctr_idx;
|
||||
}
|
||||
|
||||
static int pmu_get_pmc_index(struct kvm_pmu *pmu, unsigned long eidx,
|
||||
unsigned long cbase, unsigned long cmask)
|
||||
{
|
||||
int ret;
|
||||
|
||||
/* Fixed counters need to have a fixed mapping as they have different widths */
|
||||
ret = kvm_pmu_get_fixed_pmc_index(eidx);
|
||||
if (ret >= 0)
|
||||
return ret;
|
||||
|
||||
return kvm_pmu_get_programmable_pmc_index(pmu, eidx, cbase, cmask);
|
||||
}
|
||||
|
||||
static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
|
||||
unsigned long *out_val)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
struct kvm_pmc *pmc;
|
||||
u64 enabled, running;
|
||||
int fevent_code;
|
||||
|
||||
pmc = &kvpmu->pmc[cidx];
|
||||
|
||||
if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) {
|
||||
fevent_code = get_event_code(pmc->event_idx);
|
||||
pmc->counter_val = kvpmu->fw_event[fevent_code].value;
|
||||
} else if (pmc->perf_event) {
|
||||
pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running);
|
||||
} else {
|
||||
return -EINVAL;
|
||||
}
|
||||
*out_val = pmc->counter_val;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kvm_pmu_validate_counter_mask(struct kvm_pmu *kvpmu, unsigned long ctr_base,
|
||||
unsigned long ctr_mask)
|
||||
{
|
||||
/* Make sure we have a valid counter mask requested from the caller */
|
||||
if (!ctr_mask || (ctr_base + __fls(ctr_mask) >= kvm_pmu_num_counters(kvpmu)))
|
||||
return -EINVAL;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kvm_pmu_create_perf_event(struct kvm_pmc *pmc, struct perf_event_attr *attr,
|
||||
unsigned long flags, unsigned long eidx, unsigned long evtdata)
|
||||
{
|
||||
struct perf_event *event;
|
||||
|
||||
kvm_pmu_release_perf_event(pmc);
|
||||
attr->config = kvm_pmu_get_perf_event_config(eidx, evtdata);
|
||||
if (flags & SBI_PMU_CFG_FLAG_CLEAR_VALUE) {
|
||||
//TODO: Do we really want to clear the value in the hardware counter?
|
||||
pmc->counter_val = 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Set the default sample_period for now. The guest specified value
|
||||
* will be updated in the start call.
|
||||
*/
|
||||
attr->sample_period = kvm_pmu_get_sample_period(pmc);
|
||||
|
||||
event = perf_event_create_kernel_counter(attr, -1, current, NULL, pmc);
|
||||
if (IS_ERR(event)) {
|
||||
pr_err("kvm pmu event creation failed for eidx %lx: %ld\n", eidx, PTR_ERR(event));
|
||||
return PTR_ERR(event);
|
||||
}
|
||||
|
||||
pmc->perf_event = event;
|
||||
if (flags & SBI_PMU_CFG_FLAG_AUTO_START)
|
||||
perf_event_enable(pmc->perf_event);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
struct kvm_fw_event *fevent;
|
||||
|
||||
if (!kvpmu || fid >= SBI_PMU_FW_MAX)
|
||||
return -EINVAL;
|
||||
|
||||
fevent = &kvpmu->fw_event[fid];
|
||||
if (fevent->started)
|
||||
fevent->value++;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
|
||||
unsigned long *val, unsigned long new_val,
|
||||
unsigned long wr_mask)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC;
|
||||
|
||||
if (!kvpmu || !kvpmu->init_done) {
|
||||
/*
|
||||
* In absence of sscofpmf in the platform, the guest OS may use
|
||||
* the legacy PMU driver to read cycle/instret. In that case,
|
||||
* just return 0 to avoid any illegal trap. However, any other
|
||||
* hpmcounter access should result in illegal trap as they must
|
||||
* be accessed through SBI PMU only.
|
||||
*/
|
||||
if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
|
||||
*val = 0;
|
||||
return ret;
|
||||
} else {
|
||||
return KVM_INSN_ILLEGAL_TRAP;
|
||||
}
|
||||
}
|
||||
|
||||
/* The counter CSR are read only. Thus, any write should result in illegal traps */
|
||||
if (wr_mask)
|
||||
return KVM_INSN_ILLEGAL_TRAP;
|
||||
|
||||
cidx = csr_num - CSR_CYCLE;
|
||||
|
||||
if (pmu_ctr_read(vcpu, cidx, val) < 0)
|
||||
return KVM_INSN_ILLEGAL_TRAP;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
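For context, the CSR path above is reached when a guest reads cycle/instret (or an hpmcounter) directly and the access traps to KVM instead of being satisfied by hardware. A guest-side sketch of such a read (illustrative only, not part of the patch; it assumes the hypervisor has not granted the counter via hcounteren, which is what makes the read trap):

/*
 * Illustrative guest-side sketch: a direct counter read that traps to the
 * handler above when hcounteren does not delegate the counter.
 */
static inline unsigned long guest_rdcycle(void)
{
	unsigned long c;

	asm volatile("rdcycle %0" : "=r" (c));
	return c;
}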
int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
|
||||
retdata->out_val = kvm_pmu_num_counters(kvpmu);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
|
||||
if (cidx > RISCV_KVM_MAX_COUNTERS || cidx == 1) {
|
||||
retdata->err_val = SBI_ERR_INVALID_PARAM;
|
||||
return 0;
|
||||
}
|
||||
|
||||
retdata->out_val = kvpmu->pmc[cidx].cinfo.value;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
|
||||
unsigned long ctr_mask, unsigned long flags, u64 ival,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
int i, pmc_index, sbiret = 0;
|
||||
struct kvm_pmc *pmc;
|
||||
int fevent_code;
|
||||
|
||||
if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) {
|
||||
sbiret = SBI_ERR_INVALID_PARAM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Start the counters that have been configured and requested by the guest */
|
||||
for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
|
||||
pmc_index = i + ctr_base;
|
||||
if (!test_bit(pmc_index, kvpmu->pmc_in_use))
|
||||
continue;
|
||||
pmc = &kvpmu->pmc[pmc_index];
|
||||
if (flags & SBI_PMU_START_FLAG_SET_INIT_VALUE)
|
||||
pmc->counter_val = ival;
|
||||
if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) {
|
||||
fevent_code = get_event_code(pmc->event_idx);
|
||||
if (fevent_code >= SBI_PMU_FW_MAX) {
|
||||
sbiret = SBI_ERR_INVALID_PARAM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Check if the counter was already started for some reason */
|
||||
if (kvpmu->fw_event[fevent_code].started) {
|
||||
sbiret = SBI_ERR_ALREADY_STARTED;
|
||||
continue;
|
||||
}
|
||||
|
||||
kvpmu->fw_event[fevent_code].started = true;
|
||||
kvpmu->fw_event[fevent_code].value = pmc->counter_val;
|
||||
} else if (pmc->perf_event) {
|
||||
if (unlikely(pmc->started)) {
|
||||
sbiret = SBI_ERR_ALREADY_STARTED;
|
||||
continue;
|
||||
}
|
||||
perf_event_period(pmc->perf_event, kvm_pmu_get_sample_period(pmc));
|
||||
perf_event_enable(pmc->perf_event);
|
||||
pmc->started = true;
|
||||
} else {
|
||||
sbiret = SBI_ERR_INVALID_PARAM;
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
retdata->err_val = sbiret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
|
||||
unsigned long ctr_mask, unsigned long flags,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
int i, pmc_index, sbiret = 0;
|
||||
u64 enabled, running;
|
||||
struct kvm_pmc *pmc;
|
||||
int fevent_code;
|
||||
|
||||
if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) {
|
||||
sbiret = SBI_ERR_INVALID_PARAM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Stop the counters that have been configured and requested by the guest */
|
||||
for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
|
||||
pmc_index = i + ctr_base;
|
||||
if (!test_bit(pmc_index, kvpmu->pmc_in_use))
|
||||
continue;
|
||||
pmc = &kvpmu->pmc[pmc_index];
|
||||
if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) {
|
||||
fevent_code = get_event_code(pmc->event_idx);
|
||||
if (fevent_code >= SBI_PMU_FW_MAX) {
|
||||
sbiret = SBI_ERR_INVALID_PARAM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!kvpmu->fw_event[fevent_code].started)
|
||||
sbiret = SBI_ERR_ALREADY_STOPPED;
|
||||
|
||||
kvpmu->fw_event[fevent_code].started = false;
|
||||
} else if (pmc->perf_event) {
|
||||
if (pmc->started) {
|
||||
/* Stop counting the counter */
|
||||
perf_event_disable(pmc->perf_event);
|
||||
pmc->started = false;
|
||||
} else {
|
||||
sbiret = SBI_ERR_ALREADY_STOPPED;
|
||||
}
|
||||
|
||||
if (flags & SBI_PMU_STOP_FLAG_RESET) {
|
||||
/* Release the counter if this is a reset request */
|
||||
pmc->counter_val += perf_event_read_value(pmc->perf_event,
|
||||
&enabled, &running);
|
||||
kvm_pmu_release_perf_event(pmc);
|
||||
}
|
||||
} else {
|
||||
sbiret = SBI_ERR_INVALID_PARAM;
|
||||
}
|
||||
if (flags & SBI_PMU_STOP_FLAG_RESET) {
|
||||
pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
|
||||
clear_bit(pmc_index, kvpmu->pmc_in_use);
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
retdata->err_val = sbiret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base,
|
||||
unsigned long ctr_mask, unsigned long flags,
|
||||
unsigned long eidx, u64 evtdata,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ctr_idx, ret, sbiret = 0;
|
||||
bool is_fevent;
|
||||
unsigned long event_code;
|
||||
u32 etype = kvm_pmu_get_perf_event_type(eidx);
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
struct kvm_pmc *pmc = NULL;
|
||||
struct perf_event_attr attr = {
|
||||
.type = etype,
|
||||
.size = sizeof(struct perf_event_attr),
|
||||
.pinned = true,
|
||||
/*
|
||||
* It should never reach here if the platform doesn't support the sscofpmf
|
||||
* extension as mode filtering won't work without it.
|
||||
*/
|
||||
.exclude_host = true,
|
||||
.exclude_hv = true,
|
||||
.exclude_user = !!(flags & SBI_PMU_CFG_FLAG_SET_UINH),
|
||||
.exclude_kernel = !!(flags & SBI_PMU_CFG_FLAG_SET_SINH),
|
||||
.config1 = RISCV_PMU_CONFIG1_GUEST_EVENTS,
|
||||
};
|
||||
|
||||
if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) {
|
||||
sbiret = SBI_ERR_INVALID_PARAM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
event_code = get_event_code(eidx);
|
||||
is_fevent = kvm_pmu_is_fw_event(eidx);
|
||||
if (is_fevent && event_code >= SBI_PMU_FW_MAX) {
|
||||
sbiret = SBI_ERR_NOT_SUPPORTED;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
* SKIP_MATCH flag indicates the caller is aware of the assigned counter
|
||||
* for this event. Just do a sanity check that it is already marked as used.
|
||||
*/
|
||||
if (flags & SBI_PMU_CFG_FLAG_SKIP_MATCH) {
|
||||
if (!test_bit(ctr_base + __ffs(ctr_mask), kvpmu->pmc_in_use)) {
|
||||
sbiret = SBI_ERR_FAILURE;
|
||||
goto out;
|
||||
}
|
||||
ctr_idx = ctr_base + __ffs(ctr_mask);
|
||||
} else {
|
||||
ctr_idx = pmu_get_pmc_index(kvpmu, eidx, ctr_base, ctr_mask);
|
||||
if (ctr_idx < 0) {
|
||||
sbiret = SBI_ERR_NOT_SUPPORTED;
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
pmc = &kvpmu->pmc[ctr_idx];
|
||||
pmc->idx = ctr_idx;
|
||||
|
||||
if (is_fevent) {
|
||||
if (flags & SBI_PMU_CFG_FLAG_AUTO_START)
|
||||
kvpmu->fw_event[event_code].started = true;
|
||||
} else {
|
||||
ret = kvm_pmu_create_perf_event(pmc, &attr, flags, eidx, evtdata);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
set_bit(ctr_idx, kvpmu->pmc_in_use);
|
||||
pmc->event_idx = eidx;
|
||||
retdata->out_val = ctr_idx;
|
||||
out:
|
||||
retdata->err_val = sbiret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = pmu_ctr_read(vcpu, cidx, &retdata->out_val);
|
||||
if (ret == -EINVAL)
|
||||
retdata->err_val = SBI_ERR_INVALID_PARAM;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
int i = 0, ret, num_hw_ctrs = 0, hpm_width = 0;
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
struct kvm_pmc *pmc;
|
||||
|
||||
/*
|
||||
* PMU functionality should be only available to guests if privilege mode
|
||||
* filtering is available in the host. Otherwise, guest will always count
|
||||
* events while the execution is in hypervisor mode.
|
||||
*/
|
||||
if (!riscv_isa_extension_available(NULL, SSCOFPMF))
|
||||
return;
|
||||
|
||||
ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
|
||||
if (ret < 0 || !hpm_width || !num_hw_ctrs)
|
||||
return;
|
||||
|
||||
/*
|
||||
* Increase the number of hardware counters to offset the time counter.
|
||||
*/
|
||||
kvpmu->num_hw_ctrs = num_hw_ctrs + 1;
|
||||
kvpmu->num_fw_ctrs = SBI_PMU_FW_MAX;
|
||||
memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
|
||||
|
||||
if (kvpmu->num_hw_ctrs > RISCV_KVM_MAX_HW_CTRS) {
|
||||
pr_warn_once("Limiting the hardware counters to 32 as specified by the ISA");
|
||||
kvpmu->num_hw_ctrs = RISCV_KVM_MAX_HW_CTRS;
|
||||
}
|
||||
|
||||
/*
|
||||
* There is no correlation between the logical hardware counter and virtual counters.
|
||||
* However, we need to encode a hpmcounter CSR in the counter info field so that
|
||||
* KVM can trap and emulate the read. This works well in the migration use case as
|
||||
* KVM doesn't care if the actual hpmcounter is available in the hardware or not.
|
||||
*/
|
||||
for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) {
|
||||
/* TIME CSR shouldn't be read from perf interface */
|
||||
if (i == 1)
|
||||
continue;
|
||||
pmc = &kvpmu->pmc[i];
|
||||
pmc->idx = i;
|
||||
pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
|
||||
if (i < kvpmu->num_hw_ctrs) {
|
||||
pmc->cinfo.type = SBI_PMU_CTR_TYPE_HW;
|
||||
if (i < 3)
|
||||
/* CY, IR counters */
|
||||
pmc->cinfo.width = 63;
|
||||
else
|
||||
pmc->cinfo.width = hpm_width;
|
||||
/*
|
||||
* The CSR number doesn't have any relation with the logical
|
||||
* hardware counters. The CSR numbers are encoded sequentially
|
||||
* to avoid maintaining a map between the virtual counter
|
||||
* and CSR number.
|
||||
*/
|
||||
pmc->cinfo.csr = CSR_CYCLE + i;
|
||||
} else {
|
||||
pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW;
|
||||
pmc->cinfo.width = BITS_PER_LONG - 1;
|
||||
}
|
||||
}
|
||||
|
||||
kvpmu->init_done = true;
|
||||
}
|
||||
|
||||
void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
struct kvm_pmc *pmc;
|
||||
int i;
|
||||
|
||||
if (!kvpmu)
|
||||
return;
|
||||
|
||||
for_each_set_bit(i, kvpmu->pmc_in_use, RISCV_MAX_COUNTERS) {
|
||||
pmc = &kvpmu->pmc[i];
|
||||
pmc->counter_val = 0;
|
||||
kvm_pmu_release_perf_event(pmc);
|
||||
pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
|
||||
}
|
||||
bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS);
|
||||
memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
|
||||
}
|
||||
|
||||
void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
kvm_riscv_vcpu_pmu_deinit(vcpu);
|
||||
}
|
|
@ -12,26 +12,6 @@
|
|||
#include <asm/sbi.h>
|
||||
#include <asm/kvm_vcpu_sbi.h>
|
||||
|
||||
static int kvm_linux_err_map_sbi(int err)
|
||||
{
|
||||
switch (err) {
|
||||
case 0:
|
||||
return SBI_SUCCESS;
|
||||
case -EPERM:
|
||||
return SBI_ERR_DENIED;
|
||||
case -EINVAL:
|
||||
return SBI_ERR_INVALID_PARAM;
|
||||
case -EFAULT:
|
||||
return SBI_ERR_INVALID_ADDRESS;
|
||||
case -EOPNOTSUPP:
|
||||
return SBI_ERR_NOT_SUPPORTED;
|
||||
case -EALREADY:
|
||||
return SBI_ERR_ALREADY_AVAILABLE;
|
||||
default:
|
||||
return SBI_ERR_FAILURE;
|
||||
};
|
||||
}
|
||||
|
||||
#ifndef CONFIG_RISCV_SBI_V01
|
||||
static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = {
|
||||
.extid_start = -1UL,
|
||||
|
@ -40,6 +20,16 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = {
|
|||
};
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_RISCV_PMU_SBI
|
||||
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu;
|
||||
#else
|
||||
static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = {
|
||||
.extid_start = -1UL,
|
||||
.extid_end = -1UL,
|
||||
.handler = NULL,
|
||||
};
|
||||
#endif
|
||||
|
||||
static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
|
||||
&vcpu_sbi_ext_v01,
|
||||
&vcpu_sbi_ext_base,
|
||||
|
@ -48,6 +38,7 @@ static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
|
|||
&vcpu_sbi_ext_rfence,
|
||||
&vcpu_sbi_ext_srst,
|
||||
&vcpu_sbi_ext_hsm,
|
||||
&vcpu_sbi_ext_pmu,
|
||||
&vcpu_sbi_ext_experimental,
|
||||
&vcpu_sbi_ext_vendor,
|
||||
};
|
||||
|
@ -125,11 +116,14 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
|||
{
|
||||
int ret = 1;
|
||||
bool next_sepc = true;
|
||||
bool userspace_exit = false;
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
const struct kvm_vcpu_sbi_extension *sbi_ext;
|
||||
struct kvm_cpu_trap utrap = { 0 };
|
||||
unsigned long out_val = 0;
|
||||
struct kvm_cpu_trap utrap = {0};
|
||||
struct kvm_vcpu_sbi_return sbi_ret = {
|
||||
.out_val = 0,
|
||||
.err_val = 0,
|
||||
.utrap = &utrap,
|
||||
};
|
||||
bool ext_is_v01 = false;
|
||||
|
||||
sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7);
|
||||
|
@ -139,42 +133,46 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
|||
cp->a7 <= SBI_EXT_0_1_SHUTDOWN)
|
||||
ext_is_v01 = true;
|
||||
#endif
|
||||
ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit);
|
||||
ret = sbi_ext->handler(vcpu, run, &sbi_ret);
|
||||
} else {
|
||||
/* Return error for unsupported SBI calls */
|
||||
cp->a0 = SBI_ERR_NOT_SUPPORTED;
|
||||
goto ecall_done;
|
||||
}
|
||||
|
||||
/*
|
||||
* When the SBI extension returns a Linux error code, it exits the ioctl
|
||||
* loop and forwards the error to userspace.
|
||||
*/
|
||||
if (ret < 0) {
|
||||
next_sepc = false;
|
||||
goto ecall_done;
|
||||
}
|
||||
|
||||
/* Handle special error cases i.e trap, exit or userspace forward */
|
||||
if (utrap.scause) {
|
||||
if (sbi_ret.utrap->scause) {
|
||||
/* No need to increment sepc or exit ioctl loop */
|
||||
ret = 1;
|
||||
utrap.sepc = cp->sepc;
|
||||
kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
|
||||
sbi_ret.utrap->sepc = cp->sepc;
|
||||
kvm_riscv_vcpu_trap_redirect(vcpu, sbi_ret.utrap);
|
||||
next_sepc = false;
|
||||
goto ecall_done;
|
||||
}
|
||||
|
||||
/* Exit the ioctl loop or propagate the error code to the guest */
|
||||
if (userspace_exit) {
|
||||
if (sbi_ret.uexit) {
|
||||
next_sepc = false;
|
||||
ret = 0;
|
||||
} else {
|
||||
/**
|
||||
* SBI extension handler always returns an Linux error code. Convert
|
||||
* it to the SBI specific error code that can be propagated the SBI
|
||||
* caller.
|
||||
*/
|
||||
ret = kvm_linux_err_map_sbi(ret);
|
||||
cp->a0 = ret;
|
||||
cp->a0 = sbi_ret.err_val;
|
||||
ret = 1;
|
||||
}
|
||||
ecall_done:
|
||||
if (next_sepc)
|
||||
cp->sepc += 4;
|
||||
if (!ext_is_v01)
|
||||
cp->a1 = out_val;
|
||||
/* a1 should only be updated when we continue the ioctl loop */
|
||||
if (!ext_is_v01 && ret == 1)
|
||||
cp->a1 = sbi_ret.out_val;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
|
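The net effect of the rework above is that every SBI extension handler now has the same three-argument shape and reports results through struct kvm_vcpu_sbi_return instead of the old out_val/utrap/exit parameters. A minimal sketch of a handler after the conversion (illustrative only; kvm_sbi_ext_example_handler and its function ID check are hypothetical, only the retdata plumbing mirrors the real code):

/*
 * Illustrative sketch, not part of the patch: the shape of an SBI extension
 * handler after this rework.
 */
static int kvm_sbi_ext_example_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
					struct kvm_vcpu_sbi_return *retdata)
{
	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;

	if (cp->a6 != 0) {
		/* SBI error codes are reported directly, no errno mapping needed */
		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
		return 0;
	}

	retdata->out_val = 42;	/* ends up in the guest's a1 */
	return 0;		/* a negative return would instead exit to user space */
}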
@ -14,11 +14,11 @@
|
|||
#include <asm/kvm_vcpu_sbi.h>
|
||||
|
||||
static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *trap, bool *exit)
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ret = 0;
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
const struct kvm_vcpu_sbi_extension *sbi_ext;
|
||||
unsigned long *out_val = &retdata->out_val;
|
||||
|
||||
switch (cp->a6) {
|
||||
case SBI_EXT_BASE_GET_SPEC_VERSION:
|
||||
|
@ -42,9 +42,12 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
* forward it to the userspace
|
||||
*/
|
||||
kvm_riscv_vcpu_sbi_forward(vcpu, run);
|
||||
*exit = true;
|
||||
} else
|
||||
*out_val = kvm_vcpu_sbi_find_ext(cp->a0) ? 1 : 0;
|
||||
retdata->uexit = true;
|
||||
} else {
|
||||
sbi_ext = kvm_vcpu_sbi_find_ext(cp->a0);
|
||||
*out_val = sbi_ext && sbi_ext->probe ?
|
||||
sbi_ext->probe(vcpu) : !!sbi_ext;
|
||||
}
|
||||
break;
|
||||
case SBI_EXT_BASE_GET_MVENDORID:
|
||||
*out_val = vcpu->arch.mvendorid;
|
||||
|
@ -56,11 +59,11 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
*out_val = vcpu->arch.mimpid;
|
||||
break;
|
||||
default:
|
||||
ret = -EOPNOTSUPP;
|
||||
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
|
||||
break;
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = {
|
||||
|
@ -70,17 +73,15 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = {
|
|||
};
|
||||
|
||||
static int kvm_sbi_ext_forward_handler(struct kvm_vcpu *vcpu,
|
||||
struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *utrap,
|
||||
bool *exit)
|
||||
struct kvm_run *run,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
/*
|
||||
* Both SBI experimental and vendor extensions are
|
||||
* unconditionally forwarded to userspace.
|
||||
*/
|
||||
kvm_riscv_vcpu_sbi_forward(vcpu, run);
|
||||
*exit = true;
|
||||
retdata->uexit = true;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -21,9 +21,9 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
|
|||
|
||||
target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
|
||||
if (!target_vcpu)
|
||||
return -EINVAL;
|
||||
return SBI_ERR_INVALID_PARAM;
|
||||
if (!target_vcpu->arch.power_off)
|
||||
return -EALREADY;
|
||||
return SBI_ERR_ALREADY_AVAILABLE;
|
||||
|
||||
reset_cntx = &target_vcpu->arch.guest_reset_context;
|
||||
/* start address */
|
||||
|
@ -42,7 +42,7 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
|
|||
static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
if (vcpu->arch.power_off)
|
||||
return -EINVAL;
|
||||
return SBI_ERR_FAILURE;
|
||||
|
||||
kvm_riscv_vcpu_power_off(vcpu);
|
||||
|
||||
|
@ -57,7 +57,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
|
|||
|
||||
target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
|
||||
if (!target_vcpu)
|
||||
return -EINVAL;
|
||||
return SBI_ERR_INVALID_PARAM;
|
||||
if (!target_vcpu->arch.power_off)
|
||||
return SBI_HSM_STATE_STARTED;
|
||||
else if (vcpu->stat.generic.blocking)
|
||||
|
@ -67,9 +67,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
|
|||
}
|
||||
|
||||
static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *utrap,
|
||||
bool *exit)
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ret = 0;
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
|
@ -88,27 +86,29 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
case SBI_EXT_HSM_HART_STATUS:
|
||||
ret = kvm_sbi_hsm_vcpu_get_status(vcpu);
|
||||
if (ret >= 0) {
|
||||
*out_val = ret;
|
||||
ret = 0;
|
||||
retdata->out_val = ret;
|
||||
retdata->err_val = 0;
|
||||
}
|
||||
break;
|
||||
return 0;
|
||||
case SBI_EXT_HSM_HART_SUSPEND:
|
||||
switch (cp->a0) {
|
||||
case SBI_HSM_SUSPEND_RET_DEFAULT:
|
||||
kvm_riscv_vcpu_wfi(vcpu);
|
||||
break;
|
||||
case SBI_HSM_SUSPEND_NON_RET_DEFAULT:
|
||||
ret = -EOPNOTSUPP;
|
||||
ret = SBI_ERR_NOT_SUPPORTED;
|
||||
break;
|
||||
default:
|
||||
ret = -EINVAL;
|
||||
ret = SBI_ERR_INVALID_PARAM;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
ret = -EOPNOTSUPP;
|
||||
ret = SBI_ERR_NOT_SUPPORTED;
|
||||
}
|
||||
|
||||
return ret;
|
||||
retdata->err_val = ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm = {
|
||||
|
|
|
@ -0,0 +1,86 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Copyright (c) 2023 Rivos Inc
|
||||
*
|
||||
* Authors:
|
||||
* Atish Patra <atishp@rivosinc.com>
|
||||
*/
|
||||
|
||||
#include <linux/errno.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/kvm_host.h>
|
||||
#include <asm/csr.h>
|
||||
#include <asm/sbi.h>
|
||||
#include <asm/kvm_vcpu_sbi.h>
|
||||
|
||||
static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ret = 0;
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
unsigned long funcid = cp->a6;
|
||||
u64 temp;
|
||||
|
||||
if (!kvpmu->init_done) {
|
||||
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
|
||||
return 0;
|
||||
}
|
||||
|
||||
switch (funcid) {
|
||||
case SBI_EXT_PMU_NUM_COUNTERS:
|
||||
ret = kvm_riscv_vcpu_pmu_num_ctrs(vcpu, retdata);
|
||||
break;
|
||||
case SBI_EXT_PMU_COUNTER_GET_INFO:
|
||||
ret = kvm_riscv_vcpu_pmu_ctr_info(vcpu, cp->a0, retdata);
|
||||
break;
|
||||
case SBI_EXT_PMU_COUNTER_CFG_MATCH:
|
||||
#if defined(CONFIG_32BIT)
|
||||
temp = ((uint64_t)cp->a5 << 32) | cp->a4;
|
||||
#else
|
||||
temp = cp->a4;
|
||||
#endif
|
||||
/*
|
||||
* This can fail if perf core framework fails to create an event.
|
||||
* Forward the error to userspace because it's an error which
|
||||
* happened within the host kernel. The other option would be
|
||||
* to convert to an SBI error and forward to the guest.
|
||||
*/
|
||||
ret = kvm_riscv_vcpu_pmu_ctr_cfg_match(vcpu, cp->a0, cp->a1,
|
||||
cp->a2, cp->a3, temp, retdata);
|
||||
break;
|
||||
case SBI_EXT_PMU_COUNTER_START:
|
||||
#if defined(CONFIG_32BIT)
|
||||
temp = ((uint64_t)cp->a4 << 32) | cp->a3;
|
||||
#else
|
||||
temp = cp->a3;
|
||||
#endif
|
||||
ret = kvm_riscv_vcpu_pmu_ctr_start(vcpu, cp->a0, cp->a1, cp->a2,
|
||||
temp, retdata);
|
||||
break;
|
||||
case SBI_EXT_PMU_COUNTER_STOP:
|
||||
ret = kvm_riscv_vcpu_pmu_ctr_stop(vcpu, cp->a0, cp->a1, cp->a2, retdata);
|
||||
break;
|
||||
case SBI_EXT_PMU_COUNTER_FW_READ:
|
||||
ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, retdata);
|
||||
break;
|
||||
default:
|
||||
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static unsigned long kvm_sbi_ext_pmu_probe(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
|
||||
|
||||
return kvpmu->init_done;
|
||||
}
|
||||
|
||||
const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = {
|
||||
.extid_start = SBI_EXT_PMU,
|
||||
.extid_end = SBI_EXT_PMU,
|
||||
.handler = kvm_sbi_ext_pmu_handler,
|
||||
.probe = kvm_sbi_ext_pmu_probe,
|
||||
};
|
|
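For context, the handler above services ecalls made by the guest using the standard SBI calling convention: extension ID in a7, function ID in a6, error returned in a0 and value in a1. A guest-side sketch of the simplest such call (illustrative only, not part of the patch; the PMU extension ID 0x504D55 and function 0 for "get number of counters" are taken from the SBI specification):

/*
 * Illustrative guest-side sketch of an SBI PMU call.
 */
struct sbiret {
	long error;
	long value;
};

static struct sbiret sbi_pmu_num_counters(void)
{
	register unsigned long a0 asm("a0");
	register unsigned long a1 asm("a1");
	register unsigned long a6 asm("a6") = 0;		/* get number of counters */
	register unsigned long a7 asm("a7") = 0x504D55;		/* "PMU" extension ID */

	asm volatile("ecall"
		     : "=r" (a0), "=r" (a1)
		     : "r" (a6), "r" (a7)
		     : "memory");

	return (struct sbiret){ .error = (long)a0, .value = (long)a1 };
}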
@ -11,19 +11,21 @@
|
|||
#include <linux/kvm_host.h>
|
||||
#include <asm/sbi.h>
|
||||
#include <asm/kvm_vcpu_timer.h>
|
||||
#include <asm/kvm_vcpu_pmu.h>
|
||||
#include <asm/kvm_vcpu_sbi.h>
|
||||
|
||||
static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *utrap, bool *exit)
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ret = 0;
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
u64 next_cycle;
|
||||
|
||||
if (cp->a6 != SBI_EXT_TIME_SET_TIMER)
|
||||
return -EINVAL;
|
||||
if (cp->a6 != SBI_EXT_TIME_SET_TIMER) {
|
||||
retdata->err_val = SBI_ERR_INVALID_PARAM;
|
||||
return 0;
|
||||
}
|
||||
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_SET_TIMER);
|
||||
#if __riscv_xlen == 32
|
||||
next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0;
|
||||
#else
|
||||
|
@ -31,7 +33,7 @@ static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
#endif
|
||||
kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time = {
|
||||
|
@ -41,8 +43,7 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time = {
|
|||
};
|
||||
|
||||
static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *utrap, bool *exit)
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ret = 0;
|
||||
unsigned long i;
|
||||
|
@ -51,9 +52,12 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
unsigned long hmask = cp->a0;
|
||||
unsigned long hbase = cp->a1;
|
||||
|
||||
if (cp->a6 != SBI_EXT_IPI_SEND_IPI)
|
||||
return -EINVAL;
|
||||
if (cp->a6 != SBI_EXT_IPI_SEND_IPI) {
|
||||
retdata->err_val = SBI_ERR_INVALID_PARAM;
|
||||
return 0;
|
||||
}
|
||||
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_IPI_SENT);
|
||||
kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
|
||||
if (hbase != -1UL) {
|
||||
if (tmp->vcpu_id < hbase)
|
||||
|
@ -64,6 +68,7 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT);
|
||||
if (ret < 0)
|
||||
break;
|
||||
kvm_riscv_vcpu_pmu_incr_fw(tmp, SBI_PMU_FW_IPI_RCVD);
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
@ -76,10 +81,8 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi = {
|
|||
};
|
||||
|
||||
static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *utrap, bool *exit)
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
int ret = 0;
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
unsigned long hmask = cp->a0;
|
||||
unsigned long hbase = cp->a1;
|
||||
|
@ -88,6 +91,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
|
|||
switch (funcid) {
|
||||
case SBI_EXT_RFENCE_REMOTE_FENCE_I:
|
||||
kvm_riscv_fence_i(vcpu->kvm, hbase, hmask);
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
|
||||
break;
|
||||
case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
|
||||
if (cp->a2 == 0 && cp->a3 == 0)
|
||||
|
@ -95,6 +99,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
|
|||
else
|
||||
kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
|
||||
cp->a2, cp->a3, PAGE_SHIFT);
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
|
||||
break;
|
||||
case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
|
||||
if (cp->a2 == 0 && cp->a3 == 0)
|
||||
|
@ -105,6 +110,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
|
|||
hbase, hmask,
|
||||
cp->a2, cp->a3,
|
||||
PAGE_SHIFT, cp->a4);
|
||||
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
|
||||
break;
|
||||
case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
|
||||
case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID:
|
||||
|
@ -116,10 +122,10 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
|
|||
*/
|
||||
break;
|
||||
default:
|
||||
ret = -EOPNOTSUPP;
|
||||
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = {
|
||||
|
@ -130,14 +136,12 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = {
|
|||
|
||||
static int kvm_sbi_ext_srst_handler(struct kvm_vcpu *vcpu,
|
||||
struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *utrap, bool *exit)
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
unsigned long funcid = cp->a6;
|
||||
u32 reason = cp->a1;
|
||||
u32 type = cp->a0;
|
||||
int ret = 0;
|
||||
|
||||
switch (funcid) {
|
||||
case SBI_EXT_SRST_RESET:
|
||||
|
@ -146,24 +150,24 @@ static int kvm_sbi_ext_srst_handler(struct kvm_vcpu *vcpu,
|
|||
kvm_riscv_vcpu_sbi_system_reset(vcpu, run,
|
||||
KVM_SYSTEM_EVENT_SHUTDOWN,
|
||||
reason);
|
||||
*exit = true;
|
||||
retdata->uexit = true;
|
||||
break;
|
||||
case SBI_SRST_RESET_TYPE_COLD_REBOOT:
|
||||
case SBI_SRST_RESET_TYPE_WARM_REBOOT:
|
||||
kvm_riscv_vcpu_sbi_system_reset(vcpu, run,
|
||||
KVM_SYSTEM_EVENT_RESET,
|
||||
reason);
|
||||
*exit = true;
|
||||
retdata->uexit = true;
|
||||
break;
|
||||
default:
|
||||
ret = -EOPNOTSUPP;
|
||||
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
ret = -EOPNOTSUPP;
|
||||
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst = {
|
||||
|
|
|
@ -14,9 +14,7 @@
|
|||
#include <asm/kvm_vcpu_sbi.h>
|
||||
|
||||
static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
unsigned long *out_val,
|
||||
struct kvm_cpu_trap *utrap,
|
||||
bool *exit)
|
||||
struct kvm_vcpu_sbi_return *retdata)
|
||||
{
|
||||
ulong hmask;
|
||||
int i, ret = 0;
|
||||
|
@ -24,6 +22,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
struct kvm_vcpu *rvcpu;
|
||||
struct kvm *kvm = vcpu->kvm;
|
||||
struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
|
||||
struct kvm_cpu_trap *utrap = retdata->utrap;
|
||||
|
||||
switch (cp->a7) {
|
||||
case SBI_EXT_0_1_CONSOLE_GETCHAR:
|
||||
|
@ -33,7 +32,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
* handled in kernel so we forward these to user-space
|
||||
*/
|
||||
kvm_riscv_vcpu_sbi_forward(vcpu, run);
|
||||
*exit = true;
|
||||
retdata->uexit = true;
|
||||
break;
|
||||
case SBI_EXT_0_1_SET_TIMER:
|
||||
#if __riscv_xlen == 32
|
||||
|
@ -48,8 +47,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
break;
|
||||
case SBI_EXT_0_1_SEND_IPI:
|
||||
if (cp->a0)
|
||||
hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
|
||||
utrap);
|
||||
hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, utrap);
|
||||
else
|
||||
hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
|
||||
if (utrap->scause)
|
||||
|
@ -65,14 +63,13 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
case SBI_EXT_0_1_SHUTDOWN:
|
||||
kvm_riscv_vcpu_sbi_system_reset(vcpu, run,
|
||||
KVM_SYSTEM_EVENT_SHUTDOWN, 0);
|
||||
*exit = true;
|
||||
retdata->uexit = true;
|
||||
break;
|
||||
case SBI_EXT_0_1_REMOTE_FENCE_I:
|
||||
case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
|
||||
case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
|
||||
if (cp->a0)
|
||||
hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
|
||||
utrap);
|
||||
hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, utrap);
|
||||
else
|
||||
hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
|
||||
if (utrap->scause)
|
||||
|
@ -103,7 +100,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
}
|
||||
break;
|
||||
default:
|
||||
ret = -EINVAL;
|
||||
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
|
||||
break;
|
||||
}
|
||||
|
||||
|
|
|
@ -17,10 +17,10 @@
|
|||
|
||||
static unsigned long vmid_version = 1;
|
||||
static unsigned long vmid_next;
|
||||
static unsigned long vmid_bits;
|
||||
static unsigned long vmid_bits __ro_after_init;
|
||||
static DEFINE_SPINLOCK(vmid_lock);
|
||||
|
||||
void kvm_riscv_gstage_vmid_detect(void)
|
||||
void __init kvm_riscv_gstage_vmid_detect(void)
|
||||
{
|
||||
unsigned long old;
|
||||
|
||||
|
|
|
@ -1031,7 +1031,6 @@ extern char sie_exit;
|
|||
extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
|
||||
extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
|
||||
|
||||
static inline void kvm_arch_hardware_disable(void) {}
|
||||
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
|
||||
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
|
||||
static inline void kvm_arch_free_memslot(struct kvm *kvm,
|
||||
|
|
|
@ -1161,6 +1161,115 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
|
|||
return rc;
|
||||
}
|
||||
|
||||
/**
|
||||
* cmpxchg_guest_abs_with_key() - Perform cmpxchg on guest absolute address.
|
||||
* @kvm: Virtual machine instance.
|
||||
* @gpa: Absolute guest address of the location to be changed.
|
||||
* @len: Operand length of the cmpxchg, required: 1 <= len <= 16. Providing a
|
||||
* non power of two will result in failure.
|
||||
* @old_addr: Pointer to old value. If the location at @gpa contains this value,
|
||||
* the exchange will succeed. After calling cmpxchg_guest_abs_with_key()
|
||||
* *@old_addr contains the value at @gpa before the attempt to
|
||||
* exchange the value.
|
||||
* @new: The value to place at @gpa.
|
||||
* @access_key: The access key to use for the guest access.
|
||||
* @success: output value indicating if an exchange occurred.
|
||||
*
|
||||
* Atomically exchange the value at @gpa with @new, if it contains *@old_addr.
|
||||
* Honors storage keys.
|
||||
*
|
||||
* Return: * 0: successful exchange
|
||||
* * >0: a program interruption code indicating the reason cmpxchg could
|
||||
* not be attempted
|
||||
* * -EINVAL: address misaligned or len not power of two
|
||||
* * -EAGAIN: transient failure (len 1 or 2)
|
||||
* * -EOPNOTSUPP: read-only memslot (should never occur)
|
||||
*/
|
||||
int cmpxchg_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, int len,
|
||||
__uint128_t *old_addr, __uint128_t new,
|
||||
u8 access_key, bool *success)
|
||||
{
|
||||
gfn_t gfn = gpa_to_gfn(gpa);
|
||||
struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
|
||||
bool writable;
|
||||
hva_t hva;
|
||||
int ret;
|
||||
|
||||
if (!IS_ALIGNED(gpa, len))
|
||||
return -EINVAL;
|
||||
|
||||
hva = gfn_to_hva_memslot_prot(slot, gfn, &writable);
|
||||
if (kvm_is_error_hva(hva))
|
||||
return PGM_ADDRESSING;
|
||||
/*
|
||||
* Check if it's a read-only memslot, even though that cannot occur
|
||||
* since those are unsupported.
|
||||
* Don't try to actually handle that case.
|
||||
*/
|
||||
if (!writable)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
hva += offset_in_page(gpa);
|
||||
/*
|
||||
* The cmpxchg_user_key macro depends on the type of "old", so we need
|
||||
* a case for each valid length and get some code duplication as long
|
||||
* as we don't introduce a new macro.
|
||||
*/
|
||||
switch (len) {
|
||||
case 1: {
|
||||
u8 old;
|
||||
|
||||
ret = cmpxchg_user_key((u8 __user *)hva, &old, *old_addr, new, access_key);
|
||||
*success = !ret && old == *old_addr;
|
||||
*old_addr = old;
|
||||
break;
|
||||
}
|
||||
case 2: {
|
||||
u16 old;
|
||||
|
||||
ret = cmpxchg_user_key((u16 __user *)hva, &old, *old_addr, new, access_key);
|
||||
*success = !ret && old == *old_addr;
|
||||
*old_addr = old;
|
||||
break;
|
||||
}
|
||||
case 4: {
|
||||
u32 old;
|
||||
|
||||
ret = cmpxchg_user_key((u32 __user *)hva, &old, *old_addr, new, access_key);
|
||||
*success = !ret && old == *old_addr;
|
||||
*old_addr = old;
|
||||
break;
|
||||
}
|
||||
case 8: {
|
||||
u64 old;
|
||||
|
||||
ret = cmpxchg_user_key((u64 __user *)hva, &old, *old_addr, new, access_key);
|
||||
*success = !ret && old == *old_addr;
|
||||
*old_addr = old;
|
||||
break;
|
||||
}
|
||||
case 16: {
|
||||
__uint128_t old;
|
||||
|
||||
ret = cmpxchg_user_key((__uint128_t __user *)hva, &old, *old_addr, new, access_key);
|
||||
*success = !ret && old == *old_addr;
|
||||
*old_addr = old;
|
||||
break;
|
||||
}
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
if (*success)
|
||||
mark_page_dirty_in_slot(kvm, slot, gfn);
|
||||
/*
|
||||
* Assume that the fault is caused by protection, either key protection
|
||||
* or user page write protection.
|
||||
*/
|
||||
if (ret == -EFAULT)
|
||||
ret = PGM_PROTECTION;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* guest_translate_address_with_key - translate guest logical into guest absolute address
|
||||
* @vcpu: virtual cpu
|
||||
|
|
|
@ -206,6 +206,9 @@ int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
|
|||
int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
|
||||
void *data, unsigned long len, enum gacc_mode mode);
|
||||
|
||||
int cmpxchg_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, int len, __uint128_t *old,
|
||||
__uint128_t new, u8 access_key, bool *success);
|
||||
|
||||
/**
|
||||
* write_guest_with_key - copy data from kernel space to guest space
|
||||
* @vcpu: virtual cpu
|
||||
|
|
|
@ -3103,9 +3103,9 @@ static enum hrtimer_restart gisa_vcpu_kicker(struct hrtimer *timer)
|
|||
static void process_gib_alert_list(void)
|
||||
{
|
||||
struct kvm_s390_gisa_interrupt *gi;
|
||||
u32 final, gisa_phys, origin = 0UL;
|
||||
struct kvm_s390_gisa *gisa;
|
||||
struct kvm *kvm;
|
||||
u32 final, origin = 0UL;
|
||||
|
||||
do {
|
||||
/*
|
||||
|
@ -3131,9 +3131,10 @@ static void process_gib_alert_list(void)
|
|||
* interruptions asap.
|
||||
*/
|
||||
while (origin & GISA_ADDR_MASK) {
|
||||
gisa = (struct kvm_s390_gisa *)(u64)origin;
|
||||
gisa_phys = origin;
|
||||
gisa = phys_to_virt(gisa_phys);
|
||||
origin = gisa->next_alert;
|
||||
gisa->next_alert = (u32)(u64)gisa;
|
||||
gisa->next_alert = gisa_phys;
|
||||
kvm = container_of(gisa, struct sie_page2, gisa)->kvm;
|
||||
gi = &kvm->arch.gisa_int;
|
||||
if (hrtimer_active(&gi->timer))
|
||||
|
@ -3415,8 +3416,9 @@ void kvm_s390_gib_destroy(void)
|
|||
gib = NULL;
|
||||
}
|
||||
|
||||
int kvm_s390_gib_init(u8 nisc)
|
||||
int __init kvm_s390_gib_init(u8 nisc)
|
||||
{
|
||||
u32 gib_origin;
|
||||
int rc = 0;
|
||||
|
||||
if (!css_general_characteristics.aiv) {
|
||||
|
@ -3438,7 +3440,8 @@ int kvm_s390_gib_init(u8 nisc)
|
|||
}
|
||||
|
||||
gib->nisc = nisc;
|
||||
if (chsc_sgib((u32)(u64)gib)) {
|
||||
gib_origin = virt_to_phys(gib);
|
||||
if (chsc_sgib(gib_origin)) {
|
||||
pr_err("Associating the GIB with the AIV facility failed\n");
|
||||
free_page((unsigned long)gib);
|
||||
gib = NULL;
|
||||
|
|
|
@ -256,17 +256,6 @@ debug_info_t *kvm_s390_dbf;
|
|||
debug_info_t *kvm_s390_dbf_uv;
|
||||
|
||||
/* Section: not file related */
|
||||
int kvm_arch_hardware_enable(void)
|
||||
{
|
||||
/* every s390 is virtualization enabled ;-) */
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_arch_check_processor_compat(void *opaque)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* forward declarations */
|
||||
static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
|
||||
unsigned long end);
|
||||
|
@ -329,25 +318,6 @@ static struct notifier_block kvm_clock_notifier = {
|
|||
.notifier_call = kvm_clock_sync,
|
||||
};
|
||||
|
||||
int kvm_arch_hardware_setup(void *opaque)
|
||||
{
|
||||
gmap_notifier.notifier_call = kvm_gmap_notifier;
|
||||
gmap_register_pte_notifier(&gmap_notifier);
|
||||
vsie_gmap_notifier.notifier_call = kvm_s390_vsie_gmap_notifier;
|
||||
gmap_register_pte_notifier(&vsie_gmap_notifier);
|
||||
atomic_notifier_chain_register(&s390_epoch_delta_notifier,
|
||||
&kvm_clock_notifier);
|
||||
return 0;
|
||||
}
|
||||
|
||||
void kvm_arch_hardware_unsetup(void)
|
||||
{
|
||||
gmap_unregister_pte_notifier(&gmap_notifier);
|
||||
gmap_unregister_pte_notifier(&vsie_gmap_notifier);
|
||||
atomic_notifier_chain_unregister(&s390_epoch_delta_notifier,
|
||||
&kvm_clock_notifier);
|
||||
}
|
||||
|
||||
static void allow_cpu_feat(unsigned long nr)
|
||||
{
|
||||
set_bit_inv(nr, kvm_s390_available_cpu_feat);
|
||||
|
@ -385,7 +355,7 @@ static __always_inline void __insn32_query(unsigned int opcode, u8 *query)
|
|||
#define INSN_SORTL 0xb938
|
||||
#define INSN_DFLTCC 0xb939
|
||||
|
||||
static void kvm_s390_cpu_feat_init(void)
|
||||
static void __init kvm_s390_cpu_feat_init(void)
|
||||
{
|
||||
int i;
|
||||
|
||||
|
@ -488,7 +458,7 @@ static void kvm_s390_cpu_feat_init(void)
|
|||
*/
|
||||
}
|
||||
|
||||
int kvm_arch_init(void *opaque)
|
||||
static int __init __kvm_s390_init(void)
|
||||
{
|
||||
int rc = -ENOMEM;
|
||||
|
||||
|
@ -498,11 +468,11 @@ int kvm_arch_init(void *opaque)
|
|||
|
||||
kvm_s390_dbf_uv = debug_register("kvm-uv", 32, 1, 7 * sizeof(long));
|
||||
if (!kvm_s390_dbf_uv)
|
||||
goto out;
|
||||
goto err_kvm_uv;
|
||||
|
||||
if (debug_register_view(kvm_s390_dbf, &debug_sprintf_view) ||
|
||||
debug_register_view(kvm_s390_dbf_uv, &debug_sprintf_view))
|
||||
goto out;
|
||||
goto err_debug_view;
|
||||
|
||||
kvm_s390_cpu_feat_init();
|
||||
|
||||
|
@ -510,30 +480,49 @@ int kvm_arch_init(void *opaque)
|
|||
rc = kvm_register_device_ops(&kvm_flic_ops, KVM_DEV_TYPE_FLIC);
|
||||
if (rc) {
|
||||
pr_err("A FLIC registration call failed with rc=%d\n", rc);
|
||||
goto out;
|
||||
goto err_flic;
|
||||
}
|
||||
|
||||
if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM)) {
|
||||
rc = kvm_s390_pci_init();
|
||||
if (rc) {
|
||||
pr_err("Unable to allocate AIFT for PCI\n");
|
||||
goto out;
|
||||
goto err_pci;
|
||||
}
|
||||
}
|
||||
|
||||
rc = kvm_s390_gib_init(GAL_ISC);
|
||||
if (rc)
|
||||
goto out;
|
||||
goto err_gib;
|
||||
|
||||
gmap_notifier.notifier_call = kvm_gmap_notifier;
|
||||
gmap_register_pte_notifier(&gmap_notifier);
|
||||
vsie_gmap_notifier.notifier_call = kvm_s390_vsie_gmap_notifier;
|
||||
gmap_register_pte_notifier(&vsie_gmap_notifier);
|
||||
atomic_notifier_chain_register(&s390_epoch_delta_notifier,
|
||||
&kvm_clock_notifier);
|
||||
|
||||
return 0;
|
||||
|
||||
out:
|
||||
kvm_arch_exit();
|
||||
err_gib:
|
||||
if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM))
|
||||
kvm_s390_pci_exit();
|
||||
err_pci:
|
||||
err_flic:
|
||||
err_debug_view:
|
||||
debug_unregister(kvm_s390_dbf_uv);
|
||||
err_kvm_uv:
|
||||
debug_unregister(kvm_s390_dbf);
|
||||
return rc;
|
||||
}
|
||||
|
||||
void kvm_arch_exit(void)
|
||||
static void __kvm_s390_exit(void)
|
||||
{
|
||||
gmap_unregister_pte_notifier(&gmap_notifier);
|
||||
gmap_unregister_pte_notifier(&vsie_gmap_notifier);
|
||||
atomic_notifier_chain_unregister(&s390_epoch_delta_notifier,
|
||||
&kvm_clock_notifier);
|
||||
|
||||
kvm_s390_gib_destroy();
|
||||
if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM))
|
||||
kvm_s390_pci_exit();
|
||||
|
@ -584,7 +573,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
|
|||
case KVM_CAP_S390_VCPU_RESETS:
|
||||
case KVM_CAP_SET_GUEST_DEBUG:
|
||||
case KVM_CAP_S390_DIAG318:
|
||||
case KVM_CAP_S390_MEM_OP_EXTENSION:
|
||||
r = 1;
|
||||
break;
|
||||
case KVM_CAP_SET_GUEST_DEBUG2:
|
||||
|
@ -598,6 +586,15 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
|
|||
case KVM_CAP_S390_MEM_OP:
|
||||
r = MEM_OP_MAX_SIZE;
|
||||
break;
|
||||
case KVM_CAP_S390_MEM_OP_EXTENSION:
|
||||
/*
|
||||
* Flag bits indicating which extensions are supported.
|
||||
* If r > 0, the base extension must also be supported/indicated,
|
||||
* in order to maintain backwards compatibility.
|
||||
*/
|
||||
r = KVM_S390_MEMOP_EXTENSION_CAP_BASE |
|
||||
KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG;
|
||||
break;
|
||||
case KVM_CAP_NR_VCPUS:
|
||||
case KVM_CAP_MAX_VCPUS:
|
||||
case KVM_CAP_MAX_VCPU_ID:
|
||||
|
@ -2764,41 +2761,33 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
|
|||
return r;
|
||||
}
|
||||
|
||||
static bool access_key_invalid(u8 access_key)
|
||||
static int mem_op_validate_common(struct kvm_s390_mem_op *mop, u64 supported_flags)
|
||||
{
|
||||
-	return access_key > 0xf;
-}
-
-static int kvm_s390_vm_mem_op(struct kvm *kvm, struct kvm_s390_mem_op *mop)
-{
-	void __user *uaddr = (void __user *)mop->buf;
-	u64 supported_flags;
-	void *tmpbuf = NULL;
-	int r, srcu_idx;
-
-	supported_flags = KVM_S390_MEMOP_F_SKEY_PROTECTION
-			  | KVM_S390_MEMOP_F_CHECK_ONLY;
 	if (mop->flags & ~supported_flags || !mop->size)
 		return -EINVAL;
 	if (mop->size > MEM_OP_MAX_SIZE)
 		return -E2BIG;
-	/*
-	 * This is technically a heuristic only, if the kvm->lock is not
-	 * taken, it is not guaranteed that the vm is/remains non-protected.
-	 * This is ok from a kernel perspective, wrongdoing is detected
-	 * on the access, -EFAULT is returned and the vm may crash the
-	 * next time it accesses the memory in question.
-	 * There is no sane usecase to do switching and a memop on two
-	 * different CPUs at the same time.
-	 */
-	if (kvm_s390_pv_get_handle(kvm))
-		return -EINVAL;
 	if (mop->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) {
-		if (access_key_invalid(mop->key))
+		if (mop->key > 0xf)
 			return -EINVAL;
 	} else {
 		mop->key = 0;
 	}
+	return 0;
+}
+
+static int kvm_s390_vm_mem_op_abs(struct kvm *kvm, struct kvm_s390_mem_op *mop)
+{
+	void __user *uaddr = (void __user *)mop->buf;
+	enum gacc_mode acc_mode;
+	void *tmpbuf = NULL;
+	int r, srcu_idx;
+
+	r = mem_op_validate_common(mop, KVM_S390_MEMOP_F_SKEY_PROTECTION |
+				   KVM_S390_MEMOP_F_CHECK_ONLY);
+	if (r)
+		return r;
+
 	if (!(mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY)) {
 		tmpbuf = vmalloc(mop->size);
 		if (!tmpbuf)

@@ -2812,35 +2801,25 @@ static int kvm_s390_vm_mem_op(struct kvm *kvm, struct kvm_s390_mem_op *mop)
 		goto out_unlock;
 	}
 
-	switch (mop->op) {
-	case KVM_S390_MEMOP_ABSOLUTE_READ: {
-		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
-			r = check_gpa_range(kvm, mop->gaddr, mop->size, GACC_FETCH, mop->key);
-		} else {
-			r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
-						      mop->size, GACC_FETCH, mop->key);
-			if (r == 0) {
-				if (copy_to_user(uaddr, tmpbuf, mop->size))
-					r = -EFAULT;
-			}
-		}
-		break;
-	}
-	case KVM_S390_MEMOP_ABSOLUTE_WRITE: {
-		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
-			r = check_gpa_range(kvm, mop->gaddr, mop->size, GACC_STORE, mop->key);
-		} else {
-			if (copy_from_user(tmpbuf, uaddr, mop->size)) {
-				r = -EFAULT;
-				break;
-			}
-			r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
-						      mop->size, GACC_STORE, mop->key);
-		}
-		break;
-	}
-	default:
-		r = -EINVAL;
+	acc_mode = mop->op == KVM_S390_MEMOP_ABSOLUTE_READ ? GACC_FETCH : GACC_STORE;
+	if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
+		r = check_gpa_range(kvm, mop->gaddr, mop->size, acc_mode, mop->key);
+		goto out_unlock;
+	}
+	if (acc_mode == GACC_FETCH) {
+		r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
+					      mop->size, GACC_FETCH, mop->key);
+		if (r)
+			goto out_unlock;
+		if (copy_to_user(uaddr, tmpbuf, mop->size))
+			r = -EFAULT;
+	} else {
+		if (copy_from_user(tmpbuf, uaddr, mop->size)) {
+			r = -EFAULT;
+			goto out_unlock;
+		}
+		r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
+					      mop->size, GACC_STORE, mop->key);
 	}
 
 out_unlock:

@@ -2850,6 +2829,75 @@ out_unlock:
 	return r;
 }
 
+static int kvm_s390_vm_mem_op_cmpxchg(struct kvm *kvm, struct kvm_s390_mem_op *mop)
+{
+	void __user *uaddr = (void __user *)mop->buf;
+	void __user *old_addr = (void __user *)mop->old_addr;
+	union {
+		__uint128_t quad;
+		char raw[sizeof(__uint128_t)];
+	} old = { .quad = 0}, new = { .quad = 0 };
+	unsigned int off_in_quad = sizeof(new) - mop->size;
+	int r, srcu_idx;
+	bool success;
+
+	r = mem_op_validate_common(mop, KVM_S390_MEMOP_F_SKEY_PROTECTION);
+	if (r)
+		return r;
+	/*
+	 * This validates off_in_quad. Checking that size is a power
+	 * of two is not necessary, as cmpxchg_guest_abs_with_key
+	 * takes care of that
+	 */
+	if (mop->size > sizeof(new))
+		return -EINVAL;
+	if (copy_from_user(&new.raw[off_in_quad], uaddr, mop->size))
+		return -EFAULT;
+	if (copy_from_user(&old.raw[off_in_quad], old_addr, mop->size))
+		return -EFAULT;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+
+	if (kvm_is_error_gpa(kvm, mop->gaddr)) {
+		r = PGM_ADDRESSING;
+		goto out_unlock;
+	}
+
+	r = cmpxchg_guest_abs_with_key(kvm, mop->gaddr, mop->size, &old.quad,
+				       new.quad, mop->key, &success);
+	if (!success && copy_to_user(old_addr, &old.raw[off_in_quad], mop->size))
+		r = -EFAULT;
+
+out_unlock:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return r;
+}
+
+static int kvm_s390_vm_mem_op(struct kvm *kvm, struct kvm_s390_mem_op *mop)
+{
+	/*
+	 * This is technically a heuristic only, if the kvm->lock is not
+	 * taken, it is not guaranteed that the vm is/remains non-protected.
+	 * This is ok from a kernel perspective, wrongdoing is detected
+	 * on the access, -EFAULT is returned and the vm may crash the
+	 * next time it accesses the memory in question.
+	 * There is no sane usecase to do switching and a memop on two
+	 * different CPUs at the same time.
+	 */
+	if (kvm_s390_pv_get_handle(kvm))
+		return -EINVAL;
+
+	switch (mop->op) {
+	case KVM_S390_MEMOP_ABSOLUTE_READ:
+	case KVM_S390_MEMOP_ABSOLUTE_WRITE:
+		return kvm_s390_vm_mem_op_abs(kvm, mop);
+	case KVM_S390_MEMOP_ABSOLUTE_CMPXCHG:
+		return kvm_s390_vm_mem_op_cmpxchg(kvm, mop);
+	default:
+		return -EINVAL;
+	}
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
 		       unsigned int ioctl, unsigned long arg)
 {
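For orientation only, here is a rough sketch of how userspace could drive the new KVM_S390_MEMOP_ABSOLUTE_CMPXCHG operation wired up above. It is not part of this series: the vm_fd setup, the KVM_CAP_S390_MEM_OP_EXTENSION capability check that advertises cmpxchg support, and the exact struct kvm_s390_mem_op layout are assumptions taken from the uapi headers this merge updates.

/* Hypothetical userspace sketch (field names assumed from <linux/kvm.h>):
 * atomically replace 8 bytes of guest absolute storage iff they still
 * contain the expected value.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int guest_cmpxchg_u64(int vm_fd, uint64_t gaddr, uint64_t *expected, uint64_t desired)
{
	struct kvm_s390_mem_op mop = {
		.op       = KVM_S390_MEMOP_ABSOLUTE_CMPXCHG,
		.gaddr    = gaddr,
		.size     = sizeof(desired),
		.buf      = (uint64_t)(uintptr_t)&desired,  /* new value to store */
		.old_addr = (uint64_t)(uintptr_t)expected,  /* expected old value */
		.flags    = 0, /* or KVM_S390_MEMOP_F_SKEY_PROTECTION plus .key */
	};

	/*
	 * A zero return only means the ioctl succeeded; if the compare
	 * failed, the kernel writes the actual old value back through
	 * old_addr (the copy_to_user() in kvm_s390_vm_mem_op_cmpxchg
	 * above), so the caller detects a lost race by re-checking
	 * *expected afterwards.
	 */
	return ioctl(vm_fd, KVM_S390_MEM_OP, &mop);
}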
@@ -5249,62 +5297,54 @@ static long kvm_s390_vcpu_mem_op(struct kvm_vcpu *vcpu,
 				 struct kvm_s390_mem_op *mop)
 {
 	void __user *uaddr = (void __user *)mop->buf;
+	enum gacc_mode acc_mode;
 	void *tmpbuf = NULL;
-	int r = 0;
-	const u64 supported_flags = KVM_S390_MEMOP_F_INJECT_EXCEPTION
-				    | KVM_S390_MEMOP_F_CHECK_ONLY
-				    | KVM_S390_MEMOP_F_SKEY_PROTECTION;
+	int r;
 
-	if (mop->flags & ~supported_flags || mop->ar >= NUM_ACRS || !mop->size)
+	r = mem_op_validate_common(mop, KVM_S390_MEMOP_F_INJECT_EXCEPTION |
+					KVM_S390_MEMOP_F_CHECK_ONLY |
+					KVM_S390_MEMOP_F_SKEY_PROTECTION);
+	if (r)
+		return r;
+	if (mop->ar >= NUM_ACRS)
 		return -EINVAL;
-	if (mop->size > MEM_OP_MAX_SIZE)
-		return -E2BIG;
 	if (kvm_s390_pv_cpu_is_protected(vcpu))
 		return -EINVAL;
-	if (mop->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) {
-		if (access_key_invalid(mop->key))
-			return -EINVAL;
-	} else {
-		mop->key = 0;
-	}
 	if (!(mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY)) {
 		tmpbuf = vmalloc(mop->size);
 		if (!tmpbuf)
 			return -ENOMEM;
 	}
 
-	switch (mop->op) {
-	case KVM_S390_MEMOP_LOGICAL_READ:
-		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
-			r = check_gva_range(vcpu, mop->gaddr, mop->ar, mop->size,
-					    GACC_FETCH, mop->key);
-			break;
-		}
+	acc_mode = mop->op == KVM_S390_MEMOP_LOGICAL_READ ? GACC_FETCH : GACC_STORE;
+	if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
+		r = check_gva_range(vcpu, mop->gaddr, mop->ar, mop->size,
+				    acc_mode, mop->key);
+		goto out_inject;
+	}
+	if (acc_mode == GACC_FETCH) {
 		r = read_guest_with_key(vcpu, mop->gaddr, mop->ar, tmpbuf,
 					mop->size, mop->key);
-		if (r == 0) {
-			if (copy_to_user(uaddr, tmpbuf, mop->size))
-				r = -EFAULT;
-		}
-		break;
-	case KVM_S390_MEMOP_LOGICAL_WRITE:
-		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
-			r = check_gva_range(vcpu, mop->gaddr, mop->ar, mop->size,
-					    GACC_STORE, mop->key);
-			break;
+		if (r)
+			goto out_inject;
+		if (copy_to_user(uaddr, tmpbuf, mop->size)) {
+			r = -EFAULT;
+			goto out_free;
 		}
+	} else {
 		if (copy_from_user(tmpbuf, uaddr, mop->size)) {
 			r = -EFAULT;
-			break;
+			goto out_free;
 		}
 		r = write_guest_with_key(vcpu, mop->gaddr, mop->ar, tmpbuf,
 					 mop->size, mop->key);
-		break;
 	}
 
+out_inject:
 	if (r > 0 && (mop->flags & KVM_S390_MEMOP_F_INJECT_EXCEPTION) != 0)
 		kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm);
 
+out_free:
 	vfree(tmpbuf);
 	return r;
 }
@@ -5633,23 +5673,40 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	if (kvm_s390_pv_get_handle(kvm))
 		return -EINVAL;
 
-	if (change == KVM_MR_DELETE || change == KVM_MR_FLAGS_ONLY)
-		return 0;
+	if (change != KVM_MR_DELETE && change != KVM_MR_FLAGS_ONLY) {
+		/*
+		 * A few sanity checks. We can have memory slots which have to be
+		 * located/ended at a segment boundary (1MB). The memory in userland is
+		 * ok to be fragmented into various different vmas. It is okay to mmap()
+		 * and munmap() stuff in this slot after doing this call at any time
+		 */
 
-	/* A few sanity checks. We can have memory slots which have to be
-	   located/ended at a segment boundary (1MB). The memory in userland is
-	   ok to be fragmented into various different vmas. It is okay to mmap()
-	   and munmap() stuff in this slot after doing this call at any time */
+		if (new->userspace_addr & 0xffffful)
+			return -EINVAL;
 
-	if (new->userspace_addr & 0xffffful)
-		return -EINVAL;
+		size = new->npages * PAGE_SIZE;
+		if (size & 0xffffful)
+			return -EINVAL;
 
-	size = new->npages * PAGE_SIZE;
-	if (size & 0xffffful)
-		return -EINVAL;
+		if ((new->base_gfn * PAGE_SIZE) + size > kvm->arch.mem_limit)
+			return -EINVAL;
+	}
 
-	if ((new->base_gfn * PAGE_SIZE) + size > kvm->arch.mem_limit)
-		return -EINVAL;
+	if (!kvm->arch.migration_mode)
+		return 0;
+
+	/*
+	 * Turn off migration mode when:
+	 * - userspace creates a new memslot with dirty logging off,
+	 * - userspace modifies an existing memslot (MOVE or FLAGS_ONLY) and
+	 *   dirty logging is turned off.
+	 * Migration mode expects dirty page logging being enabled to store
+	 * its dirty bitmap.
+	 */
+	if (change != KVM_MR_DELETE &&
+	    !(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		WARN(kvm_s390_vm_stop_migration(kvm),
+		     "Failed to stop migration mode");
 
 	return 0;
 }
@@ -5696,7 +5753,7 @@ static inline unsigned long nonhyp_mask(int i)
 
 static int __init kvm_s390_init(void)
 {
-	int i;
+	int i, r;
 
 	if (!sclp.has_sief2) {
 		pr_info("SIE is not available\n");

@@ -5712,12 +5769,23 @@
 		kvm_s390_fac_base[i] |=
 			stfle_fac_list[i] & nonhyp_mask(i);
 
-	return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+	r = __kvm_s390_init();
+	if (r)
+		return r;
+
+	r = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+	if (r) {
+		__kvm_s390_exit();
+		return r;
+	}
+	return 0;
 }
 
 static void __exit kvm_s390_exit(void)
 {
 	kvm_exit();
+
+	__kvm_s390_exit();
 }
 
 module_init(kvm_s390_init);
@@ -470,7 +470,7 @@ void kvm_s390_gisa_clear(struct kvm *kvm);
 void kvm_s390_gisa_destroy(struct kvm *kvm);
 void kvm_s390_gisa_disable(struct kvm *kvm);
 void kvm_s390_gisa_enable(struct kvm *kvm);
-int kvm_s390_gib_init(u8 nisc);
+int __init kvm_s390_gib_init(u8 nisc);
 void kvm_s390_gib_destroy(void);
 
 /* implemented in guestdbg.c */
@@ -672,7 +672,7 @@ out:
 	return r;
 }
 
-int kvm_s390_pci_init(void)
+int __init kvm_s390_pci_init(void)
 {
 	zpci_kvm_hook.kvm_register = kvm_s390_pci_register_kvm;
 	zpci_kvm_hook.kvm_unregister = kvm_s390_pci_unregister_kvm;
@@ -60,7 +60,7 @@ void kvm_s390_pci_clear_list(struct kvm *kvm);
 
 int kvm_s390_pci_zpci_op(struct kvm *kvm, struct kvm_s390_zpci_op *args);
 
-int kvm_s390_pci_init(void);
+int __init kvm_s390_pci_init(void);
 void kvm_s390_pci_exit(void);
 
 static inline bool kvm_s390_pci_interp_allowed(void)
@@ -6495,6 +6495,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.pebs_constraints = intel_spr_pebs_event_constraints;
 		x86_pmu.extra_regs = intel_spr_extra_regs;
 		x86_pmu.limit_period = spr_limit_period;
+		x86_pmu.pebs_ept = 1;
 		x86_pmu.pebs_aliases = NULL;
 		x86_pmu.pebs_prec_dist = true;
 		x86_pmu.pebs_block = true;
@@ -2366,8 +2366,10 @@ void __init intel_ds_init(void)
 			x86_pmu.large_pebs_flags |= PERF_SAMPLE_TIME;
 			break;
 
-		case 4:
 		case 5:
+			x86_pmu.pebs_ept = 1;
+			fallthrough;
+		case 4:
 			x86_pmu.drain_pebs = intel_pmu_drain_pebs_icl;
 			x86_pmu.pebs_record_size = sizeof(struct pebs_basic);
 			if (x86_pmu.intel_cap.pebs_baseline) {
@@ -315,6 +315,9 @@
 #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */
 #define X86_FEATURE_CMPCCXADD (12*32+ 7) /* "" CMPccXADD instructions */
 #define X86_FEATURE_ARCH_PERFMON_EXT (12*32+ 8) /* "" Intel Architectural PerfMon Extension */
+#define X86_FEATURE_FZRM (12*32+10) /* "" Fast zero-length REP MOVSB */
+#define X86_FEATURE_FSRS (12*32+11) /* "" Fast short REP STOSB */
+#define X86_FEATURE_FSRC (12*32+12) /* "" Fast short REP {CMPSB,SCASB} */
 #define X86_FEATURE_LKGS (12*32+18) /* "" Load "kernel" (userspace) GS */
 #define X86_FEATURE_AMX_FP16 (12*32+21) /* "" AMX fp16 Support */
 #define X86_FEATURE_AVX_IFMA (12*32+23) /* "" Support for VPMADD52[H,L]UQ */
@@ -269,6 +269,9 @@ enum hv_isolation_type {
 /* TSC invariant control */
 #define HV_X64_MSR_TSC_INVARIANT_CONTROL 0x40000118
 
+/* HV_X64_MSR_TSC_INVARIANT_CONTROL bits */
+#define HV_EXPOSE_INVARIANT_TSC BIT_ULL(0)
+
 /* Register name aliases for temporary compatibility */
 #define HV_X64_MSR_STIMER0_COUNT HV_REGISTER_STIMER0_COUNT
 #define HV_X64_MSR_STIMER0_CONFIG HV_REGISTER_STIMER0_CONFIG
@@ -582,18 +582,14 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_MC, xenpv_exc_machine_check);
 
 /* NMI */
 
-#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
+#if IS_ENABLED(CONFIG_KVM_INTEL)
 /*
- * Special NOIST entry point for VMX which invokes this on the kernel
- * stack. asm_exc_nmi() requires an IST to work correctly vs. the NMI
- * 'executing' marker.
- *
- * On 32bit this just uses the regular NMI entry point because 32-bit does
- * not have ISTs.
+ * Special entry point for VMX which invokes this on the kernel stack, even for
+ * 64-bit, i.e. without using an IST. asm_exc_nmi() requires an IST to work
+ * correctly vs. the NMI 'executing' marker. Used for 32-bit kernels as well
+ * to avoid more ifdeffery.
  */
-DECLARE_IDTENTRY(X86_TRAP_NMI, exc_nmi_noist);
-#else
-#define asm_exc_nmi_noist asm_exc_nmi
+DECLARE_IDTENTRY(X86_TRAP_NMI, exc_nmi_kvm_vmx);
 #endif
 
 DECLARE_IDTENTRY_NMI(X86_TRAP_NMI, exc_nmi);
@@ -14,6 +14,7 @@ BUILD_BUG_ON(1)
  * to make a definition optional, but in this case the default will
  * be __static_call_return0.
  */
+KVM_X86_OP(check_processor_compatibility)
 KVM_X86_OP(hardware_enable)
 KVM_X86_OP(hardware_disable)
 KVM_X86_OP(hardware_unsetup)

@@ -76,7 +77,6 @@ KVM_X86_OP(set_nmi_mask)
 KVM_X86_OP(enable_nmi_window)
 KVM_X86_OP(enable_irq_window)
 KVM_X86_OP_OPTIONAL(update_cr8_intercept)
-KVM_X86_OP(check_apicv_inhibit_reasons)
 KVM_X86_OP(refresh_apicv_exec_ctrl)
 KVM_X86_OP_OPTIONAL(hwapic_irr_update)
 KVM_X86_OP_OPTIONAL(hwapic_isr_update)
@@ -134,8 +134,6 @@
 #define INVALID_PAGE (~(hpa_t)0)
 #define VALID_PAGE(x) ((x) != INVALID_PAGE)
 
-#define INVALID_GPA (~(gpa_t)0)
-
 /* KVM Hugepage definitions for x86 */
 #define KVM_MAX_HUGEPAGE_LEVEL PG_LEVEL_1G
 #define KVM_NR_PAGE_SIZES (KVM_MAX_HUGEPAGE_LEVEL - PG_LEVEL_4K + 1)

@@ -514,6 +512,7 @@ struct kvm_pmc {
 #define MSR_ARCH_PERFMON_PERFCTR_MAX (MSR_ARCH_PERFMON_PERFCTR0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
 #define MSR_ARCH_PERFMON_EVENTSEL_MAX (MSR_ARCH_PERFMON_EVENTSEL0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
 #define KVM_PMC_MAX_FIXED 3
+#define MSR_ARCH_PERFMON_FIXED_CTR_MAX (MSR_ARCH_PERFMON_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1)
 #define KVM_AMD_PMC_MAX_GENERIC 6
 struct kvm_pmu {
 	unsigned nr_arch_gp_counters;
@@ -678,6 +677,11 @@ struct kvm_vcpu_hv {
 	} nested;
 };
 
+struct kvm_hypervisor_cpuid {
+	u32 base;
+	u32 limit;
+};
+
 /* Xen HVM per vcpu emulation context */
 struct kvm_vcpu_xen {
 	u64 hypercall_rip;

@@ -698,6 +702,7 @@ struct kvm_vcpu_xen {
 	struct hrtimer timer;
 	int poll_evtchn;
 	struct timer_list poll_timer;
+	struct kvm_hypervisor_cpuid cpuid;
 };
 
 struct kvm_queued_exception {

@@ -826,7 +831,7 @@ struct kvm_vcpu_arch {
 
 	int cpuid_nent;
 	struct kvm_cpuid_entry2 *cpuid_entries;
-	u32 kvm_cpuid_base;
+	struct kvm_hypervisor_cpuid kvm_cpuid;
 
 	u64 reserved_gpa_bits;
 	int maxphyaddr;
@@ -1022,19 +1027,30 @@ struct kvm_arch_memory_slot {
 };
 
 /*
- * We use as the mode the number of bits allocated in the LDR for the
- * logical processor ID. It happens that these are all powers of two.
- * This makes it is very easy to detect cases where the APICs are
- * configured for multiple modes; in that case, we cannot use the map and
- * hence cannot use kvm_irq_delivery_to_apic_fast either.
+ * Track the mode of the optimized logical map, as the rules for decoding the
+ * destination vary per mode. Enabling the optimized logical map requires all
+ * software-enabled local APIs to be in the same mode, each addressable APIC to
+ * be mapped to only one MDA, and each MDA to map to at most one APIC.
  */
-#define KVM_APIC_MODE_XAPIC_CLUSTER 4
-#define KVM_APIC_MODE_XAPIC_FLAT 8
-#define KVM_APIC_MODE_X2APIC 16
+enum kvm_apic_logical_mode {
+	/* All local APICs are software disabled. */
+	KVM_APIC_MODE_SW_DISABLED,
+	/* All software enabled local APICs in xAPIC cluster addressing mode. */
+	KVM_APIC_MODE_XAPIC_CLUSTER,
+	/* All software enabled local APICs in xAPIC flat addressing mode. */
+	KVM_APIC_MODE_XAPIC_FLAT,
+	/* All software enabled local APICs in x2APIC mode. */
+	KVM_APIC_MODE_X2APIC,
+	/*
+	 * Optimized map disabled, e.g. not all local APICs in the same logical
+	 * mode, same logical ID assigned to multiple APICs, etc.
+	 */
+	KVM_APIC_MODE_MAP_DISABLED,
+};
 
 struct kvm_apic_map {
 	struct rcu_head rcu;
-	u8 mode;
+	enum kvm_apic_logical_mode logical_mode;
 	u32 max_apic_id;
 	union {
 		struct kvm_lapic *xapic_flat_map[8];
@@ -1088,6 +1104,7 @@ struct kvm_hv {
 	u64 hv_reenlightenment_control;
 	u64 hv_tsc_emulation_control;
 	u64 hv_tsc_emulation_status;
+	u64 hv_invtsc_control;
 
 	/* How many vCPUs have VP index != vCPU index */
 	atomic_t num_mismatched_vp_indexes;

@@ -1133,6 +1150,18 @@ struct kvm_x86_msr_filter {
 	struct msr_bitmap_range ranges[16];
 };
 
+struct kvm_x86_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+	__u32 flags;
+	__u32 nr_includes;
+	__u32 nr_excludes;
+	__u64 *includes;
+	__u64 *excludes;
+	__u64 events[];
+};
+
 enum kvm_apicv_inhibit {
 
 	/********************************************************************/
|
|||
*/
|
||||
APICV_INHIBIT_REASON_BLOCKIRQ,
|
||||
|
||||
/*
|
||||
* APICv is disabled because not all vCPUs have a 1:1 mapping between
|
||||
* APIC ID and vCPU, _and_ KVM is not applying its x2APIC hotplug hack.
|
||||
*/
|
||||
APICV_INHIBIT_REASON_PHYSICAL_ID_ALIASED,
|
||||
|
||||
/*
|
||||
* For simplicity, the APIC acceleration is inhibited
|
||||
* first time either APIC ID or APIC base are changed by the guest
|
||||
|
@ -1201,6 +1236,12 @@ enum kvm_apicv_inhibit {
|
|||
* AVIC is disabled because SEV doesn't support it.
|
||||
*/
|
||||
APICV_INHIBIT_REASON_SEV,
|
||||
|
||||
/*
|
||||
* AVIC is disabled because not all vCPUs with a valid LDR have a 1:1
|
||||
* mapping between logical ID and vCPU.
|
||||
*/
|
||||
APICV_INHIBIT_REASON_LOGICAL_ID_ALIASED,
|
||||
};
|
||||
|
||||
struct kvm_arch {
|
||||
|
@@ -1249,10 +1290,11 @@ struct kvm_arch {
 	struct kvm_apic_map __rcu *apic_map;
 	atomic_t apic_map_dirty;
 
-	/* Protects apic_access_memslot_enabled and apicv_inhibit_reasons */
-	struct rw_semaphore apicv_update_lock;
-
 	bool apic_access_memslot_enabled;
+	bool apic_access_memslot_inhibited;
+
+	/* Protects apicv_inhibit_reasons */
+	struct rw_semaphore apicv_update_lock;
 	unsigned long apicv_inhibit_reasons;
 
 	gpa_t wall_clock;

@@ -1302,7 +1344,6 @@ struct kvm_arch {
 	u32 bsp_vcpu_id;
 
 	u64 disabled_quirks;
-	int cpu_dirty_logging_count;
 
 	enum kvm_irqchip_mode irqchip_mode;
 	u8 nr_reserved_ioapic_pins;
@@ -1338,25 +1379,16 @@
 	/* Guest can access the SGX PROVISIONKEY. */
 	bool sgx_provisioning_allowed;
 
-	struct kvm_pmu_event_filter __rcu *pmu_event_filter;
+	struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter;
 	struct task_struct *nx_huge_page_recovery_thread;
 
 #ifdef CONFIG_X86_64
-	/*
-	 * Whether the TDP MMU is enabled for this VM. This contains a
-	 * snapshot of the TDP MMU module parameter from when the VM was
-	 * created and remains unchanged for the life of the VM. If this is
-	 * true, TDP MMU handler functions will run for various MMU
-	 * operations.
-	 */
-	bool tdp_mmu_enabled;
-
 	/* The number of TDP MMU pages across all roots. */
 	atomic64_t tdp_mmu_pages;
 
 	/*
-	 * List of struct kvm_mmu_pages being used as roots.
-	 * All struct kvm_mmu_pages in the list should have
+	 * List of kvm_mmu_page structs being used as roots.
+	 * All kvm_mmu_page structs in the list should have
 	 * tdp_mmu_page set.
 	 *
 	 * For reads, this list is protected by:
@@ -1520,6 +1552,8 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
 struct kvm_x86_ops {
 	const char *name;
 
+	int (*check_processor_compatibility)(void);
+
 	int (*hardware_enable)(void);
 	void (*hardware_disable)(void);
 	void (*hardware_unsetup)(void);

@@ -1608,6 +1642,8 @@ struct kvm_x86_ops {
 	void (*enable_irq_window)(struct kvm_vcpu *vcpu);
 	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
-	bool (*check_apicv_inhibit_reasons)(enum kvm_apicv_inhibit reason);
+	const unsigned long required_apicv_inhibits;
+	bool allow_apicv_in_x2apic_without_x2apic_virtualization;
 	void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
 	void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
 	void (*hwapic_isr_update)(int isr);
@@ -1731,9 +1767,6 @@ struct kvm_x86_nested_ops {
 };
 
 struct kvm_x86_init_ops {
-	int (*cpu_has_kvm_support)(void);
-	int (*disabled_by_bios)(void);
-	int (*check_processor_compatibility)(void);
 	int (*hardware_setup)(void);
 	unsigned int (*handle_intel_pt_intr)(void);

@@ -1760,6 +1793,9 @@ extern struct kvm_x86_ops kvm_x86_ops;
 #define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 
+int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops);
+void kvm_x86_vendor_exit(void);
+
 #define __KVM_HAVE_ARCH_VM_ALLOC
 static inline struct kvm *kvm_arch_alloc_vm(void)
 {
@@ -1982,7 +2018,7 @@ gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,
 
 bool kvm_apicv_activated(struct kvm *kvm);
 bool kvm_vcpu_apicv_activated(struct kvm_vcpu *vcpu);
-void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu);
+void __kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu);
 void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm,
 				      enum kvm_apicv_inhibit reason, bool set);
 void kvm_set_or_clear_apicv_inhibit(struct kvm *kvm,
@@ -2054,14 +2090,11 @@
 	TASK_SWITCH_GATE = 3,
 };
 
-#define HF_GIF_MASK (1 << 0)
-#define HF_NMI_MASK (1 << 3)
-#define HF_IRET_MASK (1 << 4)
-#define HF_GUEST_MASK (1 << 5) /* VCPU is in guest-mode */
+#define HF_GUEST_MASK (1 << 0) /* VCPU is in guest-mode */
 
 #ifdef CONFIG_KVM_SMM
-#define HF_SMM_MASK (1 << 6)
-#define HF_SMM_INSIDE_NMI_MASK (1 << 7)
+#define HF_SMM_MASK (1 << 1)
+#define HF_SMM_INSIDE_NMI_MASK (1 << 2)
 
 # define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
 # define KVM_ADDRESS_SPACE_NUM 2
@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
 #define MRR_BIOS 0
 #define MRR_APM 1
 
+void cpu_emergency_disable_virtualization(void);
+
 typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
 void nmi_panic_self_stop(struct pt_regs *regs);
 void nmi_shootdown_cpus(nmi_shootdown_cb callback);