3.19 changes for KVM:

- spring cleaning: removed support for IA64, and for hardware-assisted
  virtualization on the PPC970
- ARM, PPC, s390 all had only small fixes

For x86:

- small performance improvements (though only on weird guests)
- usual round of hardware-compliancy fixes from Nadav
- APICv fixes
- XSAVES support for hosts and guests.  XSAVES hosts were broken because
  the (non-KVM) XSAVES patches inadvertently changed the KVM userspace
  ABI whenever XSAVES was enabled; hence, this part is going to stable.
  Guest support is just a matter of exposing the feature and CPUID
  leaves support.

Right now KVM is broken for PPC BookE in your tree (doesn't compile).
I'll reply to the pull request with a patch, please apply it either
before the pull request or in the merge commit, in order to preserve
bisectability somewhat.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAABAgAGBQJUkpg+AAoJEL/70l94x66DUmoH/jzXYkptSW9NGgm79KqxGJlD
lzLnLBkitVvx++Mz5YBhdJEhKKLUlCtifFT1zPJQ/pthQhIRSaaAwZyNGgUs5w5x
yMGKHiPQFyZRbmQtZhCInW0BftJoYHHciO3nUfHCZnp34My9MP2D55W7/z+fYFfQ
DuqBSE9ThyZJtZ4zh8NRA9fCOeuqwVYRyoBs820Wbsh4cpIBoIK63Dg7k+CLE+ZV
MZa/mRL6bAfsn9W5bnOUAgHJ3SPznnWbO3/g0aV+roL/5pffblprJx9lKNR08xUM
6hDFLop2gDehDJesDkY/o8Ckp1hEouvfsVpSShry4vcgtn0hgh2O5/6Orbmj6vE=
=Zwq1
-----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM update from Paolo Bonzini:
 "3.19 changes for KVM:

  - spring cleaning: removed support for IA64, and for hardware-assisted
    virtualization on the PPC970

  - ARM, PPC, s390 all had only small fixes

  For x86:

  - small performance improvements (though only on weird guests)

  - usual round of hardware-compliancy fixes from Nadav

  - APICv fixes

  - XSAVES support for hosts and guests.  XSAVES hosts were broken
    because the (non-KVM) XSAVES patches inadvertently changed the KVM
    userspace ABI whenever XSAVES was enabled; hence, this part is going
    to stable.  Guest support is just a matter of exposing the feature
    and CPUID leaves support"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (179 commits)
  KVM: move APIC types to arch/x86/
  KVM: PPC: Book3S: Enable in-kernel XICS emulation by default
  KVM: PPC: Book3S HV: Improve H_CONFER implementation
  KVM: PPC: Book3S HV: Fix endianness of instruction obtained from HEIR register
  KVM: PPC: Book3S HV: Remove code for PPC970 processors
  KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions
  KVM: PPC: Book3S HV: Simplify locking around stolen time calculations
  arch: powerpc: kvm: book3s_paired_singles.c: Remove unused function
  arch: powerpc: kvm: book3s_pr.c: Remove unused function
  arch: powerpc: kvm: book3s.c: Remove some unused functions
  arch: powerpc: kvm: book3s_32_mmu.c: Remove unused function
  KVM: PPC: Book3S HV: Check wait conditions before sleeping in kvmppc_vcore_blocked
  KVM: PPC: Book3S HV: ptes are big endian
  KVM: PPC: Book3S HV: Fix inaccuracies in ICP emulation for H_IPI
  KVM: PPC: Book3S HV: Fix KSM memory corruption
  KVM: PPC: Book3S HV: Fix an issue where guest is paused on receiving HMI
  KVM: PPC: Book3S HV: Fix computation of tlbie operand
  KVM: PPC: Book3S HV: Add missing HPTE unlock
  KVM: PPC: BookE: Improve irq inject tracepoint
  arm/arm64: KVM: Require in-kernel vgic for the arch timers
  ...
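For context on the XSAVES note above (this sketch is not part of the pull request): once the feature is exposed, userspace can ask KVM which CPUID bits it will report to guests via KVM_GET_SUPPORTED_CPUID. The leaf/bit position used here (CPUID.(EAX=0xD, ECX=1):EAX[3] = XSAVES) is the architectural definition; error handling is trimmed.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);	/* KVM system fd */
	int nent = 128;				/* room for the CPUID entries */
	struct kvm_cpuid2 *cpuid;

	cpuid = calloc(1, sizeof(*cpuid) + nent * sizeof(struct kvm_cpuid_entry2));
	cpuid->nent = nent;

	/* Ask KVM which CPUID leaves/bits it can expose to guests. */
	if (ioctl(kvm, KVM_GET_SUPPORTED_CPUID, cpuid) < 0)
		return 1;

	for (int i = 0; i < cpuid->nent; i++) {
		struct kvm_cpuid_entry2 *e = &cpuid->entries[i];

		/* XSAVES is CPUID.(EAX=0xD, ECX=1):EAX[3]. */
		if (e->function == 0xd && e->index == 1)
			printf("XSAVES for guests: %s\n",
			       (e->eax & (1 << 3)) ? "supported" : "not supported");
	}
	return 0;
}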
This commit is contained in: commit 66dcff86ba
@@ -1,83 +0,0 @@
Currently, kvm module is in EXPERIMENTAL stage on IA64. This means that
interfaces are not stable enough to use. So, please don't run critical
applications in virtual machine.
We will try our best to improve it in future versions!

	Guide: How to boot up guests on kvm/ia64

This guide is to describe how to enable kvm support for IA-64 systems.

1. Get the kvm source from git.kernel.org.
	Userspace source:
		git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git
	Kernel Source:
		git clone git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git

2. Compile the source code.
	2.1 Compile userspace code:
		(1) cd ./kvm-userspace
		(2) ./configure
		(3) cd kernel
		(4) make sync LINUX= $kernel_dir (kernel_dir is the directory of kernel source.)
		(5) cd ..
		(6) make qemu
		(7) cd qemu; make install

	2.2 Compile kernel source code:
		(1) cd ./$kernel_dir
		(2) Make menuconfig
		(3) Enter into virtualization option, and choose kvm.
		(4) make
		(5) Once (4) done, make modules_install
		(6) Make initrd, and use new kernel to reboot up host machine.
		(7) Once (6) done, cd $kernel_dir/arch/ia64/kvm
		(8) insmod kvm.ko; insmod kvm-intel.ko

	Note: For step 2, please make sure that host page size == TARGET_PAGE_SIZE of qemu, otherwise, may fail.

3. Get Guest Firmware named as Flash.fd, and put it under right place:
	(1) If you have the guest firmware (binary) released by Intel Corp for Xen, use it directly.

	(2) If you have no firmware at hand, Please download its source from
		hg clone http://xenbits.xensource.com/ext/efi-vfirmware.hg
	    you can get the firmware's binary in the directory of efi-vfirmware.hg/binaries.

	(3) Rename the firmware you owned to Flash.fd, and copy it to /usr/local/share/qemu

4. Boot up Linux or Windows guests:
	4.1 Create or install a image for guest boot. If you have xen experience, it should be easy.

	4.2 Boot up guests use the following command.
		/usr/local/bin/qemu-system-ia64 -smp xx -m 512 -hda $your_image
	(xx is the number of virtual processors for the guest, now the maximum value is 4)

5. Known possible issue on some platforms with old Firmware.

In the event of strange host crash issues, try to solve it through either of the following ways:

(1): Upgrade your Firmware to the latest one.

(2): Applying the below patch to kernel source.
	diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S
	index 0b53344..f02b0f7 100644
	--- a/arch/ia64/kernel/pal.S
	+++ b/arch/ia64/kernel/pal.S
	@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static)
		mov ar.pfs = loc1
		mov rp = loc0
		;;
	-	srlz.d		// serialize restoration of psr.l
	+	srlz.i		// serialize restoration of psr.l
	+	;;
		br.ret.sptk.many b0
	END(ia64_pal_call_static)

6. Bug report:
	If you found any issues when use kvm/ia64, Please post the bug info to kvm-ia64-devel mailing list.
	https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel/

Thanks for your interest! Let's work together, and make kvm/ia64 stronger and stronger!

				Xiantao Zhang <xiantao.zhang@intel.com>
						2008.3.10
@@ -68,9 +68,12 @@ description:

 Capability: which KVM extension provides this ioctl.  Can be 'basic',
     which means that is will be provided by any kernel that supports
-    API version 12 (see section 4.1), or a KVM_CAP_xyz constant, which
+    API version 12 (see section 4.1), a KVM_CAP_xyz constant, which
     means availability needs to be checked with KVM_CHECK_EXTENSION
-    (see section 4.4).
+    (see section 4.4), or 'none' which means that while not all kernels
+    support this ioctl, there's no capability bit to check its
+    availability: for kernels that don't support the ioctl,
+    the ioctl returns -ENOTTY.

 Architectures: which instruction set architectures provide this ioctl.
     x86 includes both i386 and x86_64.
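Not part of the diff: the hunk above distinguishes ioctls guarded by a KVM_CAP_xyz bit (probe with KVM_CHECK_EXTENSION) from 'none'-capability ioctls that simply fail with ENOTTY on kernels that lack them. A minimal userspace sketch of the probe-before-use pattern, with most error handling trimmed:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);	/* KVM system fd */
	int vm;

	/* KVM_CHECK_EXTENSION returns > 0 when the capability is present. */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_IRQCHIP) <= 0) {
		fprintf(stderr, "in-kernel irqchip not available\n");
		return 1;
	}

	vm = ioctl(kvm, KVM_CREATE_VM, 0);	/* create a VM fd */
	if (ioctl(vm, KVM_CREATE_IRQCHIP, 0) < 0)	/* vm ioctl, no argument */
		perror("KVM_CREATE_IRQCHIP");

	/* Ioctls documented as "Capability: none" have no bit to probe;
	 * on kernels that lack them the call itself fails with ENOTTY. */
	return 0;
}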
@ -604,7 +607,7 @@ struct kvm_fpu {
|
|||
4.24 KVM_CREATE_IRQCHIP
|
||||
|
||||
Capability: KVM_CAP_IRQCHIP, KVM_CAP_S390_IRQCHIP (s390)
|
||||
Architectures: x86, ia64, ARM, arm64, s390
|
||||
Architectures: x86, ARM, arm64, s390
|
||||
Type: vm ioctl
|
||||
Parameters: none
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -612,7 +615,7 @@ Returns: 0 on success, -1 on error
|
|||
Creates an interrupt controller model in the kernel. On x86, creates a virtual
|
||||
ioapic, a virtual PIC (two PICs, nested), and sets up future vcpus to have a
|
||||
local APIC. IRQ routing for GSIs 0-15 is set to both PIC and IOAPIC; GSI 16-23
|
||||
only go to the IOAPIC. On ia64, a IOSAPIC is created. On ARM/arm64, a GIC is
|
||||
only go to the IOAPIC. On ARM/arm64, a GIC is
|
||||
created. On s390, a dummy irq routing table is created.
|
||||
|
||||
Note that on s390 the KVM_CAP_S390_IRQCHIP vm capability needs to be enabled
|
||||
|
@ -622,7 +625,7 @@ before KVM_CREATE_IRQCHIP can be used.
|
|||
4.25 KVM_IRQ_LINE
|
||||
|
||||
Capability: KVM_CAP_IRQCHIP
|
||||
Architectures: x86, ia64, arm, arm64
|
||||
Architectures: x86, arm, arm64
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_irq_level
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -676,7 +679,7 @@ struct kvm_irq_level {
|
|||
4.26 KVM_GET_IRQCHIP
|
||||
|
||||
Capability: KVM_CAP_IRQCHIP
|
||||
Architectures: x86, ia64
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_irqchip (in/out)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -698,7 +701,7 @@ struct kvm_irqchip {
|
|||
4.27 KVM_SET_IRQCHIP
|
||||
|
||||
Capability: KVM_CAP_IRQCHIP
|
||||
Architectures: x86, ia64
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_irqchip (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -991,7 +994,7 @@ for vm-wide capabilities.
|
|||
4.38 KVM_GET_MP_STATE
|
||||
|
||||
Capability: KVM_CAP_MP_STATE
|
||||
Architectures: x86, ia64, s390
|
||||
Architectures: x86, s390
|
||||
Type: vcpu ioctl
|
||||
Parameters: struct kvm_mp_state (out)
|
||||
Returns: 0 on success; -1 on error
|
||||
|
@ -1005,16 +1008,15 @@ uniprocessor guests).
|
|||
|
||||
Possible values are:
|
||||
|
||||
- KVM_MP_STATE_RUNNABLE: the vcpu is currently running [x86, ia64]
|
||||
- KVM_MP_STATE_RUNNABLE: the vcpu is currently running [x86]
|
||||
- KVM_MP_STATE_UNINITIALIZED: the vcpu is an application processor (AP)
|
||||
which has not yet received an INIT signal [x86,
|
||||
ia64]
|
||||
which has not yet received an INIT signal [x86]
|
||||
- KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
|
||||
now ready for a SIPI [x86, ia64]
|
||||
now ready for a SIPI [x86]
|
||||
- KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
|
||||
is waiting for an interrupt [x86, ia64]
|
||||
is waiting for an interrupt [x86]
|
||||
- KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
|
||||
accessible via KVM_GET_VCPU_EVENTS) [x86, ia64]
|
||||
accessible via KVM_GET_VCPU_EVENTS) [x86]
|
||||
- KVM_MP_STATE_STOPPED: the vcpu is stopped [s390]
|
||||
- KVM_MP_STATE_CHECK_STOP: the vcpu is in a special error state [s390]
|
||||
- KVM_MP_STATE_OPERATING: the vcpu is operating (running or halted)
|
||||
|
@ -1022,7 +1024,7 @@ Possible values are:
|
|||
- KVM_MP_STATE_LOAD: the vcpu is in a special load/startup state
|
||||
[s390]
|
||||
|
||||
On x86 and ia64, this ioctl is only useful after KVM_CREATE_IRQCHIP. Without an
|
||||
On x86, this ioctl is only useful after KVM_CREATE_IRQCHIP. Without an
|
||||
in-kernel irqchip, the multiprocessing state must be maintained by userspace on
|
||||
these architectures.
|
||||
|
||||
|
@ -1030,7 +1032,7 @@ these architectures.
|
|||
4.39 KVM_SET_MP_STATE
|
||||
|
||||
Capability: KVM_CAP_MP_STATE
|
||||
Architectures: x86, ia64, s390
|
||||
Architectures: x86, s390
|
||||
Type: vcpu ioctl
|
||||
Parameters: struct kvm_mp_state (in)
|
||||
Returns: 0 on success; -1 on error
|
||||
|
@ -1038,7 +1040,7 @@ Returns: 0 on success; -1 on error
|
|||
Sets the vcpu's current "multiprocessing state"; see KVM_GET_MP_STATE for
|
||||
arguments.
|
||||
|
||||
On x86 and ia64, this ioctl is only useful after KVM_CREATE_IRQCHIP. Without an
|
||||
On x86, this ioctl is only useful after KVM_CREATE_IRQCHIP. Without an
|
||||
in-kernel irqchip, the multiprocessing state must be maintained by userspace on
|
||||
these architectures.
|
||||
|
||||
|
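For context (not part of the diff): KVM_GET_MP_STATE and KVM_SET_MP_STATE, whose architecture lists are trimmed above, are plain vcpu ioctls around a one-field struct. A hedged sketch, assuming vcpu_fd was obtained via KVM_CREATE_VCPU and an in-kernel irqchip exists (as the text above requires on x86); the "park if runnable" policy is purely illustrative:

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int park_vcpu_if_runnable(int vcpu_fd)
{
	struct kvm_mp_state mp;

	/* Read the current multiprocessing state of this vcpu. */
	if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp) < 0)
		return -1;

	if (mp.mp_state == KVM_MP_STATE_RUNNABLE) {
		mp.mp_state = KVM_MP_STATE_HALTED;
		return ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp);
	}
	return 0;
}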
@ -1065,7 +1067,7 @@ documentation when it pops into existence).
|
|||
4.41 KVM_SET_BOOT_CPU_ID
|
||||
|
||||
Capability: KVM_CAP_SET_BOOT_CPU_ID
|
||||
Architectures: x86, ia64
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: unsigned long vcpu_id
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -1257,8 +1259,8 @@ The flags bitmap is defined as:
|
|||
|
||||
4.48 KVM_ASSIGN_PCI_DEVICE
|
||||
|
||||
Capability: KVM_CAP_DEVICE_ASSIGNMENT
|
||||
Architectures: x86 ia64
|
||||
Capability: none
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_assigned_pci_dev (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -1298,25 +1300,36 @@ Only PCI header type 0 devices with PCI BAR resources are supported by
|
|||
device assignment. The user requesting this ioctl must have read/write
|
||||
access to the PCI sysfs resource files associated with the device.
|
||||
|
||||
Errors:
|
||||
ENOTTY: kernel does not support this ioctl
|
||||
|
||||
Other error conditions may be defined by individual device types or
|
||||
have their standard meanings.
|
||||
|
||||
|
||||
4.49 KVM_DEASSIGN_PCI_DEVICE
|
||||
|
||||
Capability: KVM_CAP_DEVICE_DEASSIGNMENT
|
||||
Architectures: x86 ia64
|
||||
Capability: none
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_assigned_pci_dev (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
||||
Ends PCI device assignment, releasing all associated resources.
|
||||
|
||||
See KVM_CAP_DEVICE_ASSIGNMENT for the data structure. Only assigned_dev_id is
|
||||
See KVM_ASSIGN_PCI_DEVICE for the data structure. Only assigned_dev_id is
|
||||
used in kvm_assigned_pci_dev to identify the device.
|
||||
|
||||
Errors:
|
||||
ENOTTY: kernel does not support this ioctl
|
||||
|
||||
Other error conditions may be defined by individual device types or
|
||||
have their standard meanings.
|
||||
|
||||
4.50 KVM_ASSIGN_DEV_IRQ
|
||||
|
||||
Capability: KVM_CAP_ASSIGN_DEV_IRQ
|
||||
Architectures: x86 ia64
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_assigned_irq (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -1346,11 +1359,17 @@ The following flags are defined:
|
|||
It is not valid to specify multiple types per host or guest IRQ. However, the
|
||||
IRQ type of host and guest can differ or can even be null.
|
||||
|
||||
Errors:
|
||||
ENOTTY: kernel does not support this ioctl
|
||||
|
||||
Other error conditions may be defined by individual device types or
|
||||
have their standard meanings.
|
||||
|
||||
|
||||
4.51 KVM_DEASSIGN_DEV_IRQ
|
||||
|
||||
Capability: KVM_CAP_ASSIGN_DEV_IRQ
|
||||
Architectures: x86 ia64
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_assigned_irq (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -1365,7 +1384,7 @@ KVM_ASSIGN_DEV_IRQ. Partial deassignment of host or guest IRQ is allowed.
|
|||
4.52 KVM_SET_GSI_ROUTING
|
||||
|
||||
Capability: KVM_CAP_IRQ_ROUTING
|
||||
Architectures: x86 ia64 s390
|
||||
Architectures: x86 s390
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_irq_routing (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -1423,8 +1442,8 @@ struct kvm_irq_routing_s390_adapter {
|
|||
|
||||
4.53 KVM_ASSIGN_SET_MSIX_NR
|
||||
|
||||
Capability: KVM_CAP_DEVICE_MSIX
|
||||
Architectures: x86 ia64
|
||||
Capability: none
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_assigned_msix_nr (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -1445,8 +1464,8 @@ struct kvm_assigned_msix_nr {
|
|||
|
||||
4.54 KVM_ASSIGN_SET_MSIX_ENTRY
|
||||
|
||||
Capability: KVM_CAP_DEVICE_MSIX
|
||||
Architectures: x86 ia64
|
||||
Capability: none
|
||||
Architectures: x86
|
||||
Type: vm ioctl
|
||||
Parameters: struct kvm_assigned_msix_entry (in)
|
||||
Returns: 0 on success, -1 on error
|
||||
|
@ -1461,6 +1480,12 @@ struct kvm_assigned_msix_entry {
|
|||
__u16 padding[3];
|
||||
};
|
||||
|
||||
Errors:
|
||||
ENOTTY: kernel does not support this ioctl
|
||||
|
||||
Other error conditions may be defined by individual device types or
|
||||
have their standard meanings.
|
||||
|
||||
|
||||
4.55 KVM_SET_TSC_KHZ
|
||||
|
||||
|
@@ -2453,9 +2478,15 @@ return ENOEXEC for that vcpu.
 Note that because some registers reflect machine topology, all vcpus
 should be created before this ioctl is invoked.

+Userspace can call this function multiple times for a given vcpu, including
+after the vcpu has been run. This will reset the vcpu to its initial
+state. All calls to this function after the initial call must use the same
+target and same set of feature flags, otherwise EINVAL will be returned.
+
 Possible features:
 	- KVM_ARM_VCPU_POWER_OFF: Starts the CPU in a power-off state.
-	  Depends on KVM_CAP_ARM_PSCI.
+	  Depends on KVM_CAP_ARM_PSCI.  If not set, the CPU will be powered on
+	  and execute guest code when KVM_RUN is called.
 	- KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode.
 	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
 	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
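Not part of the diff: a minimal sketch of the KVM_ARM_VCPU_INIT flow described above, using KVM_ARM_PREFERRED_TARGET to have the kernel fill in a matching target and then requesting the start-in-power-off behaviour. vm_fd and vcpu_fd are assumed to exist already; error handling is trimmed.

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Initialize (or re-initialize) a vcpu; repeat calls must pass the same
 * target and feature set, otherwise the kernel returns EINVAL. */
static int init_vcpu(int vm_fd, int vcpu_fd, int start_powered_off)
{
	struct kvm_vcpu_init init;

	/* Let the kernel pick the preferred target for the host CPU. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		return -1;

	if (start_powered_off)
		init.features[0] |= 1 << KVM_ARM_VCPU_POWER_OFF;

	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}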
@ -2951,6 +2982,15 @@ HVC instruction based PSCI call from the vcpu. The 'type' field describes
|
|||
the system-level event type. The 'flags' field describes architecture
|
||||
specific flags for the system-level event.
|
||||
|
||||
Valid values for 'type' are:
|
||||
KVM_SYSTEM_EVENT_SHUTDOWN -- the guest has requested a shutdown of the
|
||||
VM. Userspace is not obliged to honour this, and if it does honour
|
||||
this does not need to destroy the VM synchronously (ie it may call
|
||||
KVM_RUN again before shutdown finally occurs).
|
||||
KVM_SYSTEM_EVENT_RESET -- the guest has requested a reset of the VM.
|
||||
As with SHUTDOWN, userspace can choose to ignore the request, or
|
||||
to schedule the reset to occur in the future and may call KVM_RUN again.
|
||||
|
||||
/* Fix the size of the union. */
|
||||
char padding[256];
|
||||
};
|
||||
|
|
|
@ -12,14 +12,14 @@ specific.
|
|||
1. GROUP: KVM_S390_VM_MEM_CTRL
|
||||
Architectures: s390
|
||||
|
||||
1.1. ATTRIBUTE: KVM_S390_VM_MEM_CTRL
|
||||
1.1. ATTRIBUTE: KVM_S390_VM_MEM_ENABLE_CMMA
|
||||
Parameters: none
|
||||
Returns: -EBUSY if already a vcpus is defined, otherwise 0
|
||||
Returns: -EBUSY if a vcpu is already defined, otherwise 0
|
||||
|
||||
Enables CMMA for the virtual machine
|
||||
Enables Collaborative Memory Management Assist (CMMA) for the virtual machine.
|
||||
|
||||
1.2. ATTRIBUTE: KVM_S390_VM_CLR_CMMA
|
||||
Parameteres: none
|
||||
1.2. ATTRIBUTE: KVM_S390_VM_MEM_CLR_CMMA
|
||||
Parameters: none
|
||||
Returns: 0
|
||||
|
||||
Clear the CMMA status for all guest pages, so any pages the guest marked
|
||||
|
|
|
@@ -168,7 +168,7 @@ MSR_KVM_ASYNC_PF_EN: 0x4b564d02
 	64 byte memory area which must be in guest RAM and must be
 	zeroed. Bits 5-2 are reserved and should be zero. Bit 0 is 1
 	when asynchronous page faults are enabled on the vcpu 0 when
-	disabled. Bit 2 is 1 if asynchronous page faults can be injected
+	disabled. Bit 1 is 1 if asynchronous page faults can be injected
 	when vcpu is in cpl == 0.

 	First 4 byte of 64 byte memory location will be written to by
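Not part of the diff: the corrected text above gives the MSR layout (bit 0 = enable, bit 1 = allow injection while in CPL 0, bits 5-2 reserved, remaining bits = the 64-byte-aligned guest-physical address of the shared area). A hedged guest-side sketch of composing that value from the documented bits; the helper name is illustrative and the result would be written with the kernel's wrmsrl() helper.

#include <linux/types.h>

#define MSR_KVM_ASYNC_PF_EN	0x4b564d02	/* from the documentation above */

/* pa is the guest-physical address of the 64-byte, zeroed, aligned area. */
static __u64 async_pf_msr_value(__u64 pa, int allow_cpl0)
{
	__u64 val = pa;			/* address bits; low 6 bits are free for flags */

	val |= 1ULL << 0;		/* bit 0: enable async page faults */
	if (allow_cpl0)
		val |= 1ULL << 1;	/* bit 1: allow injection while vcpu is in CPL 0 */
	return val;			/* e.g. wrmsrl(MSR_KVM_ASYNC_PF_EN, val) */
}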
@ -5495,15 +5495,6 @@ S: Supported
|
|||
F: arch/powerpc/include/asm/kvm*
|
||||
F: arch/powerpc/kvm/
|
||||
|
||||
KERNEL VIRTUAL MACHINE For Itanium (KVM/IA64)
|
||||
M: Xiantao Zhang <xiantao.zhang@intel.com>
|
||||
L: kvm-ia64@vger.kernel.org
|
||||
W: http://kvm.qumranet.com
|
||||
S: Supported
|
||||
F: Documentation/ia64/kvm.txt
|
||||
F: arch/ia64/include/asm/kvm*
|
||||
F: arch/ia64/kvm/
|
||||
|
||||
KERNEL VIRTUAL MACHINE for s390 (KVM/s390)
|
||||
M: Christian Borntraeger <borntraeger@de.ibm.com>
|
||||
M: Cornelia Huck <cornelia.huck@de.ibm.com>
|
||||
|
|
|
@ -33,6 +33,11 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu);
|
|||
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
|
||||
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
|
||||
|
||||
static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
vcpu->arch.hcr = HCR_GUEST_MASK;
|
||||
}
|
||||
|
||||
static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return 1;
|
||||
|
|
|
@ -150,8 +150,6 @@ struct kvm_vcpu_stat {
|
|||
u32 halt_wakeup;
|
||||
};
|
||||
|
||||
int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
|
||||
const struct kvm_vcpu_init *init);
|
||||
int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
|
||||
unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
|
||||
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
|
||||
|
|
|
@ -52,6 +52,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
|
|||
void free_boot_hyp_pgd(void);
|
||||
void free_hyp_pgds(void);
|
||||
|
||||
void stage2_unmap_vm(struct kvm *kvm);
|
||||
int kvm_alloc_stage2_pgd(struct kvm *kvm);
|
||||
void kvm_free_stage2_pgd(struct kvm *kvm);
|
||||
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
|
||||
|
@ -161,9 +162,10 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
|
|||
}
|
||||
|
||||
static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
|
||||
unsigned long size)
|
||||
unsigned long size,
|
||||
bool ipa_uncached)
|
||||
{
|
||||
if (!vcpu_has_cache_enabled(vcpu))
|
||||
if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
|
||||
kvm_flush_dcache_to_poc((void *)hva, size);
|
||||
|
||||
/*
|
||||
|
|
|
@ -213,6 +213,11 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
|
|||
int err;
|
||||
struct kvm_vcpu *vcpu;
|
||||
|
||||
if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) {
|
||||
err = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
|
||||
vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
|
||||
if (!vcpu) {
|
||||
err = -ENOMEM;
|
||||
|
@ -263,6 +268,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
|
|||
{
|
||||
/* Force users to call KVM_ARM_VCPU_INIT */
|
||||
vcpu->arch.target = -1;
|
||||
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
|
||||
|
||||
/* Set up the timer */
|
||||
kvm_timer_vcpu_init(vcpu);
|
||||
|
@ -419,6 +425,7 @@ static void update_vttbr(struct kvm *kvm)
|
|||
|
||||
static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct kvm *kvm = vcpu->kvm;
|
||||
int ret;
|
||||
|
||||
if (likely(vcpu->arch.has_run_once))
|
||||
|
@ -427,15 +434,23 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
|
|||
vcpu->arch.has_run_once = true;
|
||||
|
||||
/*
|
||||
* Initialize the VGIC before running a vcpu the first time on
|
||||
* this VM.
|
||||
* Map the VGIC hardware resources before running a vcpu the first
|
||||
* time on this VM.
|
||||
*/
|
||||
if (unlikely(!vgic_initialized(vcpu->kvm))) {
|
||||
ret = kvm_vgic_init(vcpu->kvm);
|
||||
if (unlikely(!vgic_ready(kvm))) {
|
||||
ret = kvm_vgic_map_resources(kvm);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Enable the arch timers only if we have an in-kernel VGIC
|
||||
* and it has been properly initialized, since we cannot handle
|
||||
* interrupts from the virtual timer with a userspace gic.
|
||||
*/
|
||||
if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
|
||||
kvm_timer_enable(kvm);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -649,6 +664,48 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
|
||||
const struct kvm_vcpu_init *init)
|
||||
{
|
||||
unsigned int i;
|
||||
int phys_target = kvm_target_cpu();
|
||||
|
||||
if (init->target != phys_target)
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
|
||||
* use the same target.
|
||||
*/
|
||||
if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
|
||||
return -EINVAL;
|
||||
|
||||
/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
|
||||
for (i = 0; i < sizeof(init->features) * 8; i++) {
|
||||
bool set = (init->features[i / 32] & (1 << (i % 32)));
|
||||
|
||||
if (set && i >= KVM_VCPU_MAX_FEATURES)
|
||||
return -ENOENT;
|
||||
|
||||
/*
|
||||
* Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
|
||||
* use the same feature set.
|
||||
*/
|
||||
if (vcpu->arch.target != -1 && i < KVM_VCPU_MAX_FEATURES &&
|
||||
test_bit(i, vcpu->arch.features) != set)
|
||||
return -EINVAL;
|
||||
|
||||
if (set)
|
||||
set_bit(i, vcpu->arch.features);
|
||||
}
|
||||
|
||||
vcpu->arch.target = phys_target;
|
||||
|
||||
/* Now we know what it is, we can reset it. */
|
||||
return kvm_reset_vcpu(vcpu);
|
||||
}
|
||||
|
||||
|
||||
static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu_init *init)
|
||||
{
|
||||
|
@ -658,11 +715,22 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* Ensure a rebooted VM will fault in RAM pages and detect if the
|
||||
* guest MMU is turned off and flush the caches as needed.
|
||||
*/
|
||||
if (vcpu->arch.has_run_once)
|
||||
stage2_unmap_vm(vcpu->kvm);
|
||||
|
||||
vcpu_reset_hcr(vcpu);
|
||||
|
||||
/*
|
||||
* Handle the "start in power-off" case by marking the VCPU as paused.
|
||||
*/
|
||||
if (__test_and_clear_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
|
||||
if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
|
||||
vcpu->arch.pause = true;
|
||||
else
|
||||
vcpu->arch.pause = false;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -38,7 +38,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
|
|||
|
||||
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
vcpu->arch.hcr = HCR_GUEST_MASK;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -274,31 +273,6 @@ int __attribute_const__ kvm_target_cpu(void)
|
|||
}
|
||||
}
|
||||
|
||||
int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
|
||||
const struct kvm_vcpu_init *init)
|
||||
{
|
||||
unsigned int i;
|
||||
|
||||
/* We can only cope with guest==host and only on A15/A7 (for now). */
|
||||
if (init->target != kvm_target_cpu())
|
||||
return -EINVAL;
|
||||
|
||||
vcpu->arch.target = init->target;
|
||||
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
|
||||
|
||||
/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
|
||||
for (i = 0; i < sizeof(init->features) * 8; i++) {
|
||||
if (test_bit(i, (void *)init->features)) {
|
||||
if (i >= KVM_VCPU_MAX_FEATURES)
|
||||
return -ENOENT;
|
||||
set_bit(i, vcpu->arch.features);
|
||||
}
|
||||
}
|
||||
|
||||
/* Now we know what it is, we can reset it. */
|
||||
return kvm_reset_vcpu(vcpu);
|
||||
}
|
||||
|
||||
int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
|
||||
{
|
||||
int target = kvm_target_cpu();
|
||||
|
|
|
@ -187,15 +187,18 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
|||
}
|
||||
|
||||
rt = vcpu->arch.mmio_decode.rt;
|
||||
data = vcpu_data_guest_to_host(vcpu, *vcpu_reg(vcpu, rt), mmio.len);
|
||||
|
||||
trace_kvm_mmio((mmio.is_write) ? KVM_TRACE_MMIO_WRITE :
|
||||
KVM_TRACE_MMIO_READ_UNSATISFIED,
|
||||
mmio.len, fault_ipa,
|
||||
(mmio.is_write) ? data : 0);
|
||||
if (mmio.is_write) {
|
||||
data = vcpu_data_guest_to_host(vcpu, *vcpu_reg(vcpu, rt),
|
||||
mmio.len);
|
||||
|
||||
if (mmio.is_write)
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, mmio.len,
|
||||
fault_ipa, data);
|
||||
mmio_write_buf(mmio.data, mmio.len, data);
|
||||
} else {
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, mmio.len,
|
||||
fault_ipa, 0);
|
||||
}
|
||||
|
||||
if (vgic_handle_mmio(vcpu, run, &mmio))
|
||||
return 1;
|
||||
|
|
|
@ -612,6 +612,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
|
|||
unmap_range(kvm, kvm->arch.pgd, start, size);
|
||||
}
|
||||
|
||||
static void stage2_unmap_memslot(struct kvm *kvm,
|
||||
struct kvm_memory_slot *memslot)
|
||||
{
|
||||
hva_t hva = memslot->userspace_addr;
|
||||
phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
|
||||
phys_addr_t size = PAGE_SIZE * memslot->npages;
|
||||
hva_t reg_end = hva + size;
|
||||
|
||||
/*
|
||||
* A memory region could potentially cover multiple VMAs, and any holes
|
||||
* between them, so iterate over all of them to find out if we should
|
||||
* unmap any of them.
|
||||
*
|
||||
* +--------------------------------------------+
|
||||
* +---------------+----------------+ +----------------+
|
||||
* | : VMA 1 | VMA 2 | | VMA 3 : |
|
||||
* +---------------+----------------+ +----------------+
|
||||
* | memory region |
|
||||
* +--------------------------------------------+
|
||||
*/
|
||||
do {
|
||||
struct vm_area_struct *vma = find_vma(current->mm, hva);
|
||||
hva_t vm_start, vm_end;
|
||||
|
||||
if (!vma || vma->vm_start >= reg_end)
|
||||
break;
|
||||
|
||||
/*
|
||||
* Take the intersection of this VMA with the memory region
|
||||
*/
|
||||
vm_start = max(hva, vma->vm_start);
|
||||
vm_end = min(reg_end, vma->vm_end);
|
||||
|
||||
if (!(vma->vm_flags & VM_PFNMAP)) {
|
||||
gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
|
||||
unmap_stage2_range(kvm, gpa, vm_end - vm_start);
|
||||
}
|
||||
hva = vm_end;
|
||||
} while (hva < reg_end);
|
||||
}
|
||||
|
||||
/**
|
||||
* stage2_unmap_vm - Unmap Stage-2 RAM mappings
|
||||
* @kvm: The struct kvm pointer
|
||||
*
|
||||
* Go through the memregions and unmap any reguler RAM
|
||||
* backing memory already mapped to the VM.
|
||||
*/
|
||||
void stage2_unmap_vm(struct kvm *kvm)
|
||||
{
|
||||
struct kvm_memslots *slots;
|
||||
struct kvm_memory_slot *memslot;
|
||||
int idx;
|
||||
|
||||
idx = srcu_read_lock(&kvm->srcu);
|
||||
spin_lock(&kvm->mmu_lock);
|
||||
|
||||
slots = kvm_memslots(kvm);
|
||||
kvm_for_each_memslot(memslot, slots)
|
||||
stage2_unmap_memslot(kvm, memslot);
|
||||
|
||||
spin_unlock(&kvm->mmu_lock);
|
||||
srcu_read_unlock(&kvm->srcu, idx);
|
||||
}
|
||||
|
||||
/**
|
||||
* kvm_free_stage2_pgd - free all stage-2 tables
|
||||
* @kvm: The KVM struct pointer for the VM.
|
||||
|
@ -853,6 +918,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
struct vm_area_struct *vma;
|
||||
pfn_t pfn;
|
||||
pgprot_t mem_type = PAGE_S2;
|
||||
bool fault_ipa_uncached;
|
||||
|
||||
write_fault = kvm_is_write_fault(vcpu);
|
||||
if (fault_status == FSC_PERM && !write_fault) {
|
||||
|
@ -919,6 +985,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
if (!hugetlb && !force_pte)
|
||||
hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
|
||||
|
||||
fault_ipa_uncached = memslot->flags & KVM_MEMSLOT_INCOHERENT;
|
||||
|
||||
if (hugetlb) {
|
||||
pmd_t new_pmd = pfn_pmd(pfn, mem_type);
|
||||
new_pmd = pmd_mkhuge(new_pmd);
|
||||
|
@ -926,7 +994,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
kvm_set_s2pmd_writable(&new_pmd);
|
||||
kvm_set_pfn_dirty(pfn);
|
||||
}
|
||||
coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
|
||||
coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE,
|
||||
fault_ipa_uncached);
|
||||
ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
|
||||
} else {
|
||||
pte_t new_pte = pfn_pte(pfn, mem_type);
|
||||
|
@ -934,7 +1003,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
|
|||
kvm_set_s2pte_writable(&new_pte);
|
||||
kvm_set_pfn_dirty(pfn);
|
||||
}
|
||||
coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
|
||||
coherent_cache_guest_page(vcpu, hva, PAGE_SIZE,
|
||||
fault_ipa_uncached);
|
||||
ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
|
||||
pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
|
||||
}
|
||||
|
@ -1294,11 +1364,12 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
|
|||
hva = vm_end;
|
||||
} while (hva < reg_end);
|
||||
|
||||
if (ret) {
|
||||
spin_lock(&kvm->mmu_lock);
|
||||
spin_lock(&kvm->mmu_lock);
|
||||
if (ret)
|
||||
unmap_stage2_range(kvm, mem->guest_phys_addr, mem->memory_size);
|
||||
spin_unlock(&kvm->mmu_lock);
|
||||
}
|
||||
else
|
||||
stage2_flush_memslot(kvm, memslot);
|
||||
spin_unlock(&kvm->mmu_lock);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1310,6 +1381,15 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
|
|||
int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
|
||||
unsigned long npages)
|
||||
{
|
||||
/*
|
||||
* Readonly memslots are not incoherent with the caches by definition,
|
||||
* but in practice, they are used mostly to emulate ROMs or NOR flashes
|
||||
* that the guest may consider devices and hence map as uncached.
|
||||
* To prevent incoherency issues in these cases, tag all readonly
|
||||
* regions as incoherent.
|
||||
*/
|
||||
if (slot->flags & KVM_MEM_READONLY)
|
||||
slot->flags |= KVM_MEMSLOT_INCOHERENT;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -15,6 +15,7 @@
|
|||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include <linux/preempt.h>
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/wait.h>
|
||||
|
||||
|
@ -166,6 +167,23 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
|
|||
|
||||
static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
|
||||
{
|
||||
int i;
|
||||
struct kvm_vcpu *tmp;
|
||||
|
||||
/*
|
||||
* The KVM ABI specifies that a system event exit may call KVM_RUN
|
||||
* again and may perform shutdown/reboot at a later time that when the
|
||||
* actual request is made. Since we are implementing PSCI and a
|
||||
* caller of PSCI reboot and shutdown expects that the system shuts
|
||||
* down or reboots immediately, let's make sure that VCPUs are not run
|
||||
* after this call is handled and before the VCPUs have been
|
||||
* re-initialized.
|
||||
*/
|
||||
kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
|
||||
tmp->arch.pause = true;
|
||||
kvm_vcpu_kick(tmp);
|
||||
}
|
||||
|
||||
memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
|
||||
vcpu->run->system_event.type = type;
|
||||
vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
|
||||
|
|
|
@ -38,6 +38,11 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu);
|
|||
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
|
||||
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
|
||||
|
||||
static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
|
||||
}
|
||||
|
||||
static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.pc;
|
||||
|
|
|
@ -165,8 +165,6 @@ struct kvm_vcpu_stat {
|
|||
u32 halt_wakeup;
|
||||
};
|
||||
|
||||
int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
|
||||
const struct kvm_vcpu_init *init);
|
||||
int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
|
||||
unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
|
||||
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
|
||||
|
@ -200,6 +198,7 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
|
|||
struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
|
||||
|
||||
u64 kvm_call_hyp(void *hypfn, ...);
|
||||
void force_vm_exit(const cpumask_t *mask);
|
||||
|
||||
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
int exception_index);
|
||||
|
|
|
@ -83,6 +83,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
|
|||
void free_boot_hyp_pgd(void);
|
||||
void free_hyp_pgds(void);
|
||||
|
||||
void stage2_unmap_vm(struct kvm *kvm);
|
||||
int kvm_alloc_stage2_pgd(struct kvm *kvm);
|
||||
void kvm_free_stage2_pgd(struct kvm *kvm);
|
||||
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
|
||||
|
@ -243,9 +244,10 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
|
|||
}
|
||||
|
||||
static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
|
||||
unsigned long size)
|
||||
unsigned long size,
|
||||
bool ipa_uncached)
|
||||
{
|
||||
if (!vcpu_has_cache_enabled(vcpu))
|
||||
if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
|
||||
kvm_flush_dcache_to_poc((void *)hva, size);
|
||||
|
||||
if (!icache_is_aliasing()) { /* PIPT */
|
||||
|
|
|
@ -38,7 +38,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
|
|||
|
||||
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -297,31 +296,6 @@ int __attribute_const__ kvm_target_cpu(void)
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
|
||||
const struct kvm_vcpu_init *init)
|
||||
{
|
||||
unsigned int i;
|
||||
int phys_target = kvm_target_cpu();
|
||||
|
||||
if (init->target != phys_target)
|
||||
return -EINVAL;
|
||||
|
||||
vcpu->arch.target = phys_target;
|
||||
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
|
||||
|
||||
/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
|
||||
for (i = 0; i < sizeof(init->features) * 8; i++) {
|
||||
if (init->features[i / 32] & (1 << (i % 32))) {
|
||||
if (i >= KVM_VCPU_MAX_FEATURES)
|
||||
return -ENOENT;
|
||||
set_bit(i, vcpu->arch.features);
|
||||
}
|
||||
}
|
||||
|
||||
/* Now we know what it is, we can reset it. */
|
||||
return kvm_reset_vcpu(vcpu);
|
||||
}
|
||||
|
||||
int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
|
||||
{
|
||||
int target = kvm_target_cpu();
|
||||
|
|
|
@ -20,7 +20,6 @@ config IA64
|
|||
select HAVE_DYNAMIC_FTRACE if (!ITANIUM)
|
||||
select HAVE_FUNCTION_TRACER
|
||||
select HAVE_DMA_ATTRS
|
||||
select HAVE_KVM
|
||||
select TTY
|
||||
select HAVE_ARCH_TRACEHOOK
|
||||
select HAVE_DMA_API_DEBUG
|
||||
|
@ -640,8 +639,6 @@ source "security/Kconfig"
|
|||
|
||||
source "crypto/Kconfig"
|
||||
|
||||
source "arch/ia64/kvm/Kconfig"
|
||||
|
||||
source "lib/Kconfig"
|
||||
|
||||
config IOMMU_HELPER
|
||||
|
|
|
@ -53,7 +53,6 @@ core-$(CONFIG_IA64_HP_ZX1) += arch/ia64/dig/
|
|||
core-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += arch/ia64/dig/
|
||||
core-$(CONFIG_IA64_SGI_SN2) += arch/ia64/sn/
|
||||
core-$(CONFIG_IA64_SGI_UV) += arch/ia64/uv/
|
||||
core-$(CONFIG_KVM) += arch/ia64/kvm/
|
||||
|
||||
drivers-$(CONFIG_PCI) += arch/ia64/pci/
|
||||
drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/
|
||||
|
|
|
@ -1,609 +0,0 @@
|
|||
/*
|
||||
* kvm_host.h: used for kvm module, and hold ia64-specific sections.
|
||||
*
|
||||
* Copyright (C) 2007, Intel Corporation.
|
||||
*
|
||||
* Xiantao Zhang <xiantao.zhang@intel.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
#ifndef __ASM_KVM_HOST_H
|
||||
#define __ASM_KVM_HOST_H
|
||||
|
||||
#define KVM_USER_MEM_SLOTS 32
|
||||
|
||||
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
|
||||
#define KVM_IRQCHIP_NUM_PINS KVM_IOAPIC_NUM_PINS
|
||||
|
||||
/* define exit reasons from vmm to kvm*/
|
||||
#define EXIT_REASON_VM_PANIC 0
|
||||
#define EXIT_REASON_MMIO_INSTRUCTION 1
|
||||
#define EXIT_REASON_PAL_CALL 2
|
||||
#define EXIT_REASON_SAL_CALL 3
|
||||
#define EXIT_REASON_SWITCH_RR6 4
|
||||
#define EXIT_REASON_VM_DESTROY 5
|
||||
#define EXIT_REASON_EXTERNAL_INTERRUPT 6
|
||||
#define EXIT_REASON_IPI 7
|
||||
#define EXIT_REASON_PTC_G 8
|
||||
#define EXIT_REASON_DEBUG 20
|
||||
|
||||
/*Define vmm address space and vm data space.*/
|
||||
#define KVM_VMM_SIZE (__IA64_UL_CONST(16)<<20)
|
||||
#define KVM_VMM_SHIFT 24
|
||||
#define KVM_VMM_BASE 0xD000000000000000
|
||||
#define VMM_SIZE (__IA64_UL_CONST(8)<<20)
|
||||
|
||||
/*
|
||||
* Define vm_buffer, used by PAL Services, base address.
|
||||
* Note: vm_buffer is in the VMM-BLOCK, the size must be < 8M
|
||||
*/
|
||||
#define KVM_VM_BUFFER_BASE (KVM_VMM_BASE + VMM_SIZE)
|
||||
#define KVM_VM_BUFFER_SIZE (__IA64_UL_CONST(8)<<20)
|
||||
|
||||
/*
|
||||
* kvm guest's data area looks as follow:
|
||||
*
|
||||
* +----------------------+ ------- KVM_VM_DATA_SIZE
|
||||
* | vcpu[n]'s data | | ___________________KVM_STK_OFFSET
|
||||
* | | | / |
|
||||
* | .......... | | /vcpu's struct&stack |
|
||||
* | .......... | | /---------------------|---- 0
|
||||
* | vcpu[5]'s data | | / vpd |
|
||||
* | vcpu[4]'s data | |/-----------------------|
|
||||
* | vcpu[3]'s data | / vtlb |
|
||||
* | vcpu[2]'s data | /|------------------------|
|
||||
* | vcpu[1]'s data |/ | vhpt |
|
||||
* | vcpu[0]'s data |____________________________|
|
||||
* +----------------------+ |
|
||||
* | memory dirty log | |
|
||||
* +----------------------+ |
|
||||
* | vm's data struct | |
|
||||
* +----------------------+ |
|
||||
* | | |
|
||||
* | | |
|
||||
* | | |
|
||||
* | | |
|
||||
* | | |
|
||||
* | | |
|
||||
* | | |
|
||||
* | vm's p2m table | |
|
||||
* | | |
|
||||
* | | |
|
||||
* | | | |
|
||||
* vm's data->| | | |
|
||||
* +----------------------+ ------- 0
|
||||
* To support large memory, needs to increase the size of p2m.
|
||||
* To support more vcpus, needs to ensure it has enough space to
|
||||
* hold vcpus' data.
|
||||
*/
|
||||
|
||||
#define KVM_VM_DATA_SHIFT 26
|
||||
#define KVM_VM_DATA_SIZE (__IA64_UL_CONST(1) << KVM_VM_DATA_SHIFT)
|
||||
#define KVM_VM_DATA_BASE (KVM_VMM_BASE + KVM_VM_DATA_SIZE)
|
||||
|
||||
#define KVM_P2M_BASE KVM_VM_DATA_BASE
|
||||
#define KVM_P2M_SIZE (__IA64_UL_CONST(24) << 20)
|
||||
|
||||
#define VHPT_SHIFT 16
|
||||
#define VHPT_SIZE (__IA64_UL_CONST(1) << VHPT_SHIFT)
|
||||
#define VHPT_NUM_ENTRIES (__IA64_UL_CONST(1) << (VHPT_SHIFT-5))
|
||||
|
||||
#define VTLB_SHIFT 16
|
||||
#define VTLB_SIZE (__IA64_UL_CONST(1) << VTLB_SHIFT)
|
||||
#define VTLB_NUM_ENTRIES (1UL << (VHPT_SHIFT-5))
|
||||
|
||||
#define VPD_SHIFT 16
|
||||
#define VPD_SIZE (__IA64_UL_CONST(1) << VPD_SHIFT)
|
||||
|
||||
#define VCPU_STRUCT_SHIFT 16
|
||||
#define VCPU_STRUCT_SIZE (__IA64_UL_CONST(1) << VCPU_STRUCT_SHIFT)
|
||||
|
||||
/*
|
||||
* This must match KVM_IA64_VCPU_STACK_{SHIFT,SIZE} arch/ia64/include/asm/kvm.h
|
||||
*/
|
||||
#define KVM_STK_SHIFT 16
|
||||
#define KVM_STK_OFFSET (__IA64_UL_CONST(1)<< KVM_STK_SHIFT)
|
||||
|
||||
#define KVM_VM_STRUCT_SHIFT 19
|
||||
#define KVM_VM_STRUCT_SIZE (__IA64_UL_CONST(1) << KVM_VM_STRUCT_SHIFT)
|
||||
|
||||
#define KVM_MEM_DIRY_LOG_SHIFT 19
|
||||
#define KVM_MEM_DIRTY_LOG_SIZE (__IA64_UL_CONST(1) << KVM_MEM_DIRY_LOG_SHIFT)
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
/*Define the max vcpus and memory for Guests.*/
|
||||
#define KVM_MAX_VCPUS (KVM_VM_DATA_SIZE - KVM_P2M_SIZE - KVM_VM_STRUCT_SIZE -\
|
||||
KVM_MEM_DIRTY_LOG_SIZE) / sizeof(struct kvm_vcpu_data)
|
||||
#define KVM_MAX_MEM_SIZE (KVM_P2M_SIZE >> 3 << PAGE_SHIFT)
|
||||
|
||||
#define VMM_LOG_LEN 256
|
||||
|
||||
#include <linux/types.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/kvm.h>
|
||||
#include <linux/kvm_para.h>
|
||||
#include <linux/kvm_types.h>
|
||||
|
||||
#include <asm/pal.h>
|
||||
#include <asm/sal.h>
|
||||
#include <asm/page.h>
|
||||
|
||||
struct kvm_vcpu_data {
|
||||
char vcpu_vhpt[VHPT_SIZE];
|
||||
char vcpu_vtlb[VTLB_SIZE];
|
||||
char vcpu_vpd[VPD_SIZE];
|
||||
char vcpu_struct[VCPU_STRUCT_SIZE];
|
||||
};
|
||||
|
||||
struct kvm_vm_data {
|
||||
char kvm_p2m[KVM_P2M_SIZE];
|
||||
char kvm_vm_struct[KVM_VM_STRUCT_SIZE];
|
||||
char kvm_mem_dirty_log[KVM_MEM_DIRTY_LOG_SIZE];
|
||||
struct kvm_vcpu_data vcpu_data[KVM_MAX_VCPUS];
|
||||
};
|
||||
|
||||
#define VCPU_BASE(n) (KVM_VM_DATA_BASE + \
|
||||
offsetof(struct kvm_vm_data, vcpu_data[n]))
|
||||
#define KVM_VM_BASE (KVM_VM_DATA_BASE + \
|
||||
offsetof(struct kvm_vm_data, kvm_vm_struct))
|
||||
#define KVM_MEM_DIRTY_LOG_BASE KVM_VM_DATA_BASE + \
|
||||
offsetof(struct kvm_vm_data, kvm_mem_dirty_log)
|
||||
|
||||
#define VHPT_BASE(n) (VCPU_BASE(n) + offsetof(struct kvm_vcpu_data, vcpu_vhpt))
|
||||
#define VTLB_BASE(n) (VCPU_BASE(n) + offsetof(struct kvm_vcpu_data, vcpu_vtlb))
|
||||
#define VPD_BASE(n) (VCPU_BASE(n) + offsetof(struct kvm_vcpu_data, vcpu_vpd))
|
||||
#define VCPU_STRUCT_BASE(n) (VCPU_BASE(n) + \
|
||||
offsetof(struct kvm_vcpu_data, vcpu_struct))
|
||||
|
||||
/*IO section definitions*/
|
||||
#define IOREQ_READ 1
|
||||
#define IOREQ_WRITE 0
|
||||
|
||||
#define STATE_IOREQ_NONE 0
|
||||
#define STATE_IOREQ_READY 1
|
||||
#define STATE_IOREQ_INPROCESS 2
|
||||
#define STATE_IORESP_READY 3
|
||||
|
||||
/*Guest Physical address layout.*/
|
||||
#define GPFN_MEM (0UL << 60) /* Guest pfn is normal mem */
|
||||
#define GPFN_FRAME_BUFFER (1UL << 60) /* VGA framebuffer */
|
||||
#define GPFN_LOW_MMIO (2UL << 60) /* Low MMIO range */
|
||||
#define GPFN_PIB (3UL << 60) /* PIB base */
|
||||
#define GPFN_IOSAPIC (4UL << 60) /* IOSAPIC base */
|
||||
#define GPFN_LEGACY_IO (5UL << 60) /* Legacy I/O base */
|
||||
#define GPFN_GFW (6UL << 60) /* Guest Firmware */
|
||||
#define GPFN_PHYS_MMIO (7UL << 60) /* Directed MMIO Range */
|
||||
|
||||
#define GPFN_IO_MASK (7UL << 60) /* Guest pfn is I/O type */
|
||||
#define GPFN_INV_MASK (1UL << 63) /* Guest pfn is invalid */
|
||||
#define INVALID_MFN (~0UL)
|
||||
#define MEM_G (1UL << 30)
|
||||
#define MEM_M (1UL << 20)
|
||||
#define MMIO_START (3 * MEM_G)
|
||||
#define MMIO_SIZE (512 * MEM_M)
|
||||
#define VGA_IO_START 0xA0000UL
|
||||
#define VGA_IO_SIZE 0x20000
|
||||
#define LEGACY_IO_START (MMIO_START + MMIO_SIZE)
|
||||
#define LEGACY_IO_SIZE (64 * MEM_M)
|
||||
#define IO_SAPIC_START 0xfec00000UL
|
||||
#define IO_SAPIC_SIZE 0x100000
|
||||
#define PIB_START 0xfee00000UL
|
||||
#define PIB_SIZE 0x200000
|
||||
#define GFW_START (4 * MEM_G - 16 * MEM_M)
|
||||
#define GFW_SIZE (16 * MEM_M)
|
||||
|
||||
/*Deliver mode, defined for ioapic.c*/
|
||||
#define dest_Fixed IOSAPIC_FIXED
|
||||
#define dest_LowestPrio IOSAPIC_LOWEST_PRIORITY
|
||||
|
||||
#define NMI_VECTOR 2
|
||||
#define ExtINT_VECTOR 0
|
||||
#define NULL_VECTOR (-1)
|
||||
#define IA64_SPURIOUS_INT_VECTOR 0x0f
|
||||
|
||||
#define VCPU_LID(v) (((u64)(v)->vcpu_id) << 24)
|
||||
|
||||
/*
|
||||
*Delivery mode
|
||||
*/
|
||||
#define SAPIC_DELIV_SHIFT 8
|
||||
#define SAPIC_FIXED 0x0
|
||||
#define SAPIC_LOWEST_PRIORITY 0x1
|
||||
#define SAPIC_PMI 0x2
|
||||
#define SAPIC_NMI 0x4
|
||||
#define SAPIC_INIT 0x5
|
||||
#define SAPIC_EXTINT 0x7
|
||||
|
||||
/*
|
||||
* vcpu->requests bit members for arch
|
||||
*/
|
||||
#define KVM_REQ_PTC_G 32
|
||||
#define KVM_REQ_RESUME 33
|
||||
|
||||
struct kvm_mmio_req {
|
||||
uint64_t addr; /* physical address */
|
||||
uint64_t size; /* size in bytes */
|
||||
uint64_t data; /* data (or paddr of data) */
|
||||
uint8_t state:4;
|
||||
uint8_t dir:1; /* 1=read, 0=write */
|
||||
};
|
||||
|
||||
/*Pal data struct */
|
||||
struct kvm_pal_call{
|
||||
/*In area*/
|
||||
uint64_t gr28;
|
||||
uint64_t gr29;
|
||||
uint64_t gr30;
|
||||
uint64_t gr31;
|
||||
/*Out area*/
|
||||
struct ia64_pal_retval ret;
|
||||
};
|
||||
|
||||
/* Sal data structure */
|
||||
struct kvm_sal_call{
|
||||
/*In area*/
|
||||
uint64_t in0;
|
||||
uint64_t in1;
|
||||
uint64_t in2;
|
||||
uint64_t in3;
|
||||
uint64_t in4;
|
||||
uint64_t in5;
|
||||
uint64_t in6;
|
||||
uint64_t in7;
|
||||
struct sal_ret_values ret;
|
||||
};
|
||||
|
||||
/*Guest change rr6*/
|
||||
struct kvm_switch_rr6 {
|
||||
uint64_t old_rr;
|
||||
uint64_t new_rr;
|
||||
};
|
||||
|
||||
union ia64_ipi_a{
|
||||
unsigned long val;
|
||||
struct {
|
||||
unsigned long rv : 3;
|
||||
unsigned long ir : 1;
|
||||
unsigned long eid : 8;
|
||||
unsigned long id : 8;
|
||||
unsigned long ib_base : 44;
|
||||
};
|
||||
};
|
||||
|
||||
union ia64_ipi_d {
|
||||
unsigned long val;
|
||||
struct {
|
||||
unsigned long vector : 8;
|
||||
unsigned long dm : 3;
|
||||
unsigned long ig : 53;
|
||||
};
|
||||
};
|
||||
|
||||
/*ipi check exit data*/
|
||||
struct kvm_ipi_data{
|
||||
union ia64_ipi_a addr;
|
||||
union ia64_ipi_d data;
|
||||
};
|
||||
|
||||
/*global purge data*/
|
||||
struct kvm_ptc_g {
|
||||
unsigned long vaddr;
|
||||
unsigned long rr;
|
||||
unsigned long ps;
|
||||
struct kvm_vcpu *vcpu;
|
||||
};
|
||||
|
||||
/*Exit control data */
|
||||
struct exit_ctl_data{
|
||||
uint32_t exit_reason;
|
||||
uint32_t vm_status;
|
||||
union {
|
||||
struct kvm_mmio_req ioreq;
|
||||
struct kvm_pal_call pal_data;
|
||||
struct kvm_sal_call sal_data;
|
||||
struct kvm_switch_rr6 rr_data;
|
||||
struct kvm_ipi_data ipi_data;
|
||||
struct kvm_ptc_g ptc_g_data;
|
||||
} u;
|
||||
};
|
||||
|
||||
union pte_flags {
|
||||
unsigned long val;
|
||||
struct {
|
||||
unsigned long p : 1; /*0 */
|
||||
unsigned long : 1; /* 1 */
|
||||
unsigned long ma : 3; /* 2-4 */
|
||||
unsigned long a : 1; /* 5 */
|
||||
unsigned long d : 1; /* 6 */
|
||||
unsigned long pl : 2; /* 7-8 */
|
||||
unsigned long ar : 3; /* 9-11 */
|
||||
unsigned long ppn : 38; /* 12-49 */
|
||||
unsigned long : 2; /* 50-51 */
|
||||
unsigned long ed : 1; /* 52 */
|
||||
};
|
||||
};
|
||||
|
||||
union ia64_pta {
|
||||
unsigned long val;
|
||||
struct {
|
||||
unsigned long ve : 1;
|
||||
unsigned long reserved0 : 1;
|
||||
unsigned long size : 6;
|
||||
unsigned long vf : 1;
|
||||
unsigned long reserved1 : 6;
|
||||
unsigned long base : 49;
|
||||
};
|
||||
};
|
||||
|
||||
struct thash_cb {
|
||||
/* THASH base information */
|
||||
struct thash_data *hash; /* hash table pointer */
|
||||
union ia64_pta pta;
|
||||
int num;
|
||||
};
|
||||
|
||||
struct kvm_vcpu_stat {
|
||||
u32 halt_wakeup;
|
||||
};
|
||||
|
||||
struct kvm_vcpu_arch {
|
||||
int launched;
|
||||
int last_exit;
|
||||
int last_run_cpu;
|
||||
int vmm_tr_slot;
|
||||
int vm_tr_slot;
|
||||
int sn_rtc_tr_slot;
|
||||
|
||||
#define KVM_MP_STATE_RUNNABLE 0
|
||||
#define KVM_MP_STATE_UNINITIALIZED 1
|
||||
#define KVM_MP_STATE_INIT_RECEIVED 2
|
||||
#define KVM_MP_STATE_HALTED 3
|
||||
int mp_state;
|
||||
|
||||
#define MAX_PTC_G_NUM 3
|
||||
int ptc_g_count;
|
||||
struct kvm_ptc_g ptc_g_data[MAX_PTC_G_NUM];
|
||||
|
||||
/*halt timer to wake up sleepy vcpus*/
|
||||
struct hrtimer hlt_timer;
|
||||
long ht_active;
|
||||
|
||||
struct kvm_lapic *apic; /* kernel irqchip context */
|
||||
struct vpd *vpd;
|
||||
|
||||
/* Exit data for vmm_transition*/
|
||||
struct exit_ctl_data exit_data;
|
||||
|
||||
cpumask_t cache_coherent_map;
|
||||
|
||||
unsigned long vmm_rr;
|
||||
unsigned long host_rr6;
|
||||
unsigned long psbits[8];
|
||||
unsigned long cr_iipa;
|
||||
unsigned long cr_isr;
|
||||
unsigned long vsa_base;
|
||||
unsigned long dirty_log_lock_pa;
|
||||
unsigned long __gp;
|
||||
/* TR and TC. */
|
||||
struct thash_data itrs[NITRS];
|
||||
struct thash_data dtrs[NDTRS];
|
||||
/* Bit is set if there is a tr/tc for the region. */
|
||||
unsigned char itr_regions;
|
||||
unsigned char dtr_regions;
|
||||
unsigned char tc_regions;
|
||||
/* purge all */
|
||||
unsigned long ptce_base;
|
||||
unsigned long ptce_count[2];
|
||||
unsigned long ptce_stride[2];
|
||||
/* itc/itm */
|
||||
unsigned long last_itc;
|
||||
long itc_offset;
|
||||
unsigned long itc_check;
|
||||
unsigned long timer_check;
|
||||
unsigned int timer_pending;
|
||||
unsigned int timer_fired;
|
||||
|
||||
unsigned long vrr[8];
|
||||
unsigned long ibr[8];
|
||||
unsigned long dbr[8];
|
||||
unsigned long insvc[4]; /* Interrupt in service. */
|
||||
unsigned long xtp;
|
||||
|
||||
unsigned long metaphysical_rr0; /* from kvm_arch (so is pinned) */
|
||||
unsigned long metaphysical_rr4; /* from kvm_arch (so is pinned) */
|
||||
unsigned long metaphysical_saved_rr0; /* from kvm_arch */
|
||||
unsigned long metaphysical_saved_rr4; /* from kvm_arch */
|
||||
unsigned long fp_psr; /*used for lazy float register */
|
||||
unsigned long saved_gp;
|
||||
/*for phycial emulation */
|
||||
int mode_flags;
|
||||
struct thash_cb vtlb;
|
||||
struct thash_cb vhpt;
|
||||
char irq_check;
|
||||
char irq_new_pending;
|
||||
|
||||
unsigned long opcode;
|
||||
unsigned long cause;
|
||||
char log_buf[VMM_LOG_LEN];
|
||||
union context host;
|
||||
union context guest;
|
||||
|
||||
char mmio_data[8];
|
||||
};
|
||||
|
||||
struct kvm_vm_stat {
|
||||
u64 remote_tlb_flush;
|
||||
};
|
||||
|
||||
struct kvm_sal_data {
|
||||
unsigned long boot_ip;
|
||||
unsigned long boot_gp;
|
||||
};
|
||||
|
||||
struct kvm_arch_memory_slot {
|
||||
};
|
||||
|
||||
struct kvm_arch {
|
||||
spinlock_t dirty_log_lock;
|
||||
|
||||
        unsigned long vm_base;
        unsigned long metaphysical_rr0;
        unsigned long metaphysical_rr4;
        unsigned long vmm_init_rr;

        int is_sn2;

        struct kvm_ioapic *vioapic;
        struct kvm_vm_stat stat;
        struct kvm_sal_data rdv_sal_data;

        struct list_head assigned_dev_head;
        struct iommu_domain *iommu_domain;
        bool iommu_noncoherent;

        unsigned long irq_sources_bitmap;
        unsigned long irq_states[KVM_IOAPIC_NUM_PINS];
};

union cpuid3_t {
        u64 value;
        struct {
                u64 number : 8;
                u64 revision : 8;
                u64 model : 8;
                u64 family : 8;
                u64 archrev : 8;
                u64 rv : 24;
        };
};

struct kvm_pt_regs {
        /* The following registers are saved by SAVE_MIN: */
        unsigned long b6;  /* scratch */
        unsigned long b7;  /* scratch */

        unsigned long ar_csd;  /* used by cmp8xchg16 (scratch) */
        unsigned long ar_ssd;  /* reserved for future use (scratch) */

        unsigned long r8;  /* scratch (return value register 0) */
        unsigned long r9;  /* scratch (return value register 1) */
        unsigned long r10; /* scratch (return value register 2) */
        unsigned long r11; /* scratch (return value register 3) */

        unsigned long cr_ipsr; /* interrupted task's psr */
        unsigned long cr_iip;  /* interrupted task's instruction pointer */
        unsigned long cr_ifs;  /* interrupted task's function state */

        unsigned long ar_unat; /* interrupted task's NaT register (preserved) */
        unsigned long ar_pfs;  /* prev function state */
        unsigned long ar_rsc;  /* RSE configuration */
        /* The following two are valid only if cr_ipsr.cpl > 0: */
        unsigned long ar_rnat;     /* RSE NaT */
        unsigned long ar_bspstore; /* RSE bspstore */

        unsigned long pr;     /* 64 predicate registers (1 bit each) */
        unsigned long b0;     /* return pointer (bp) */
        unsigned long loadrs; /* size of dirty partition << 16 */

        unsigned long r1;  /* the gp pointer */
        unsigned long r12; /* interrupted task's memory stack pointer */
        unsigned long r13; /* thread pointer */

        unsigned long ar_fpsr; /* floating point status (preserved) */
        unsigned long r15;     /* scratch */

        /* The remaining registers are NOT saved for system calls. */
        unsigned long r14; /* scratch */
        unsigned long r2;  /* scratch */
        unsigned long r3;  /* scratch */
        unsigned long r16; /* scratch */
        unsigned long r17; /* scratch */
        unsigned long r18; /* scratch */
        unsigned long r19; /* scratch */
        unsigned long r20; /* scratch */
        unsigned long r21; /* scratch */
        unsigned long r22; /* scratch */
        unsigned long r23; /* scratch */
        unsigned long r24; /* scratch */
        unsigned long r25; /* scratch */
        unsigned long r26; /* scratch */
        unsigned long r27; /* scratch */
        unsigned long r28; /* scratch */
        unsigned long r29; /* scratch */
        unsigned long r30; /* scratch */
        unsigned long r31; /* scratch */
        unsigned long ar_ccv; /* compare/exchange value (scratch) */

        /*
         * Floating point registers that the kernel considers scratch:
         */
        struct ia64_fpreg f6;  /* scratch */
        struct ia64_fpreg f7;  /* scratch */
        struct ia64_fpreg f8;  /* scratch */
        struct ia64_fpreg f9;  /* scratch */
        struct ia64_fpreg f10; /* scratch */
        struct ia64_fpreg f11; /* scratch */

        unsigned long r4; /* preserved */
        unsigned long r5; /* preserved */
        unsigned long r6; /* preserved */
        unsigned long r7; /* preserved */
        unsigned long eml_unat; /* used for emulating instruction */
        unsigned long pad0;     /* alignment pad */
};

static inline struct kvm_pt_regs *vcpu_regs(struct kvm_vcpu *v)
{
        return (struct kvm_pt_regs *) ((unsigned long) v + KVM_STK_OFFSET) - 1;
}

typedef int kvm_vmm_entry(void);
typedef void kvm_tramp_entry(union context *host, union context *guest);

struct kvm_vmm_info{
        struct module *module;
        kvm_vmm_entry *vmm_entry;
        kvm_tramp_entry *tramp_entry;
        unsigned long vmm_ivt;
        unsigned long patch_mov_ar;
        unsigned long patch_mov_ar_sn2;
};

int kvm_highest_pending_irq(struct kvm_vcpu *vcpu);
int kvm_emulate_halt(struct kvm_vcpu *vcpu);
int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
void kvm_sal_emul(struct kvm_vcpu *vcpu);

#define __KVM_HAVE_ARCH_VM_ALLOC 1
struct kvm *kvm_arch_alloc_vm(void);
void kvm_arch_free_vm(struct kvm *kvm);

static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm,
                struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm) {}
static inline void kvm_arch_commit_memory_region(struct kvm *kvm,
                struct kvm_userspace_memory_region *mem,
                const struct kvm_memory_slot *old,
                enum kvm_mr_change change) {}
static inline void kvm_arch_hardware_unsetup(void) {}

#endif /* __ASSEMBLY__*/

#endif
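The vcpu_regs() helper in the removed header above places the register frame at the very top of a fixed-size per-vcpu area and finds it by subtracting one struct from the end of that area. A small stand-alone sketch of the same pointer arithmetic follows; DEMO_STK_OFFSET and demo_pt_regs are made-up stand-ins, not the real ia64 KVM_STK_OFFSET or struct kvm_pt_regs.

/* Illustration only: the "one struct below the end of the area" idiom. */
#include <stdio.h>

#define DEMO_STK_OFFSET (1UL << 16)   /* assumed size of the per-vcpu area */

struct demo_pt_regs {
        unsigned long r[32];          /* stand-in for struct kvm_pt_regs */
};

int main(void)
{
        static unsigned char vcpu_area[DEMO_STK_OFFSET];

        /* Same arithmetic as vcpu_regs(): frame sits at the top of the area. */
        struct demo_pt_regs *regs =
                (struct demo_pt_regs *)(vcpu_area + DEMO_STK_OFFSET) - 1;

        printf("register frame occupies the last %zu bytes of the area\n",
               (size_t)(vcpu_area + DEMO_STK_OFFSET - (unsigned char *)regs));
        return 0;
}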
@@ -1,48 +0,0 @@
/*
 * same structure to x86's
 * Hopefully asm-x86/pvclock-abi.h would be moved to somewhere more generic.
 * For now, define same duplicated definitions.
 */

#ifndef _ASM_IA64__PVCLOCK_ABI_H
#define _ASM_IA64__PVCLOCK_ABI_H
#ifndef __ASSEMBLY__

/*
 * These structs MUST NOT be changed.
 * They are the ABI between hypervisor and guest OS.
 * KVM is using this.
 *
 * pvclock_vcpu_time_info holds the system time and the tsc timestamp
 * of the last update. So the guest can use the tsc delta to get a
 * more precise system time. There is one per virtual cpu.
 *
 * pvclock_wall_clock references the point in time when the system
 * time was zero (usually boot time), thus the guest calculates the
 * current wall clock by adding the system time.
 *
 * Protocol for the "version" fields is: hypervisor raises it (making
 * it uneven) before it starts updating the fields and raises it again
 * (making it even) when it is done. Thus the guest can make sure the
 * time values it got are consistent by checking the version before
 * and after reading them.
 */

struct pvclock_vcpu_time_info {
        u32 version;
        u32 pad0;
        u64 tsc_timestamp;
        u64 system_time;
        u32 tsc_to_system_mul;
        s8 tsc_shift;
        u8 pad[3];
} __attribute__((__packed__)); /* 32 bytes */

struct pvclock_wall_clock {
        u32 version;
        u32 sec;
        u32 nsec;
} __attribute__((__packed__));

#endif /* __ASSEMBLY__ */
#endif /* _ASM_IA64__PVCLOCK_ABI_H */
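The "version" handshake documented in the removed header above is a seqcount-style protocol: retry the read until the version is even and unchanged. The following stand-alone C sketch (not part of this tree; the struct is copied from above, and a real guest would additionally need memory barriers) shows one way a guest could read system_time consistently under that rule.

#include <stdint.h>

struct pvclock_vcpu_time_info {
        uint32_t version;
        uint32_t pad0;
        uint64_t tsc_timestamp;
        uint64_t system_time;
        uint32_t tsc_to_system_mul;
        int8_t tsc_shift;
        uint8_t pad[3];
} __attribute__((__packed__));

static uint64_t pvclock_read_system_time(volatile struct pvclock_vcpu_time_info *ti)
{
        uint32_t before, after;
        uint64_t t;

        do {
                before = ti->version;   /* odd => hypervisor update in flight */
                t = ti->system_time;
                after = ti->version;    /* changed => re-read the snapshot */
        } while ((before & 1) || before != after);

        return t;
}

int main(void)
{
        struct pvclock_vcpu_time_info ti = { .version = 2, .system_time = 123456789ULL };

        return pvclock_read_system_time(&ti) == 123456789ULL ? 0 : 1;
}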
@@ -1,268 +0,0 @@
|
|||
#ifndef __ASM_IA64_KVM_H
|
||||
#define __ASM_IA64_KVM_H
|
||||
|
||||
/*
|
||||
* kvm structure definitions for ia64
|
||||
*
|
||||
* Copyright (C) 2007 Xiantao Zhang <xiantao.zhang@intel.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
#include <linux/types.h>
|
||||
#include <linux/ioctl.h>
|
||||
|
||||
/* Select x86 specific features in <linux/kvm.h> */
|
||||
#define __KVM_HAVE_IOAPIC
|
||||
#define __KVM_HAVE_IRQ_LINE
|
||||
|
||||
/* Architectural interrupt line count. */
|
||||
#define KVM_NR_INTERRUPTS 256
|
||||
|
||||
#define KVM_IOAPIC_NUM_PINS 48
|
||||
|
||||
struct kvm_ioapic_state {
|
||||
__u64 base_address;
|
||||
__u32 ioregsel;
|
||||
__u32 id;
|
||||
__u32 irr;
|
||||
__u32 pad;
|
||||
union {
|
||||
__u64 bits;
|
||||
struct {
|
||||
__u8 vector;
|
||||
__u8 delivery_mode:3;
|
||||
__u8 dest_mode:1;
|
||||
__u8 delivery_status:1;
|
||||
__u8 polarity:1;
|
||||
__u8 remote_irr:1;
|
||||
__u8 trig_mode:1;
|
||||
__u8 mask:1;
|
||||
__u8 reserve:7;
|
||||
__u8 reserved[4];
|
||||
__u8 dest_id;
|
||||
} fields;
|
||||
} redirtbl[KVM_IOAPIC_NUM_PINS];
|
||||
};
|
||||
|
||||
#define KVM_IRQCHIP_PIC_MASTER 0
|
||||
#define KVM_IRQCHIP_PIC_SLAVE 1
|
||||
#define KVM_IRQCHIP_IOAPIC 2
|
||||
#define KVM_NR_IRQCHIPS 3
|
||||
|
||||
#define KVM_CONTEXT_SIZE 8*1024
|
||||
|
||||
struct kvm_fpreg {
|
||||
union {
|
||||
unsigned long bits[2];
|
||||
long double __dummy; /* force 16-byte alignment */
|
||||
} u;
|
||||
};
|
||||
|
||||
union context {
|
||||
/* 8K size */
|
||||
char dummy[KVM_CONTEXT_SIZE];
|
||||
struct {
|
||||
unsigned long psr;
|
||||
unsigned long pr;
|
||||
unsigned long caller_unat;
|
||||
unsigned long pad;
|
||||
unsigned long gr[32];
|
||||
unsigned long ar[128];
|
||||
unsigned long br[8];
|
||||
unsigned long cr[128];
|
||||
unsigned long rr[8];
|
||||
unsigned long ibr[8];
|
||||
unsigned long dbr[8];
|
||||
unsigned long pkr[8];
|
||||
struct kvm_fpreg fr[128];
|
||||
};
|
||||
};
|
||||
|
||||
struct thash_data {
|
||||
union {
|
||||
struct {
|
||||
unsigned long p : 1; /* 0 */
|
||||
unsigned long rv1 : 1; /* 1 */
|
||||
unsigned long ma : 3; /* 2-4 */
|
||||
unsigned long a : 1; /* 5 */
|
||||
unsigned long d : 1; /* 6 */
|
||||
unsigned long pl : 2; /* 7-8 */
|
||||
unsigned long ar : 3; /* 9-11 */
|
||||
unsigned long ppn : 38; /* 12-49 */
|
||||
unsigned long rv2 : 2; /* 50-51 */
|
||||
unsigned long ed : 1; /* 52 */
|
||||
unsigned long ig1 : 11; /* 53-63 */
|
||||
};
|
||||
struct {
|
||||
unsigned long __rv1 : 53; /* 0-52 */
|
||||
unsigned long contiguous : 1; /*53 */
|
||||
unsigned long tc : 1; /* 54 TR or TC */
|
||||
unsigned long cl : 1;
|
||||
/* 55 I side or D side cache line */
|
||||
unsigned long len : 4; /* 56-59 */
|
||||
unsigned long io : 1; /* 60 entry is for io or not */
|
||||
unsigned long nomap : 1;
|
||||
/* 61 entry cann't be inserted into machine TLB.*/
|
||||
unsigned long checked : 1;
|
||||
/* 62 for VTLB/VHPT sanity check */
|
||||
unsigned long invalid : 1;
|
||||
/* 63 invalid entry */
|
||||
};
|
||||
unsigned long page_flags;
|
||||
}; /* same for VHPT and TLB */
|
||||
|
||||
union {
|
||||
struct {
|
||||
unsigned long rv3 : 2;
|
||||
unsigned long ps : 6;
|
||||
unsigned long key : 24;
|
||||
unsigned long rv4 : 32;
|
||||
};
|
||||
unsigned long itir;
|
||||
};
|
||||
union {
|
||||
struct {
|
||||
unsigned long ig2 : 12;
|
||||
unsigned long vpn : 49;
|
||||
unsigned long vrn : 3;
|
||||
};
|
||||
unsigned long ifa;
|
||||
unsigned long vadr;
|
||||
struct {
|
||||
unsigned long tag : 63;
|
||||
unsigned long ti : 1;
|
||||
};
|
||||
unsigned long etag;
|
||||
};
|
||||
union {
|
||||
struct thash_data *next;
|
||||
unsigned long rid;
|
||||
unsigned long gpaddr;
|
||||
};
|
||||
};
|
||||
|
||||
#define NITRS 8
|
||||
#define NDTRS 8
|
||||
|
||||
struct saved_vpd {
|
||||
unsigned long vhpi;
|
||||
unsigned long vgr[16];
|
||||
unsigned long vbgr[16];
|
||||
unsigned long vnat;
|
||||
unsigned long vbnat;
|
||||
unsigned long vcpuid[5];
|
||||
unsigned long vpsr;
|
||||
unsigned long vpr;
|
||||
union {
|
||||
unsigned long vcr[128];
|
||||
struct {
|
||||
unsigned long dcr;
|
||||
unsigned long itm;
|
||||
unsigned long iva;
|
||||
unsigned long rsv1[5];
|
||||
unsigned long pta;
|
||||
unsigned long rsv2[7];
|
||||
unsigned long ipsr;
|
||||
unsigned long isr;
|
||||
unsigned long rsv3;
|
||||
unsigned long iip;
|
||||
unsigned long ifa;
|
||||
unsigned long itir;
|
||||
unsigned long iipa;
|
||||
unsigned long ifs;
|
||||
unsigned long iim;
|
||||
unsigned long iha;
|
||||
unsigned long rsv4[38];
|
||||
unsigned long lid;
|
||||
unsigned long ivr;
|
||||
unsigned long tpr;
|
||||
unsigned long eoi;
|
||||
unsigned long irr[4];
|
||||
unsigned long itv;
|
||||
unsigned long pmv;
|
||||
unsigned long cmcv;
|
||||
unsigned long rsv5[5];
|
||||
unsigned long lrr0;
|
||||
unsigned long lrr1;
|
||||
unsigned long rsv6[46];
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
struct kvm_regs {
|
||||
struct saved_vpd vpd;
|
||||
/*Arch-regs*/
|
||||
int mp_state;
|
||||
unsigned long vmm_rr;
|
||||
/* TR and TC. */
|
||||
struct thash_data itrs[NITRS];
|
||||
struct thash_data dtrs[NDTRS];
|
||||
/* Bit is set if there is a tr/tc for the region. */
|
||||
unsigned char itr_regions;
|
||||
unsigned char dtr_regions;
|
||||
unsigned char tc_regions;
|
||||
|
||||
char irq_check;
|
||||
unsigned long saved_itc;
|
||||
unsigned long itc_check;
|
||||
unsigned long timer_check;
|
||||
unsigned long timer_pending;
|
||||
unsigned long last_itc;
|
||||
|
||||
unsigned long vrr[8];
|
||||
unsigned long ibr[8];
|
||||
unsigned long dbr[8];
|
||||
unsigned long insvc[4]; /* Interrupt in service. */
|
||||
unsigned long xtp;
|
||||
|
||||
unsigned long metaphysical_rr0; /* from kvm_arch (so is pinned) */
|
||||
unsigned long metaphysical_rr4; /* from kvm_arch (so is pinned) */
|
||||
unsigned long metaphysical_saved_rr0; /* from kvm_arch */
|
||||
unsigned long metaphysical_saved_rr4; /* from kvm_arch */
|
||||
unsigned long fp_psr; /*used for lazy float register */
|
||||
unsigned long saved_gp;
|
||||
/*for phycial emulation */
|
||||
|
||||
union context saved_guest;
|
||||
|
||||
unsigned long reserved[64]; /* for future use */
|
||||
};
|
||||
|
||||
struct kvm_sregs {
|
||||
};
|
||||
|
||||
struct kvm_fpu {
|
||||
};
|
||||
|
||||
#define KVM_IA64_VCPU_STACK_SHIFT 16
|
||||
#define KVM_IA64_VCPU_STACK_SIZE (1UL << KVM_IA64_VCPU_STACK_SHIFT)
|
||||
|
||||
struct kvm_ia64_vcpu_stack {
|
||||
unsigned char stack[KVM_IA64_VCPU_STACK_SIZE];
|
||||
};
|
||||
|
||||
struct kvm_debug_exit_arch {
|
||||
};
|
||||
|
||||
/* for KVM_SET_GUEST_DEBUG */
|
||||
struct kvm_guest_debug_arch {
|
||||
};
|
||||
|
||||
/* definition of registers in kvm_run */
|
||||
struct kvm_sync_regs {
|
||||
};
|
||||
|
||||
#endif
|
|
@@ -1,66 +0,0 @@
#
# KVM configuration
#

source "virt/kvm/Kconfig"

menuconfig VIRTUALIZATION
        bool "Virtualization"
        depends on HAVE_KVM || IA64
        default y
        ---help---
          Say Y here to get to see options for using your Linux host to run other
          operating systems inside virtual machines (guests).
          This option alone does not add any kernel code.

          If you say N, all options in this submenu will be skipped and disabled.

if VIRTUALIZATION

config KVM
        tristate "Kernel-based Virtual Machine (KVM) support"
        depends on BROKEN
        depends on HAVE_KVM && MODULES
        depends on BROKEN
        select PREEMPT_NOTIFIERS
        select ANON_INODES
        select HAVE_KVM_IRQCHIP
        select HAVE_KVM_IRQFD
        select HAVE_KVM_IRQ_ROUTING
        select KVM_APIC_ARCHITECTURE
        select KVM_MMIO
        ---help---
          Support hosting fully virtualized guest machines using hardware
          virtualization extensions.  You will need a fairly recent
          processor equipped with virtualization extensions. You will also
          need to select one or more of the processor modules below.

          This module provides access to the hardware capabilities through
          a character device node named /dev/kvm.

          To compile this as a module, choose M here: the module
          will be called kvm.

          If unsure, say N.

config KVM_INTEL
        tristate "KVM for Intel Itanium 2 processors support"
        depends on KVM && m
        ---help---
          Provides support for KVM on Itanium 2 processors equipped with the VT
          extensions.

config KVM_DEVICE_ASSIGNMENT
        bool "KVM legacy PCI device assignment support"
        depends on KVM && PCI && IOMMU_API
        default y
        ---help---
          Provide support for legacy PCI device assignment through KVM.  The
          kernel now also supports a full featured userspace device driver
          framework through VFIO, which supersedes much of this support.

          If unsure, say Y.

source drivers/vhost/Kconfig

endif # VIRTUALIZATION
@@ -1,67 +0,0 @@
#This Make file is to generate asm-offsets.h and build source.
#

#Generate asm-offsets.h for vmm module build
offsets-file := asm-offsets.h

always := $(offsets-file)
targets := $(offsets-file)
targets += arch/ia64/kvm/asm-offsets.s

# Default sed regexp - multiline due to syntax constraints
define sed-y
        "/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}"
endef

quiet_cmd_offsets = GEN $@
define cmd_offsets
        (set -e; \
         echo "#ifndef __ASM_KVM_OFFSETS_H__"; \
         echo "#define __ASM_KVM_OFFSETS_H__"; \
         echo "/*"; \
         echo " * DO NOT MODIFY."; \
         echo " *"; \
         echo " * This file was generated by Makefile"; \
         echo " *"; \
         echo " */"; \
         echo ""; \
         sed -ne $(sed-y) $<; \
         echo ""; \
         echo "#endif" ) > $@
endef

# We use internal rules to avoid the "is up to date" message from make
arch/ia64/kvm/asm-offsets.s: arch/ia64/kvm/asm-offsets.c \
                $(wildcard $(srctree)/arch/ia64/include/asm/*.h)\
                $(wildcard $(srctree)/include/linux/*.h)
        $(call if_changed_dep,cc_s_c)

$(obj)/$(offsets-file): arch/ia64/kvm/asm-offsets.s
        $(call cmd,offsets)

FORCE : $(obj)/$(offsets-file)

#
# Makefile for Kernel-based Virtual Machine module
#

ccflags-y := -Ivirt/kvm -Iarch/ia64/kvm/
asflags-y := -Ivirt/kvm -Iarch/ia64/kvm/
KVM := ../../../virt/kvm

common-objs = $(KVM)/kvm_main.o $(KVM)/ioapic.o \
        $(KVM)/coalesced_mmio.o $(KVM)/irq_comm.o

ifeq ($(CONFIG_KVM_DEVICE_ASSIGNMENT),y)
common-objs += $(KVM)/assigned-dev.o $(KVM)/iommu.o
endif

kvm-objs := $(common-objs) kvm-ia64.o kvm_fw.o
obj-$(CONFIG_KVM) += kvm.o

CFLAGS_vcpu.o += -mfixed-range=f2-f5,f12-f127
kvm-intel-objs = vmm.o vmm_ivt.o trampoline.o vcpu.o optvfault.o mmio.o \
        vtlb.o process.o kvm_lib.o
#Add link memcpy and memset to avoid possible structure assignment error
kvm-intel-objs += memcpy.o memset.o
obj-$(CONFIG_KVM_INTEL) += kvm-intel.o
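The sed-y and cmd_offsets rules in the Makefile above turn each "->SYMBOL value comment" marker emitted by compiling asm-offsets.c into a C macro definition. A sketch of what the generated asm-offsets.h ends up looking like is shown below; the symbol names come from the asm-offsets.c in this diff, but the numeric values are made-up placeholders, not the real ia64 offsets.

/* Sketch of generated output; values below are placeholders, not real offsets. */
#ifndef __ASM_KVM_OFFSETS_H__
#define __ASM_KVM_OFFSETS_H__
/*
 * DO NOT MODIFY.
 *
 * This file was generated by Makefile
 *
 */

#define VMM_TASK_SIZE 65536 /* sizeof(struct kvm_vcpu) */
#define VMM_PT_REGS_R16_OFFSET 312 /* offsetof(struct kvm_pt_regs, r16) */

#endif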
@@ -1,241 +0,0 @@
|
|||
/*
|
||||
* asm-offsets.c Generate definitions needed by assembly language modules.
|
||||
* This code generates raw asm output which is post-processed
|
||||
* to extract and format the required data.
|
||||
*
|
||||
* Anthony Xu <anthony.xu@intel.com>
|
||||
* Xiantao Zhang <xiantao.zhang@intel.com>
|
||||
* Copyright (c) 2007 Intel Corporation KVM support.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/kbuild.h>
|
||||
|
||||
#include "vcpu.h"
|
||||
|
||||
void foo(void)
|
||||
{
|
||||
DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu));
|
||||
DEFINE(VMM_PT_REGS_SIZE, sizeof(struct kvm_pt_regs));
|
||||
|
||||
BLANK();
|
||||
|
||||
DEFINE(VMM_VCPU_META_RR0_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.metaphysical_rr0));
|
||||
DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET,
|
||||
offsetof(struct kvm_vcpu,
|
||||
arch.metaphysical_saved_rr0));
|
||||
DEFINE(VMM_VCPU_VRR0_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.vrr[0]));
|
||||
DEFINE(VMM_VPD_IRR0_OFFSET,
|
||||
offsetof(struct vpd, irr[0]));
|
||||
DEFINE(VMM_VCPU_ITC_CHECK_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.itc_check));
|
||||
DEFINE(VMM_VCPU_IRQ_CHECK_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.irq_check));
|
||||
DEFINE(VMM_VPD_VHPI_OFFSET,
|
||||
offsetof(struct vpd, vhpi));
|
||||
DEFINE(VMM_VCPU_VSA_BASE_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.vsa_base));
|
||||
DEFINE(VMM_VCPU_VPD_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.vpd));
|
||||
DEFINE(VMM_VCPU_IRQ_CHECK,
|
||||
offsetof(struct kvm_vcpu, arch.irq_check));
|
||||
DEFINE(VMM_VCPU_TIMER_PENDING,
|
||||
offsetof(struct kvm_vcpu, arch.timer_pending));
|
||||
DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.metaphysical_saved_rr0));
|
||||
DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.mode_flags));
|
||||
DEFINE(VMM_VCPU_ITC_OFS_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.itc_offset));
|
||||
DEFINE(VMM_VCPU_LAST_ITC_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.last_itc));
|
||||
DEFINE(VMM_VCPU_SAVED_GP_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.saved_gp));
|
||||
|
||||
BLANK();
|
||||
|
||||
DEFINE(VMM_PT_REGS_B6_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, b6));
|
||||
DEFINE(VMM_PT_REGS_B7_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, b7));
|
||||
DEFINE(VMM_PT_REGS_AR_CSD_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_csd));
|
||||
DEFINE(VMM_PT_REGS_AR_SSD_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_ssd));
|
||||
DEFINE(VMM_PT_REGS_R8_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r8));
|
||||
DEFINE(VMM_PT_REGS_R9_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r9));
|
||||
DEFINE(VMM_PT_REGS_R10_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r10));
|
||||
DEFINE(VMM_PT_REGS_R11_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r11));
|
||||
DEFINE(VMM_PT_REGS_CR_IPSR_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, cr_ipsr));
|
||||
DEFINE(VMM_PT_REGS_CR_IIP_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, cr_iip));
|
||||
DEFINE(VMM_PT_REGS_CR_IFS_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, cr_ifs));
|
||||
DEFINE(VMM_PT_REGS_AR_UNAT_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_unat));
|
||||
DEFINE(VMM_PT_REGS_AR_PFS_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_pfs));
|
||||
DEFINE(VMM_PT_REGS_AR_RSC_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_rsc));
|
||||
DEFINE(VMM_PT_REGS_AR_RNAT_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_rnat));
|
||||
|
||||
DEFINE(VMM_PT_REGS_AR_BSPSTORE_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_bspstore));
|
||||
DEFINE(VMM_PT_REGS_PR_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, pr));
|
||||
DEFINE(VMM_PT_REGS_B0_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, b0));
|
||||
DEFINE(VMM_PT_REGS_LOADRS_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, loadrs));
|
||||
DEFINE(VMM_PT_REGS_R1_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r1));
|
||||
DEFINE(VMM_PT_REGS_R12_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r12));
|
||||
DEFINE(VMM_PT_REGS_R13_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r13));
|
||||
DEFINE(VMM_PT_REGS_AR_FPSR_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_fpsr));
|
||||
DEFINE(VMM_PT_REGS_R15_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r15));
|
||||
DEFINE(VMM_PT_REGS_R14_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r14));
|
||||
DEFINE(VMM_PT_REGS_R2_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r2));
|
||||
DEFINE(VMM_PT_REGS_R3_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r3));
|
||||
DEFINE(VMM_PT_REGS_R16_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r16));
|
||||
DEFINE(VMM_PT_REGS_R17_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r17));
|
||||
DEFINE(VMM_PT_REGS_R18_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r18));
|
||||
DEFINE(VMM_PT_REGS_R19_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r19));
|
||||
DEFINE(VMM_PT_REGS_R20_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r20));
|
||||
DEFINE(VMM_PT_REGS_R21_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r21));
|
||||
DEFINE(VMM_PT_REGS_R22_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r22));
|
||||
DEFINE(VMM_PT_REGS_R23_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r23));
|
||||
DEFINE(VMM_PT_REGS_R24_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r24));
|
||||
DEFINE(VMM_PT_REGS_R25_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r25));
|
||||
DEFINE(VMM_PT_REGS_R26_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r26));
|
||||
DEFINE(VMM_PT_REGS_R27_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r27));
|
||||
DEFINE(VMM_PT_REGS_R28_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r28));
|
||||
DEFINE(VMM_PT_REGS_R29_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r29));
|
||||
DEFINE(VMM_PT_REGS_R30_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r30));
|
||||
DEFINE(VMM_PT_REGS_R31_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r31));
|
||||
DEFINE(VMM_PT_REGS_AR_CCV_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, ar_ccv));
|
||||
DEFINE(VMM_PT_REGS_F6_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, f6));
|
||||
DEFINE(VMM_PT_REGS_F7_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, f7));
|
||||
DEFINE(VMM_PT_REGS_F8_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, f8));
|
||||
DEFINE(VMM_PT_REGS_F9_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, f9));
|
||||
DEFINE(VMM_PT_REGS_F10_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, f10));
|
||||
DEFINE(VMM_PT_REGS_F11_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, f11));
|
||||
DEFINE(VMM_PT_REGS_R4_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r4));
|
||||
DEFINE(VMM_PT_REGS_R5_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r5));
|
||||
DEFINE(VMM_PT_REGS_R6_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r6));
|
||||
DEFINE(VMM_PT_REGS_R7_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, r7));
|
||||
DEFINE(VMM_PT_REGS_EML_UNAT_OFFSET,
|
||||
offsetof(struct kvm_pt_regs, eml_unat));
|
||||
DEFINE(VMM_VCPU_IIPA_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.cr_iipa));
|
||||
DEFINE(VMM_VCPU_OPCODE_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.opcode));
|
||||
DEFINE(VMM_VCPU_CAUSE_OFFSET, offsetof(struct kvm_vcpu, arch.cause));
|
||||
DEFINE(VMM_VCPU_ISR_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.cr_isr));
|
||||
DEFINE(VMM_PT_REGS_R16_SLOT,
|
||||
(((offsetof(struct kvm_pt_regs, r16)
|
||||
- sizeof(struct kvm_pt_regs)) >> 3) & 0x3f));
|
||||
DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.mode_flags));
|
||||
DEFINE(VMM_VCPU_GP_OFFSET, offsetof(struct kvm_vcpu, arch.__gp));
|
||||
BLANK();
|
||||
|
||||
DEFINE(VMM_VPD_BASE_OFFSET, offsetof(struct kvm_vcpu, arch.vpd));
|
||||
DEFINE(VMM_VPD_VIFS_OFFSET, offsetof(struct vpd, ifs));
|
||||
DEFINE(VMM_VLSAPIC_INSVC_BASE_OFFSET,
|
||||
offsetof(struct kvm_vcpu, arch.insvc[0]));
|
||||
DEFINE(VMM_VPD_VPTA_OFFSET, offsetof(struct vpd, pta));
|
||||
DEFINE(VMM_VPD_VPSR_OFFSET, offsetof(struct vpd, vpsr));
|
||||
|
||||
DEFINE(VMM_CTX_R4_OFFSET, offsetof(union context, gr[4]));
|
||||
DEFINE(VMM_CTX_R5_OFFSET, offsetof(union context, gr[5]));
|
||||
DEFINE(VMM_CTX_R12_OFFSET, offsetof(union context, gr[12]));
|
||||
DEFINE(VMM_CTX_R13_OFFSET, offsetof(union context, gr[13]));
|
||||
DEFINE(VMM_CTX_KR0_OFFSET, offsetof(union context, ar[0]));
|
||||
DEFINE(VMM_CTX_KR1_OFFSET, offsetof(union context, ar[1]));
|
||||
DEFINE(VMM_CTX_B0_OFFSET, offsetof(union context, br[0]));
|
||||
DEFINE(VMM_CTX_B1_OFFSET, offsetof(union context, br[1]));
|
||||
DEFINE(VMM_CTX_B2_OFFSET, offsetof(union context, br[2]));
|
||||
DEFINE(VMM_CTX_RR0_OFFSET, offsetof(union context, rr[0]));
|
||||
DEFINE(VMM_CTX_RSC_OFFSET, offsetof(union context, ar[16]));
|
||||
DEFINE(VMM_CTX_BSPSTORE_OFFSET, offsetof(union context, ar[18]));
|
||||
DEFINE(VMM_CTX_RNAT_OFFSET, offsetof(union context, ar[19]));
|
||||
DEFINE(VMM_CTX_FCR_OFFSET, offsetof(union context, ar[21]));
|
||||
DEFINE(VMM_CTX_EFLAG_OFFSET, offsetof(union context, ar[24]));
|
||||
DEFINE(VMM_CTX_CFLG_OFFSET, offsetof(union context, ar[27]));
|
||||
DEFINE(VMM_CTX_FSR_OFFSET, offsetof(union context, ar[28]));
|
||||
DEFINE(VMM_CTX_FIR_OFFSET, offsetof(union context, ar[29]));
|
||||
DEFINE(VMM_CTX_FDR_OFFSET, offsetof(union context, ar[30]));
|
||||
DEFINE(VMM_CTX_UNAT_OFFSET, offsetof(union context, ar[36]));
|
||||
DEFINE(VMM_CTX_FPSR_OFFSET, offsetof(union context, ar[40]));
|
||||
DEFINE(VMM_CTX_PFS_OFFSET, offsetof(union context, ar[64]));
|
||||
DEFINE(VMM_CTX_LC_OFFSET, offsetof(union context, ar[65]));
|
||||
DEFINE(VMM_CTX_DCR_OFFSET, offsetof(union context, cr[0]));
|
||||
DEFINE(VMM_CTX_IVA_OFFSET, offsetof(union context, cr[2]));
|
||||
DEFINE(VMM_CTX_PTA_OFFSET, offsetof(union context, cr[8]));
|
||||
DEFINE(VMM_CTX_IBR0_OFFSET, offsetof(union context, ibr[0]));
|
||||
DEFINE(VMM_CTX_DBR0_OFFSET, offsetof(union context, dbr[0]));
|
||||
DEFINE(VMM_CTX_F2_OFFSET, offsetof(union context, fr[2]));
|
||||
DEFINE(VMM_CTX_F3_OFFSET, offsetof(union context, fr[3]));
|
||||
DEFINE(VMM_CTX_F32_OFFSET, offsetof(union context, fr[32]));
|
||||
DEFINE(VMM_CTX_F33_OFFSET, offsetof(union context, fr[33]));
|
||||
DEFINE(VMM_CTX_PKR0_OFFSET, offsetof(union context, pkr[0]));
|
||||
DEFINE(VMM_CTX_PSR_OFFSET, offsetof(union context, psr));
|
||||
BLANK();
|
||||
}
|
|
@@ -1,33 +0,0 @@
/*
 * irq.h: In-kernel interrupt controller related definitions
 * Copyright (c) 2008, Intel Corporation.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
 * Place - Suite 330, Boston, MA 02111-1307 USA.
 *
 * Authors:
 *   Xiantao Zhang <xiantao.zhang@intel.com>
 *
 */

#ifndef __IRQ_H
#define __IRQ_H

#include "lapic.h"

static inline int irqchip_in_kernel(struct kvm *kvm)
{
        return 1;
}

#endif
File diff suppressed because it is too large
@@ -1,674 +0,0 @@
|
|||
/*
|
||||
* PAL/SAL call delegation
|
||||
*
|
||||
* Copyright (c) 2004 Li Susie <susie.li@intel.com>
|
||||
* Copyright (c) 2005 Yu Ke <ke.yu@intel.com>
|
||||
* Copyright (c) 2007 Xiantao Zhang <xiantao.zhang@intel.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*/
|
||||
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/smp.h>
|
||||
#include <asm/sn/addrs.h>
|
||||
#include <asm/sn/clksupport.h>
|
||||
#include <asm/sn/shub_mmr.h>
|
||||
|
||||
#include "vti.h"
|
||||
#include "misc.h"
|
||||
|
||||
#include <asm/pal.h>
|
||||
#include <asm/sal.h>
|
||||
#include <asm/tlb.h>
|
||||
|
||||
/*
|
||||
* Handy macros to make sure that the PAL return values start out
|
||||
* as something meaningful.
|
||||
*/
|
||||
#define INIT_PAL_STATUS_UNIMPLEMENTED(x) \
|
||||
{ \
|
||||
x.status = PAL_STATUS_UNIMPLEMENTED; \
|
||||
x.v0 = 0; \
|
||||
x.v1 = 0; \
|
||||
x.v2 = 0; \
|
||||
}
|
||||
|
||||
#define INIT_PAL_STATUS_SUCCESS(x) \
|
||||
{ \
|
||||
x.status = PAL_STATUS_SUCCESS; \
|
||||
x.v0 = 0; \
|
||||
x.v1 = 0; \
|
||||
x.v2 = 0; \
|
||||
}
|
||||
|
||||
static void kvm_get_pal_call_data(struct kvm_vcpu *vcpu,
|
||||
u64 *gr28, u64 *gr29, u64 *gr30, u64 *gr31) {
|
||||
struct exit_ctl_data *p;
|
||||
|
||||
if (vcpu) {
|
||||
p = &vcpu->arch.exit_data;
|
||||
if (p->exit_reason == EXIT_REASON_PAL_CALL) {
|
||||
*gr28 = p->u.pal_data.gr28;
|
||||
*gr29 = p->u.pal_data.gr29;
|
||||
*gr30 = p->u.pal_data.gr30;
|
||||
*gr31 = p->u.pal_data.gr31;
|
||||
return ;
|
||||
}
|
||||
}
|
||||
printk(KERN_DEBUG"Failed to get vcpu pal data!!!\n");
|
||||
}
|
||||
|
||||
static void set_pal_result(struct kvm_vcpu *vcpu,
|
||||
struct ia64_pal_retval result) {
|
||||
|
||||
struct exit_ctl_data *p;
|
||||
|
||||
p = kvm_get_exit_data(vcpu);
|
||||
if (p->exit_reason == EXIT_REASON_PAL_CALL) {
|
||||
p->u.pal_data.ret = result;
|
||||
return ;
|
||||
}
|
||||
INIT_PAL_STATUS_UNIMPLEMENTED(p->u.pal_data.ret);
|
||||
}
|
||||
|
||||
static void set_sal_result(struct kvm_vcpu *vcpu,
|
||||
struct sal_ret_values result) {
|
||||
struct exit_ctl_data *p;
|
||||
|
||||
p = kvm_get_exit_data(vcpu);
|
||||
if (p->exit_reason == EXIT_REASON_SAL_CALL) {
|
||||
p->u.sal_data.ret = result;
|
||||
return ;
|
||||
}
|
||||
printk(KERN_WARNING"Failed to set sal result!!\n");
|
||||
}
|
||||
|
||||
struct cache_flush_args {
|
||||
u64 cache_type;
|
||||
u64 operation;
|
||||
u64 progress;
|
||||
long status;
|
||||
};
|
||||
|
||||
cpumask_t cpu_cache_coherent_map;
|
||||
|
||||
static void remote_pal_cache_flush(void *data)
|
||||
{
|
||||
struct cache_flush_args *args = data;
|
||||
long status;
|
||||
u64 progress = args->progress;
|
||||
|
||||
status = ia64_pal_cache_flush(args->cache_type, args->operation,
|
||||
&progress, NULL);
|
||||
if (status != 0)
|
||||
args->status = status;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_cache_flush(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u64 gr28, gr29, gr30, gr31;
|
||||
struct ia64_pal_retval result = {0, 0, 0, 0};
|
||||
struct cache_flush_args args = {0, 0, 0, 0};
|
||||
long psr;
|
||||
|
||||
gr28 = gr29 = gr30 = gr31 = 0;
|
||||
kvm_get_pal_call_data(vcpu, &gr28, &gr29, &gr30, &gr31);
|
||||
|
||||
if (gr31 != 0)
|
||||
printk(KERN_ERR"vcpu:%p called cache_flush error!\n", vcpu);
|
||||
|
||||
/* Always call Host Pal in int=1 */
|
||||
gr30 &= ~PAL_CACHE_FLUSH_CHK_INTRS;
|
||||
args.cache_type = gr29;
|
||||
args.operation = gr30;
|
||||
smp_call_function(remote_pal_cache_flush,
|
||||
(void *)&args, 1);
|
||||
if (args.status != 0)
|
||||
printk(KERN_ERR"pal_cache_flush error!,"
|
||||
"status:0x%lx\n", args.status);
|
||||
/*
|
||||
* Call Host PAL cache flush
|
||||
* Clear psr.ic when call PAL_CACHE_FLUSH
|
||||
*/
|
||||
local_irq_save(psr);
|
||||
result.status = ia64_pal_cache_flush(gr29, gr30, &result.v1,
|
||||
&result.v0);
|
||||
local_irq_restore(psr);
|
||||
if (result.status != 0)
|
||||
printk(KERN_ERR"vcpu:%p crashed due to cache_flush err:%ld"
|
||||
"in1:%lx,in2:%lx\n",
|
||||
vcpu, result.status, gr29, gr30);
|
||||
|
||||
#if 0
|
||||
if (gr29 == PAL_CACHE_TYPE_COHERENT) {
|
||||
cpus_setall(vcpu->arch.cache_coherent_map);
|
||||
cpu_clear(vcpu->cpu, vcpu->arch.cache_coherent_map);
|
||||
cpus_setall(cpu_cache_coherent_map);
|
||||
cpu_clear(vcpu->cpu, cpu_cache_coherent_map);
|
||||
}
|
||||
#endif
|
||||
return result;
|
||||
}
|
||||
|
||||
struct ia64_pal_retval pal_cache_summary(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
struct ia64_pal_retval result;
|
||||
|
||||
PAL_CALL(result, PAL_CACHE_SUMMARY, 0, 0, 0);
|
||||
return result;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_freq_base(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
struct ia64_pal_retval result;
|
||||
|
||||
PAL_CALL(result, PAL_FREQ_BASE, 0, 0, 0);
|
||||
|
||||
/*
|
||||
* PAL_FREQ_BASE may not be implemented in some platforms,
|
||||
* call SAL instead.
|
||||
*/
|
||||
if (result.v0 == 0) {
|
||||
result.status = ia64_sal_freq_base(SAL_FREQ_BASE_PLATFORM,
|
||||
&result.v0,
|
||||
&result.v1);
|
||||
result.v2 = 0;
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/*
|
||||
* On the SGI SN2, the ITC isn't stable. Emulation backed by the SN2
|
||||
* RTC is used instead. This function patches the ratios from SAL
|
||||
* to match the RTC before providing them to the guest.
|
||||
*/
|
||||
static void sn2_patch_itc_freq_ratios(struct ia64_pal_retval *result)
|
||||
{
|
||||
struct pal_freq_ratio *ratio;
|
||||
unsigned long sal_freq, sal_drift, factor;
|
||||
|
||||
result->status = ia64_sal_freq_base(SAL_FREQ_BASE_PLATFORM,
|
||||
&sal_freq, &sal_drift);
|
||||
ratio = (struct pal_freq_ratio *)&result->v2;
|
||||
factor = ((sal_freq * 3) + (sn_rtc_cycles_per_second / 2)) /
|
||||
sn_rtc_cycles_per_second;
|
||||
|
||||
ratio->num = 3;
|
||||
ratio->den = factor;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_freq_ratios(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct ia64_pal_retval result;
|
||||
|
||||
PAL_CALL(result, PAL_FREQ_RATIOS, 0, 0, 0);
|
||||
|
||||
if (vcpu->kvm->arch.is_sn2)
|
||||
sn2_patch_itc_freq_ratios(&result);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_logical_to_physica(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct ia64_pal_retval result;
|
||||
|
||||
INIT_PAL_STATUS_UNIMPLEMENTED(result);
|
||||
return result;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_platform_addr(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
struct ia64_pal_retval result;
|
||||
|
||||
INIT_PAL_STATUS_SUCCESS(result);
|
||||
return result;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_proc_get_features(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
struct ia64_pal_retval result = {0, 0, 0, 0};
|
||||
long in0, in1, in2, in3;
|
||||
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
result.status = ia64_pal_proc_get_features(&result.v0, &result.v1,
|
||||
&result.v2, in2);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_register_info(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
struct ia64_pal_retval result = {0, 0, 0, 0};
|
||||
long in0, in1, in2, in3;
|
||||
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
result.status = ia64_pal_register_info(in1, &result.v1, &result.v2);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_cache_info(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
pal_cache_config_info_t ci;
|
||||
long status;
|
||||
unsigned long in0, in1, in2, in3, r9, r10;
|
||||
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
status = ia64_pal_cache_config_info(in1, in2, &ci);
|
||||
r9 = ci.pcci_info_1.pcci1_data;
|
||||
r10 = ci.pcci_info_2.pcci2_data;
|
||||
return ((struct ia64_pal_retval){status, r9, r10, 0});
|
||||
}
|
||||
|
||||
#define GUEST_IMPL_VA_MSB 59
|
||||
#define GUEST_RID_BITS 18
|
||||
|
||||
static struct ia64_pal_retval pal_vm_summary(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
pal_vm_info_1_u_t vminfo1;
|
||||
pal_vm_info_2_u_t vminfo2;
|
||||
struct ia64_pal_retval result;
|
||||
|
||||
PAL_CALL(result, PAL_VM_SUMMARY, 0, 0, 0);
|
||||
if (!result.status) {
|
||||
vminfo1.pvi1_val = result.v0;
|
||||
vminfo1.pal_vm_info_1_s.max_itr_entry = 8;
|
||||
vminfo1.pal_vm_info_1_s.max_dtr_entry = 8;
|
||||
result.v0 = vminfo1.pvi1_val;
|
||||
vminfo2.pal_vm_info_2_s.impl_va_msb = GUEST_IMPL_VA_MSB;
|
||||
vminfo2.pal_vm_info_2_s.rid_size = GUEST_RID_BITS;
|
||||
result.v1 = vminfo2.pvi2_val;
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_vm_info(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct ia64_pal_retval result;
|
||||
unsigned long in0, in1, in2, in3;
|
||||
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
|
||||
result.status = ia64_pal_vm_info(in1, in2,
|
||||
(pal_tc_info_u_t *)&result.v1, &result.v2);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
static u64 kvm_get_pal_call_index(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u64 index = 0;
|
||||
struct exit_ctl_data *p;
|
||||
|
||||
p = kvm_get_exit_data(vcpu);
|
||||
if (p->exit_reason == EXIT_REASON_PAL_CALL)
|
||||
index = p->u.pal_data.gr28;
|
||||
|
||||
return index;
|
||||
}
|
||||
|
||||
static void prepare_for_halt(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
vcpu->arch.timer_pending = 1;
|
||||
vcpu->arch.timer_fired = 0;
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_perf_mon_info(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
long status;
|
||||
unsigned long in0, in1, in2, in3, r9;
|
||||
unsigned long pm_buffer[16];
|
||||
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
status = ia64_pal_perf_mon_info(pm_buffer,
|
||||
(pal_perf_mon_info_u_t *) &r9);
|
||||
if (status != 0) {
|
||||
printk(KERN_DEBUG"PAL_PERF_MON_INFO fails ret=%ld\n", status);
|
||||
} else {
|
||||
if (in1)
|
||||
memcpy((void *)in1, pm_buffer, sizeof(pm_buffer));
|
||||
else {
|
||||
status = PAL_STATUS_EINVAL;
|
||||
printk(KERN_WARNING"Invalid parameters "
|
||||
"for PAL call:0x%lx!\n", in0);
|
||||
}
|
||||
}
|
||||
return (struct ia64_pal_retval){status, r9, 0, 0};
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_halt_info(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
unsigned long in0, in1, in2, in3;
|
||||
long status;
|
||||
unsigned long res = 1000UL | (1000UL << 16) | (10UL << 32)
|
||||
| (1UL << 61) | (1UL << 60);
|
||||
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
if (in1) {
|
||||
memcpy((void *)in1, &res, sizeof(res));
|
||||
status = 0;
|
||||
} else{
|
||||
status = PAL_STATUS_EINVAL;
|
||||
printk(KERN_WARNING"Invalid parameters "
|
||||
"for PAL call:0x%lx!\n", in0);
|
||||
}
|
||||
|
||||
return (struct ia64_pal_retval){status, 0, 0, 0};
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_mem_attrib(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
unsigned long r9;
|
||||
long status;
|
||||
|
||||
status = ia64_pal_mem_attrib(&r9);
|
||||
|
||||
return (struct ia64_pal_retval){status, r9, 0, 0};
|
||||
}
|
||||
|
||||
static void remote_pal_prefetch_visibility(void *v)
|
||||
{
|
||||
s64 trans_type = (s64)v;
|
||||
ia64_pal_prefetch_visibility(trans_type);
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_prefetch_visibility(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct ia64_pal_retval result = {0, 0, 0, 0};
|
||||
unsigned long in0, in1, in2, in3;
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
result.status = ia64_pal_prefetch_visibility(in1);
|
||||
if (result.status == 0) {
|
||||
/* Must be performed on all remote processors
|
||||
in the coherence domain. */
|
||||
smp_call_function(remote_pal_prefetch_visibility,
|
||||
(void *)in1, 1);
|
||||
/* Unnecessary on remote processor for other vcpus!*/
|
||||
result.status = 1;
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
static void remote_pal_mc_drain(void *v)
|
||||
{
|
||||
ia64_pal_mc_drain();
|
||||
}
|
||||
|
||||
static struct ia64_pal_retval pal_get_brand_info(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct ia64_pal_retval result = {0, 0, 0, 0};
|
||||
unsigned long in0, in1, in2, in3;
|
||||
|
||||
kvm_get_pal_call_data(vcpu, &in0, &in1, &in2, &in3);
|
||||
|
||||
if (in1 == 0 && in2) {
|
||||
char brand_info[128];
|
||||
result.status = ia64_pal_get_brand_info(brand_info);
|
||||
if (result.status == PAL_STATUS_SUCCESS)
|
||||
memcpy((void *)in2, brand_info, 128);
|
||||
} else {
|
||||
result.status = PAL_STATUS_REQUIRES_MEMORY;
|
||||
printk(KERN_WARNING"Invalid parameters for "
|
||||
"PAL call:0x%lx!\n", in0);
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
{
|
||||
|
||||
u64 gr28;
|
||||
struct ia64_pal_retval result;
|
||||
int ret = 1;
|
||||
|
||||
gr28 = kvm_get_pal_call_index(vcpu);
|
||||
switch (gr28) {
|
||||
case PAL_CACHE_FLUSH:
|
||||
result = pal_cache_flush(vcpu);
|
||||
break;
|
||||
case PAL_MEM_ATTRIB:
|
||||
result = pal_mem_attrib(vcpu);
|
||||
break;
|
||||
case PAL_CACHE_SUMMARY:
|
||||
result = pal_cache_summary(vcpu);
|
||||
break;
|
||||
case PAL_PERF_MON_INFO:
|
||||
result = pal_perf_mon_info(vcpu);
|
||||
break;
|
||||
case PAL_HALT_INFO:
|
||||
result = pal_halt_info(vcpu);
|
||||
break;
|
||||
case PAL_HALT_LIGHT:
|
||||
{
|
||||
INIT_PAL_STATUS_SUCCESS(result);
|
||||
prepare_for_halt(vcpu);
|
||||
if (kvm_highest_pending_irq(vcpu) == -1)
|
||||
ret = kvm_emulate_halt(vcpu);
|
||||
}
|
||||
break;
|
||||
|
||||
case PAL_PREFETCH_VISIBILITY:
|
||||
result = pal_prefetch_visibility(vcpu);
|
||||
break;
|
||||
case PAL_MC_DRAIN:
|
||||
result.status = ia64_pal_mc_drain();
|
||||
/* FIXME: All vcpus likely call PAL_MC_DRAIN.
|
||||
That causes the congestion. */
|
||||
smp_call_function(remote_pal_mc_drain, NULL, 1);
|
||||
break;
|
||||
|
||||
case PAL_FREQ_RATIOS:
|
||||
result = pal_freq_ratios(vcpu);
|
||||
break;
|
||||
|
||||
case PAL_FREQ_BASE:
|
||||
result = pal_freq_base(vcpu);
|
||||
break;
|
||||
|
||||
case PAL_LOGICAL_TO_PHYSICAL :
|
||||
result = pal_logical_to_physica(vcpu);
|
||||
break;
|
||||
|
||||
case PAL_VM_SUMMARY :
|
||||
result = pal_vm_summary(vcpu);
|
||||
break;
|
||||
|
||||
case PAL_VM_INFO :
|
||||
result = pal_vm_info(vcpu);
|
||||
break;
|
||||
case PAL_PLATFORM_ADDR :
|
||||
result = pal_platform_addr(vcpu);
|
||||
break;
|
||||
case PAL_CACHE_INFO:
|
||||
result = pal_cache_info(vcpu);
|
||||
break;
|
||||
case PAL_PTCE_INFO:
|
||||
INIT_PAL_STATUS_SUCCESS(result);
|
||||
result.v1 = (1L << 32) | 1L;
|
||||
break;
|
||||
case PAL_REGISTER_INFO:
|
||||
result = pal_register_info(vcpu);
|
||||
break;
|
||||
case PAL_VM_PAGE_SIZE:
|
||||
result.status = ia64_pal_vm_page_size(&result.v0,
|
||||
&result.v1);
|
||||
break;
|
||||
case PAL_RSE_INFO:
|
||||
result.status = ia64_pal_rse_info(&result.v0,
|
||||
(pal_hints_u_t *)&result.v1);
|
||||
break;
|
||||
case PAL_PROC_GET_FEATURES:
|
||||
result = pal_proc_get_features(vcpu);
|
||||
break;
|
||||
case PAL_DEBUG_INFO:
|
||||
result.status = ia64_pal_debug_info(&result.v0,
|
||||
&result.v1);
|
||||
break;
|
||||
case PAL_VERSION:
|
||||
result.status = ia64_pal_version(
|
||||
(pal_version_u_t *)&result.v0,
|
||||
(pal_version_u_t *)&result.v1);
|
||||
break;
|
||||
case PAL_FIXED_ADDR:
|
||||
result.status = PAL_STATUS_SUCCESS;
|
||||
result.v0 = vcpu->vcpu_id;
|
||||
break;
|
||||
case PAL_BRAND_INFO:
|
||||
result = pal_get_brand_info(vcpu);
|
||||
break;
|
||||
case PAL_GET_PSTATE:
|
||||
case PAL_CACHE_SHARED_INFO:
|
||||
INIT_PAL_STATUS_UNIMPLEMENTED(result);
|
||||
break;
|
||||
default:
|
||||
INIT_PAL_STATUS_UNIMPLEMENTED(result);
|
||||
printk(KERN_WARNING"kvm: Unsupported pal call,"
|
||||
" index:0x%lx\n", gr28);
|
||||
}
|
||||
set_pal_result(vcpu, result);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct sal_ret_values sal_emulator(struct kvm *kvm,
|
||||
long index, unsigned long in1,
|
||||
unsigned long in2, unsigned long in3,
|
||||
unsigned long in4, unsigned long in5,
|
||||
unsigned long in6, unsigned long in7)
|
||||
{
|
||||
unsigned long r9 = 0;
|
||||
unsigned long r10 = 0;
|
||||
long r11 = 0;
|
||||
long status;
|
||||
|
||||
status = 0;
|
||||
switch (index) {
|
||||
case SAL_FREQ_BASE:
|
||||
status = ia64_sal_freq_base(in1, &r9, &r10);
|
||||
break;
|
||||
case SAL_PCI_CONFIG_READ:
|
||||
printk(KERN_WARNING"kvm: Not allowed to call here!"
|
||||
" SAL_PCI_CONFIG_READ\n");
|
||||
break;
|
||||
case SAL_PCI_CONFIG_WRITE:
|
||||
printk(KERN_WARNING"kvm: Not allowed to call here!"
|
||||
" SAL_PCI_CONFIG_WRITE\n");
|
||||
break;
|
||||
case SAL_SET_VECTORS:
|
||||
if (in1 == SAL_VECTOR_OS_BOOT_RENDEZ) {
|
||||
if (in4 != 0 || in5 != 0 || in6 != 0 || in7 != 0) {
|
||||
status = -2;
|
||||
} else {
|
||||
kvm->arch.rdv_sal_data.boot_ip = in2;
|
||||
kvm->arch.rdv_sal_data.boot_gp = in3;
|
||||
}
|
||||
printk("Rendvous called! iip:%lx\n\n", in2);
|
||||
} else
|
||||
printk(KERN_WARNING"kvm: CALLED SAL_SET_VECTORS %lu."
|
||||
"ignored...\n", in1);
|
||||
break;
|
||||
case SAL_GET_STATE_INFO:
|
||||
/* No more info. */
|
||||
status = -5;
|
||||
r9 = 0;
|
||||
break;
|
||||
case SAL_GET_STATE_INFO_SIZE:
|
||||
/* Return a dummy size. */
|
||||
status = 0;
|
||||
r9 = 128;
|
||||
break;
|
||||
case SAL_CLEAR_STATE_INFO:
|
||||
/* Noop. */
|
||||
break;
|
||||
case SAL_MC_RENDEZ:
|
||||
printk(KERN_WARNING
|
||||
"kvm: called SAL_MC_RENDEZ. ignored...\n");
|
||||
break;
|
||||
case SAL_MC_SET_PARAMS:
|
||||
printk(KERN_WARNING
|
||||
"kvm: called SAL_MC_SET_PARAMS.ignored!\n");
|
||||
break;
|
||||
case SAL_CACHE_FLUSH:
|
||||
if (1) {
|
||||
/*Flush using SAL.
|
||||
This method is faster but has a side
|
||||
effect on other vcpu running on
|
||||
this cpu. */
|
||||
status = ia64_sal_cache_flush(in1);
|
||||
} else {
|
||||
/*Maybe need to implement the method
|
||||
without side effect!*/
|
||||
status = 0;
|
||||
}
|
||||
break;
|
||||
case SAL_CACHE_INIT:
|
||||
printk(KERN_WARNING
|
||||
"kvm: called SAL_CACHE_INIT. ignored...\n");
|
||||
break;
|
||||
case SAL_UPDATE_PAL:
|
||||
printk(KERN_WARNING
|
||||
"kvm: CALLED SAL_UPDATE_PAL. ignored...\n");
|
||||
break;
|
||||
default:
|
||||
printk(KERN_WARNING"kvm: called SAL_CALL with unknown index."
|
||||
" index:%ld\n", index);
|
||||
status = -1;
|
||||
break;
|
||||
}
|
||||
return ((struct sal_ret_values) {status, r9, r10, r11});
|
||||
}
|
||||
|
||||
static void kvm_get_sal_call_data(struct kvm_vcpu *vcpu, u64 *in0, u64 *in1,
|
||||
u64 *in2, u64 *in3, u64 *in4, u64 *in5, u64 *in6, u64 *in7){
|
||||
|
||||
struct exit_ctl_data *p;
|
||||
|
||||
p = kvm_get_exit_data(vcpu);
|
||||
|
||||
if (p->exit_reason == EXIT_REASON_SAL_CALL) {
|
||||
*in0 = p->u.sal_data.in0;
|
||||
*in1 = p->u.sal_data.in1;
|
||||
*in2 = p->u.sal_data.in2;
|
||||
*in3 = p->u.sal_data.in3;
|
||||
*in4 = p->u.sal_data.in4;
|
||||
*in5 = p->u.sal_data.in5;
|
||||
*in6 = p->u.sal_data.in6;
|
||||
*in7 = p->u.sal_data.in7;
|
||||
return ;
|
||||
}
|
||||
*in0 = 0;
|
||||
}
|
||||
|
||||
void kvm_sal_emul(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
||||
struct sal_ret_values result;
|
||||
u64 index, in1, in2, in3, in4, in5, in6, in7;
|
||||
|
||||
kvm_get_sal_call_data(vcpu, &index, &in1, &in2,
|
||||
&in3, &in4, &in5, &in6, &in7);
|
||||
result = sal_emulator(vcpu->kvm, index, in1, in2, in3,
|
||||
in4, in5, in6, in7);
|
||||
set_sal_result(vcpu, result);
|
||||
}
|
|
@@ -1,21 +0,0 @@
/*
 * kvm_lib.c: Compile some libraries for kvm-intel module.
 *
 * Just include kernel's library, and disable symbols export.
 * Copyright (C) 2008, Intel Corporation.
 * Xiantao Zhang (xiantao.zhang@intel.com)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 */
#undef CONFIG_MODULES
#include <linux/module.h>
#undef CONFIG_KALLSYMS
#undef EXPORT_SYMBOL
#undef EXPORT_SYMBOL_GPL
#define EXPORT_SYMBOL(sym)
#define EXPORT_SYMBOL_GPL(sym)
#include "../../../lib/vsprintf.c"
#include "../../../lib/ctype.c"
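The removed kvm_lib.c above uses a build trick: it neutralizes the export macros and then textually includes library .c files, so their code is compiled straight into the kvm-intel object without exporting symbols. A stand-alone sketch of the same pattern follows; demo_strnlen() is a hypothetical stand-in for a routine that would normally arrive via the '#include "../../../lib/....c"' lines, and in the real kernel the export macro expands to something rather than nothing.

/* Illustration of "include the library source, swallow the export markers". */
#define EXPORT_SYMBOL(sym)        /* export marker becomes a no-op here */
#define EXPORT_SYMBOL_GPL(sym)

static int demo_strnlen(const char *s, int max)
{
        int n = 0;

        while (n < max && s[n])
                n++;
        return n;
}
EXPORT_SYMBOL(demo_strnlen)       /* expands to nothing, so nothing is exported */

int main(void)
{
        return demo_strnlen("kvm", 16) == 3 ? 0 : 1;
}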
@@ -1,266 +0,0 @@
|
|||
/*
|
||||
* kvm_minstate.h: min save macros
|
||||
* Copyright (c) 2007, Intel Corporation.
|
||||
*
|
||||
* Xuefei Xu (Anthony Xu) (Anthony.xu@intel.com)
|
||||
* Xiantao Zhang (xiantao.zhang@intel.com)
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
|
||||
#include <asm/asmmacro.h>
|
||||
#include <asm/types.h>
|
||||
#include <asm/kregs.h>
|
||||
#include <asm/kvm_host.h>
|
||||
|
||||
#include "asm-offsets.h"
|
||||
|
||||
#define KVM_MINSTATE_START_SAVE_MIN \
|
||||
mov ar.rsc = 0;/* set enforced lazy mode, pl 0, little-endian, loadrs=0 */\
|
||||
;; \
|
||||
mov.m r28 = ar.rnat; \
|
||||
addl r22 = VMM_RBS_OFFSET,r1; /* compute base of RBS */ \
|
||||
;; \
|
||||
lfetch.fault.excl.nt1 [r22]; \
|
||||
addl r1 = KVM_STK_OFFSET-VMM_PT_REGS_SIZE, r1; \
|
||||
mov r23 = ar.bspstore; /* save ar.bspstore */ \
|
||||
;; \
|
||||
mov ar.bspstore = r22; /* switch to kernel RBS */\
|
||||
;; \
|
||||
mov r18 = ar.bsp; \
|
||||
mov ar.rsc = 0x3; /* set eager mode, pl 0, little-endian, loadrs=0 */
|
||||
|
||||
|
||||
|
||||
#define KVM_MINSTATE_END_SAVE_MIN \
|
||||
bsw.1; /* switch back to bank 1 (must be last in insn group) */\
|
||||
;;
|
||||
|
||||
|
||||
#define PAL_VSA_SYNC_READ \
|
||||
/* begin to call pal vps sync_read */ \
|
||||
{.mii; \
|
||||
add r25 = VMM_VPD_BASE_OFFSET, r21; \
|
||||
nop 0x0; \
|
||||
mov r24=ip; \
|
||||
;; \
|
||||
} \
|
||||
{.mmb \
|
||||
add r24=0x20, r24; \
|
||||
ld8 r25 = [r25]; /* read vpd base */ \
|
||||
br.cond.sptk kvm_vps_sync_read; /*call the service*/ \
|
||||
;; \
|
||||
}; \
|
||||
|
||||
|
||||
#define KVM_MINSTATE_GET_CURRENT(reg) mov reg=r21
|
||||
|
||||
/*
|
||||
* KVM_DO_SAVE_MIN switches to the kernel stacks (if necessary) and saves
|
||||
* the minimum state necessary that allows us to turn psr.ic back
|
||||
* on.
|
||||
*
|
||||
* Assumed state upon entry:
|
||||
* psr.ic: off
|
||||
* r31: contains saved predicates (pr)
|
||||
*
|
||||
* Upon exit, the state is as follows:
|
||||
* psr.ic: off
|
||||
* r2 = points to &pt_regs.r16
|
||||
* r8 = contents of ar.ccv
|
||||
* r9 = contents of ar.csd
|
||||
* r10 = contents of ar.ssd
|
||||
* r11 = FPSR_DEFAULT
|
||||
* r12 = kernel sp (kernel virtual address)
|
||||
* r13 = points to current task_struct (kernel virtual address)
|
||||
* p15 = TRUE if psr.i is set in cr.ipsr
|
||||
* predicate registers (other than p2, p3, and p15), b6, r3, r14, r15:
|
||||
* preserved
|
||||
*
|
||||
* Note that psr.ic is NOT turned on by this macro. This is so that
|
||||
* we can pass interruption state as arguments to a handler.
|
||||
*/
|
||||
|
||||
|
||||
#define PT(f) (VMM_PT_REGS_##f##_OFFSET)
|
||||
|
||||
#define KVM_DO_SAVE_MIN(COVER,SAVE_IFS,EXTRA) \
|
||||
KVM_MINSTATE_GET_CURRENT(r16); /* M (or M;;I) */ \
|
||||
mov r27 = ar.rsc; /* M */ \
|
||||
mov r20 = r1; /* A */ \
|
||||
mov r25 = ar.unat; /* M */ \
|
||||
mov r29 = cr.ipsr; /* M */ \
|
||||
mov r26 = ar.pfs; /* I */ \
|
||||
mov r18 = cr.isr; \
|
||||
COVER; /* B;; (or nothing) */ \
|
||||
;; \
|
||||
tbit.z p0,p15 = r29,IA64_PSR_I_BIT; \
|
||||
mov r1 = r16; \
|
||||
/* mov r21=r16; */ \
|
||||
/* switch from user to kernel RBS: */ \
|
||||
;; \
|
||||
invala; /* M */ \
|
||||
SAVE_IFS; \
|
||||
;; \
|
||||
KVM_MINSTATE_START_SAVE_MIN \
|
||||
adds r17 = 2*L1_CACHE_BYTES,r1;/* cache-line size */ \
|
||||
adds r16 = PT(CR_IPSR),r1; \
|
||||
;; \
|
||||
lfetch.fault.excl.nt1 [r17],L1_CACHE_BYTES; \
|
||||
st8 [r16] = r29; /* save cr.ipsr */ \
|
||||
;; \
|
||||
lfetch.fault.excl.nt1 [r17]; \
|
||||
tbit.nz p15,p0 = r29,IA64_PSR_I_BIT; \
|
||||
mov r29 = b0 \
|
||||
;; \
|
||||
adds r16 = PT(R8),r1; /* initialize first base pointer */\
|
||||
adds r17 = PT(R9),r1; /* initialize second base pointer */\
|
||||
;; \
|
||||
.mem.offset 0,0; st8.spill [r16] = r8,16; \
|
||||
.mem.offset 8,0; st8.spill [r17] = r9,16; \
|
||||
;; \
|
||||
.mem.offset 0,0; st8.spill [r16] = r10,24; \
|
||||
.mem.offset 8,0; st8.spill [r17] = r11,24; \
|
||||
;; \
|
||||
mov r9 = cr.iip; /* M */ \
|
||||
mov r10 = ar.fpsr; /* M */ \
|
||||
;; \
|
||||
st8 [r16] = r9,16; /* save cr.iip */ \
|
||||
st8 [r17] = r30,16; /* save cr.ifs */ \
|
||||
sub r18 = r18,r22; /* r18=RSE.ndirty*8 */ \
|
||||
;; \
|
||||
st8 [r16] = r25,16; /* save ar.unat */ \
|
||||
st8 [r17] = r26,16; /* save ar.pfs */ \
|
||||
shl r18 = r18,16; /* calu ar.rsc used for "loadrs" */\
|
||||
;; \
|
||||
st8 [r16] = r27,16; /* save ar.rsc */ \
|
||||
st8 [r17] = r28,16; /* save ar.rnat */ \
|
||||
;; /* avoid RAW on r16 & r17 */ \
|
||||
st8 [r16] = r23,16; /* save ar.bspstore */ \
|
||||
st8 [r17] = r31,16; /* save predicates */ \
|
||||
;; \
|
||||
st8 [r16] = r29,16; /* save b0 */ \
|
||||
st8 [r17] = r18,16; /* save ar.rsc value for "loadrs" */\
|
||||
;; \
|
||||
.mem.offset 0,0; st8.spill [r16] = r20,16;/* save original r1 */ \
|
||||
.mem.offset 8,0; st8.spill [r17] = r12,16; \
|
||||
adds r12 = -16,r1; /* switch to kernel memory stack */ \
|
||||
;; \
|
||||
.mem.offset 0,0; st8.spill [r16] = r13,16; \
|
||||
.mem.offset 8,0; st8.spill [r17] = r10,16; /* save ar.fpsr */\
|
||||
mov r13 = r21; /* establish `current' */ \
|
||||
;; \
|
||||
.mem.offset 0,0; st8.spill [r16] = r15,16; \
|
||||
.mem.offset 8,0; st8.spill [r17] = r14,16; \
|
||||
;; \
|
||||
.mem.offset 0,0; st8.spill [r16] = r2,16; \
|
||||
.mem.offset 8,0; st8.spill [r17] = r3,16; \
|
||||
adds r2 = VMM_PT_REGS_R16_OFFSET,r1; \
|
||||
;; \
|
||||
adds r16 = VMM_VCPU_IIPA_OFFSET,r13; \
|
||||
adds r17 = VMM_VCPU_ISR_OFFSET,r13; \
|
||||
mov r26 = cr.iipa; \
|
||||
mov r27 = cr.isr; \
|
||||
;; \
|
||||
st8 [r16] = r26; \
|
||||
st8 [r17] = r27; \
|
||||
;; \
|
||||
EXTRA; \
|
||||
mov r8 = ar.ccv; \
|
||||
mov r9 = ar.csd; \
|
||||
mov r10 = ar.ssd; \
|
||||
movl r11 = FPSR_DEFAULT; /* L-unit */ \
|
||||
adds r17 = VMM_VCPU_GP_OFFSET,r13; \
|
||||
;; \
|
||||
ld8 r1 = [r17];/* establish kernel global pointer */ \
|
||||
;; \
|
||||
PAL_VSA_SYNC_READ \
|
||||
KVM_MINSTATE_END_SAVE_MIN
|
||||
|
||||
/*
 * SAVE_REST saves the remainder of pt_regs (with psr.ic on).
 *
 * Assumed state upon entry:
 *	psr.ic: on
 *	r2:	points to &pt_regs.f6
 *	r3:	points to &pt_regs.f7
 *	r8:	contents of ar.ccv
 *	r9:	contents of ar.csd
 *	r10:	contents of ar.ssd
 *	r11:	FPSR_DEFAULT
 *
 * Registers r14 and r15 are guaranteed not to be touched by SAVE_REST.
 */
#define KVM_SAVE_REST \
	.mem.offset 0,0; st8.spill [r2] = r16,16; \
	.mem.offset 8,0; st8.spill [r3] = r17,16; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r18,16; \
	.mem.offset 8,0; st8.spill [r3] = r19,16; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r20,16; \
	.mem.offset 8,0; st8.spill [r3] = r21,16; \
	mov r18=b6; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r22,16; \
	.mem.offset 8,0; st8.spill [r3] = r23,16; \
	mov r19 = b7; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r24,16; \
	.mem.offset 8,0; st8.spill [r3] = r25,16; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r26,16; \
	.mem.offset 8,0; st8.spill [r3] = r27,16; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r28,16; \
	.mem.offset 8,0; st8.spill [r3] = r29,16; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r30,16; \
	.mem.offset 8,0; st8.spill [r3] = r31,32; \
	;; \
	mov ar.fpsr = r11; \
	st8 [r2] = r8,8; \
	adds r24 = PT(B6)-PT(F7),r3; \
	adds r25 = PT(B7)-PT(F7),r3; \
	;; \
	st8 [r24] = r18,16; /* b6 */ \
	st8 [r25] = r19,16; /* b7 */ \
	adds r2 = PT(R4)-PT(F6),r2; \
	adds r3 = PT(R5)-PT(F7),r3; \
	;; \
	st8 [r24] = r9; /* ar.csd */ \
	st8 [r25] = r10; /* ar.ssd */ \
	;; \
	mov r18 = ar.unat; \
	adds r19 = PT(EML_UNAT)-PT(R4),r2; \
	;; \
	st8 [r19] = r18; /* eml_unat */ \


#define KVM_SAVE_EXTRA \
	.mem.offset 0,0; st8.spill [r2] = r4,16; \
	.mem.offset 8,0; st8.spill [r3] = r5,16; \
	;; \
	.mem.offset 0,0; st8.spill [r2] = r6,16; \
	.mem.offset 8,0; st8.spill [r3] = r7; \
	;; \
	mov r26 = ar.unat; \
	;; \
	st8 [r2] = r26;/* eml_unat */ \

#define KVM_SAVE_MIN_WITH_COVER KVM_DO_SAVE_MIN(cover, mov r30 = cr.ifs,)
#define KVM_SAVE_MIN_WITH_COVER_R19 KVM_DO_SAVE_MIN(cover, mov r30 = cr.ifs, mov r15 = r19)
#define KVM_SAVE_MIN KVM_DO_SAVE_MIN( , mov r30 = r0, )

@@ -1,30 +0,0 @@
#ifndef __KVM_IA64_LAPIC_H
#define __KVM_IA64_LAPIC_H

#include <linux/kvm_host.h>

/*
 * vlsapic
 */
struct kvm_lapic{
	struct kvm_vcpu *vcpu;
	uint64_t insvc[4];
	uint64_t vhpi;
	uint8_t xtp;
	uint8_t pal_init_pending;
	uint8_t pad[2];
};

int kvm_create_lapic(struct kvm_vcpu *vcpu);
void kvm_free_lapic(struct kvm_vcpu *vcpu);

int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest);
int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda);
int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
		int short_hand, int dest, int dest_mode);
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2);
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq);
#define kvm_apic_present(x) (true)
#define kvm_lapic_enabled(x) (true)

#endif

@@ -1 +0,0 @@
#include "../lib/memcpy.S"

@@ -1 +0,0 @@
#include "../lib/memset.S"

@@ -1,94 +0,0 @@
#ifndef __KVM_IA64_MISC_H
|
||||
#define __KVM_IA64_MISC_H
|
||||
|
||||
#include <linux/kvm_host.h>
|
||||
/*
|
||||
* misc.h
|
||||
* Copyright (C) 2007, Intel Corporation.
|
||||
* Xiantao Zhang (xiantao.zhang@intel.com)
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
/*
|
||||
*Return p2m base address at host side!
|
||||
*/
|
||||
static inline uint64_t *kvm_host_get_pmt(struct kvm *kvm)
|
||||
{
|
||||
return (uint64_t *)(kvm->arch.vm_base +
|
||||
offsetof(struct kvm_vm_data, kvm_p2m));
|
||||
}
|
||||
|
||||
static inline void kvm_set_pmt_entry(struct kvm *kvm, gfn_t gfn,
|
||||
u64 paddr, u64 mem_flags)
|
||||
{
|
||||
uint64_t *pmt_base = kvm_host_get_pmt(kvm);
|
||||
unsigned long pte;
|
||||
|
||||
pte = PAGE_ALIGN(paddr) | mem_flags;
|
||||
pmt_base[gfn] = pte;
|
||||
}
|
||||
|
||||
/*Function for translating host address to guest address*/
|
||||
|
||||
static inline void *to_guest(struct kvm *kvm, void *addr)
|
||||
{
|
||||
return (void *)((unsigned long)(addr) - kvm->arch.vm_base +
|
||||
KVM_VM_DATA_BASE);
|
||||
}
|
||||
|
||||
/*Function for translating guest address to host address*/
|
||||
|
||||
static inline void *to_host(struct kvm *kvm, void *addr)
|
||||
{
|
||||
return (void *)((unsigned long)addr - KVM_VM_DATA_BASE
|
||||
+ kvm->arch.vm_base);
|
||||
}
|
||||
|
||||
/* Get host context of the vcpu */
|
||||
static inline union context *kvm_get_host_context(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
union context *ctx = &vcpu->arch.host;
|
||||
return to_guest(vcpu->kvm, ctx);
|
||||
}
|
||||
|
||||
/* Get guest context of the vcpu */
|
||||
static inline union context *kvm_get_guest_context(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
union context *ctx = &vcpu->arch.guest;
|
||||
return to_guest(vcpu->kvm, ctx);
|
||||
}
|
||||
|
||||
/* kvm get exit data from gvmm! */
|
||||
static inline struct exit_ctl_data *kvm_get_exit_data(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return &vcpu->arch.exit_data;
|
||||
}
|
||||
|
||||
/*kvm get vcpu ioreq for kvm module!*/
|
||||
static inline struct kvm_mmio_req *kvm_get_vcpu_ioreq(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct exit_ctl_data *p_ctl_data;
|
||||
|
||||
if (vcpu) {
|
||||
p_ctl_data = kvm_get_exit_data(vcpu);
|
||||
if (p_ctl_data->exit_reason == EXIT_REASON_MMIO_INSTRUCTION)
|
||||
return &p_ctl_data->u.ioreq;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
#endif
|
|
@@ -1,336 +0,0 @@
/*
|
||||
* mmio.c: MMIO emulation components.
|
||||
* Copyright (c) 2004, Intel Corporation.
|
||||
* Yaozu Dong (Eddie Dong) (Eddie.dong@intel.com)
|
||||
* Kun Tian (Kevin Tian) (Kevin.tian@intel.com)
|
||||
*
|
||||
* Copyright (c) 2007 Intel Corporation KVM support.
|
||||
* Xuefei Xu (Anthony Xu) (anthony.xu@intel.com)
|
||||
* Xiantao Zhang (xiantao.zhang@intel.com)
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
#include <linux/kvm_host.h>
|
||||
|
||||
#include "vcpu.h"
|
||||
|
||||
static void vlsapic_write_xtp(struct kvm_vcpu *v, uint8_t val)
|
||||
{
|
||||
VLSAPIC_XTP(v) = val;
|
||||
}
|
||||
|
||||
/*
|
||||
* LSAPIC OFFSET
|
||||
*/
|
||||
#define PIB_LOW_HALF(ofst) !(ofst & (1 << 20))
|
||||
#define PIB_OFST_INTA 0x1E0000
|
||||
#define PIB_OFST_XTP 0x1E0008
|
||||
|
||||
/*
|
||||
* execute write IPI op.
|
||||
*/
|
||||
static void vlsapic_write_ipi(struct kvm_vcpu *vcpu,
|
||||
uint64_t addr, uint64_t data)
|
||||
{
|
||||
struct exit_ctl_data *p = &current_vcpu->arch.exit_data;
|
||||
unsigned long psr;
|
||||
|
||||
local_irq_save(psr);
|
||||
|
||||
p->exit_reason = EXIT_REASON_IPI;
|
||||
p->u.ipi_data.addr.val = addr;
|
||||
p->u.ipi_data.data.val = data;
|
||||
vmm_transition(current_vcpu);
|
||||
|
||||
local_irq_restore(psr);
|
||||
|
||||
}
|
||||
|
||||
void lsapic_write(struct kvm_vcpu *v, unsigned long addr,
|
||||
unsigned long length, unsigned long val)
|
||||
{
|
||||
addr &= (PIB_SIZE - 1);
|
||||
|
||||
switch (addr) {
|
||||
case PIB_OFST_INTA:
|
||||
panic_vm(v, "Undefined write on PIB INTA\n");
|
||||
break;
|
||||
case PIB_OFST_XTP:
|
||||
if (length == 1) {
|
||||
vlsapic_write_xtp(v, val);
|
||||
} else {
|
||||
panic_vm(v, "Undefined write on PIB XTP\n");
|
||||
}
|
||||
break;
|
||||
default:
|
||||
if (PIB_LOW_HALF(addr)) {
|
||||
/*Lower half */
|
||||
if (length != 8)
|
||||
panic_vm(v, "Can't LHF write with size %ld!\n",
|
||||
length);
|
||||
else
|
||||
vlsapic_write_ipi(v, addr, val);
|
||||
} else { /*Upper half */
|
||||
panic_vm(v, "IPI-UHF write %lx\n", addr);
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
unsigned long lsapic_read(struct kvm_vcpu *v, unsigned long addr,
|
||||
unsigned long length)
|
||||
{
|
||||
uint64_t result = 0;
|
||||
|
||||
addr &= (PIB_SIZE - 1);
|
||||
|
||||
switch (addr) {
|
||||
case PIB_OFST_INTA:
|
||||
if (length == 1) /* 1 byte load */
|
||||
; /* There is no i8259, there is no INTA access*/
|
||||
else
|
||||
panic_vm(v, "Undefined read on PIB INTA\n");
|
||||
|
||||
break;
|
||||
case PIB_OFST_XTP:
|
||||
if (length == 1) {
|
||||
result = VLSAPIC_XTP(v);
|
||||
} else {
|
||||
panic_vm(v, "Undefined read on PIB XTP\n");
|
||||
}
|
||||
break;
|
||||
default:
|
||||
panic_vm(v, "Undefined addr access for lsapic!\n");
|
||||
break;
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
static void mmio_access(struct kvm_vcpu *vcpu, u64 src_pa, u64 *dest,
|
||||
u16 s, int ma, int dir)
|
||||
{
|
||||
unsigned long iot;
|
||||
struct exit_ctl_data *p = &vcpu->arch.exit_data;
|
||||
unsigned long psr;
|
||||
|
||||
iot = __gpfn_is_io(src_pa >> PAGE_SHIFT);
|
||||
|
||||
local_irq_save(psr);
|
||||
|
||||
/*Intercept the access for PIB range*/
|
||||
if (iot == GPFN_PIB) {
|
||||
if (!dir)
|
||||
lsapic_write(vcpu, src_pa, s, *dest);
|
||||
else
|
||||
*dest = lsapic_read(vcpu, src_pa, s);
|
||||
goto out;
|
||||
}
|
||||
p->exit_reason = EXIT_REASON_MMIO_INSTRUCTION;
|
||||
p->u.ioreq.addr = src_pa;
|
||||
p->u.ioreq.size = s;
|
||||
p->u.ioreq.dir = dir;
|
||||
if (dir == IOREQ_WRITE)
|
||||
p->u.ioreq.data = *dest;
|
||||
p->u.ioreq.state = STATE_IOREQ_READY;
|
||||
vmm_transition(vcpu);
|
||||
|
||||
if (p->u.ioreq.state == STATE_IORESP_READY) {
|
||||
if (dir == IOREQ_READ)
|
||||
/* it's necessary to ensure zero extending */
|
||||
*dest = p->u.ioreq.data & (~0UL >> (64-(s*8)));
|
||||
} else
|
||||
panic_vm(vcpu, "Unhandled mmio access returned!\n");
|
||||
out:
|
||||
local_irq_restore(psr);
|
||||
return ;
|
||||
}
|
||||
|
||||
/*
|
||||
dir 1: read 0:write
|
||||
inst_type 0:integer 1:floating point
|
||||
*/
|
||||
#define SL_INTEGER 0 /* store/load integer */
|
||||
#define SL_FLOATING 1 /* store/load floating*/
|
||||
|
||||
void emulate_io_inst(struct kvm_vcpu *vcpu, u64 padr, u64 ma)
|
||||
{
|
||||
struct kvm_pt_regs *regs;
|
||||
IA64_BUNDLE bundle;
|
||||
int slot, dir = 0;
|
||||
int inst_type = -1;
|
||||
u16 size = 0;
|
||||
u64 data, slot1a, slot1b, temp, update_reg;
|
||||
s32 imm;
|
||||
INST64 inst;
|
||||
|
||||
regs = vcpu_regs(vcpu);
|
||||
|
||||
if (fetch_code(vcpu, regs->cr_iip, &bundle)) {
|
||||
/* if fetch code fail, return and try again */
|
||||
return;
|
||||
}
|
||||
slot = ((struct ia64_psr *)&(regs->cr_ipsr))->ri;
|
||||
if (!slot)
|
||||
inst.inst = bundle.slot0;
|
||||
else if (slot == 1) {
|
||||
slot1a = bundle.slot1a;
|
||||
slot1b = bundle.slot1b;
|
||||
inst.inst = slot1a + (slot1b << 18);
|
||||
} else if (slot == 2)
|
||||
inst.inst = bundle.slot2;
|
||||
|
||||
/* Integer Load/Store */
|
||||
if (inst.M1.major == 4 && inst.M1.m == 0 && inst.M1.x == 0) {
|
||||
inst_type = SL_INTEGER;
|
||||
size = (inst.M1.x6 & 0x3);
|
||||
if ((inst.M1.x6 >> 2) > 0xb) {
|
||||
/*write*/
|
||||
dir = IOREQ_WRITE;
|
||||
data = vcpu_get_gr(vcpu, inst.M4.r2);
|
||||
} else if ((inst.M1.x6 >> 2) < 0xb) {
|
||||
/*read*/
|
||||
dir = IOREQ_READ;
|
||||
}
|
||||
} else if (inst.M2.major == 4 && inst.M2.m == 1 && inst.M2.x == 0) {
|
||||
/* Integer Load + Reg update */
|
||||
inst_type = SL_INTEGER;
|
||||
dir = IOREQ_READ;
|
||||
size = (inst.M2.x6 & 0x3);
|
||||
temp = vcpu_get_gr(vcpu, inst.M2.r3);
|
||||
update_reg = vcpu_get_gr(vcpu, inst.M2.r2);
|
||||
temp += update_reg;
|
||||
vcpu_set_gr(vcpu, inst.M2.r3, temp, 0);
|
||||
} else if (inst.M3.major == 5) {
|
||||
/*Integer Load/Store + Imm update*/
|
||||
inst_type = SL_INTEGER;
|
||||
size = (inst.M3.x6&0x3);
|
||||
if ((inst.M5.x6 >> 2) > 0xb) {
|
||||
/*write*/
|
||||
dir = IOREQ_WRITE;
|
||||
data = vcpu_get_gr(vcpu, inst.M5.r2);
|
||||
temp = vcpu_get_gr(vcpu, inst.M5.r3);
|
||||
imm = (inst.M5.s << 31) | (inst.M5.i << 30) |
|
||||
(inst.M5.imm7 << 23);
|
||||
temp += imm >> 23;
|
||||
vcpu_set_gr(vcpu, inst.M5.r3, temp, 0);
|
||||
|
||||
} else if ((inst.M3.x6 >> 2) < 0xb) {
|
||||
/*read*/
|
||||
dir = IOREQ_READ;
|
||||
temp = vcpu_get_gr(vcpu, inst.M3.r3);
|
||||
imm = (inst.M3.s << 31) | (inst.M3.i << 30) |
|
||||
(inst.M3.imm7 << 23);
|
||||
temp += imm >> 23;
|
||||
vcpu_set_gr(vcpu, inst.M3.r3, temp, 0);
|
||||
|
||||
}
|
||||
} else if (inst.M9.major == 6 && inst.M9.x6 == 0x3B
|
||||
&& inst.M9.m == 0 && inst.M9.x == 0) {
|
||||
/* Floating-point spill*/
|
||||
struct ia64_fpreg v;
|
||||
|
||||
inst_type = SL_FLOATING;
|
||||
dir = IOREQ_WRITE;
|
||||
vcpu_get_fpreg(vcpu, inst.M9.f2, &v);
|
||||
/* Write high word. FIXME: this is a kludge! */
|
||||
v.u.bits[1] &= 0x3ffff;
|
||||
mmio_access(vcpu, padr + 8, (u64 *)&v.u.bits[1], 8,
|
||||
ma, IOREQ_WRITE);
|
||||
data = v.u.bits[0];
|
||||
size = 3;
|
||||
} else if (inst.M10.major == 7 && inst.M10.x6 == 0x3B) {
|
||||
/* Floating-point spill + Imm update */
|
||||
struct ia64_fpreg v;
|
||||
|
||||
inst_type = SL_FLOATING;
|
||||
dir = IOREQ_WRITE;
|
||||
vcpu_get_fpreg(vcpu, inst.M10.f2, &v);
|
||||
temp = vcpu_get_gr(vcpu, inst.M10.r3);
|
||||
imm = (inst.M10.s << 31) | (inst.M10.i << 30) |
|
||||
(inst.M10.imm7 << 23);
|
||||
temp += imm >> 23;
|
||||
vcpu_set_gr(vcpu, inst.M10.r3, temp, 0);
|
||||
|
||||
/* Write high word.FIXME: this is a kludge! */
|
||||
v.u.bits[1] &= 0x3ffff;
|
||||
mmio_access(vcpu, padr + 8, (u64 *)&v.u.bits[1],
|
||||
8, ma, IOREQ_WRITE);
|
||||
data = v.u.bits[0];
|
||||
size = 3;
|
||||
} else if (inst.M10.major == 7 && inst.M10.x6 == 0x31) {
|
||||
/* Floating-point stf8 + Imm update */
|
||||
struct ia64_fpreg v;
|
||||
inst_type = SL_FLOATING;
|
||||
dir = IOREQ_WRITE;
|
||||
size = 3;
|
||||
vcpu_get_fpreg(vcpu, inst.M10.f2, &v);
|
||||
data = v.u.bits[0]; /* Significand. */
|
||||
temp = vcpu_get_gr(vcpu, inst.M10.r3);
|
||||
imm = (inst.M10.s << 31) | (inst.M10.i << 30) |
|
||||
(inst.M10.imm7 << 23);
|
||||
temp += imm >> 23;
|
||||
vcpu_set_gr(vcpu, inst.M10.r3, temp, 0);
|
||||
} else if (inst.M15.major == 7 && inst.M15.x6 >= 0x2c
|
||||
&& inst.M15.x6 <= 0x2f) {
|
||||
temp = vcpu_get_gr(vcpu, inst.M15.r3);
|
||||
imm = (inst.M15.s << 31) | (inst.M15.i << 30) |
|
||||
(inst.M15.imm7 << 23);
|
||||
temp += imm >> 23;
|
||||
vcpu_set_gr(vcpu, inst.M15.r3, temp, 0);
|
||||
|
||||
vcpu_increment_iip(vcpu);
|
||||
return;
|
||||
} else if (inst.M12.major == 6 && inst.M12.m == 1
|
||||
&& inst.M12.x == 1 && inst.M12.x6 == 1) {
|
||||
/* Floating-point Load Pair + Imm ldfp8 M12*/
|
||||
struct ia64_fpreg v;
|
||||
|
||||
inst_type = SL_FLOATING;
|
||||
dir = IOREQ_READ;
|
||||
size = 8; /*ldfd*/
|
||||
mmio_access(vcpu, padr, &data, size, ma, dir);
|
||||
v.u.bits[0] = data;
|
||||
v.u.bits[1] = 0x1003E;
|
||||
vcpu_set_fpreg(vcpu, inst.M12.f1, &v);
|
||||
padr += 8;
|
||||
mmio_access(vcpu, padr, &data, size, ma, dir);
|
||||
v.u.bits[0] = data;
|
||||
v.u.bits[1] = 0x1003E;
|
||||
vcpu_set_fpreg(vcpu, inst.M12.f2, &v);
|
||||
padr += 8;
|
||||
vcpu_set_gr(vcpu, inst.M12.r3, padr, 0);
|
||||
vcpu_increment_iip(vcpu);
|
||||
return;
|
||||
} else {
|
||||
inst_type = -1;
|
||||
panic_vm(vcpu, "Unsupported MMIO access instruction! "
"Bundle[0]=0x%lx, Bundle[1]=0x%lx\n",
|
||||
bundle.i64[0], bundle.i64[1]);
|
||||
}
|
||||
|
||||
size = 1 << size;
|
||||
if (dir == IOREQ_WRITE) {
|
||||
mmio_access(vcpu, padr, &data, size, ma, dir);
|
||||
} else {
|
||||
mmio_access(vcpu, padr, &data, size, ma, dir);
|
||||
if (inst_type == SL_INTEGER)
|
||||
vcpu_set_gr(vcpu, inst.M1.r1, data, 0);
|
||||
else
|
||||
panic_vm(vcpu, "Unsupported instruction type!\n");
|
||||
|
||||
}
|
||||
vcpu_increment_iip(vcpu);
|
||||
}
|
[diffs for several other large deleted files suppressed by the viewer, including arch/ia64/kvm/vcpu.c (2209 lines)]

@@ -1,752 +0,0 @@
/*
|
||||
* vcpu.h: vcpu routines
|
||||
* Copyright (c) 2005, Intel Corporation.
|
||||
* Xuefei Xu (Anthony Xu) (Anthony.xu@intel.com)
|
||||
* Yaozu Dong (Eddie Dong) (Eddie.dong@intel.com)
|
||||
*
|
||||
* Copyright (c) 2007, Intel Corporation.
|
||||
* Xuefei Xu (Anthony Xu) (Anthony.xu@intel.com)
|
||||
* Xiantao Zhang (xiantao.zhang@intel.com)
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
|
||||
#ifndef __KVM_VCPU_H__
|
||||
#define __KVM_VCPU_H__
|
||||
|
||||
#include <asm/types.h>
|
||||
#include <asm/fpu.h>
|
||||
#include <asm/processor.h>
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
#include "vti.h"
|
||||
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/spinlock.h>
|
||||
|
||||
typedef unsigned long IA64_INST;
|
||||
|
||||
typedef union U_IA64_BUNDLE {
|
||||
unsigned long i64[2];
|
||||
struct { unsigned long template:5, slot0:41, slot1a:18,
|
||||
slot1b:23, slot2:41; };
|
||||
/* NOTE: following doesn't work because bitfields can't cross natural
|
||||
size boundaries
|
||||
struct { unsigned long template:5, slot0:41, slot1:41, slot2:41; }; */
|
||||
} IA64_BUNDLE;
|
||||
|
||||
typedef union U_INST64_A5 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, imm7b:7, r3:2, imm5c:5,
|
||||
imm9d:9, s:1, major:4; };
|
||||
} INST64_A5;
|
||||
|
||||
typedef union U_INST64_B4 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, btype:3, un3:3, p:1, b2:3, un11:11, x6:6,
|
||||
wh:2, d:1, un1:1, major:4; };
|
||||
} INST64_B4;
|
||||
|
||||
typedef union U_INST64_B8 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, un21:21, x6:6, un4:4, major:4; };
|
||||
} INST64_B8;
|
||||
|
||||
typedef union U_INST64_B9 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, imm20:20, :1, x6:6, :3, i:1, major:4; };
|
||||
} INST64_B9;
|
||||
|
||||
typedef union U_INST64_I19 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, imm20:20, :1, x6:6, x3:3, i:1, major:4; };
|
||||
} INST64_I19;
|
||||
|
||||
typedef union U_INST64_I26 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, r2:7, ar3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_I26;
|
||||
|
||||
typedef union U_INST64_I27 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, imm:7, ar3:7, x6:6, x3:3, s:1, major:4; };
|
||||
} INST64_I27;
|
||||
|
||||
typedef union U_INST64_I28 { /* not privileged (mov from AR) */
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, :7, ar3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_I28;
|
||||
|
||||
typedef union U_INST64_M28 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :14, r3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M28;
|
||||
|
||||
typedef union U_INST64_M29 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, r2:7, ar3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M29;
|
||||
|
||||
typedef union U_INST64_M30 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, imm:7, ar3:7, x4:4, x2:2,
|
||||
x3:3, s:1, major:4; };
|
||||
} INST64_M30;
|
||||
|
||||
typedef union U_INST64_M31 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, :7, ar3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M31;
|
||||
|
||||
typedef union U_INST64_M32 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, r2:7, cr3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M32;
|
||||
|
||||
typedef union U_INST64_M33 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, :7, cr3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M33;
|
||||
|
||||
typedef union U_INST64_M35 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, r2:7, :7, x6:6, x3:3, :1, major:4; };
|
||||
|
||||
} INST64_M35;
|
||||
|
||||
typedef union U_INST64_M36 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, :14, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M36;
|
||||
|
||||
typedef union U_INST64_M37 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, imm20a:20, :1, x4:4, x2:2, x3:3,
|
||||
i:1, major:4; };
|
||||
} INST64_M37;
|
||||
|
||||
typedef union U_INST64_M41 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, r2:7, :7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M41;
|
||||
|
||||
typedef union U_INST64_M42 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, r2:7, r3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M42;
|
||||
|
||||
typedef union U_INST64_M43 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, :7, r3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M43;
|
||||
|
||||
typedef union U_INST64_M44 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, imm:21, x4:4, i2:2, x3:3, i:1, major:4; };
|
||||
} INST64_M44;
|
||||
|
||||
typedef union U_INST64_M45 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, r2:7, r3:7, x6:6, x3:3, :1, major:4; };
|
||||
} INST64_M45;
|
||||
|
||||
typedef union U_INST64_M46 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, un7:7, r3:7, x6:6,
|
||||
x3:3, un1:1, major:4; };
|
||||
} INST64_M46;
|
||||
|
||||
typedef union U_INST64_M47 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, un14:14, r3:7, x6:6, x3:3, un1:1, major:4; };
|
||||
} INST64_M47;
|
||||
|
||||
typedef union U_INST64_M1{
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, un7:7, r3:7, x:1, hint:2,
|
||||
x6:6, m:1, major:4; };
|
||||
} INST64_M1;
|
||||
|
||||
typedef union U_INST64_M2{
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, r2:7, r3:7, x:1, hint:2,
|
||||
x6:6, m:1, major:4; };
|
||||
} INST64_M2;
|
||||
|
||||
typedef union U_INST64_M3{
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, r1:7, imm7:7, r3:7, i:1, hint:2,
|
||||
x6:6, s:1, major:4; };
|
||||
} INST64_M3;
|
||||
|
||||
typedef union U_INST64_M4 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, un7:7, r2:7, r3:7, x:1, hint:2,
|
||||
x6:6, m:1, major:4; };
|
||||
} INST64_M4;
|
||||
|
||||
typedef union U_INST64_M5 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, imm7:7, r2:7, r3:7, i:1, hint:2,
|
||||
x6:6, s:1, major:4; };
|
||||
} INST64_M5;
|
||||
|
||||
typedef union U_INST64_M6 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, f1:7, un7:7, r3:7, x:1, hint:2,
|
||||
x6:6, m:1, major:4; };
|
||||
} INST64_M6;
|
||||
|
||||
typedef union U_INST64_M9 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, f2:7, r3:7, x:1, hint:2,
|
||||
x6:6, m:1, major:4; };
|
||||
} INST64_M9;
|
||||
|
||||
typedef union U_INST64_M10 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, imm7:7, f2:7, r3:7, i:1, hint:2,
|
||||
x6:6, s:1, major:4; };
|
||||
} INST64_M10;
|
||||
|
||||
typedef union U_INST64_M12 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, f1:7, f2:7, r3:7, x:1, hint:2,
|
||||
x6:6, m:1, major:4; };
|
||||
} INST64_M12;
|
||||
|
||||
typedef union U_INST64_M15 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long qp:6, :7, imm7:7, r3:7, i:1, hint:2,
|
||||
x6:6, s:1, major:4; };
|
||||
} INST64_M15;
|
||||
|
||||
typedef union U_INST64 {
|
||||
IA64_INST inst;
|
||||
struct { unsigned long :37, major:4; } generic;
|
||||
INST64_A5 A5; /* used in build_hypercall_bundle only */
|
||||
INST64_B4 B4; /* used in build_hypercall_bundle only */
|
||||
INST64_B8 B8; /* rfi, bsw.[01] */
|
||||
INST64_B9 B9; /* break.b */
|
||||
INST64_I19 I19; /* used in build_hypercall_bundle only */
|
||||
INST64_I26 I26; /* mov register to ar (I unit) */
|
||||
INST64_I27 I27; /* mov immediate to ar (I unit) */
|
||||
INST64_I28 I28; /* mov from ar (I unit) */
|
||||
INST64_M1 M1; /* ld integer */
|
||||
INST64_M2 M2;
|
||||
INST64_M3 M3;
|
||||
INST64_M4 M4; /* st integer */
|
||||
INST64_M5 M5;
|
||||
INST64_M6 M6; /* ldfd floating pointer */
|
||||
INST64_M9 M9; /* stfd floating pointer */
|
||||
INST64_M10 M10; /* stfd floating pointer */
|
||||
INST64_M12 M12; /* ldfd pair floating pointer */
|
||||
INST64_M15 M15; /* lfetch + imm update */
|
||||
INST64_M28 M28; /* purge translation cache entry */
|
||||
INST64_M29 M29; /* mov register to ar (M unit) */
|
||||
INST64_M30 M30; /* mov immediate to ar (M unit) */
|
||||
INST64_M31 M31; /* mov from ar (M unit) */
|
||||
INST64_M32 M32; /* mov reg to cr */
|
||||
INST64_M33 M33; /* mov from cr */
|
||||
INST64_M35 M35; /* mov to psr */
|
||||
INST64_M36 M36; /* mov from psr */
|
||||
INST64_M37 M37; /* break.m */
|
||||
INST64_M41 M41; /* translation cache insert */
|
||||
INST64_M42 M42; /* mov to indirect reg/translation reg insert*/
|
||||
INST64_M43 M43; /* mov from indirect reg */
|
||||
INST64_M44 M44; /* set/reset system mask */
|
||||
INST64_M45 M45; /* translation purge */
|
||||
INST64_M46 M46; /* translation access (tpa,tak) */
|
||||
INST64_M47 M47; /* purge translation entry */
|
||||
} INST64;
|
||||
|
||||
#define MASK_41 ((unsigned long)0x1ffffffffff)
|
||||
|
||||
/* Virtual address memory attributes encoding */
|
||||
#define VA_MATTR_WB 0x0
|
||||
#define VA_MATTR_UC 0x4
|
||||
#define VA_MATTR_UCE 0x5
|
||||
#define VA_MATTR_WC 0x6
|
||||
#define VA_MATTR_NATPAGE 0x7
|
||||
|
||||
#define PMASK(size) (~((size) - 1))
|
||||
#define PSIZE(size) (1UL<<(size))
|
||||
#define CLEARLSB(ppn, nbits) (((ppn) >> (nbits)) << (nbits))
|
||||
#define PAGEALIGN(va, ps) CLEARLSB(va, ps)
|
||||
#define PAGE_FLAGS_RV_MASK (0x2|(0x3UL<<50)|(((1UL<<11)-1)<<53))
|
||||
#define _PAGE_MA_ST (0x1 << 2) /* is reserved for software use */
|
||||
|
||||
#define ARCH_PAGE_SHIFT 12
|
||||
|
||||
#define INVALID_TI_TAG (1UL << 63)
|
||||
|
||||
#define VTLB_PTE_P_BIT 0
|
||||
#define VTLB_PTE_IO_BIT 60
|
||||
#define VTLB_PTE_IO (1UL<<VTLB_PTE_IO_BIT)
|
||||
#define VTLB_PTE_P (1UL<<VTLB_PTE_P_BIT)
|
||||
|
||||
#define vcpu_quick_region_check(_tr_regions,_ifa) \
|
||||
(_tr_regions & (1 << ((unsigned long)_ifa >> 61)))
|
||||
|
||||
#define vcpu_quick_region_set(_tr_regions,_ifa) \
|
||||
do {_tr_regions |= (1 << ((unsigned long)_ifa >> 61)); } while (0)
|
||||
|
||||
static inline void vcpu_set_tr(struct thash_data *trp, u64 pte, u64 itir,
|
||||
u64 va, u64 rid)
|
||||
{
|
||||
trp->page_flags = pte;
|
||||
trp->itir = itir;
|
||||
trp->vadr = va;
|
||||
trp->rid = rid;
|
||||
}
|
||||
|
||||
extern u64 kvm_get_mpt_entry(u64 gpfn);
|
||||
|
||||
/* Return I/ */
|
||||
static inline u64 __gpfn_is_io(u64 gpfn)
|
||||
{
|
||||
u64 pte;
|
||||
pte = kvm_get_mpt_entry(gpfn);
|
||||
if (!(pte & GPFN_INV_MASK)) {
|
||||
pte = pte & GPFN_IO_MASK;
|
||||
if (pte != GPFN_PHYS_MMIO)
|
||||
return pte;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
#define IA64_NO_FAULT 0
|
||||
#define IA64_FAULT 1
|
||||
|
||||
#define VMM_RBS_OFFSET ((VMM_TASK_SIZE + 15) & ~15)
|
||||
|
||||
#define SW_BAD 0 /* Bad mode transition */
|
||||
#define SW_V2P 1 /* Physical emulation is activated */
|
||||
#define SW_P2V 2 /* Exit physical mode emulation */
|
||||
#define SW_SELF 3 /* No mode transition */
|
||||
#define SW_NOP 4 /* Mode transition, but without action required */
|
||||
|
||||
#define GUEST_IN_PHY 0x1
|
||||
#define GUEST_PHY_EMUL 0x2
|
||||
|
||||
#define current_vcpu ((struct kvm_vcpu *) ia64_getreg(_IA64_REG_TP))
|
||||
|
||||
#define VRN_SHIFT 61
|
||||
#define VRN_MASK 0xe000000000000000
|
||||
#define VRN0 0x0UL
|
||||
#define VRN1 0x1UL
|
||||
#define VRN2 0x2UL
|
||||
#define VRN3 0x3UL
|
||||
#define VRN4 0x4UL
|
||||
#define VRN5 0x5UL
|
||||
#define VRN6 0x6UL
|
||||
#define VRN7 0x7UL
|
||||
|
||||
#define IRQ_NO_MASKED 0
|
||||
#define IRQ_MASKED_BY_VTPR 1
|
||||
#define IRQ_MASKED_BY_INSVC 2 /* masked by inservice IRQ */
|
||||
|
||||
#define PTA_BASE_SHIFT 15
|
||||
|
||||
#define IA64_PSR_VM_BIT 46
|
||||
#define IA64_PSR_VM (__IA64_UL(1) << IA64_PSR_VM_BIT)
|
||||
|
||||
/* Interruption Function State */
|
||||
#define IA64_IFS_V_BIT 63
|
||||
#define IA64_IFS_V (__IA64_UL(1) << IA64_IFS_V_BIT)
|
||||
|
||||
#define PHY_PAGE_UC (_PAGE_A|_PAGE_D|_PAGE_P|_PAGE_MA_UC|_PAGE_AR_RWX)
|
||||
#define PHY_PAGE_WB (_PAGE_A|_PAGE_D|_PAGE_P|_PAGE_MA_WB|_PAGE_AR_RWX)
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
#include <asm/gcc_intrin.h>
|
||||
|
||||
#define is_physical_mode(v) \
|
||||
((v->arch.mode_flags) & GUEST_IN_PHY)
|
||||
|
||||
#define is_virtual_mode(v) \
|
||||
(!is_physical_mode(v))
|
||||
|
||||
#define MODE_IND(psr) \
|
||||
(((psr).it << 2) + ((psr).dt << 1) + (psr).rt)
|
||||
|
||||
#ifndef CONFIG_SMP
|
||||
#define _vmm_raw_spin_lock(x) do {}while(0)
|
||||
#define _vmm_raw_spin_unlock(x) do {}while(0)
|
||||
#else
|
||||
typedef struct {
|
||||
volatile unsigned int lock;
|
||||
} vmm_spinlock_t;
|
||||
#define _vmm_raw_spin_lock(x) \
|
||||
do { \
|
||||
__u32 *ia64_spinlock_ptr = (__u32 *) (x); \
|
||||
__u64 ia64_spinlock_val; \
|
||||
ia64_spinlock_val = ia64_cmpxchg4_acq(ia64_spinlock_ptr, 1, 0);\
|
||||
if (unlikely(ia64_spinlock_val)) { \
|
||||
do { \
|
||||
while (*ia64_spinlock_ptr) \
|
||||
ia64_barrier(); \
|
||||
ia64_spinlock_val = \
|
||||
ia64_cmpxchg4_acq(ia64_spinlock_ptr, 1, 0);\
|
||||
} while (ia64_spinlock_val); \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
#define _vmm_raw_spin_unlock(x) \
|
||||
do { barrier(); \
|
||||
((vmm_spinlock_t *)x)->lock = 0; } \
|
||||
while (0)
|
||||
#endif
|
||||
|
||||
void vmm_spin_lock(vmm_spinlock_t *lock);
|
||||
void vmm_spin_unlock(vmm_spinlock_t *lock);
|
||||
enum {
|
||||
I_TLB = 1,
|
||||
D_TLB = 2
|
||||
};
|
||||
|
||||
union kvm_va {
|
||||
struct {
|
||||
unsigned long off : 60; /* intra-region offset */
|
||||
unsigned long reg : 4; /* region number */
|
||||
} f;
|
||||
unsigned long l;
|
||||
void *p;
|
||||
};
|
||||
|
||||
#define __kvm_pa(x) ({union kvm_va _v; _v.l = (long) (x); \
|
||||
_v.f.reg = 0; _v.l; })
|
||||
#define __kvm_va(x) ({union kvm_va _v; _v.l = (long) (x); \
|
||||
_v.f.reg = -1; _v.p; })
|
||||
|
||||
#define _REGION_ID(x) ({union ia64_rr _v; _v.val = (long)(x); \
|
||||
_v.rid; })
|
||||
#define _REGION_PAGE_SIZE(x) ({union ia64_rr _v; _v.val = (long)(x); \
|
||||
_v.ps; })
|
||||
#define _REGION_HW_WALKER(x) ({union ia64_rr _v; _v.val = (long)(x); \
|
||||
_v.ve; })
|
||||
|
||||
enum vhpt_ref{ DATA_REF, NA_REF, INST_REF, RSE_REF };
|
||||
enum tlb_miss_type { INSTRUCTION, DATA, REGISTER };
|
||||
|
||||
#define VCPU(_v, _x) ((_v)->arch.vpd->_x)
|
||||
#define VMX(_v, _x) ((_v)->arch._x)
|
||||
|
||||
#define VLSAPIC_INSVC(vcpu, i) ((vcpu)->arch.insvc[i])
|
||||
#define VLSAPIC_XTP(_v) VMX(_v, xtp)
|
||||
|
||||
static inline unsigned long itir_ps(unsigned long itir)
|
||||
{
|
||||
return ((itir >> 2) & 0x3f);
|
||||
}
|
||||
|
||||
|
||||
/**************************************************************************
|
||||
VCPU control register access routines
|
||||
**************************************************************************/
|
||||
|
||||
static inline u64 vcpu_get_itir(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, itir));
|
||||
}
|
||||
|
||||
static inline void vcpu_set_itir(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, itir) = val;
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_ifa(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, ifa));
|
||||
}
|
||||
|
||||
static inline void vcpu_set_ifa(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, ifa) = val;
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_iva(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, iva));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_pta(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, pta));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_lid(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, lid));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_tpr(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, tpr));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_eoi(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return (0UL); /*reads of eoi always return 0 */
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_irr0(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, irr[0]));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_irr1(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, irr[1]));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_irr2(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, irr[2]));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_irr3(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
return ((u64)VCPU(vcpu, irr[3]));
|
||||
}
|
||||
|
||||
static inline void vcpu_set_dcr(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
ia64_setreg(_IA64_REG_CR_DCR, val);
|
||||
}
|
||||
|
||||
static inline void vcpu_set_isr(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, isr) = val;
|
||||
}
|
||||
|
||||
static inline void vcpu_set_lid(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, lid) = val;
|
||||
}
|
||||
|
||||
static inline void vcpu_set_ipsr(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, ipsr) = val;
|
||||
}
|
||||
|
||||
static inline void vcpu_set_iip(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, iip) = val;
|
||||
}
|
||||
|
||||
static inline void vcpu_set_ifs(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, ifs) = val;
|
||||
}
|
||||
|
||||
static inline void vcpu_set_iipa(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, iipa) = val;
|
||||
}
|
||||
|
||||
static inline void vcpu_set_iha(struct kvm_vcpu *vcpu, u64 val)
|
||||
{
|
||||
VCPU(vcpu, iha) = val;
|
||||
}
|
||||
|
||||
|
||||
static inline u64 vcpu_get_rr(struct kvm_vcpu *vcpu, u64 reg)
|
||||
{
|
||||
return vcpu->arch.vrr[reg>>61];
|
||||
}
|
||||
|
||||
/**************************************************************************
|
||||
VCPU debug breakpoint register access routines
|
||||
**************************************************************************/
|
||||
|
||||
static inline void vcpu_set_dbr(struct kvm_vcpu *vcpu, u64 reg, u64 val)
|
||||
{
|
||||
__ia64_set_dbr(reg, val);
|
||||
}
|
||||
|
||||
static inline void vcpu_set_ibr(struct kvm_vcpu *vcpu, u64 reg, u64 val)
|
||||
{
|
||||
ia64_set_ibr(reg, val);
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_dbr(struct kvm_vcpu *vcpu, u64 reg)
|
||||
{
|
||||
return ((u64)__ia64_get_dbr(reg));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_ibr(struct kvm_vcpu *vcpu, u64 reg)
|
||||
{
|
||||
return ((u64)ia64_get_ibr(reg));
|
||||
}
|
||||
|
||||
/**************************************************************************
|
||||
VCPU performance monitor register access routines
|
||||
**************************************************************************/
|
||||
static inline void vcpu_set_pmc(struct kvm_vcpu *vcpu, u64 reg, u64 val)
|
||||
{
|
||||
/* NOTE: Writes to unimplemented PMC registers are discarded */
|
||||
ia64_set_pmc(reg, val);
|
||||
}
|
||||
|
||||
static inline void vcpu_set_pmd(struct kvm_vcpu *vcpu, u64 reg, u64 val)
|
||||
{
|
||||
/* NOTE: Writes to unimplemented PMD registers are discarded */
|
||||
ia64_set_pmd(reg, val);
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_pmc(struct kvm_vcpu *vcpu, u64 reg)
|
||||
{
|
||||
/* NOTE: Reads from unimplemented PMC registers return zero */
|
||||
return ((u64)ia64_get_pmc(reg));
|
||||
}
|
||||
|
||||
static inline u64 vcpu_get_pmd(struct kvm_vcpu *vcpu, u64 reg)
|
||||
{
|
||||
/* NOTE: Reads from unimplemented PMD registers return zero */
|
||||
return ((u64)ia64_get_pmd(reg));
|
||||
}
|
||||
|
||||
static inline unsigned long vrrtomrr(unsigned long val)
|
||||
{
|
||||
union ia64_rr rr;
|
||||
rr.val = val;
|
||||
rr.rid = (rr.rid << 4) | 0xe;
|
||||
if (rr.ps > PAGE_SHIFT)
|
||||
rr.ps = PAGE_SHIFT;
|
||||
rr.ve = 1;
|
||||
return rr.val;
|
||||
}
|
||||
|
||||
|
||||
static inline int highest_bits(int *dat)
|
||||
{
|
||||
u32 bits, bitnum;
|
||||
int i;
|
||||
|
||||
/* loop for all 256 bits */
|
||||
for (i = 7; i >= 0 ; i--) {
|
||||
bits = dat[i];
|
||||
if (bits) {
|
||||
bitnum = fls(bits);
|
||||
return i * 32 + bitnum - 1;
|
||||
}
|
||||
}
|
||||
return NULL_VECTOR;
|
||||
}
|
||||
|
||||
/*
|
||||
* The pending irq is higher than the inservice one.
|
||||
*
|
||||
*/
|
||||
static inline int is_higher_irq(int pending, int inservice)
|
||||
{
|
||||
return ((pending > inservice)
|
||||
|| ((pending != NULL_VECTOR)
|
||||
&& (inservice == NULL_VECTOR)));
|
||||
}
|
||||
|
||||
static inline int is_higher_class(int pending, int mic)
|
||||
{
|
||||
return ((pending >> 4) > mic);
|
||||
}
|
||||
|
||||
/*
|
||||
* Return 0-255 for pending irq.
|
||||
* NULL_VECTOR: when no pending.
|
||||
*/
|
||||
static inline int highest_pending_irq(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
if (VCPU(vcpu, irr[0]) & (1UL<<NMI_VECTOR))
|
||||
return NMI_VECTOR;
|
||||
if (VCPU(vcpu, irr[0]) & (1UL<<ExtINT_VECTOR))
|
||||
return ExtINT_VECTOR;
|
||||
|
||||
return highest_bits((int *)&VCPU(vcpu, irr[0]));
|
||||
}
|
||||
|
||||
static inline int highest_inservice_irq(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
if (VMX(vcpu, insvc[0]) & (1UL<<NMI_VECTOR))
|
||||
return NMI_VECTOR;
|
||||
if (VMX(vcpu, insvc[0]) & (1UL<<ExtINT_VECTOR))
|
||||
return ExtINT_VECTOR;
|
||||
|
||||
return highest_bits((int *)&(VMX(vcpu, insvc[0])));
|
||||
}
|
||||
|
||||
extern void vcpu_get_fpreg(struct kvm_vcpu *vcpu, unsigned long reg,
|
||||
struct ia64_fpreg *val);
|
||||
extern void vcpu_set_fpreg(struct kvm_vcpu *vcpu, unsigned long reg,
|
||||
struct ia64_fpreg *val);
|
||||
extern u64 vcpu_get_gr(struct kvm_vcpu *vcpu, unsigned long reg);
|
||||
extern void vcpu_set_gr(struct kvm_vcpu *vcpu, unsigned long reg,
|
||||
u64 val, int nat);
|
||||
extern unsigned long vcpu_get_psr(struct kvm_vcpu *vcpu);
|
||||
extern void vcpu_set_psr(struct kvm_vcpu *vcpu, unsigned long val);
|
||||
extern u64 vcpu_thash(struct kvm_vcpu *vcpu, u64 vadr);
|
||||
extern void vcpu_bsw0(struct kvm_vcpu *vcpu);
|
||||
extern void thash_vhpt_insert(struct kvm_vcpu *v, u64 pte,
|
||||
u64 itir, u64 va, int type);
|
||||
extern struct thash_data *vhpt_lookup(u64 va);
|
||||
extern u64 guest_vhpt_lookup(u64 iha, u64 *pte);
|
||||
extern void thash_purge_entries(struct kvm_vcpu *v, u64 va, u64 ps);
|
||||
extern void thash_purge_entries_remote(struct kvm_vcpu *v, u64 va, u64 ps);
|
||||
extern u64 translate_phy_pte(u64 *pte, u64 itir, u64 va);
|
||||
extern void thash_purge_and_insert(struct kvm_vcpu *v, u64 pte,
|
||||
u64 itir, u64 ifa, int type);
|
||||
extern void thash_purge_all(struct kvm_vcpu *v);
|
||||
extern struct thash_data *vtlb_lookup(struct kvm_vcpu *v,
|
||||
u64 va, int is_data);
|
||||
extern int vtr_find_overlap(struct kvm_vcpu *vcpu, u64 va,
|
||||
u64 ps, int is_data);
|
||||
|
||||
extern void vcpu_increment_iip(struct kvm_vcpu *v);
|
||||
extern void vcpu_decrement_iip(struct kvm_vcpu *vcpu);
|
||||
extern void vcpu_pend_interrupt(struct kvm_vcpu *vcpu, u8 vec);
|
||||
extern void vcpu_unpend_interrupt(struct kvm_vcpu *vcpu, u8 vec);
|
||||
extern void data_page_not_present(struct kvm_vcpu *vcpu, u64 vadr);
|
||||
extern void dnat_page_consumption(struct kvm_vcpu *vcpu, u64 vadr);
|
||||
extern void alt_dtlb(struct kvm_vcpu *vcpu, u64 vadr);
|
||||
extern void nested_dtlb(struct kvm_vcpu *vcpu);
|
||||
extern void dvhpt_fault(struct kvm_vcpu *vcpu, u64 vadr);
|
||||
extern int vhpt_enabled(struct kvm_vcpu *vcpu, u64 vadr, enum vhpt_ref ref);
|
||||
|
||||
extern void update_vhpi(struct kvm_vcpu *vcpu, int vec);
|
||||
extern int irq_masked(struct kvm_vcpu *vcpu, int h_pending, int h_inservice);
|
||||
|
||||
extern int fetch_code(struct kvm_vcpu *vcpu, u64 gip, IA64_BUNDLE *pbundle);
|
||||
extern void emulate_io_inst(struct kvm_vcpu *vcpu, u64 padr, u64 ma);
|
||||
extern void vmm_transition(struct kvm_vcpu *vcpu);
|
||||
extern void vmm_trampoline(union context *from, union context *to);
|
||||
extern int vmm_entry(void);
|
||||
extern u64 vcpu_get_itc(struct kvm_vcpu *vcpu);
|
||||
|
||||
extern void vmm_reset_entry(void);
|
||||
void kvm_init_vtlb(struct kvm_vcpu *v);
|
||||
void kvm_init_vhpt(struct kvm_vcpu *v);
|
||||
void thash_init(struct thash_cb *hcb, u64 sz);
|
||||
|
||||
void panic_vm(struct kvm_vcpu *v, const char *fmt, ...);
|
||||
u64 kvm_gpa_to_mpa(u64 gpa);
|
||||
extern u64 ia64_call_vsa(u64 proc, u64 arg1, u64 arg2, u64 arg3,
|
||||
u64 arg4, u64 arg5, u64 arg6, u64 arg7);
|
||||
|
||||
extern long vmm_sanity;
|
||||
|
||||
#endif
|
||||
#endif /* __VCPU_H__ */
|
|
@@ -1,99 +0,0 @@
/*
|
||||
* vmm.c: vmm module interface with kvm module
|
||||
*
|
||||
* Copyright (c) 2007, Intel Corporation.
|
||||
*
|
||||
* Xiantao Zhang (xiantao.zhang@intel.com)
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*/
|
||||
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <asm/fpswa.h>
|
||||
|
||||
#include "vcpu.h"
|
||||
|
||||
MODULE_AUTHOR("Intel");
|
||||
MODULE_LICENSE("GPL");
|
||||
|
||||
extern char kvm_ia64_ivt;
|
||||
extern char kvm_asm_mov_from_ar;
|
||||
extern char kvm_asm_mov_from_ar_sn2;
|
||||
extern fpswa_interface_t *vmm_fpswa_interface;
|
||||
|
||||
long vmm_sanity = 1;
|
||||
|
||||
struct kvm_vmm_info vmm_info = {
|
||||
.module = THIS_MODULE,
|
||||
.vmm_entry = vmm_entry,
|
||||
.tramp_entry = vmm_trampoline,
|
||||
.vmm_ivt = (unsigned long)&kvm_ia64_ivt,
|
||||
.patch_mov_ar = (unsigned long)&kvm_asm_mov_from_ar,
|
||||
.patch_mov_ar_sn2 = (unsigned long)&kvm_asm_mov_from_ar_sn2,
|
||||
};
|
||||
|
||||
static int __init kvm_vmm_init(void)
|
||||
{
|
||||
|
||||
vmm_fpswa_interface = fpswa_interface;
|
||||
|
||||
/*Register vmm data to kvm side*/
|
||||
return kvm_init(&vmm_info, 1024, 0, THIS_MODULE);
|
||||
}
|
||||
|
||||
static void __exit kvm_vmm_exit(void)
|
||||
{
|
||||
kvm_exit();
|
||||
return ;
|
||||
}
|
||||
|
||||
void vmm_spin_lock(vmm_spinlock_t *lock)
|
||||
{
|
||||
_vmm_raw_spin_lock(lock);
|
||||
}
|
||||
|
||||
void vmm_spin_unlock(vmm_spinlock_t *lock)
|
||||
{
|
||||
_vmm_raw_spin_unlock(lock);
|
||||
}
|
||||
|
||||
static void vcpu_debug_exit(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct exit_ctl_data *p = &vcpu->arch.exit_data;
|
||||
long psr;
|
||||
|
||||
local_irq_save(psr);
|
||||
p->exit_reason = EXIT_REASON_DEBUG;
|
||||
vmm_transition(vcpu);
|
||||
local_irq_restore(psr);
|
||||
}
|
||||
|
||||
asmlinkage int printk(const char *fmt, ...)
|
||||
{
|
||||
struct kvm_vcpu *vcpu = current_vcpu;
|
||||
va_list args;
|
||||
int r;
|
||||
|
||||
memset(vcpu->arch.log_buf, 0, VMM_LOG_LEN);
|
||||
va_start(args, fmt);
|
||||
r = vsnprintf(vcpu->arch.log_buf, VMM_LOG_LEN, fmt, args);
|
||||
va_end(args);
|
||||
vcpu_debug_exit(vcpu);
|
||||
return r;
|
||||
}
|
||||
|
||||
module_init(kvm_vmm_init)
|
||||
module_exit(kvm_vmm_exit)
|
|
@@ -1,290 +0,0 @@
/*
|
||||
* vti.h: prototype for general vt related interface
|
||||
* Copyright (c) 2004, Intel Corporation.
|
||||
*
|
||||
* Xuefei Xu (Anthony Xu) (anthony.xu@intel.com)
|
||||
* Fred Yang (fred.yang@intel.com)
|
||||
* Kun Tian (Kevin Tian) (kevin.tian@intel.com)
|
||||
*
|
||||
* Copyright (c) 2007, Intel Corporation.
|
||||
* Zhang xiantao <xiantao.zhang@intel.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*/
|
||||
#ifndef _KVM_VT_I_H
|
||||
#define _KVM_VT_I_H
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
#include <asm/page.h>
|
||||
|
||||
#include <linux/kvm_host.h>
|
||||
|
||||
/* define itr.i and itr.d in ia64_itr function */
|
||||
#define ITR 0x01
|
||||
#define DTR 0x02
|
||||
#define IaDTR 0x03
|
||||
|
||||
#define IA64_TR_VMM 6 /*itr6, dtr6 : maps vmm code, vmbuffer*/
|
||||
#define IA64_TR_VM_DATA 7 /*dtr7 : maps current vm data*/
|
||||
|
||||
#define RR6 (6UL<<61)
|
||||
#define RR7 (7UL<<61)
|
||||
|
||||
|
||||
/* config_options in pal_vp_init_env */
|
||||
#define VP_INITIALIZE 1UL
|
||||
#define VP_FR_PMC 1UL<<1
|
||||
#define VP_OPCODE 1UL<<8
|
||||
#define VP_CAUSE 1UL<<9
|
||||
#define VP_FW_ACC 1UL<<63
|
||||
|
||||
/* init vp env with initializing vm_buffer */
|
||||
#define VP_INIT_ENV_INITALIZE (VP_INITIALIZE | VP_FR_PMC |\
|
||||
VP_OPCODE | VP_CAUSE | VP_FW_ACC)
|
||||
/* init vp env without initializing vm_buffer */
|
||||
#define VP_INIT_ENV VP_FR_PMC | VP_OPCODE | VP_CAUSE | VP_FW_ACC
|
||||
|
||||
#define PAL_VP_CREATE 265
|
||||
/* Stacked Virt. Initializes a new VPD for the operation of
|
||||
* a new virtual processor in the virtual environment.
|
||||
*/
|
||||
#define PAL_VP_ENV_INFO 266
|
||||
/*Stacked Virt. Returns the parameters needed to enter a virtual environment.*/
|
||||
#define PAL_VP_EXIT_ENV 267
|
||||
/*Stacked Virt. Allows a logical processor to exit a virtual environment.*/
|
||||
#define PAL_VP_INIT_ENV 268
|
||||
/*Stacked Virt. Allows a logical processor to enter a virtual environment.*/
|
||||
#define PAL_VP_REGISTER 269
|
||||
/*Stacked Virt. Register a different host IVT for the virtual processor.*/
|
||||
#define PAL_VP_RESUME 270
|
||||
/* Renamed from PAL_VP_RESUME */
|
||||
#define PAL_VP_RESTORE 270
|
||||
/*Stacked Virt. Resumes virtual processor operation on the logical processor.*/
|
||||
#define PAL_VP_SUSPEND 271
|
||||
/* Renamed from PAL_VP_SUSPEND */
|
||||
#define PAL_VP_SAVE 271
|
||||
/* Stacked Virt. Suspends operation for the specified virtual processor on
|
||||
* the logical processor.
|
||||
*/
|
||||
#define PAL_VP_TERMINATE 272
|
||||
/* Stacked Virt. Terminates operation for the specified virtual processor.*/
|
||||
|
||||
union vac {
|
||||
unsigned long value;
|
||||
struct {
|
||||
unsigned int a_int:1;
|
||||
unsigned int a_from_int_cr:1;
|
||||
unsigned int a_to_int_cr:1;
|
||||
unsigned int a_from_psr:1;
|
||||
unsigned int a_from_cpuid:1;
|
||||
unsigned int a_cover:1;
|
||||
unsigned int a_bsw:1;
|
||||
long reserved:57;
|
||||
};
|
||||
};
|
||||
|
||||
union vdc {
|
||||
unsigned long value;
|
||||
struct {
|
||||
unsigned int d_vmsw:1;
|
||||
unsigned int d_extint:1;
|
||||
unsigned int d_ibr_dbr:1;
|
||||
unsigned int d_pmc:1;
|
||||
unsigned int d_to_pmd:1;
|
||||
unsigned int d_itm:1;
|
||||
long reserved:58;
|
||||
};
|
||||
};
|
||||
|
||||
struct vpd {
|
||||
union vac vac;
|
||||
union vdc vdc;
|
||||
unsigned long virt_env_vaddr;
|
||||
unsigned long reserved1[29];
|
||||
unsigned long vhpi;
|
||||
unsigned long reserved2[95];
|
||||
unsigned long vgr[16];
|
||||
unsigned long vbgr[16];
|
||||
unsigned long vnat;
|
||||
unsigned long vbnat;
|
||||
unsigned long vcpuid[5];
|
||||
unsigned long reserved3[11];
|
||||
unsigned long vpsr;
|
||||
unsigned long vpr;
|
||||
unsigned long reserved4[76];
|
||||
union {
|
||||
unsigned long vcr[128];
|
||||
struct {
|
||||
unsigned long dcr;
|
||||
unsigned long itm;
|
||||
unsigned long iva;
|
||||
unsigned long rsv1[5];
|
||||
unsigned long pta;
|
||||
unsigned long rsv2[7];
|
||||
unsigned long ipsr;
|
||||
unsigned long isr;
|
||||
unsigned long rsv3;
|
||||
unsigned long iip;
|
||||
unsigned long ifa;
|
||||
unsigned long itir;
|
||||
unsigned long iipa;
|
||||
unsigned long ifs;
|
||||
unsigned long iim;
|
||||
unsigned long iha;
|
||||
unsigned long rsv4[38];
|
||||
unsigned long lid;
|
||||
unsigned long ivr;
|
||||
unsigned long tpr;
|
||||
unsigned long eoi;
|
||||
unsigned long irr[4];
|
||||
unsigned long itv;
|
||||
unsigned long pmv;
|
||||
unsigned long cmcv;
|
||||
unsigned long rsv5[5];
|
||||
unsigned long lrr0;
|
||||
unsigned long lrr1;
|
||||
unsigned long rsv6[46];
|
||||
};
|
||||
};
|
||||
unsigned long reserved5[128];
|
||||
unsigned long reserved6[3456];
|
||||
unsigned long vmm_avail[128];
|
||||
unsigned long reserved7[4096];
|
||||
};
|
||||
|
||||
#define PAL_PROC_VM_BIT (1UL << 40)
|
||||
#define PAL_PROC_VMSW_BIT (1UL << 54)
|
||||
|
||||
static inline s64 ia64_pal_vp_env_info(u64 *buffer_size,
|
||||
u64 *vp_env_info)
|
||||
{
|
||||
struct ia64_pal_retval iprv;
|
||||
PAL_CALL_STK(iprv, PAL_VP_ENV_INFO, 0, 0, 0);
|
||||
*buffer_size = iprv.v0;
|
||||
*vp_env_info = iprv.v1;
|
||||
return iprv.status;
|
||||
}
|
||||
|
||||
static inline s64 ia64_pal_vp_exit_env(u64 iva)
|
||||
{
|
||||
struct ia64_pal_retval iprv;
|
||||
|
||||
PAL_CALL_STK(iprv, PAL_VP_EXIT_ENV, (u64)iva, 0, 0);
|
||||
return iprv.status;
|
||||
}
|
||||
|
||||
static inline s64 ia64_pal_vp_init_env(u64 config_options, u64 pbase_addr,
|
||||
u64 vbase_addr, u64 *vsa_base)
|
||||
{
|
||||
struct ia64_pal_retval iprv;
|
||||
|
||||
PAL_CALL_STK(iprv, PAL_VP_INIT_ENV, config_options, pbase_addr,
|
||||
vbase_addr);
|
||||
*vsa_base = iprv.v0;
|
||||
|
||||
return iprv.status;
|
||||
}
|
||||
|
||||
static inline s64 ia64_pal_vp_restore(u64 *vpd, u64 pal_proc_vector)
|
||||
{
|
||||
struct ia64_pal_retval iprv;
|
||||
|
||||
PAL_CALL_STK(iprv, PAL_VP_RESTORE, (u64)vpd, pal_proc_vector, 0);
|
||||
|
||||
return iprv.status;
|
||||
}
|
||||
|
||||
static inline s64 ia64_pal_vp_save(u64 *vpd, u64 pal_proc_vector)
|
||||
{
|
||||
struct ia64_pal_retval iprv;
|
||||
|
||||
PAL_CALL_STK(iprv, PAL_VP_SAVE, (u64)vpd, pal_proc_vector, 0);
|
||||
|
||||
return iprv.status;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
/*VPD field offset*/
|
||||
#define VPD_VAC_START_OFFSET 0
|
||||
#define VPD_VDC_START_OFFSET 8
|
||||
#define VPD_VHPI_START_OFFSET 256
|
||||
#define VPD_VGR_START_OFFSET 1024
|
||||
#define VPD_VBGR_START_OFFSET 1152
|
||||
#define VPD_VNAT_START_OFFSET 1280
|
||||
#define VPD_VBNAT_START_OFFSET 1288
|
||||
#define VPD_VCPUID_START_OFFSET 1296
|
||||
#define VPD_VPSR_START_OFFSET 1424
|
||||
#define VPD_VPR_START_OFFSET 1432
|
||||
#define VPD_VRSE_CFLE_START_OFFSET 1440
|
||||
#define VPD_VCR_START_OFFSET 2048
|
||||
#define VPD_VTPR_START_OFFSET 2576
|
||||
#define VPD_VRR_START_OFFSET 3072
|
||||
#define VPD_VMM_VAIL_START_OFFSET 31744
|
||||
|
||||
/*Virtualization faults*/
|
||||
|
||||
#define EVENT_MOV_TO_AR 1
|
||||
#define EVENT_MOV_TO_AR_IMM 2
|
||||
#define EVENT_MOV_FROM_AR 3
|
||||
#define EVENT_MOV_TO_CR 4
|
||||
#define EVENT_MOV_FROM_CR 5
|
||||
#define EVENT_MOV_TO_PSR 6
|
||||
#define EVENT_MOV_FROM_PSR 7
|
||||
#define EVENT_ITC_D 8
|
||||
#define EVENT_ITC_I 9
|
||||
#define EVENT_MOV_TO_RR 10
|
||||
#define EVENT_MOV_TO_DBR 11
|
||||
#define EVENT_MOV_TO_IBR 12
|
||||
#define EVENT_MOV_TO_PKR 13
|
||||
#define EVENT_MOV_TO_PMC 14
|
||||
#define EVENT_MOV_TO_PMD 15
|
||||
#define EVENT_ITR_D 16
|
||||
#define EVENT_ITR_I 17
|
||||
#define EVENT_MOV_FROM_RR 18
|
||||
#define EVENT_MOV_FROM_DBR 19
|
||||
#define EVENT_MOV_FROM_IBR 20
|
||||
#define EVENT_MOV_FROM_PKR 21
|
||||
#define EVENT_MOV_FROM_PMC 22
|
||||
#define EVENT_MOV_FROM_CPUID 23
|
||||
#define EVENT_SSM 24
|
||||
#define EVENT_RSM 25
|
||||
#define EVENT_PTC_L 26
|
||||
#define EVENT_PTC_G 27
|
||||
#define EVENT_PTC_GA 28
|
||||
#define EVENT_PTR_D 29
|
||||
#define EVENT_PTR_I 30
|
||||
#define EVENT_THASH 31
|
||||
#define EVENT_TTAG 32
|
||||
#define EVENT_TPA 33
|
||||
#define EVENT_TAK 34
|
||||
#define EVENT_PTC_E 35
|
||||
#define EVENT_COVER 36
|
||||
#define EVENT_RFI 37
|
||||
#define EVENT_BSW_0 38
|
||||
#define EVENT_BSW_1 39
|
||||
#define EVENT_VMSW 40
|
||||
|
||||
/**PAL virtual services offsets */
|
||||
#define PAL_VPS_RESUME_NORMAL 0x0000
|
||||
#define PAL_VPS_RESUME_HANDLER 0x0400
|
||||
#define PAL_VPS_SYNC_READ 0x0800
|
||||
#define PAL_VPS_SYNC_WRITE 0x0c00
|
||||
#define PAL_VPS_SET_PENDING_INTERRUPT 0x1000
|
||||
#define PAL_VPS_THASH 0x1400
|
||||
#define PAL_VPS_TTAG 0x1800
|
||||
#define PAL_VPS_RESTORE 0x1c00
|
||||
#define PAL_VPS_SAVE 0x2000
|
||||
|
||||
#endif/* _VT_I_H*/
|
|
@@ -1,640 +0,0 @@
/*
|
||||
* vtlb.c: guest virtual tlb handling module.
|
||||
* Copyright (c) 2004, Intel Corporation.
|
||||
* Yaozu Dong (Eddie Dong) <Eddie.dong@intel.com>
|
||||
* Xuefei Xu (Anthony Xu) <anthony.xu@intel.com>
|
||||
*
|
||||
* Copyright (c) 2007, Intel Corporation.
|
||||
* Xuefei Xu (Anthony Xu) <anthony.xu@intel.com>
|
||||
* Xiantao Zhang <xiantao.zhang@intel.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
|
||||
* Place - Suite 330, Boston, MA 02111-1307 USA.
|
||||
*
|
||||
*/
|
||||
|
||||
#include "vcpu.h"
|
||||
|
||||
#include <linux/rwsem.h>
|
||||
|
||||
#include <asm/tlb.h>
|
||||
|
||||
/*
|
||||
* Check to see if the address rid:va is translated by the TLB
|
||||
*/
|
||||
|
||||
static int __is_tr_translated(struct thash_data *trp, u64 rid, u64 va)
|
||||
{
|
||||
return ((trp->p) && (trp->rid == rid)
|
||||
&& ((va-trp->vadr) < PSIZE(trp->ps)));
|
||||
}
|
||||
|
||||
/*
|
||||
* Only for GUEST TR format.
|
||||
*/
|
||||
static int __is_tr_overlap(struct thash_data *trp, u64 rid, u64 sva, u64 eva)
|
||||
{
|
||||
u64 sa1, ea1;
|
||||
|
||||
if (!trp->p || trp->rid != rid)
|
||||
return 0;
|
||||
|
||||
sa1 = trp->vadr;
|
||||
ea1 = sa1 + PSIZE(trp->ps) - 1;
|
||||
eva -= 1;
|
||||
if ((sva > ea1) || (sa1 > eva))
|
||||
return 0;
|
||||
else
|
||||
return 1;
|
||||
|
||||
}
|
||||
|
||||
void machine_tlb_purge(u64 va, u64 ps)
{
	ia64_ptcl(va, ps << 2);
}

void local_flush_tlb_all(void)
{
	int i, j;
	unsigned long flags, count0, count1;
	unsigned long stride0, stride1, addr;

	addr = current_vcpu->arch.ptce_base;
	count0 = current_vcpu->arch.ptce_count[0];
	count1 = current_vcpu->arch.ptce_count[1];
	stride0 = current_vcpu->arch.ptce_stride[0];
	stride1 = current_vcpu->arch.ptce_stride[1];

	local_irq_save(flags);
	for (i = 0; i < count0; ++i) {
		for (j = 0; j < count1; ++j) {
			ia64_ptce(addr);
			addr += stride1;
		}
		addr += stride0;
	}
	local_irq_restore(flags);
	ia64_srlz_i();	/* srlz.i implies srlz.d */
}

int vhpt_enabled(struct kvm_vcpu *vcpu, u64 vadr, enum vhpt_ref ref)
{
	union ia64_rr vrr;
	union ia64_pta vpta;
	struct ia64_psr vpsr;

	vpsr = *(struct ia64_psr *)&VCPU(vcpu, vpsr);
	vrr.val = vcpu_get_rr(vcpu, vadr);
	vpta.val = vcpu_get_pta(vcpu);

	if (vrr.ve & vpta.ve) {
		switch (ref) {
		case DATA_REF:
		case NA_REF:
			return vpsr.dt;
		case INST_REF:
			return vpsr.dt && vpsr.it && vpsr.ic;
		case RSE_REF:
			return vpsr.dt && vpsr.rt;

		}
	}
	return 0;
}

struct thash_data *vsa_thash(union ia64_pta vpta, u64 va, u64 vrr, u64 *tag)
{
	u64 index, pfn, rid, pfn_bits;

	pfn_bits = vpta.size - 5 - 8;
	pfn = REGION_OFFSET(va) >> _REGION_PAGE_SIZE(vrr);
	rid = _REGION_ID(vrr);
	index = ((rid & 0xff) << pfn_bits) | (pfn & ((1UL << pfn_bits) - 1));
	*tag = ((rid >> 8) & 0xffff) | ((pfn >> pfn_bits) << 16);

	return (struct thash_data *)((vpta.base << PTA_BASE_SHIFT) +
				(index << 5));
}

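vsa_thash() above folds the low 8 bits of the region id and the low pfn_bits of the page number into the bucket index, and keeps the remaining bits in the tag; each bucket is 32 bytes, hence the final shift by 5. A small self-contained sketch of the same arithmetic; the PTA size and page size used here are illustrative assumptions, not the guest's actual configuration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t pta_size = 18;			/* assumed 2^18-byte VHPT */
	uint64_t page_shift = 14;		/* assumed 16KB region page size */
	uint64_t region_off = 0x123454000ULL;	/* offset of va inside its region */
	uint64_t rid = 0x00345678ULL;		/* region id from the region register */

	uint64_t pfn_bits = pta_size - 5 - 8;	/* index bits left over for the pfn */
	uint64_t pfn = region_off >> page_shift;
	uint64_t index = ((rid & 0xff) << pfn_bits) | (pfn & ((1ULL << pfn_bits) - 1));
	uint64_t tag = ((rid >> 8) & 0xffff) | ((pfn >> pfn_bits) << 16);

	printf("pfn=%#llx index=%#llx tag=%#llx bucket offset=%#llx\n",
	       (unsigned long long)pfn, (unsigned long long)index,
	       (unsigned long long)tag, (unsigned long long)(index << 5));
	return 0;
}
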
struct thash_data *__vtr_lookup(struct kvm_vcpu *vcpu, u64 va, int type)
{

	struct thash_data *trp;
	int i;
	u64 rid;

	rid = vcpu_get_rr(vcpu, va);
	rid = rid & RR_RID_MASK;
	if (type == D_TLB) {
		if (vcpu_quick_region_check(vcpu->arch.dtr_regions, va)) {
			for (trp = (struct thash_data *)&vcpu->arch.dtrs, i = 0;
					i < NDTRS; i++, trp++) {
				if (__is_tr_translated(trp, rid, va))
					return trp;
			}
		}
	} else {
		if (vcpu_quick_region_check(vcpu->arch.itr_regions, va)) {
			for (trp = (struct thash_data *)&vcpu->arch.itrs, i = 0;
					i < NITRS; i++, trp++) {
				if (__is_tr_translated(trp, rid, va))
					return trp;
			}
		}
	}

	return NULL;
}

static void vhpt_insert(u64 pte, u64 itir, u64 ifa, u64 gpte)
{
	union ia64_rr rr;
	struct thash_data *head;
	unsigned long ps, gpaddr;

	ps = itir_ps(itir);
	rr.val = ia64_get_rr(ifa);

	gpaddr = ((gpte & _PAGE_PPN_MASK) >> ps << ps) |
					(ifa & ((1UL << ps) - 1));

	head = (struct thash_data *)ia64_thash(ifa);
	head->etag = INVALID_TI_TAG;
	ia64_mf();
	head->page_flags = pte & ~PAGE_FLAGS_RV_MASK;
	head->itir = rr.ps << 2;
	head->etag = ia64_ttag(ifa);
	head->gpaddr = gpaddr;
}

void mark_pages_dirty(struct kvm_vcpu *v, u64 pte, u64 ps)
{
	u64 i, dirty_pages = 1;
	u64 base_gfn = (pte & _PAGE_PPN_MASK) >> PAGE_SHIFT;
	vmm_spinlock_t *lock = __kvm_va(v->arch.dirty_log_lock_pa);
	void *dirty_bitmap = (void *)KVM_MEM_DIRTY_LOG_BASE;

	dirty_pages <<= ps <= PAGE_SHIFT ? 0 : ps - PAGE_SHIFT;

	vmm_spin_lock(lock);
	for (i = 0; i < dirty_pages; i++) {
		/* avoid RMW */
		if (!test_bit(base_gfn + i, dirty_bitmap))
			set_bit(base_gfn + i, dirty_bitmap);
	}
	vmm_spin_unlock(lock);
}

void thash_vhpt_insert(struct kvm_vcpu *v, u64 pte, u64 itir, u64 va, int type)
{
	u64 phy_pte, psr;
	union ia64_rr mrr;

	mrr.val = ia64_get_rr(va);
	phy_pte = translate_phy_pte(&pte, itir, va);

	if (itir_ps(itir) >= mrr.ps) {
		vhpt_insert(phy_pte, itir, va, pte);
	} else {
		phy_pte &= ~PAGE_FLAGS_RV_MASK;
		psr = ia64_clear_ic();
		ia64_itc(type, va, phy_pte, itir_ps(itir));
		paravirt_dv_serialize_data();
		ia64_set_psr(psr);
	}

	if (!(pte & VTLB_PTE_IO))
		mark_pages_dirty(v, pte, itir_ps(itir));
}

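mark_pages_dirty() above sets one dirty bit per host page covered by the guest mapping: when the guest page size ps is larger than the host PAGE_SHIFT, 1 << (ps - PAGE_SHIFT) consecutive gfns are marked. A worked example of that arithmetic, with assumed page sizes standing in for the real constants:

#include <assert.h>
#include <stdint.h>

/* Mirrors the dirty_pages computation in mark_pages_dirty() above.
 * page_shift stands in for the host PAGE_SHIFT; values are illustrative. */
static uint64_t dirty_page_count(uint64_t ps, uint64_t page_shift)
{
	uint64_t dirty_pages = 1;

	dirty_pages <<= ps <= page_shift ? 0 : ps - page_shift;
	return dirty_pages;
}

int main(void)
{
	assert(dirty_page_count(14, 14) == 1);		/* 16KB guest page over 16KB host pages */
	assert(dirty_page_count(24, 14) == 1024);	/* 16MB guest page -> 1024 host pages */
	assert(dirty_page_count(12, 14) == 1);		/* smaller than a host page: still one bit */
	return 0;
}
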
/*
 * vhpt lookup
 */
struct thash_data *vhpt_lookup(u64 va)
{
	struct thash_data *head;
	u64 tag;

	head = (struct thash_data *)ia64_thash(va);
	tag = ia64_ttag(va);
	if (head->etag == tag)
		return head;
	return NULL;
}

u64 guest_vhpt_lookup(u64 iha, u64 *pte)
{
	u64 ret;
	struct thash_data *data;

	data = __vtr_lookup(current_vcpu, iha, D_TLB);
	if (data != NULL)
		thash_vhpt_insert(current_vcpu, data->page_flags,
			data->itir, iha, D_TLB);

	asm volatile ("rsm psr.ic|psr.i;;"
			"srlz.d;;"
			"ld8.s r9=[%1];;"
			"tnat.nz p6,p7=r9;;"
			"(p6) mov %0=1;"
			"(p6) mov r9=r0;"
			"(p7) extr.u r9=r9,0,53;;"
			"(p7) mov %0=r0;"
			"(p7) st8 [%2]=r9;;"
			"ssm psr.ic;;"
			"srlz.d;;"
			"ssm psr.i;;"
			"srlz.d;;"
			: "=&r"(ret) : "r"(iha), "r"(pte) : "memory");

	return ret;
}

/*
 * purge software guest tlb
 */

static void vtlb_purge(struct kvm_vcpu *v, u64 va, u64 ps)
{
	struct thash_data *cur;
	u64 start, curadr, size, psbits, tag, rr_ps, num;
	union ia64_rr vrr;
	struct thash_cb *hcb = &v->arch.vtlb;

	vrr.val = vcpu_get_rr(v, va);
	psbits = VMX(v, psbits[(va >> 61)]);
	start = va & ~((1UL << ps) - 1);
	while (psbits) {
		curadr = start;
		rr_ps = __ffs(psbits);
		psbits &= ~(1UL << rr_ps);
		num = 1UL << ((ps < rr_ps) ? 0 : (ps - rr_ps));
		size = PSIZE(rr_ps);
		vrr.ps = rr_ps;
		while (num) {
			cur = vsa_thash(hcb->pta, curadr, vrr.val, &tag);
			if (cur->etag == tag && cur->ps == rr_ps)
				cur->etag = INVALID_TI_TAG;
			curadr += size;
			num--;
		}
	}
}

/*
 * purge VHPT and machine TLB
 */
static void vhpt_purge(struct kvm_vcpu *v, u64 va, u64 ps)
{
	struct thash_data *cur;
	u64 start, size, tag, num;
	union ia64_rr rr;

	start = va & ~((1UL << ps) - 1);
	rr.val = ia64_get_rr(va);
	size = PSIZE(rr.ps);
	num = 1UL << ((ps < rr.ps) ? 0 : (ps - rr.ps));
	while (num) {
		cur = (struct thash_data *)ia64_thash(start);
		tag = ia64_ttag(start);
		if (cur->etag == tag)
			cur->etag = INVALID_TI_TAG;
		start += size;
		num--;
	}
	machine_tlb_purge(va, ps);
}

/*
 * Insert an entry into hash TLB or VHPT.
 * NOTES:
 *  1: When inserting VHPT to thash, "va" is a must covered
 *     address by the inserted machine VHPT entry.
 *  2: The format of entry is always in TLB.
 *  3: The caller need to make sure the new entry will not overlap
 *     with any existed entry.
 */
void vtlb_insert(struct kvm_vcpu *v, u64 pte, u64 itir, u64 va)
{
	struct thash_data *head;
	union ia64_rr vrr;
	u64 tag;
	struct thash_cb *hcb = &v->arch.vtlb;

	vrr.val = vcpu_get_rr(v, va);
	vrr.ps = itir_ps(itir);
	VMX(v, psbits[va >> 61]) |= (1UL << vrr.ps);
	head = vsa_thash(hcb->pta, va, vrr.val, &tag);
	head->page_flags = pte;
	head->itir = itir;
	head->etag = tag;
}

int vtr_find_overlap(struct kvm_vcpu *vcpu, u64 va, u64 ps, int type)
{
	struct thash_data *trp;
	int i;
	u64 end, rid;

	rid = vcpu_get_rr(vcpu, va);
	rid = rid & RR_RID_MASK;
	end = va + PSIZE(ps);
	if (type == D_TLB) {
		if (vcpu_quick_region_check(vcpu->arch.dtr_regions, va)) {
			for (trp = (struct thash_data *)&vcpu->arch.dtrs, i = 0;
					i < NDTRS; i++, trp++) {
				if (__is_tr_overlap(trp, rid, va, end))
					return i;
			}
		}
	} else {
		if (vcpu_quick_region_check(vcpu->arch.itr_regions, va)) {
			for (trp = (struct thash_data *)&vcpu->arch.itrs, i = 0;
					i < NITRS; i++, trp++) {
				if (__is_tr_overlap(trp, rid, va, end))
					return i;
			}
		}
	}
	return -1;
}

/*
 * Purge entries in VTLB and VHPT
 */
void thash_purge_entries(struct kvm_vcpu *v, u64 va, u64 ps)
{
	if (vcpu_quick_region_check(v->arch.tc_regions, va))
		vtlb_purge(v, va, ps);
	vhpt_purge(v, va, ps);
}

void thash_purge_entries_remote(struct kvm_vcpu *v, u64 va, u64 ps)
{
	u64 old_va = va;
	va = REGION_OFFSET(va);
	if (vcpu_quick_region_check(v->arch.tc_regions, old_va))
		vtlb_purge(v, va, ps);
	vhpt_purge(v, va, ps);
}

u64 translate_phy_pte(u64 *pte, u64 itir, u64 va)
{
	u64 ps, ps_mask, paddr, maddr, io_mask;
	union pte_flags phy_pte;

	ps = itir_ps(itir);
	ps_mask = ~((1UL << ps) - 1);
	phy_pte.val = *pte;
	paddr = *pte;
	paddr = ((paddr & _PAGE_PPN_MASK) & ps_mask) | (va & ~ps_mask);
	maddr = kvm_get_mpt_entry(paddr >> PAGE_SHIFT);
	io_mask = maddr & GPFN_IO_MASK;
	if (io_mask && (io_mask != GPFN_PHYS_MMIO)) {
		*pte |= VTLB_PTE_IO;
		return -1;
	}
	maddr = ((maddr & _PAGE_PPN_MASK) & PAGE_MASK) |
					(paddr & ~PAGE_MASK);
	phy_pte.ppn = maddr >> ARCH_PAGE_SHIFT;
	return phy_pte.val;
}

/*
 * Purge overlap TCs and then insert the new entry to emulate itc ops.
 * Notes: Only TC entry can purge and insert.
 */
void thash_purge_and_insert(struct kvm_vcpu *v, u64 pte, u64 itir,
						u64 ifa, int type)
{
	u64 ps;
	u64 phy_pte, io_mask, index;
	union ia64_rr vrr, mrr;

	ps = itir_ps(itir);
	vrr.val = vcpu_get_rr(v, ifa);
	mrr.val = ia64_get_rr(ifa);

	index = (pte & _PAGE_PPN_MASK) >> PAGE_SHIFT;
	io_mask = kvm_get_mpt_entry(index) & GPFN_IO_MASK;
	phy_pte = translate_phy_pte(&pte, itir, ifa);

	/* Ensure WB attribute if pte is related to a normal mem page,
	 * which is required by vga acceleration since qemu maps shared
	 * vram buffer with WB.
	 */
	if (!(pte & VTLB_PTE_IO) && ((pte & _PAGE_MA_MASK) != _PAGE_MA_NAT) &&
			io_mask != GPFN_PHYS_MMIO) {
		pte &= ~_PAGE_MA_MASK;
		phy_pte &= ~_PAGE_MA_MASK;
	}

	vtlb_purge(v, ifa, ps);
	vhpt_purge(v, ifa, ps);

	if ((ps != mrr.ps) || (pte & VTLB_PTE_IO)) {
		vtlb_insert(v, pte, itir, ifa);
		vcpu_quick_region_set(VMX(v, tc_regions), ifa);
	}
	if (pte & VTLB_PTE_IO)
		return;

	if (ps >= mrr.ps)
		vhpt_insert(phy_pte, itir, ifa, pte);
	else {
		u64 psr;
		phy_pte &= ~PAGE_FLAGS_RV_MASK;
		psr = ia64_clear_ic();
		ia64_itc(type, ifa, phy_pte, ps);
		paravirt_dv_serialize_data();
		ia64_set_psr(psr);
	}
	if (!(pte & VTLB_PTE_IO))
		mark_pages_dirty(v, pte, ps);

}

/*
 * Purge all TCs or VHPT entries including those in Hash table.
 *
 */

void thash_purge_all(struct kvm_vcpu *v)
{
	int i;
	struct thash_data *head;
	struct thash_cb *vtlb, *vhpt;
	vtlb = &v->arch.vtlb;
	vhpt = &v->arch.vhpt;

	for (i = 0; i < 8; i++)
		VMX(v, psbits[i]) = 0;

	head = vtlb->hash;
	for (i = 0; i < vtlb->num; i++) {
		head->page_flags = 0;
		head->etag = INVALID_TI_TAG;
		head->itir = 0;
		head->next = 0;
		head++;
	};

	head = vhpt->hash;
	for (i = 0; i < vhpt->num; i++) {
		head->page_flags = 0;
		head->etag = INVALID_TI_TAG;
		head->itir = 0;
		head->next = 0;
		head++;
	};

	local_flush_tlb_all();
}

/*
 * Lookup the hash table and its collision chain to find an entry
 * covering this address rid:va or the entry.
 *
 * INPUT:
 *  in: TLB format for both VHPT & TLB.
 */
struct thash_data *vtlb_lookup(struct kvm_vcpu *v, u64 va, int is_data)
{
	struct thash_data *cch;
	u64 psbits, ps, tag;
	union ia64_rr vrr;

	struct thash_cb *hcb = &v->arch.vtlb;

	cch = __vtr_lookup(v, va, is_data);
	if (cch)
		return cch;

	if (vcpu_quick_region_check(v->arch.tc_regions, va) == 0)
		return NULL;

	psbits = VMX(v, psbits[(va >> 61)]);
	vrr.val = vcpu_get_rr(v, va);
	while (psbits) {
		ps = __ffs(psbits);
		psbits &= ~(1UL << ps);
		vrr.ps = ps;
		cch = vsa_thash(hcb->pta, va, vrr.val, &tag);
		if (cch->etag == tag && cch->ps == ps)
			return cch;
	}

	return NULL;
}

/*
 * Initialize internal control data before service.
 */
void thash_init(struct thash_cb *hcb, u64 sz)
{
	int i;
	struct thash_data *head;

	hcb->pta.val = (unsigned long)hcb->hash;
	hcb->pta.vf = 1;
	hcb->pta.ve = 1;
	hcb->pta.size = sz;
	head = hcb->hash;
	for (i = 0; i < hcb->num; i++) {
		head->page_flags = 0;
		head->itir = 0;
		head->etag = INVALID_TI_TAG;
		head->next = 0;
		head++;
	}
}

u64 kvm_get_mpt_entry(u64 gpfn)
{
	u64 *base = (u64 *) KVM_P2M_BASE;

	if (gpfn >= (KVM_P2M_SIZE >> 3))
		panic_vm(current_vcpu, "Invalid gpfn =%lx\n", gpfn);

	return *(base + gpfn);
}

u64 kvm_lookup_mpa(u64 gpfn)
{
	u64 maddr;
	maddr = kvm_get_mpt_entry(gpfn);
	return maddr & _PAGE_PPN_MASK;
}

u64 kvm_gpa_to_mpa(u64 gpa)
{
	u64 pte = kvm_lookup_mpa(gpa >> PAGE_SHIFT);
	return (pte >> PAGE_SHIFT << PAGE_SHIFT) | (gpa & ~PAGE_MASK);
}

/*
 * Fetch guest bundle code.
 * INPUT:
 *  gip: guest ip
 *  pbundle: used to return fetched bundle.
 */
int fetch_code(struct kvm_vcpu *vcpu, u64 gip, IA64_BUNDLE *pbundle)
{
	u64 gpip = 0;	/* guest physical IP */
	u64 *vpa;
	struct thash_data *tlb;
	u64 maddr;

	if (!(VCPU(vcpu, vpsr) & IA64_PSR_IT)) {
		/* I-side physical mode */
		gpip = gip;
	} else {
		tlb = vtlb_lookup(vcpu, gip, I_TLB);
		if (tlb)
			gpip = (tlb->ppn >> (tlb->ps - 12) << tlb->ps) |
				(gip & (PSIZE(tlb->ps) - 1));
	}
	if (gpip) {
		maddr = kvm_gpa_to_mpa(gpip);
	} else {
		tlb = vhpt_lookup(gip);
		if (tlb == NULL) {
			ia64_ptcl(gip, ARCH_PAGE_SHIFT << 2);
			return IA64_FAULT;
		}
		maddr = (tlb->ppn >> (tlb->ps - 12) << tlb->ps)
			| (gip & (PSIZE(tlb->ps) - 1));
	}
	vpa = (u64 *)__kvm_va(maddr);

	pbundle->i64[0] = *vpa++;
	pbundle->i64[1] = *vpa;

	return IA64_NO_FAULT;
}

void kvm_init_vhpt(struct kvm_vcpu *v)
{
	v->arch.vhpt.num = VHPT_NUM_ENTRIES;
	thash_init(&v->arch.vhpt, VHPT_SHIFT);
	ia64_set_pta(v->arch.vhpt.pta.val);
	/* Enable VHPT here? */
}

void kvm_init_vtlb(struct kvm_vcpu *v)
{
	v->arch.vtlb.num = VTLB_NUM_ENTRIES;
	thash_init(&v->arch.vtlb, VTLB_SHIFT);
}

@@ -170,8 +170,6 @@ extern void *kvmppc_pin_guest_page(struct kvm *kvm, unsigned long addr,
			unsigned long *nb_ret);
 extern void kvmppc_unpin_guest_page(struct kvm *kvm, void *addr,
			unsigned long gpa, bool dirty);
 extern long kvmppc_virtmode_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
			long pte_index, unsigned long pteh, unsigned long ptel);
 extern long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
			long pte_index, unsigned long pteh, unsigned long ptel,
			pgd_t *pgdir, bool realmode, unsigned long *idx_ret);

@@ -37,7 +37,6 @@ static inline void svcpu_put(struct kvmppc_book3s_shadow_vcpu *svcpu)

 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 #define KVM_DEFAULT_HPT_ORDER	24	/* 16MB HPT by default */
 extern unsigned long kvm_rma_pages;
 #endif

 #define VRMA_VSID	0x1ffffffUL	/* 1TB VSID reserved for VRMA */

@@ -148,7 +147,7 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
	/* This covers 14..54 bits of va*/
	rb = (v & ~0x7fUL) << 16;		/* AVA field */

	rb |= v >> (62 - 8);			/*  B field */
	rb |= (v >> HPTE_V_SSIZE_SHIFT) << 8;	/*  B field */
	/*
	 * AVA in v had cleared lower 23 bits. We need to derive
	 * that from pteg index

@ -180,11 +180,6 @@ struct kvmppc_spapr_tce_table {
|
|||
struct page *pages[0];
|
||||
};
|
||||
|
||||
struct kvm_rma_info {
|
||||
atomic_t use_count;
|
||||
unsigned long base_pfn;
|
||||
};
|
||||
|
||||
/* XICS components, defined in book3s_xics.c */
|
||||
struct kvmppc_xics;
|
||||
struct kvmppc_icp;
|
||||
|
@ -214,16 +209,9 @@ struct revmap_entry {
|
|||
#define KVMPPC_RMAP_PRESENT 0x100000000ul
|
||||
#define KVMPPC_RMAP_INDEX 0xfffffffful
|
||||
|
||||
/* Low-order bits in memslot->arch.slot_phys[] */
|
||||
#define KVMPPC_PAGE_ORDER_MASK 0x1f
|
||||
#define KVMPPC_PAGE_NO_CACHE HPTE_R_I /* 0x20 */
|
||||
#define KVMPPC_PAGE_WRITETHRU HPTE_R_W /* 0x40 */
|
||||
#define KVMPPC_GOT_PAGE 0x80
|
||||
|
||||
struct kvm_arch_memory_slot {
|
||||
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
|
||||
unsigned long *rmap;
|
||||
unsigned long *slot_phys;
|
||||
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
|
||||
};
|
||||
|
||||
|
@ -242,14 +230,12 @@ struct kvm_arch {
|
|||
struct kvm_rma_info *rma;
|
||||
unsigned long vrma_slb_v;
|
||||
int rma_setup_done;
|
||||
int using_mmu_notifiers;
|
||||
u32 hpt_order;
|
||||
atomic_t vcpus_running;
|
||||
u32 online_vcores;
|
||||
unsigned long hpt_npte;
|
||||
unsigned long hpt_mask;
|
||||
atomic_t hpte_mod_interest;
|
||||
spinlock_t slot_phys_lock;
|
||||
cpumask_t need_tlb_flush;
|
||||
int hpt_cma_alloc;
|
||||
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
|
||||
|
@ -297,6 +283,7 @@ struct kvmppc_vcore {
|
|||
struct list_head runnable_threads;
|
||||
spinlock_t lock;
|
||||
wait_queue_head_t wq;
|
||||
spinlock_t stoltb_lock; /* protects stolen_tb and preempt_tb */
|
||||
u64 stolen_tb;
|
||||
u64 preempt_tb;
|
||||
struct kvm_vcpu *runner;
|
||||
|
@ -308,6 +295,7 @@ struct kvmppc_vcore {
|
|||
ulong dpdes; /* doorbell state (POWER8) */
|
||||
void *mpp_buffer; /* Micro Partition Prefetch buffer */
|
||||
bool mpp_buffer_is_valid;
|
||||
ulong conferring_threads;
|
||||
};
|
||||
|
||||
#define VCORE_ENTRY_COUNT(vc) ((vc)->entry_exit_count & 0xff)
|
||||
|
@ -664,6 +652,8 @@ struct kvm_vcpu_arch {
|
|||
spinlock_t tbacct_lock;
|
||||
u64 busy_stolen;
|
||||
u64 busy_preempt;
|
||||
|
||||
u32 emul_inst;
|
||||
#endif
|
||||
};
|
||||
|
||||
|
|
|
@ -170,8 +170,6 @@ extern long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
|
|||
unsigned long ioba, unsigned long tce);
|
||||
extern long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
|
||||
unsigned long ioba);
|
||||
extern struct kvm_rma_info *kvm_alloc_rma(void);
|
||||
extern void kvm_release_rma(struct kvm_rma_info *ri);
|
||||
extern struct page *kvm_alloc_hpt(unsigned long nr_pages);
|
||||
extern void kvm_release_hpt(struct page *page, unsigned long nr_pages);
|
||||
extern int kvmppc_core_init_vm(struct kvm *kvm);
|
||||
|
|
|
@ -489,7 +489,6 @@ int main(void)
|
|||
DEFINE(KVM_HOST_LPID, offsetof(struct kvm, arch.host_lpid));
|
||||
DEFINE(KVM_HOST_LPCR, offsetof(struct kvm, arch.host_lpcr));
|
||||
DEFINE(KVM_HOST_SDR1, offsetof(struct kvm, arch.host_sdr1));
|
||||
DEFINE(KVM_TLBIE_LOCK, offsetof(struct kvm, arch.tlbie_lock));
|
||||
DEFINE(KVM_NEED_FLUSH, offsetof(struct kvm, arch.need_tlb_flush.bits));
|
||||
DEFINE(KVM_ENABLED_HCALLS, offsetof(struct kvm, arch.enabled_hcalls));
|
||||
DEFINE(KVM_LPCR, offsetof(struct kvm, arch.lpcr));
|
||||
|
@ -499,6 +498,7 @@ int main(void)
|
|||
DEFINE(VCPU_DAR, offsetof(struct kvm_vcpu, arch.shregs.dar));
|
||||
DEFINE(VCPU_VPA, offsetof(struct kvm_vcpu, arch.vpa.pinned_addr));
|
||||
DEFINE(VCPU_VPA_DIRTY, offsetof(struct kvm_vcpu, arch.vpa.dirty));
|
||||
DEFINE(VCPU_HEIR, offsetof(struct kvm_vcpu, arch.emul_inst));
|
||||
#endif
|
||||
#ifdef CONFIG_PPC_BOOK3S
|
||||
DEFINE(VCPU_VCPUID, offsetof(struct kvm_vcpu, vcpu_id));
|
||||
|
|
|
@ -172,6 +172,7 @@ config KVM_XICS
|
|||
depends on KVM_BOOK3S_64 && !KVM_MPIC
|
||||
select HAVE_KVM_IRQCHIP
|
||||
select HAVE_KVM_IRQFD
|
||||
default y
|
||||
---help---
|
||||
Include support for the XICS (eXternal Interrupt Controller
|
||||
Specification) interrupt controller architecture used on
|
||||
|
|
|
@ -64,14 +64,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
|
|||
{ NULL }
|
||||
};
|
||||
|
||||
void kvmppc_core_load_host_debugstate(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
}
|
||||
|
||||
void kvmppc_core_load_guest_debugstate(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
}
|
||||
|
||||
void kvmppc_unfixup_split_real(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
if (vcpu->arch.hflags & BOOK3S_HFLAG_SPLIT_HACK) {
|
||||
|
|
|
@ -78,11 +78,6 @@ static inline bool sr_kp(u32 sr_raw)
|
|||
return (sr_raw & 0x20000000) ? true: false;
|
||||
}
|
||||
|
||||
static inline bool sr_nx(u32 sr_raw)
|
||||
{
|
||||
return (sr_raw & 0x10000000) ? true: false;
|
||||
}
|
||||
|
||||
static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
|
||||
struct kvmppc_pte *pte, bool data,
|
||||
bool iswrite);
|
||||
|
|
|
@ -37,8 +37,7 @@
|
|||
#include <asm/ppc-opcode.h>
|
||||
#include <asm/cputable.h>
|
||||
|
||||
/* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
|
||||
#define MAX_LPID_970 63
|
||||
#include "trace_hv.h"
|
||||
|
||||
/* Power architecture requires HPT is at least 256kB */
|
||||
#define PPC_MIN_HPT_ORDER 18
|
||||
|
@ -229,14 +228,9 @@ int kvmppc_mmu_hv_init(void)
|
|||
if (!cpu_has_feature(CPU_FTR_HVMODE))
|
||||
return -EINVAL;
|
||||
|
||||
/* POWER7 has 10-bit LPIDs, PPC970 and e500mc have 6-bit LPIDs */
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_206)) {
|
||||
host_lpid = mfspr(SPRN_LPID); /* POWER7 */
|
||||
rsvd_lpid = LPID_RSVD;
|
||||
} else {
|
||||
host_lpid = 0; /* PPC970 */
|
||||
rsvd_lpid = MAX_LPID_970;
|
||||
}
|
||||
/* POWER7 has 10-bit LPIDs (12-bit in POWER8) */
|
||||
host_lpid = mfspr(SPRN_LPID);
|
||||
rsvd_lpid = LPID_RSVD;
|
||||
|
||||
kvmppc_init_lpid(rsvd_lpid + 1);
|
||||
|
||||
|
@ -259,130 +253,12 @@ static void kvmppc_mmu_book3s_64_hv_reset_msr(struct kvm_vcpu *vcpu)
|
|||
kvmppc_set_msr(vcpu, msr);
|
||||
}
|
||||
|
||||
/*
|
||||
* This is called to get a reference to a guest page if there isn't
|
||||
* one already in the memslot->arch.slot_phys[] array.
|
||||
*/
|
||||
static long kvmppc_get_guest_page(struct kvm *kvm, unsigned long gfn,
|
||||
struct kvm_memory_slot *memslot,
|
||||
unsigned long psize)
|
||||
{
|
||||
unsigned long start;
|
||||
long np, err;
|
||||
struct page *page, *hpage, *pages[1];
|
||||
unsigned long s, pgsize;
|
||||
unsigned long *physp;
|
||||
unsigned int is_io, got, pgorder;
|
||||
struct vm_area_struct *vma;
|
||||
unsigned long pfn, i, npages;
|
||||
|
||||
physp = memslot->arch.slot_phys;
|
||||
if (!physp)
|
||||
return -EINVAL;
|
||||
if (physp[gfn - memslot->base_gfn])
|
||||
return 0;
|
||||
|
||||
is_io = 0;
|
||||
got = 0;
|
||||
page = NULL;
|
||||
pgsize = psize;
|
||||
err = -EINVAL;
|
||||
start = gfn_to_hva_memslot(memslot, gfn);
|
||||
|
||||
/* Instantiate and get the page we want access to */
|
||||
np = get_user_pages_fast(start, 1, 1, pages);
|
||||
if (np != 1) {
|
||||
/* Look up the vma for the page */
|
||||
down_read(¤t->mm->mmap_sem);
|
||||
vma = find_vma(current->mm, start);
|
||||
if (!vma || vma->vm_start > start ||
|
||||
start + psize > vma->vm_end ||
|
||||
!(vma->vm_flags & VM_PFNMAP))
|
||||
goto up_err;
|
||||
is_io = hpte_cache_bits(pgprot_val(vma->vm_page_prot));
|
||||
pfn = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
|
||||
/* check alignment of pfn vs. requested page size */
|
||||
if (psize > PAGE_SIZE && (pfn & ((psize >> PAGE_SHIFT) - 1)))
|
||||
goto up_err;
|
||||
up_read(¤t->mm->mmap_sem);
|
||||
|
||||
} else {
|
||||
page = pages[0];
|
||||
got = KVMPPC_GOT_PAGE;
|
||||
|
||||
/* See if this is a large page */
|
||||
s = PAGE_SIZE;
|
||||
if (PageHuge(page)) {
|
||||
hpage = compound_head(page);
|
||||
s <<= compound_order(hpage);
|
||||
/* Get the whole large page if slot alignment is ok */
|
||||
if (s > psize && slot_is_aligned(memslot, s) &&
|
||||
!(memslot->userspace_addr & (s - 1))) {
|
||||
start &= ~(s - 1);
|
||||
pgsize = s;
|
||||
get_page(hpage);
|
||||
put_page(page);
|
||||
page = hpage;
|
||||
}
|
||||
}
|
||||
if (s < psize)
|
||||
goto out;
|
||||
pfn = page_to_pfn(page);
|
||||
}
|
||||
|
||||
npages = pgsize >> PAGE_SHIFT;
|
||||
pgorder = __ilog2(npages);
|
||||
physp += (gfn - memslot->base_gfn) & ~(npages - 1);
|
||||
spin_lock(&kvm->arch.slot_phys_lock);
|
||||
for (i = 0; i < npages; ++i) {
|
||||
if (!physp[i]) {
|
||||
physp[i] = ((pfn + i) << PAGE_SHIFT) +
|
||||
got + is_io + pgorder;
|
||||
got = 0;
|
||||
}
|
||||
}
|
||||
spin_unlock(&kvm->arch.slot_phys_lock);
|
||||
err = 0;
|
||||
|
||||
out:
|
||||
if (got)
|
||||
put_page(page);
|
||||
return err;
|
||||
|
||||
up_err:
|
||||
up_read(¤t->mm->mmap_sem);
|
||||
return err;
|
||||
}
|
||||
|
||||
long kvmppc_virtmode_do_h_enter(struct kvm *kvm, unsigned long flags,
|
||||
long pte_index, unsigned long pteh,
|
||||
unsigned long ptel, unsigned long *pte_idx_ret)
|
||||
{
|
||||
unsigned long psize, gpa, gfn;
|
||||
struct kvm_memory_slot *memslot;
|
||||
long ret;
|
||||
|
||||
if (kvm->arch.using_mmu_notifiers)
|
||||
goto do_insert;
|
||||
|
||||
psize = hpte_page_size(pteh, ptel);
|
||||
if (!psize)
|
||||
return H_PARAMETER;
|
||||
|
||||
pteh &= ~(HPTE_V_HVLOCK | HPTE_V_ABSENT | HPTE_V_VALID);
|
||||
|
||||
/* Find the memslot (if any) for this address */
|
||||
gpa = (ptel & HPTE_R_RPN) & ~(psize - 1);
|
||||
gfn = gpa >> PAGE_SHIFT;
|
||||
memslot = gfn_to_memslot(kvm, gfn);
|
||||
if (memslot && !(memslot->flags & KVM_MEMSLOT_INVALID)) {
|
||||
if (!slot_is_aligned(memslot, psize))
|
||||
return H_PARAMETER;
|
||||
if (kvmppc_get_guest_page(kvm, gfn, memslot, psize) < 0)
|
||||
return H_PARAMETER;
|
||||
}
|
||||
|
||||
do_insert:
|
||||
/* Protect linux PTE lookup from page table destruction */
|
||||
rcu_read_lock_sched(); /* this disables preemption too */
|
||||
ret = kvmppc_do_h_enter(kvm, flags, pte_index, pteh, ptel,
|
||||
|
@ -397,19 +273,6 @@ long kvmppc_virtmode_do_h_enter(struct kvm *kvm, unsigned long flags,
|
|||
|
||||
}
|
||||
|
||||
/*
|
||||
* We come here on a H_ENTER call from the guest when we are not
|
||||
* using mmu notifiers and we don't have the requested page pinned
|
||||
* already.
|
||||
*/
|
||||
long kvmppc_virtmode_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
|
||||
long pte_index, unsigned long pteh,
|
||||
unsigned long ptel)
|
||||
{
|
||||
return kvmppc_virtmode_do_h_enter(vcpu->kvm, flags, pte_index,
|
||||
pteh, ptel, &vcpu->arch.gpr[4]);
|
||||
}
|
||||
|
||||
static struct kvmppc_slb *kvmppc_mmu_book3s_hv_find_slbe(struct kvm_vcpu *vcpu,
|
||||
gva_t eaddr)
|
||||
{
|
||||
|
@ -494,7 +357,7 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
|
|||
gpte->may_execute = gpte->may_read && !(gr & (HPTE_R_N | HPTE_R_G));
|
||||
|
||||
/* Storage key permission check for POWER7 */
|
||||
if (data && virtmode && cpu_has_feature(CPU_FTR_ARCH_206)) {
|
||||
if (data && virtmode) {
|
||||
int amrfield = hpte_get_skey_perm(gr, vcpu->arch.amr);
|
||||
if (amrfield & 1)
|
||||
gpte->may_read = 0;
|
||||
|
@ -622,14 +485,13 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
gfn = gpa >> PAGE_SHIFT;
|
||||
memslot = gfn_to_memslot(kvm, gfn);
|
||||
|
||||
trace_kvm_page_fault_enter(vcpu, hpte, memslot, ea, dsisr);
|
||||
|
||||
/* No memslot means it's an emulated MMIO region */
|
||||
if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
|
||||
return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
|
||||
dsisr & DSISR_ISSTORE);
|
||||
|
||||
if (!kvm->arch.using_mmu_notifiers)
|
||||
return -EFAULT; /* should never get here */
|
||||
|
||||
/*
|
||||
* This should never happen, because of the slot_is_aligned()
|
||||
* check in kvmppc_do_h_enter().
|
||||
|
@ -641,6 +503,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
mmu_seq = kvm->mmu_notifier_seq;
|
||||
smp_rmb();
|
||||
|
||||
ret = -EFAULT;
|
||||
is_io = 0;
|
||||
pfn = 0;
|
||||
page = NULL;
|
||||
|
@ -664,7 +527,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
}
|
||||
up_read(¤t->mm->mmap_sem);
|
||||
if (!pfn)
|
||||
return -EFAULT;
|
||||
goto out_put;
|
||||
} else {
|
||||
page = pages[0];
|
||||
pfn = page_to_pfn(page);
|
||||
|
@ -694,14 +557,14 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
}
|
||||
}
|
||||
|
||||
ret = -EFAULT;
|
||||
if (psize > pte_size)
|
||||
goto out_put;
|
||||
|
||||
/* Check WIMG vs. the actual page we're accessing */
|
||||
if (!hpte_cache_flags_ok(r, is_io)) {
|
||||
if (is_io)
|
||||
return -EFAULT;
|
||||
goto out_put;
|
||||
|
||||
/*
|
||||
* Allow guest to map emulated device memory as
|
||||
* uncacheable, but actually make it cacheable.
|
||||
|
@ -765,6 +628,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
SetPageDirty(page);
|
||||
|
||||
out_put:
|
||||
trace_kvm_page_fault_exit(vcpu, hpte, ret);
|
||||
|
||||
if (page) {
|
||||
/*
|
||||
* We drop pages[0] here, not page because page might
|
||||
|
@ -895,8 +760,7 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
|
|||
psize = hpte_page_size(be64_to_cpu(hptep[0]), ptel);
|
||||
if ((be64_to_cpu(hptep[0]) & HPTE_V_VALID) &&
|
||||
hpte_rpn(ptel, psize) == gfn) {
|
||||
if (kvm->arch.using_mmu_notifiers)
|
||||
hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
|
||||
hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
|
||||
kvmppc_invalidate_hpte(kvm, hptep, i);
|
||||
/* Harvest R and C */
|
||||
rcbits = be64_to_cpu(hptep[1]) & (HPTE_R_R | HPTE_R_C);
|
||||
|
@ -914,15 +778,13 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
|
|||
|
||||
int kvm_unmap_hva_hv(struct kvm *kvm, unsigned long hva)
|
||||
{
|
||||
if (kvm->arch.using_mmu_notifiers)
|
||||
kvm_handle_hva(kvm, hva, kvm_unmap_rmapp);
|
||||
kvm_handle_hva(kvm, hva, kvm_unmap_rmapp);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int kvm_unmap_hva_range_hv(struct kvm *kvm, unsigned long start, unsigned long end)
|
||||
{
|
||||
if (kvm->arch.using_mmu_notifiers)
|
||||
kvm_handle_hva_range(kvm, start, end, kvm_unmap_rmapp);
|
||||
kvm_handle_hva_range(kvm, start, end, kvm_unmap_rmapp);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1004,8 +866,6 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
|
|||
|
||||
int kvm_age_hva_hv(struct kvm *kvm, unsigned long start, unsigned long end)
|
||||
{
|
||||
if (!kvm->arch.using_mmu_notifiers)
|
||||
return 0;
|
||||
return kvm_handle_hva_range(kvm, start, end, kvm_age_rmapp);
|
||||
}
|
||||
|
||||
|
@ -1042,15 +902,11 @@ static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
|
|||
|
||||
int kvm_test_age_hva_hv(struct kvm *kvm, unsigned long hva)
|
||||
{
|
||||
if (!kvm->arch.using_mmu_notifiers)
|
||||
return 0;
|
||||
return kvm_handle_hva(kvm, hva, kvm_test_age_rmapp);
|
||||
}
|
||||
|
||||
void kvm_set_spte_hva_hv(struct kvm *kvm, unsigned long hva, pte_t pte)
|
||||
{
|
||||
if (!kvm->arch.using_mmu_notifiers)
|
||||
return;
|
||||
kvm_handle_hva(kvm, hva, kvm_unmap_rmapp);
|
||||
}
|
||||
|
||||
|
@ -1117,8 +973,11 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
|
|||
}
|
||||
|
||||
/* Now check and modify the HPTE */
|
||||
if (!(hptep[0] & cpu_to_be64(HPTE_V_VALID)))
|
||||
if (!(hptep[0] & cpu_to_be64(HPTE_V_VALID))) {
|
||||
/* unlock and continue */
|
||||
hptep[0] &= ~cpu_to_be64(HPTE_V_HVLOCK);
|
||||
continue;
|
||||
}
|
||||
|
||||
/* need to make it temporarily absent so C is stable */
|
||||
hptep[0] |= cpu_to_be64(HPTE_V_ABSENT);
|
||||
|
@ -1206,35 +1065,17 @@ void *kvmppc_pin_guest_page(struct kvm *kvm, unsigned long gpa,
|
|||
struct page *page, *pages[1];
|
||||
int npages;
|
||||
unsigned long hva, offset;
|
||||
unsigned long pa;
|
||||
unsigned long *physp;
|
||||
int srcu_idx;
|
||||
|
||||
srcu_idx = srcu_read_lock(&kvm->srcu);
|
||||
memslot = gfn_to_memslot(kvm, gfn);
|
||||
if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
|
||||
goto err;
|
||||
if (!kvm->arch.using_mmu_notifiers) {
|
||||
physp = memslot->arch.slot_phys;
|
||||
if (!physp)
|
||||
goto err;
|
||||
physp += gfn - memslot->base_gfn;
|
||||
pa = *physp;
|
||||
if (!pa) {
|
||||
if (kvmppc_get_guest_page(kvm, gfn, memslot,
|
||||
PAGE_SIZE) < 0)
|
||||
goto err;
|
||||
pa = *physp;
|
||||
}
|
||||
page = pfn_to_page(pa >> PAGE_SHIFT);
|
||||
get_page(page);
|
||||
} else {
|
||||
hva = gfn_to_hva_memslot(memslot, gfn);
|
||||
npages = get_user_pages_fast(hva, 1, 1, pages);
|
||||
if (npages < 1)
|
||||
goto err;
|
||||
page = pages[0];
|
||||
}
|
||||
hva = gfn_to_hva_memslot(memslot, gfn);
|
||||
npages = get_user_pages_fast(hva, 1, 1, pages);
|
||||
if (npages < 1)
|
||||
goto err;
|
||||
page = pages[0];
|
||||
srcu_read_unlock(&kvm->srcu, srcu_idx);
|
||||
|
||||
offset = gpa & (PAGE_SIZE - 1);
|
||||
|
@ -1258,7 +1099,7 @@ void kvmppc_unpin_guest_page(struct kvm *kvm, void *va, unsigned long gpa,
|
|||
|
||||
put_page(page);
|
||||
|
||||
if (!dirty || !kvm->arch.using_mmu_notifiers)
|
||||
if (!dirty)
|
||||
return;
|
||||
|
||||
/* We need to mark this page dirty in the rmap chain */
|
||||
|
@ -1539,9 +1380,15 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
|
|||
hptp = (__be64 *)(kvm->arch.hpt_virt + (i * HPTE_SIZE));
|
||||
lbuf = (unsigned long __user *)buf;
|
||||
for (j = 0; j < hdr.n_valid; ++j) {
|
||||
__be64 hpte_v;
|
||||
__be64 hpte_r;
|
||||
|
||||
err = -EFAULT;
|
||||
if (__get_user(v, lbuf) || __get_user(r, lbuf + 1))
|
||||
if (__get_user(hpte_v, lbuf) ||
|
||||
__get_user(hpte_r, lbuf + 1))
|
||||
goto out;
|
||||
v = be64_to_cpu(hpte_v);
|
||||
r = be64_to_cpu(hpte_r);
|
||||
err = -EINVAL;
|
||||
if (!(v & HPTE_V_VALID))
|
||||
goto out;
|
||||
|
@ -1652,10 +1499,7 @@ void kvmppc_mmu_book3s_hv_init(struct kvm_vcpu *vcpu)
|
|||
{
|
||||
struct kvmppc_mmu *mmu = &vcpu->arch.mmu;
|
||||
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
vcpu->arch.slb_nr = 32; /* POWER7 */
|
||||
else
|
||||
vcpu->arch.slb_nr = 64;
|
||||
vcpu->arch.slb_nr = 32; /* POWER7/POWER8 */
|
||||
|
||||
mmu->xlate = kvmppc_mmu_book3s_64_hv_xlate;
|
||||
mmu->reset_msr = kvmppc_mmu_book3s_64_hv_reset_msr;
|
||||
|
|
|
@ -58,6 +58,9 @@
|
|||
|
||||
#include "book3s.h"
|
||||
|
||||
#define CREATE_TRACE_POINTS
|
||||
#include "trace_hv.h"
|
||||
|
||||
/* #define EXIT_DEBUG */
|
||||
/* #define EXIT_DEBUG_SIMPLE */
|
||||
/* #define EXIT_DEBUG_INT */
|
||||
|
@@ -135,11 +138,10 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu)
 * stolen.
 *
 * Updates to busy_stolen are protected by arch.tbacct_lock;
 * updates to vc->stolen_tb are protected by the arch.tbacct_lock
 * of the vcpu that has taken responsibility for running the vcore
 * (i.e. vc->runner). The stolen times are measured in units of
 * timebase ticks. (Note that the != TB_NIL checks below are
 * purely defensive; they should never fail.)
 * updates to vc->stolen_tb are protected by the vcore->stoltb_lock
 * lock. The stolen times are measured in units of timebase ticks.
 * (Note that the != TB_NIL checks below are purely defensive;
 * they should never fail.)
 */

static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu)

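The comment rewrite above tracks the new locking scheme: vc->preempt_tb and vc->stolen_tb are now guarded by the dedicated vcore->stoltb_lock rather than the runner's tbacct_lock. A minimal sketch of that accounting pattern, with a pthread mutex standing in for the spinlock and a plain integer standing in for the timebase register (all names here are illustrative, not the kernel's):

#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define TB_NIL	(~(uint64_t)0)

static pthread_mutex_t stoltb_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t stolen_tb;		/* total ticks the vcore sat preempted */
static uint64_t preempt_tb = TB_NIL;	/* when the current preemption began */

/* On preemption, remember the current timebase. */
static void vcore_preempted(uint64_t now)
{
	pthread_mutex_lock(&stoltb_lock);
	preempt_tb = now;
	pthread_mutex_unlock(&stoltb_lock);
}

/* On resume, fold the preempted interval into stolen_tb. */
static void vcore_resumed(uint64_t now)
{
	pthread_mutex_lock(&stoltb_lock);
	if (preempt_tb != TB_NIL) {
		stolen_tb += now - preempt_tb;
		preempt_tb = TB_NIL;
	}
	pthread_mutex_unlock(&stoltb_lock);
}

int main(void)
{
	vcore_preempted(100);
	vcore_resumed(160);
	vcore_preempted(200);
	vcore_resumed(230);
	assert(stolen_tb == 90);	/* 60 + 30 ticks accounted as stolen */
	return 0;
}
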
@ -147,12 +149,21 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu)
|
|||
struct kvmppc_vcore *vc = vcpu->arch.vcore;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
|
||||
if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE &&
|
||||
vc->preempt_tb != TB_NIL) {
|
||||
vc->stolen_tb += mftb() - vc->preempt_tb;
|
||||
vc->preempt_tb = TB_NIL;
|
||||
/*
|
||||
* We can test vc->runner without taking the vcore lock,
|
||||
* because only this task ever sets vc->runner to this
|
||||
* vcpu, and once it is set to this vcpu, only this task
|
||||
* ever sets it to NULL.
|
||||
*/
|
||||
if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE) {
|
||||
spin_lock_irqsave(&vc->stoltb_lock, flags);
|
||||
if (vc->preempt_tb != TB_NIL) {
|
||||
vc->stolen_tb += mftb() - vc->preempt_tb;
|
||||
vc->preempt_tb = TB_NIL;
|
||||
}
|
||||
spin_unlock_irqrestore(&vc->stoltb_lock, flags);
|
||||
}
|
||||
spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
|
||||
if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST &&
|
||||
vcpu->arch.busy_preempt != TB_NIL) {
|
||||
vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt;
|
||||
|
@ -166,9 +177,12 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu)
|
|||
struct kvmppc_vcore *vc = vcpu->arch.vcore;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
|
||||
if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE)
|
||||
if (vc->runner == vcpu && vc->vcore_state != VCORE_INACTIVE) {
|
||||
spin_lock_irqsave(&vc->stoltb_lock, flags);
|
||||
vc->preempt_tb = mftb();
|
||||
spin_unlock_irqrestore(&vc->stoltb_lock, flags);
|
||||
}
|
||||
spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
|
||||
if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST)
|
||||
vcpu->arch.busy_preempt = mftb();
|
||||
spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
|
||||
|
@ -191,9 +205,6 @@ int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
|
|||
struct kvmppc_vcore *vc = vcpu->arch.vcore;
|
||||
|
||||
if (arch_compat) {
|
||||
if (!cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
return -EINVAL; /* 970 has no compat mode support */
|
||||
|
||||
switch (arch_compat) {
|
||||
case PVR_ARCH_205:
|
||||
/*
|
||||
|
@ -505,25 +516,14 @@ static void kvmppc_update_vpas(struct kvm_vcpu *vcpu)
|
|||
static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now)
|
||||
{
|
||||
u64 p;
|
||||
unsigned long flags;
|
||||
|
||||
/*
|
||||
* If we are the task running the vcore, then since we hold
|
||||
* the vcore lock, we can't be preempted, so stolen_tb/preempt_tb
|
||||
* can't be updated, so we don't need the tbacct_lock.
|
||||
* If the vcore is inactive, it can't become active (since we
|
||||
* hold the vcore lock), so the vcpu load/put functions won't
|
||||
* update stolen_tb/preempt_tb, and we don't need tbacct_lock.
|
||||
*/
|
||||
spin_lock_irqsave(&vc->stoltb_lock, flags);
|
||||
p = vc->stolen_tb;
|
||||
if (vc->vcore_state != VCORE_INACTIVE &&
|
||||
vc->runner->arch.run_task != current) {
|
||||
spin_lock_irq(&vc->runner->arch.tbacct_lock);
|
||||
p = vc->stolen_tb;
|
||||
if (vc->preempt_tb != TB_NIL)
|
||||
p += now - vc->preempt_tb;
|
||||
spin_unlock_irq(&vc->runner->arch.tbacct_lock);
|
||||
} else {
|
||||
p = vc->stolen_tb;
|
||||
}
|
||||
vc->preempt_tb != TB_NIL)
|
||||
p += now - vc->preempt_tb;
|
||||
spin_unlock_irqrestore(&vc->stoltb_lock, flags);
|
||||
return p;
|
||||
}
|
||||
|
||||
|
@@ -607,10 +607,45 @@ static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
	}
 }

static int kvm_arch_vcpu_yield_to(struct kvm_vcpu *target)
{
	struct kvmppc_vcore *vcore = target->arch.vcore;

	/*
	 * We expect to have been called by the real mode handler
	 * (kvmppc_rm_h_confer()) which would have directly returned
	 * H_SUCCESS if the source vcore wasn't idle (e.g. if it may
	 * have useful work to do and should not confer) so we don't
	 * recheck that here.
	 */

	spin_lock(&vcore->lock);
	if (target->arch.state == KVMPPC_VCPU_RUNNABLE &&
	    vcore->vcore_state != VCORE_INACTIVE)
		target = vcore->runner;
	spin_unlock(&vcore->lock);

	return kvm_vcpu_yield_to(target);
}

static int kvmppc_get_yield_count(struct kvm_vcpu *vcpu)
{
	int yield_count = 0;
	struct lppaca *lppaca;

	spin_lock(&vcpu->arch.vpa_update_lock);
	lppaca = (struct lppaca *)vcpu->arch.vpa.pinned_addr;
	if (lppaca)
		yield_count = lppaca->yield_count;
	spin_unlock(&vcpu->arch.vpa_update_lock);
	return yield_count;
}

 int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 {
	unsigned long req = kvmppc_get_gpr(vcpu, 3);
	unsigned long target, ret = H_SUCCESS;
	int yield_count;
	struct kvm_vcpu *tvcpu;
	int idx, rc;

@@ -619,14 +654,6 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
		return RESUME_HOST;

	switch (req) {
	case H_ENTER:
		idx = srcu_read_lock(&vcpu->kvm->srcu);
		ret = kvmppc_virtmode_h_enter(vcpu, kvmppc_get_gpr(vcpu, 4),
					kvmppc_get_gpr(vcpu, 5),
					kvmppc_get_gpr(vcpu, 6),
					kvmppc_get_gpr(vcpu, 7));
		srcu_read_unlock(&vcpu->kvm->srcu, idx);
		break;
	case H_CEDE:
		break;
	case H_PROD:

@@ -654,7 +681,10 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
			ret = H_PARAMETER;
			break;
		}
		kvm_vcpu_yield_to(tvcpu);
		yield_count = kvmppc_get_gpr(vcpu, 5);
		if (kvmppc_get_yield_count(tvcpu) != yield_count)
			break;
		kvm_arch_vcpu_yield_to(tvcpu);
		break;
	case H_REGISTER_VPA:
		ret = do_h_register_vpa(vcpu, kvmppc_get_gpr(vcpu, 4),

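In the reworked H_CONFER path above, the host only yields to the target vcpu when the yield count the guest passed in r5 still matches the count in the target's lppaca; if the target has run since the guest sampled it, conferring would be pointless. A small stand-in for that check in isolation (the function and values are illustrative, not the kernel's API):

#include <assert.h>
#include <stdint.h>

/* Confer only if the target still shows the yield count the guest saw. */
static int should_confer(uint32_t current_yield_count, uint32_t guest_seen_count)
{
	return current_yield_count == guest_seen_count;
}

int main(void)
{
	assert(should_confer(42, 42) == 1);	/* target has not run since the guest looked */
	assert(should_confer(43, 42) == 0);	/* target already ran; do not bother yielding */
	return 0;
}
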
@ -769,6 +799,8 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
vcpu->stat.ext_intr_exits++;
|
||||
r = RESUME_GUEST;
|
||||
break;
|
||||
/* HMI is hypervisor interrupt and host has handled it. Resume guest.*/
|
||||
case BOOK3S_INTERRUPT_HMI:
|
||||
case BOOK3S_INTERRUPT_PERFMON:
|
||||
r = RESUME_GUEST;
|
||||
break;
|
||||
|
@ -837,6 +869,10 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
* Accordingly return to Guest or Host.
|
||||
*/
|
||||
case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
|
||||
if (vcpu->arch.emul_inst != KVM_INST_FETCH_FAILED)
|
||||
vcpu->arch.last_inst = kvmppc_need_byteswap(vcpu) ?
|
||||
swab32(vcpu->arch.emul_inst) :
|
||||
vcpu->arch.emul_inst;
|
||||
if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) {
|
||||
r = kvmppc_emulate_debug_inst(run, vcpu);
|
||||
} else {
|
||||
|
@ -1357,6 +1393,7 @@ static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int core)
|
|||
|
||||
INIT_LIST_HEAD(&vcore->runnable_threads);
|
||||
spin_lock_init(&vcore->lock);
|
||||
spin_lock_init(&vcore->stoltb_lock);
|
||||
init_waitqueue_head(&vcore->wq);
|
||||
vcore->preempt_tb = TB_NIL;
|
||||
vcore->lpcr = kvm->arch.lpcr;
|
||||
|
@ -1694,9 +1731,11 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
|
|||
vc->n_woken = 0;
|
||||
vc->nap_count = 0;
|
||||
vc->entry_exit_count = 0;
|
||||
vc->preempt_tb = TB_NIL;
|
||||
vc->vcore_state = VCORE_STARTING;
|
||||
vc->in_guest = 0;
|
||||
vc->napping_threads = 0;
|
||||
vc->conferring_threads = 0;
|
||||
|
||||
/*
|
||||
* Updating any of the vpas requires calling kvmppc_pin_guest_page,
|
||||
|
@ -1726,6 +1765,7 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
|
|||
list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
|
||||
kvmppc_start_thread(vcpu);
|
||||
kvmppc_create_dtl_entry(vcpu, vc);
|
||||
trace_kvm_guest_enter(vcpu);
|
||||
}
|
||||
|
||||
/* Set this explicitly in case thread 0 doesn't have a vcpu */
|
||||
|
@ -1734,6 +1774,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
|
|||
|
||||
vc->vcore_state = VCORE_RUNNING;
|
||||
preempt_disable();
|
||||
|
||||
trace_kvmppc_run_core(vc, 0);
|
||||
|
||||
spin_unlock(&vc->lock);
|
||||
|
||||
kvm_guest_enter();
|
||||
|
@ -1779,6 +1822,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
|
|||
kvmppc_core_pending_dec(vcpu))
|
||||
kvmppc_core_dequeue_dec(vcpu);
|
||||
|
||||
trace_kvm_guest_exit(vcpu);
|
||||
|
||||
ret = RESUME_GUEST;
|
||||
if (vcpu->arch.trap)
|
||||
ret = kvmppc_handle_exit_hv(vcpu->arch.kvm_run, vcpu,
|
||||
|
@ -1804,6 +1849,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
|
|||
wake_up(&vcpu->arch.cpu_run);
|
||||
}
|
||||
}
|
||||
|
||||
trace_kvmppc_run_core(vc, 1);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -1826,15 +1873,37 @@ static void kvmppc_wait_for_exec(struct kvm_vcpu *vcpu, int wait_state)
|
|||
*/
|
||||
static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
|
||||
{
|
||||
struct kvm_vcpu *vcpu;
|
||||
int do_sleep = 1;
|
||||
|
||||
DEFINE_WAIT(wait);
|
||||
|
||||
prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
|
||||
|
||||
/*
|
||||
* Check one last time for pending exceptions and ceded state after
|
||||
* we put ourselves on the wait queue
|
||||
*/
|
||||
list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
|
||||
if (vcpu->arch.pending_exceptions || !vcpu->arch.ceded) {
|
||||
do_sleep = 0;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!do_sleep) {
|
||||
finish_wait(&vc->wq, &wait);
|
||||
return;
|
||||
}
|
||||
|
||||
vc->vcore_state = VCORE_SLEEPING;
|
||||
trace_kvmppc_vcore_blocked(vc, 0);
|
||||
spin_unlock(&vc->lock);
|
||||
schedule();
|
||||
finish_wait(&vc->wq, &wait);
|
||||
spin_lock(&vc->lock);
|
||||
vc->vcore_state = VCORE_INACTIVE;
|
||||
trace_kvmppc_vcore_blocked(vc, 1);
|
||||
}
|
||||
|
||||
static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
|
||||
|
@ -1843,6 +1912,8 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
|
|||
struct kvmppc_vcore *vc;
|
||||
struct kvm_vcpu *v, *vn;
|
||||
|
||||
trace_kvmppc_run_vcpu_enter(vcpu);
|
||||
|
||||
kvm_run->exit_reason = 0;
|
||||
vcpu->arch.ret = RESUME_GUEST;
|
||||
vcpu->arch.trap = 0;
|
||||
|
@ -1872,6 +1943,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
|
|||
VCORE_EXIT_COUNT(vc) == 0) {
|
||||
kvmppc_create_dtl_entry(vcpu, vc);
|
||||
kvmppc_start_thread(vcpu);
|
||||
trace_kvm_guest_enter(vcpu);
|
||||
} else if (vc->vcore_state == VCORE_SLEEPING) {
|
||||
wake_up(&vc->wq);
|
||||
}
|
||||
|
@ -1936,6 +2008,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
|
|||
wake_up(&v->arch.cpu_run);
|
||||
}
|
||||
|
||||
trace_kvmppc_run_vcpu_exit(vcpu, kvm_run);
|
||||
spin_unlock(&vc->lock);
|
||||
return vcpu->arch.ret;
|
||||
}
|
||||
|
@ -1962,7 +2035,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
|
|||
/* Order vcpus_running vs. rma_setup_done, see kvmppc_alloc_reset_hpt */
|
||||
smp_mb();
|
||||
|
||||
/* On the first time here, set up HTAB and VRMA or RMA */
|
||||
/* On the first time here, set up HTAB and VRMA */
|
||||
if (!vcpu->kvm->arch.rma_setup_done) {
|
||||
r = kvmppc_hv_setup_htab_rma(vcpu);
|
||||
if (r)
|
||||
|
@ -1981,7 +2054,9 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
|
|||
|
||||
if (run->exit_reason == KVM_EXIT_PAPR_HCALL &&
|
||||
!(vcpu->arch.shregs.msr & MSR_PR)) {
|
||||
trace_kvm_hcall_enter(vcpu);
|
||||
r = kvmppc_pseries_do_hcall(vcpu);
|
||||
trace_kvm_hcall_exit(vcpu, r);
|
||||
kvmppc_core_prepare_to_enter(vcpu);
|
||||
} else if (r == RESUME_PAGE_FAULT) {
|
||||
srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
|
||||
|
@ -1997,98 +2072,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
|
|||
return r;
|
||||
}
|
||||
|
||||
|
||||
/* Work out RMLS (real mode limit selector) field value for a given RMA size.
|
||||
Assumes POWER7 or PPC970. */
|
||||
static inline int lpcr_rmls(unsigned long rma_size)
|
||||
{
|
||||
switch (rma_size) {
|
||||
case 32ul << 20: /* 32 MB */
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
return 8; /* only supported on POWER7 */
|
||||
return -1;
|
||||
case 64ul << 20: /* 64 MB */
|
||||
return 3;
|
||||
case 128ul << 20: /* 128 MB */
|
||||
return 7;
|
||||
case 256ul << 20: /* 256 MB */
|
||||
return 4;
|
||||
case 1ul << 30: /* 1 GB */
|
||||
return 2;
|
||||
case 16ul << 30: /* 16 GB */
|
||||
return 1;
|
||||
case 256ul << 30: /* 256 GB */
|
||||
return 0;
|
||||
default:
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
static int kvm_rma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
|
||||
{
|
||||
struct page *page;
|
||||
struct kvm_rma_info *ri = vma->vm_file->private_data;
|
||||
|
||||
if (vmf->pgoff >= kvm_rma_pages)
|
||||
return VM_FAULT_SIGBUS;
|
||||
|
||||
page = pfn_to_page(ri->base_pfn + vmf->pgoff);
|
||||
get_page(page);
|
||||
vmf->page = page;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct vm_operations_struct kvm_rma_vm_ops = {
|
||||
.fault = kvm_rma_fault,
|
||||
};
|
||||
|
||||
static int kvm_rma_mmap(struct file *file, struct vm_area_struct *vma)
|
||||
{
|
||||
vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
|
||||
vma->vm_ops = &kvm_rma_vm_ops;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int kvm_rma_release(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct kvm_rma_info *ri = filp->private_data;
|
||||
|
||||
kvm_release_rma(ri);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct file_operations kvm_rma_fops = {
|
||||
.mmap = kvm_rma_mmap,
|
||||
.release = kvm_rma_release,
|
||||
};
|
||||
|
||||
static long kvm_vm_ioctl_allocate_rma(struct kvm *kvm,
|
||||
struct kvm_allocate_rma *ret)
|
||||
{
|
||||
long fd;
|
||||
struct kvm_rma_info *ri;
|
||||
/*
|
||||
* Only do this on PPC970 in HV mode
|
||||
*/
|
||||
if (!cpu_has_feature(CPU_FTR_HVMODE) ||
|
||||
!cpu_has_feature(CPU_FTR_ARCH_201))
|
||||
return -EINVAL;
|
||||
|
||||
if (!kvm_rma_pages)
|
||||
return -EINVAL;
|
||||
|
||||
ri = kvm_alloc_rma();
|
||||
if (!ri)
|
||||
return -ENOMEM;
|
||||
|
||||
fd = anon_inode_getfd("kvm-rma", &kvm_rma_fops, ri, O_RDWR | O_CLOEXEC);
|
||||
if (fd < 0)
|
||||
kvm_release_rma(ri);
|
||||
|
||||
ret->rma_size = kvm_rma_pages << PAGE_SHIFT;
|
||||
return fd;
|
||||
}
|
||||
|
||||
static void kvmppc_add_seg_page_size(struct kvm_ppc_one_seg_page_size **sps,
|
||||
int linux_psize)
|
||||
{
|
||||
|
@ -2167,26 +2150,6 @@ out:
|
|||
return r;
|
||||
}
|
||||
|
||||
static void unpin_slot(struct kvm_memory_slot *memslot)
|
||||
{
|
||||
unsigned long *physp;
|
||||
unsigned long j, npages, pfn;
|
||||
struct page *page;
|
||||
|
||||
physp = memslot->arch.slot_phys;
|
||||
npages = memslot->npages;
|
||||
if (!physp)
|
||||
return;
|
||||
for (j = 0; j < npages; j++) {
|
||||
if (!(physp[j] & KVMPPC_GOT_PAGE))
|
||||
continue;
|
||||
pfn = physp[j] >> PAGE_SHIFT;
|
||||
page = pfn_to_page(pfn);
|
||||
SetPageDirty(page);
|
||||
put_page(page);
|
||||
}
|
||||
}
|
||||
|
||||
static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *free,
|
||||
struct kvm_memory_slot *dont)
|
||||
{
|
||||
|
@ -2194,11 +2157,6 @@ static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *free,
|
|||
vfree(free->arch.rmap);
|
||||
free->arch.rmap = NULL;
|
||||
}
|
||||
if (!dont || free->arch.slot_phys != dont->arch.slot_phys) {
|
||||
unpin_slot(free);
|
||||
vfree(free->arch.slot_phys);
|
||||
free->arch.slot_phys = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static int kvmppc_core_create_memslot_hv(struct kvm_memory_slot *slot,
|
||||
|
@ -2207,7 +2165,6 @@ static int kvmppc_core_create_memslot_hv(struct kvm_memory_slot *slot,
|
|||
slot->arch.rmap = vzalloc(npages * sizeof(*slot->arch.rmap));
|
||||
if (!slot->arch.rmap)
|
||||
return -ENOMEM;
|
||||
slot->arch.slot_phys = NULL;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -2216,17 +2173,6 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
|
|||
struct kvm_memory_slot *memslot,
|
||||
struct kvm_userspace_memory_region *mem)
|
||||
{
|
||||
unsigned long *phys;
|
||||
|
||||
/* Allocate a slot_phys array if needed */
|
||||
phys = memslot->arch.slot_phys;
|
||||
if (!kvm->arch.using_mmu_notifiers && !phys && memslot->npages) {
|
||||
phys = vzalloc(memslot->npages * sizeof(unsigned long));
|
||||
if (!phys)
|
||||
return -ENOMEM;
|
||||
memslot->arch.slot_phys = phys;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -2284,17 +2230,11 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
|
|||
{
|
||||
int err = 0;
|
||||
struct kvm *kvm = vcpu->kvm;
|
||||
struct kvm_rma_info *ri = NULL;
|
||||
unsigned long hva;
|
||||
struct kvm_memory_slot *memslot;
|
||||
struct vm_area_struct *vma;
|
||||
unsigned long lpcr = 0, senc;
|
||||
unsigned long lpcr_mask = 0;
|
||||
unsigned long psize, porder;
|
||||
unsigned long rma_size;
|
||||
unsigned long rmls;
|
||||
unsigned long *physp;
|
||||
unsigned long i, npages;
|
||||
int srcu_idx;
|
||||
|
||||
mutex_lock(&kvm->lock);
|
||||
|
@ -2329,88 +2269,25 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
|
|||
psize = vma_kernel_pagesize(vma);
|
||||
porder = __ilog2(psize);
|
||||
|
||||
/* Is this one of our preallocated RMAs? */
|
||||
if (vma->vm_file && vma->vm_file->f_op == &kvm_rma_fops &&
|
||||
hva == vma->vm_start)
|
||||
ri = vma->vm_file->private_data;
|
||||
|
||||
up_read(¤t->mm->mmap_sem);
|
||||
|
||||
if (!ri) {
|
||||
/* On POWER7, use VRMA; on PPC970, give up */
|
||||
err = -EPERM;
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_201)) {
|
||||
pr_err("KVM: CPU requires an RMO\n");
|
||||
goto out_srcu;
|
||||
}
|
||||
/* We can handle 4k, 64k or 16M pages in the VRMA */
|
||||
err = -EINVAL;
|
||||
if (!(psize == 0x1000 || psize == 0x10000 ||
|
||||
psize == 0x1000000))
|
||||
goto out_srcu;
|
||||
|
||||
/* We can handle 4k, 64k or 16M pages in the VRMA */
|
||||
err = -EINVAL;
|
||||
if (!(psize == 0x1000 || psize == 0x10000 ||
|
||||
psize == 0x1000000))
|
||||
goto out_srcu;
|
||||
/* Update VRMASD field in the LPCR */
|
||||
senc = slb_pgsize_encoding(psize);
|
||||
kvm->arch.vrma_slb_v = senc | SLB_VSID_B_1T |
|
||||
(VRMA_VSID << SLB_VSID_SHIFT_1T);
|
||||
/* the -4 is to account for senc values starting at 0x10 */
|
||||
lpcr = senc << (LPCR_VRMASD_SH - 4);
|
||||
|
||||
/* Update VRMASD field in the LPCR */
|
||||
senc = slb_pgsize_encoding(psize);
|
||||
kvm->arch.vrma_slb_v = senc | SLB_VSID_B_1T |
|
||||
(VRMA_VSID << SLB_VSID_SHIFT_1T);
|
||||
lpcr_mask = LPCR_VRMASD;
|
||||
/* the -4 is to account for senc values starting at 0x10 */
|
||||
lpcr = senc << (LPCR_VRMASD_SH - 4);
|
||||
/* Create HPTEs in the hash page table for the VRMA */
|
||||
kvmppc_map_vrma(vcpu, memslot, porder);
|
||||
|
||||
/* Create HPTEs in the hash page table for the VRMA */
|
||||
kvmppc_map_vrma(vcpu, memslot, porder);
|
||||
|
||||
} else {
|
||||
/* Set up to use an RMO region */
|
||||
rma_size = kvm_rma_pages;
|
||||
if (rma_size > memslot->npages)
|
||||
rma_size = memslot->npages;
|
||||
rma_size <<= PAGE_SHIFT;
|
||||
rmls = lpcr_rmls(rma_size);
|
||||
err = -EINVAL;
|
||||
if ((long)rmls < 0) {
|
||||
pr_err("KVM: Can't use RMA of 0x%lx bytes\n", rma_size);
|
||||
goto out_srcu;
|
||||
}
|
||||
atomic_inc(&ri->use_count);
|
||||
kvm->arch.rma = ri;
|
||||
|
||||
/* Update LPCR and RMOR */
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_201)) {
|
||||
/* PPC970; insert RMLS value (split field) in HID4 */
|
||||
lpcr_mask = (1ul << HID4_RMLS0_SH) |
|
||||
(3ul << HID4_RMLS2_SH) | HID4_RMOR;
|
||||
lpcr = ((rmls >> 2) << HID4_RMLS0_SH) |
|
||||
((rmls & 3) << HID4_RMLS2_SH);
|
||||
/* RMOR is also in HID4 */
|
||||
lpcr |= ((ri->base_pfn >> (26 - PAGE_SHIFT)) & 0xffff)
|
||||
<< HID4_RMOR_SH;
|
||||
} else {
|
||||
/* POWER7 */
|
||||
lpcr_mask = LPCR_VPM0 | LPCR_VRMA_L | LPCR_RMLS;
|
||||
lpcr = rmls << LPCR_RMLS_SH;
|
||||
kvm->arch.rmor = ri->base_pfn << PAGE_SHIFT;
|
||||
}
|
||||
pr_info("KVM: Using RMO at %lx size %lx (LPCR = %lx)\n",
|
||||
ri->base_pfn << PAGE_SHIFT, rma_size, lpcr);
|
||||
|
||||
/* Initialize phys addrs of pages in RMO */
|
||||
npages = kvm_rma_pages;
|
||||
porder = __ilog2(npages);
|
||||
physp = memslot->arch.slot_phys;
|
||||
if (physp) {
|
||||
if (npages > memslot->npages)
|
||||
npages = memslot->npages;
|
||||
spin_lock(&kvm->arch.slot_phys_lock);
|
||||
for (i = 0; i < npages; ++i)
|
||||
physp[i] = ((ri->base_pfn + i) << PAGE_SHIFT) +
|
||||
porder;
|
||||
spin_unlock(&kvm->arch.slot_phys_lock);
|
||||
}
|
||||
}
|
||||
|
||||
kvmppc_update_lpcr(kvm, lpcr, lpcr_mask);
|
||||
kvmppc_update_lpcr(kvm, lpcr, LPCR_VRMASD);
|
||||
|
||||
/* Order updates to kvm->arch.lpcr etc. vs. rma_setup_done */
|
||||
smp_wmb();
|
||||
|
@ -2449,35 +2326,21 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
|
|||
memcpy(kvm->arch.enabled_hcalls, default_enabled_hcalls,
|
||||
sizeof(kvm->arch.enabled_hcalls));
|
||||
|
||||
kvm->arch.rma = NULL;
|
||||
|
||||
kvm->arch.host_sdr1 = mfspr(SPRN_SDR1);
|
||||
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_201)) {
|
||||
/* PPC970; HID4 is effectively the LPCR */
|
||||
kvm->arch.host_lpid = 0;
|
||||
kvm->arch.host_lpcr = lpcr = mfspr(SPRN_HID4);
|
||||
lpcr &= ~((3 << HID4_LPID1_SH) | (0xful << HID4_LPID5_SH));
|
||||
lpcr |= ((lpid >> 4) << HID4_LPID1_SH) |
|
||||
((lpid & 0xf) << HID4_LPID5_SH);
|
||||
} else {
|
||||
/* POWER7; init LPCR for virtual RMA mode */
|
||||
kvm->arch.host_lpid = mfspr(SPRN_LPID);
|
||||
kvm->arch.host_lpcr = lpcr = mfspr(SPRN_LPCR);
|
||||
lpcr &= LPCR_PECE | LPCR_LPES;
|
||||
lpcr |= (4UL << LPCR_DPFD_SH) | LPCR_HDICE |
|
||||
LPCR_VPM0 | LPCR_VPM1;
|
||||
kvm->arch.vrma_slb_v = SLB_VSID_B_1T |
|
||||
(VRMA_VSID << SLB_VSID_SHIFT_1T);
|
||||
/* On POWER8 turn on online bit to enable PURR/SPURR */
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_207S))
|
||||
lpcr |= LPCR_ONL;
|
||||
}
|
||||
/* Init LPCR for virtual RMA mode */
|
||||
kvm->arch.host_lpid = mfspr(SPRN_LPID);
|
||||
kvm->arch.host_lpcr = lpcr = mfspr(SPRN_LPCR);
|
||||
lpcr &= LPCR_PECE | LPCR_LPES;
|
||||
lpcr |= (4UL << LPCR_DPFD_SH) | LPCR_HDICE |
|
||||
LPCR_VPM0 | LPCR_VPM1;
|
||||
kvm->arch.vrma_slb_v = SLB_VSID_B_1T |
|
||||
(VRMA_VSID << SLB_VSID_SHIFT_1T);
|
||||
/* On POWER8 turn on online bit to enable PURR/SPURR */
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_207S))
|
||||
lpcr |= LPCR_ONL;
|
||||
kvm->arch.lpcr = lpcr;
|
||||
|
||||
kvm->arch.using_mmu_notifiers = !!cpu_has_feature(CPU_FTR_ARCH_206);
|
||||
spin_lock_init(&kvm->arch.slot_phys_lock);
|
||||
|
||||
/*
|
||||
* Track that we now have a HV mode VM active. This blocks secondary
|
||||
* CPU threads from coming online.
|
||||
|
@ -2507,10 +2370,6 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
|
|||
kvm_hv_vm_deactivated();
|
||||
|
||||
kvmppc_free_vcores(kvm);
|
||||
if (kvm->arch.rma) {
|
||||
kvm_release_rma(kvm->arch.rma);
|
||||
kvm->arch.rma = NULL;
|
||||
}
|
||||
|
||||
kvmppc_free_hpt(kvm);
|
||||
}
|
||||
|
@ -2536,7 +2395,8 @@ static int kvmppc_core_emulate_mfspr_hv(struct kvm_vcpu *vcpu, int sprn,
|
|||
|
||||
static int kvmppc_core_check_processor_compat_hv(void)
|
||||
{
|
||||
if (!cpu_has_feature(CPU_FTR_HVMODE))
|
||||
if (!cpu_has_feature(CPU_FTR_HVMODE) ||
|
||||
!cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
return -EIO;
|
||||
return 0;
|
||||
}
|
||||
|
@ -2550,16 +2410,6 @@ static long kvm_arch_vm_ioctl_hv(struct file *filp,
|
|||
|
||||
switch (ioctl) {
|
||||
|
||||
case KVM_ALLOCATE_RMA: {
|
||||
struct kvm_allocate_rma rma;
|
||||
struct kvm *kvm = filp->private_data;
|
||||
|
||||
r = kvm_vm_ioctl_allocate_rma(kvm, &rma);
|
||||
if (r >= 0 && copy_to_user(argp, &rma, sizeof(rma)))
|
||||
r = -EFAULT;
|
||||
break;
|
||||
}
|
||||
|
||||
case KVM_PPC_ALLOCATE_HTAB: {
|
||||
u32 htab_order;
|
||||
|
||||
|
|
|
@ -16,6 +16,7 @@
|
|||
#include <linux/memblock.h>
|
||||
#include <linux/sizes.h>
|
||||
#include <linux/cma.h>
|
||||
#include <linux/bitops.h>
|
||||
|
||||
#include <asm/cputable.h>
|
||||
#include <asm/kvm_ppc.h>
|
||||
|
@ -32,95 +33,9 @@
|
|||
* By default we reserve 5% of memory for hash pagetable allocation.
|
||||
*/
|
||||
static unsigned long kvm_cma_resv_ratio = 5;
|
||||
/*
|
||||
* We allocate RMAs (real mode areas) for KVM guests from the KVM CMA area.
|
||||
* Each RMA has to be physically contiguous and of a size that the
|
||||
* hardware supports. PPC970 and POWER7 support 64MB, 128MB and 256MB,
|
||||
* and other larger sizes. Since we are unlikely to be allocate that
|
||||
* much physically contiguous memory after the system is up and running,
|
||||
* we preallocate a set of RMAs in early boot using CMA.
|
||||
* should be power of 2.
|
||||
*/
|
||||
unsigned long kvm_rma_pages = (1 << 27) >> PAGE_SHIFT; /* 128MB */
|
||||
EXPORT_SYMBOL_GPL(kvm_rma_pages);
|
||||
|
||||
static struct cma *kvm_cma;
|
||||
|
||||
/* Work out RMLS (real mode limit selector) field value for a given RMA size.
   Assumes POWER7 or PPC970. */
static inline int lpcr_rmls(unsigned long rma_size)
{
    switch (rma_size) {
    case 32ul << 20:    /* 32 MB */
        if (cpu_has_feature(CPU_FTR_ARCH_206))
            return 8;   /* only supported on POWER7 */
        return -1;
    case 64ul << 20:    /* 64 MB */
        return 3;
    case 128ul << 20:   /* 128 MB */
        return 7;
    case 256ul << 20:   /* 256 MB */
        return 4;
    case 1ul << 30:     /* 1 GB */
        return 2;
    case 16ul << 30:    /* 16 GB */
        return 1;
    case 256ul << 30:   /* 256 GB */
        return 0;
    default:
        return -1;
    }
}
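For orientation: this table only picks the RMLS selector; the setup code earlier in this diff then shifts it into the RMLS field of the LPCR (lpcr = rmls << LPCR_RMLS_SH). Below is a minimal standalone sketch of that flow, not kernel code; the shift value 26 is an assumption standing in for LPCR_RMLS_SH, and the table is a trimmed copy of the one above.

/* Illustration only: map an RMA size to an RMLS code and LPCR field. */
#include <stdio.h>

#define RMLS_SH 26  /* assumed to mirror the kernel's LPCR_RMLS_SH */

static int rmls_for_size(unsigned long rma_size)
{
    switch (rma_size) {
    case 64UL << 20:  return 3;
    case 128UL << 20: return 7;
    case 256UL << 20: return 4;
    case 1UL << 30:   return 2;
    case 16UL << 30:  return 1;
    default:          return -1;
    }
}

int main(void)
{
    unsigned long size = 128UL << 20;   /* 128 MB RMA */
    int rmls = rmls_for_size(size);

    if (rmls < 0)
        return 1;
    printf("RMLS for %lu MB = %d, LPCR field = 0x%lx\n",
           size >> 20, rmls, (unsigned long)rmls << RMLS_SH);
    return 0;
}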
|
||||
static int __init early_parse_rma_size(char *p)
{
    unsigned long kvm_rma_size;

    pr_debug("%s(%s)\n", __func__, p);
    if (!p)
        return -EINVAL;
    kvm_rma_size = memparse(p, &p);
    /*
     * Check that the requested size is one supported in hardware
     */
    if (lpcr_rmls(kvm_rma_size) < 0) {
        pr_err("RMA size of 0x%lx not supported\n", kvm_rma_size);
        return -EINVAL;
    }
    kvm_rma_pages = kvm_rma_size >> PAGE_SHIFT;
    return 0;
}
early_param("kvm_rma_size", early_parse_rma_size);
|
||||
struct kvm_rma_info *kvm_alloc_rma()
{
    struct page *page;
    struct kvm_rma_info *ri;

    ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
    if (!ri)
        return NULL;
    page = cma_alloc(kvm_cma, kvm_rma_pages, order_base_2(kvm_rma_pages));
    if (!page)
        goto err_out;
    atomic_set(&ri->use_count, 1);
    ri->base_pfn = page_to_pfn(page);
    return ri;
err_out:
    kfree(ri);
    return NULL;
}
EXPORT_SYMBOL_GPL(kvm_alloc_rma);

void kvm_release_rma(struct kvm_rma_info *ri)
{
    if (atomic_dec_and_test(&ri->use_count)) {
        cma_release(kvm_cma, pfn_to_page(ri->base_pfn), kvm_rma_pages);
        kfree(ri);
    }
}
EXPORT_SYMBOL_GPL(kvm_release_rma);
|
||||
static int __init early_parse_kvm_cma_resv(char *p)
|
||||
{
|
||||
pr_debug("%s(%s)\n", __func__, p);
|
||||
|
@ -132,14 +47,9 @@ early_param("kvm_cma_resv_ratio", early_parse_kvm_cma_resv);
|
|||
|
||||
struct page *kvm_alloc_hpt(unsigned long nr_pages)
|
||||
{
|
||||
unsigned long align_pages = HPT_ALIGN_PAGES;
|
||||
|
||||
VM_BUG_ON(order_base_2(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
|
||||
|
||||
/* Old CPUs require HPT aligned on a multiple of its size */
|
||||
if (!cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
align_pages = nr_pages;
|
||||
return cma_alloc(kvm_cma, nr_pages, order_base_2(align_pages));
|
||||
return cma_alloc(kvm_cma, nr_pages, order_base_2(HPT_ALIGN_PAGES));
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
|
||||
|
||||
|
@ -180,21 +90,43 @@ void __init kvm_cma_reserve(void)
|
|||
if (selected_size) {
|
||||
pr_debug("%s: reserving %ld MiB for global area\n", __func__,
|
||||
(unsigned long)selected_size / SZ_1M);
|
||||
/*
|
||||
* Old CPUs require HPT aligned on a multiple of its size. So for them
|
||||
* make the alignment as max size we could request.
|
||||
*/
|
||||
if (!cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
align_size = __rounddown_pow_of_two(selected_size);
|
||||
else
|
||||
align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
|
||||
|
||||
align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
|
||||
align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
|
||||
cma_declare_contiguous(0, selected_size, 0, align_size,
|
||||
KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
|
||||
}
|
||||
}
|
||||
|
||||
/*
 * Real-mode H_CONFER implementation.
 * We check if we are the only vcpu out of this virtual core
 * still running in the guest and not ceded.  If so, we pop up
 * to the virtual-mode implementation; if not, just return to
 * the guest.
 */
long int kvmppc_rm_h_confer(struct kvm_vcpu *vcpu, int target,
                            unsigned int yield_count)
{
    struct kvmppc_vcore *vc = vcpu->arch.vcore;
    int threads_running;
    int threads_ceded;
    int threads_conferring;
    u64 stop = get_tb() + 10 * tb_ticks_per_usec;
    int rv = H_SUCCESS; /* => don't yield */

    set_bit(vcpu->arch.ptid, &vc->conferring_threads);
    while ((get_tb() < stop) && (VCORE_EXIT_COUNT(vc) == 0)) {
        threads_running = VCORE_ENTRY_COUNT(vc);
        threads_ceded = hweight32(vc->napping_threads);
        threads_conferring = hweight32(vc->conferring_threads);
        if (threads_ceded + threads_conferring >= threads_running) {
            rv = H_TOO_HARD; /* => do yield */
            break;
        }
    }
    clear_bit(vcpu->arch.ptid, &vc->conferring_threads);
    return rv;
}
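For readers skimming the diff, the confer decision above boils down to one comparison over the per-vcore counts. A minimal, self-contained sketch of just that predicate (illustration only, not kernel code; the counts are assumed to be sampled as in the loop above):

/* Illustration only: the H_CONFER yield decision in isolation. */
#include <stdbool.h>
#include <stdio.h>

/*
 * Yield back to virtual mode only when every thread of the vcore that
 * entered the guest is either ceded (napping) or itself conferring;
 * otherwise keep spinning in real mode and return to the guest.
 */
static bool vcore_should_yield(int running, int ceded, int conferring)
{
    return ceded + conferring >= running;
}

int main(void)
{
    /* e.g. 4 threads entered, 2 ceded, 2 conferring -> yield (1) */
    printf("%d\n", vcore_should_yield(4, 2, 2));
    return 0;
}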
|
||||
/*
|
||||
* When running HV mode KVM we need to block certain operations while KVM VMs
|
||||
* exist in the system. We use a counter of VMs to track this.
|
||||
|
|
|
@ -52,10 +52,8 @@ _GLOBAL(__kvmppc_vcore_entry)
|
|||
std r3, _CCR(r1)
|
||||
|
||||
/* Save host DSCR */
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r3, SPRN_DSCR
|
||||
std r3, HSTATE_DSCR(r13)
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
/* Save host DABR */
|
||||
|
@ -84,11 +82,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
mfspr r7, SPRN_MMCR0 /* save MMCR0 */
|
||||
mtspr SPRN_MMCR0, r3 /* freeze all counters, disable interrupts */
|
||||
mfspr r6, SPRN_MMCRA
|
||||
BEGIN_FTR_SECTION
|
||||
/* On P7, clear MMCRA in order to disable SDAR updates */
|
||||
/* Clear MMCRA in order to disable SDAR updates */
|
||||
li r5, 0
|
||||
mtspr SPRN_MMCRA, r5
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
isync
|
||||
ld r3, PACALPPACAPTR(r13) /* is the host using the PMU? */
|
||||
lbz r5, LPPACA_PMCINUSE(r3)
|
||||
|
@ -113,20 +109,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
mfspr r7, SPRN_PMC4
|
||||
mfspr r8, SPRN_PMC5
|
||||
mfspr r9, SPRN_PMC6
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r10, SPRN_PMC7
|
||||
mfspr r11, SPRN_PMC8
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
stw r3, HSTATE_PMC(r13)
|
||||
stw r5, HSTATE_PMC + 4(r13)
|
||||
stw r6, HSTATE_PMC + 8(r13)
|
||||
stw r7, HSTATE_PMC + 12(r13)
|
||||
stw r8, HSTATE_PMC + 16(r13)
|
||||
stw r9, HSTATE_PMC + 20(r13)
|
||||
BEGIN_FTR_SECTION
|
||||
stw r10, HSTATE_PMC + 24(r13)
|
||||
stw r11, HSTATE_PMC + 28(r13)
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
31:
|
||||
|
||||
/*
|
||||
|
@ -140,31 +128,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
|||
add r8,r8,r7
|
||||
std r8,HSTATE_DECEXP(r13)
|
||||
|
||||
#ifdef CONFIG_SMP
|
||||
/*
|
||||
* On PPC970, if the guest vcpu has an external interrupt pending,
|
||||
* send ourselves an IPI so as to interrupt the guest once it
|
||||
* enables interrupts. (It must have interrupts disabled,
|
||||
* otherwise we would already have delivered the interrupt.)
|
||||
*
|
||||
* XXX If this is a UP build, smp_send_reschedule is not available,
|
||||
* so the interrupt will be delayed until the next time the vcpu
|
||||
* enters the guest with interrupts enabled.
|
||||
*/
|
||||
BEGIN_FTR_SECTION
|
||||
ld r4, HSTATE_KVM_VCPU(r13)
|
||||
ld r0, VCPU_PENDING_EXC(r4)
|
||||
li r7, (1 << BOOK3S_IRQPRIO_EXTERNAL)
|
||||
oris r7, r7, (1 << BOOK3S_IRQPRIO_EXTERNAL_LEVEL)@h
|
||||
and. r0, r0, r7
|
||||
beq 32f
|
||||
lhz r3, PACAPACAINDEX(r13)
|
||||
bl smp_send_reschedule
|
||||
nop
|
||||
32:
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
#endif /* CONFIG_SMP */
|
||||
|
||||
/* Jump to partition switch code */
|
||||
bl kvmppc_hv_entry_trampoline
|
||||
nop
|
||||
|
|
|
@ -138,8 +138,5 @@ out:
|
|||
|
||||
long kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
return kvmppc_realmode_mc_power7(vcpu);
|
||||
|
||||
return 0;
|
||||
return kvmppc_realmode_mc_power7(vcpu);
|
||||
}
|
||||
|
|
|
@ -45,16 +45,12 @@ static int global_invalidates(struct kvm *kvm, unsigned long flags)
|
|||
* as indicated by local_paca->kvm_hstate.kvm_vcpu being set,
|
||||
* we can use tlbiel as long as we mark all other physical
|
||||
* cores as potentially having stale TLB entries for this lpid.
|
||||
* If we're not using MMU notifiers, we never take pages away
|
||||
* from the guest, so we can use tlbiel if requested.
|
||||
* Otherwise, don't use tlbiel.
|
||||
*/
|
||||
if (kvm->arch.online_vcores == 1 && local_paca->kvm_hstate.kvm_vcpu)
|
||||
global = 0;
|
||||
else if (kvm->arch.using_mmu_notifiers)
|
||||
global = 1;
|
||||
else
|
||||
global = !(flags & H_LOCAL);
|
||||
global = 1;
|
||||
|
||||
if (!global) {
|
||||
/* any other core might now have stale TLB entries... */
|
||||
|
@ -170,7 +166,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
|
|||
struct revmap_entry *rev;
|
||||
unsigned long g_ptel;
|
||||
struct kvm_memory_slot *memslot;
|
||||
unsigned long *physp, pte_size;
|
||||
unsigned long pte_size;
|
||||
unsigned long is_io;
|
||||
unsigned long *rmap;
|
||||
pte_t pte;
|
||||
|
@ -198,9 +194,6 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
|
|||
is_io = ~0ul;
|
||||
rmap = NULL;
|
||||
if (!(memslot && !(memslot->flags & KVM_MEMSLOT_INVALID))) {
|
||||
/* PPC970 can't do emulated MMIO */
|
||||
if (!cpu_has_feature(CPU_FTR_ARCH_206))
|
||||
return H_PARAMETER;
|
||||
/* Emulated MMIO - mark this with key=31 */
|
||||
pteh |= HPTE_V_ABSENT;
|
||||
ptel |= HPTE_R_KEY_HI | HPTE_R_KEY_LO;
|
||||
|
@ -213,37 +206,20 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
|
|||
slot_fn = gfn - memslot->base_gfn;
|
||||
rmap = &memslot->arch.rmap[slot_fn];
|
||||
|
||||
if (!kvm->arch.using_mmu_notifiers) {
|
||||
physp = memslot->arch.slot_phys;
|
||||
if (!physp)
|
||||
return H_PARAMETER;
|
||||
physp += slot_fn;
|
||||
if (realmode)
|
||||
physp = real_vmalloc_addr(physp);
|
||||
pa = *physp;
|
||||
if (!pa)
|
||||
return H_TOO_HARD;
|
||||
is_io = pa & (HPTE_R_I | HPTE_R_W);
|
||||
pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK);
|
||||
pa &= PAGE_MASK;
|
||||
pa |= gpa & ~PAGE_MASK;
|
||||
} else {
|
||||
/* Translate to host virtual address */
|
||||
hva = __gfn_to_hva_memslot(memslot, gfn);
|
||||
/* Translate to host virtual address */
|
||||
hva = __gfn_to_hva_memslot(memslot, gfn);
|
||||
|
||||
/* Look up the Linux PTE for the backing page */
|
||||
pte_size = psize;
|
||||
pte = lookup_linux_pte_and_update(pgdir, hva, writing,
|
||||
&pte_size);
|
||||
if (pte_present(pte) && !pte_numa(pte)) {
|
||||
if (writing && !pte_write(pte))
|
||||
/* make the actual HPTE be read-only */
|
||||
ptel = hpte_make_readonly(ptel);
|
||||
is_io = hpte_cache_bits(pte_val(pte));
|
||||
pa = pte_pfn(pte) << PAGE_SHIFT;
|
||||
pa |= hva & (pte_size - 1);
|
||||
pa |= gpa & ~PAGE_MASK;
|
||||
}
|
||||
/* Look up the Linux PTE for the backing page */
|
||||
pte_size = psize;
|
||||
pte = lookup_linux_pte_and_update(pgdir, hva, writing, &pte_size);
|
||||
if (pte_present(pte) && !pte_numa(pte)) {
|
||||
if (writing && !pte_write(pte))
|
||||
/* make the actual HPTE be read-only */
|
||||
ptel = hpte_make_readonly(ptel);
|
||||
is_io = hpte_cache_bits(pte_val(pte));
|
||||
pa = pte_pfn(pte) << PAGE_SHIFT;
|
||||
pa |= hva & (pte_size - 1);
|
||||
pa |= gpa & ~PAGE_MASK;
|
||||
}
|
||||
|
||||
if (pte_size < psize)
|
||||
|
@ -337,8 +313,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
|
|||
rmap = real_vmalloc_addr(rmap);
|
||||
lock_rmap(rmap);
|
||||
/* Check for pending invalidations under the rmap chain lock */
|
||||
if (kvm->arch.using_mmu_notifiers &&
|
||||
mmu_notifier_retry(kvm, mmu_seq)) {
|
||||
if (mmu_notifier_retry(kvm, mmu_seq)) {
|
||||
/* inval in progress, write a non-present HPTE */
|
||||
pteh |= HPTE_V_ABSENT;
|
||||
pteh &= ~HPTE_V_VALID;
|
||||
|
@ -395,61 +370,11 @@ static inline int try_lock_tlbie(unsigned int *lock)
|
|||
return old == 0;
|
||||
}
|
||||
|
||||
/*
 * tlbie/tlbiel is a bit different on the PPC970 compared to later
 * processors such as POWER7; the large page bit is in the instruction
 * not RB, and the top 16 bits and the bottom 12 bits of the VA
 * in RB must be 0.
 */
static void do_tlbies_970(struct kvm *kvm, unsigned long *rbvalues,
                          long npages, int global, bool need_sync)
{
    long i;

    if (global) {
        while (!try_lock_tlbie(&kvm->arch.tlbie_lock))
            cpu_relax();
        if (need_sync)
            asm volatile("ptesync" : : : "memory");
        for (i = 0; i < npages; ++i) {
            unsigned long rb = rbvalues[i];

            if (rb & 1)         /* large page */
                asm volatile("tlbie %0,1" : :
                             "r" (rb & 0x0000fffffffff000ul));
            else
                asm volatile("tlbie %0,0" : :
                             "r" (rb & 0x0000fffffffff000ul));
        }
        asm volatile("eieio; tlbsync; ptesync" : : : "memory");
        kvm->arch.tlbie_lock = 0;
    } else {
        if (need_sync)
            asm volatile("ptesync" : : : "memory");
        for (i = 0; i < npages; ++i) {
            unsigned long rb = rbvalues[i];

            if (rb & 1)         /* large page */
                asm volatile("tlbiel %0,1" : :
                             "r" (rb & 0x0000fffffffff000ul));
            else
                asm volatile("tlbiel %0,0" : :
                             "r" (rb & 0x0000fffffffff000ul));
        }
        asm volatile("ptesync" : : : "memory");
    }
}
|
||||
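A small standalone sketch of the RB constraint the comment above describes (illustration only; the mask constant is taken from the code above, everything else is for demonstration):

/* Illustration only: PPC970 tlbie RB masking. */
#include <stdio.h>

/*
 * On PPC970 the top 16 and the bottom 12 bits of the VA in RB must be
 * zero, so the value is masked before being fed to tlbie/tlbiel.
 */
static unsigned long ppc970_tlbie_rb(unsigned long rb)
{
    return rb & 0x0000fffffffff000ul;
}

int main(void)
{
    unsigned long rb = 0xdeadbeef12345678ul;

    printf("raw RB = 0x%016lx\n", rb);
    printf("masked = 0x%016lx\n", ppc970_tlbie_rb(rb));
    return 0;
}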
static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues,
|
||||
long npages, int global, bool need_sync)
|
||||
{
|
||||
long i;
|
||||
|
||||
if (cpu_has_feature(CPU_FTR_ARCH_201)) {
|
||||
/* PPC970 tlbie instruction is a bit different */
|
||||
do_tlbies_970(kvm, rbvalues, npages, global, need_sync);
|
||||
return;
|
||||
}
|
||||
if (global) {
|
||||
while (!try_lock_tlbie(&kvm->arch.tlbie_lock))
|
||||
cpu_relax();
|
||||
|
@ -667,40 +592,29 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
|
|||
rev->guest_rpte = r;
|
||||
note_hpte_modification(kvm, rev);
|
||||
}
|
||||
r = (be64_to_cpu(hpte[1]) & ~mask) | bits;
|
||||
|
||||
/* Update HPTE */
|
||||
if (v & HPTE_V_VALID) {
|
||||
rb = compute_tlbie_rb(v, r, pte_index);
|
||||
hpte[0] = cpu_to_be64(v & ~HPTE_V_VALID);
|
||||
do_tlbies(kvm, &rb, 1, global_invalidates(kvm, flags), true);
|
||||
/*
|
||||
* If the host has this page as readonly but the guest
|
||||
* wants to make it read/write, reduce the permissions.
|
||||
* Checking the host permissions involves finding the
|
||||
* memslot and then the Linux PTE for the page.
|
||||
* If the page is valid, don't let it transition from
|
||||
* readonly to writable. If it should be writable, we'll
|
||||
* take a trap and let the page fault code sort it out.
|
||||
*/
|
||||
if (hpte_is_writable(r) && kvm->arch.using_mmu_notifiers) {
|
||||
unsigned long psize, gfn, hva;
|
||||
struct kvm_memory_slot *memslot;
|
||||
pgd_t *pgdir = vcpu->arch.pgdir;
|
||||
pte_t pte;
|
||||
|
||||
psize = hpte_page_size(v, r);
|
||||
gfn = ((r & HPTE_R_RPN) & ~(psize - 1)) >> PAGE_SHIFT;
|
||||
memslot = __gfn_to_memslot(kvm_memslots_raw(kvm), gfn);
|
||||
if (memslot) {
|
||||
hva = __gfn_to_hva_memslot(memslot, gfn);
|
||||
pte = lookup_linux_pte_and_update(pgdir, hva,
|
||||
1, &psize);
|
||||
if (pte_present(pte) && !pte_write(pte))
|
||||
r = hpte_make_readonly(r);
|
||||
}
|
||||
pte = be64_to_cpu(hpte[1]);
|
||||
r = (pte & ~mask) | bits;
|
||||
if (hpte_is_writable(r) && !hpte_is_writable(pte))
|
||||
r = hpte_make_readonly(r);
|
||||
/* If the PTE is changing, invalidate it first */
|
||||
if (r != pte) {
|
||||
rb = compute_tlbie_rb(v, r, pte_index);
|
||||
hpte[0] = cpu_to_be64((v & ~HPTE_V_VALID) |
|
||||
HPTE_V_ABSENT);
|
||||
do_tlbies(kvm, &rb, 1, global_invalidates(kvm, flags),
|
||||
true);
|
||||
hpte[1] = cpu_to_be64(r);
|
||||
}
|
||||
}
|
||||
hpte[1] = cpu_to_be64(r);
|
||||
eieio();
|
||||
hpte[0] = cpu_to_be64(v & ~HPTE_V_HVLOCK);
|
||||
unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
|
||||
asm volatile("ptesync" : : : "memory");
|
||||
return H_SUCCESS;
|
||||
}
|
||||
|
|
|
@ -183,8 +183,10 @@ static void icp_rm_down_cppr(struct kvmppc_xics *xics, struct kvmppc_icp *icp,
|
|||
* state update in HW (ie bus transactions) so we can handle them
|
||||
* separately here as well.
|
||||
*/
|
||||
if (resend)
|
||||
if (resend) {
|
||||
icp->rm_action |= XICS_RM_CHECK_RESEND;
|
||||
icp->rm_resend_icp = icp;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
@ -254,10 +256,25 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
|
|||
* nothing needs to be done as there can be no XISR to
|
||||
* reject.
|
||||
*
|
||||
* If the CPPR is less favored, then we might be replacing
|
||||
* an interrupt, and thus need to possibly reject it as in
|
||||
*
|
||||
* ICP state: Check_IPI
|
||||
*
|
||||
* If the CPPR is less favored, then we might be replacing
|
||||
* an interrupt, and thus need to possibly reject it.
|
||||
*
|
||||
* ICP State: IPI
|
||||
*
|
||||
* Besides rejecting any pending interrupts, we also
|
||||
* update XISR and pending_pri to mark IPI as pending.
|
||||
*
|
||||
* PAPR does not describe this state, but if the MFRR is being
|
||||
* made less favored than its earlier value, there might be
|
||||
* a previously-rejected interrupt needing to be resent.
|
||||
* Ideally, we would want to resend only if
|
||||
* prio(pending_interrupt) < mfrr &&
|
||||
* prio(pending_interrupt) < cppr
|
||||
* where pending interrupt is the one that was rejected. But
|
||||
* we don't have that state, so we simply trigger a resend
|
||||
* whenever the MFRR is made less favored.
|
||||
*/
|
||||
do {
|
||||
old_state = new_state = ACCESS_ONCE(icp->state);
|
||||
|
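The resend heuristic described in the comment above can be read as a single predicate. A minimal standalone sketch (illustration only; field names are simplified and nothing here is taken from the kernel headers):

/* Illustration only: the ICP resend approximation for H_IPI. */
#include <stdbool.h>
#include <stdio.h>

/*
 * Ideally a resend would happen only when
 *   prio(pending) < mfrr && prio(pending) < cppr
 * for the interrupt that was rejected earlier, but that priority is no
 * longer tracked.  The approximation used above: arm a resend whenever
 * the MFRR becomes less favored (numerically larger) and a resend is
 * still outstanding.
 */
static bool should_check_resend(unsigned old_mfrr, unsigned new_mfrr,
                                bool need_resend)
{
    return new_mfrr > old_mfrr && need_resend;
}

int main(void)
{
    printf("%d\n", should_check_resend(0x05, 0xff, true)); /* 1 */
    printf("%d\n", should_check_resend(0xff, 0x05, true)); /* 0 */
    return 0;
}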
@ -270,13 +287,14 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
|
|||
resend = false;
|
||||
if (mfrr < new_state.cppr) {
|
||||
/* Reject a pending interrupt if not an IPI */
|
||||
if (mfrr <= new_state.pending_pri)
|
||||
if (mfrr <= new_state.pending_pri) {
|
||||
reject = new_state.xisr;
|
||||
new_state.pending_pri = mfrr;
|
||||
new_state.xisr = XICS_IPI;
|
||||
new_state.pending_pri = mfrr;
|
||||
new_state.xisr = XICS_IPI;
|
||||
}
|
||||
}
|
||||
|
||||
if (mfrr > old_state.mfrr && mfrr > new_state.cppr) {
|
||||
if (mfrr > old_state.mfrr) {
|
||||
resend = new_state.need_resend;
|
||||
new_state.need_resend = 0;
|
||||
}
|
||||
|
@ -289,8 +307,10 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
|
|||
}
|
||||
|
||||
/* Pass resends to virtual mode */
|
||||
if (resend)
|
||||
if (resend) {
|
||||
this_icp->rm_action |= XICS_RM_CHECK_RESEND;
|
||||
this_icp->rm_resend_icp = icp;
|
||||
}
|
||||
|
||||
return check_too_hard(xics, this_icp);
|
||||
}
|
||||
|
|
|
@ -94,20 +94,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_PMAO_BUG)
|
|||
lwz r6, HSTATE_PMC + 12(r13)
|
||||
lwz r8, HSTATE_PMC + 16(r13)
|
||||
lwz r9, HSTATE_PMC + 20(r13)
|
||||
BEGIN_FTR_SECTION
|
||||
lwz r10, HSTATE_PMC + 24(r13)
|
||||
lwz r11, HSTATE_PMC + 28(r13)
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
mtspr SPRN_PMC1, r3
|
||||
mtspr SPRN_PMC2, r4
|
||||
mtspr SPRN_PMC3, r5
|
||||
mtspr SPRN_PMC4, r6
|
||||
mtspr SPRN_PMC5, r8
|
||||
mtspr SPRN_PMC6, r9
|
||||
BEGIN_FTR_SECTION
|
||||
mtspr SPRN_PMC7, r10
|
||||
mtspr SPRN_PMC8, r11
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
ld r3, HSTATE_MMCR(r13)
|
||||
ld r4, HSTATE_MMCR + 8(r13)
|
||||
ld r5, HSTATE_MMCR + 16(r13)
|
||||
|
@ -153,11 +145,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
|
||||
cmpwi cr1, r12, BOOK3S_INTERRUPT_MACHINE_CHECK
|
||||
cmpwi r12, BOOK3S_INTERRUPT_EXTERNAL
|
||||
BEGIN_FTR_SECTION
|
||||
beq 11f
|
||||
cmpwi cr2, r12, BOOK3S_INTERRUPT_HMI
|
||||
beq cr2, 14f /* HMI check */
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
/* RFI into the highmem handler, or branch to interrupt handler */
|
||||
mfmsr r6
|
||||
|
@ -166,7 +156,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
|||
mtmsrd r6, 1 /* Clear RI in MSR */
|
||||
mtsrr0 r8
|
||||
mtsrr1 r7
|
||||
beqa 0x500 /* external interrupt (PPC970) */
|
||||
beq cr1, 13f /* machine check */
|
||||
RFI
|
||||
|
||||
|
@ -393,11 +382,8 @@ kvmppc_hv_entry:
|
|||
slbia
|
||||
ptesync
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
b 30f
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
/*
|
||||
* POWER7 host -> guest partition switch code.
|
||||
* POWER7/POWER8 host -> guest partition switch code.
|
||||
* We don't have to lock against concurrent tlbies,
|
||||
* but we do have to coordinate across hardware threads.
|
||||
*/
|
||||
|
@ -505,97 +491,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
cmpwi r3,512 /* 1 microsecond */
|
||||
li r12,BOOK3S_INTERRUPT_HV_DECREMENTER
|
||||
blt hdec_soon
|
||||
b 31f
|
||||
|
||||
/*
|
||||
* PPC970 host -> guest partition switch code.
|
||||
* We have to lock against concurrent tlbies,
|
||||
* using native_tlbie_lock to lock against host tlbies
|
||||
* and kvm->arch.tlbie_lock to lock against guest tlbies.
|
||||
* We also have to invalidate the TLB since its
|
||||
* entries aren't tagged with the LPID.
|
||||
*/
|
||||
30: ld r5,HSTATE_KVM_VCORE(r13)
|
||||
ld r9,VCORE_KVM(r5) /* pointer to struct kvm */
|
||||
|
||||
/* first take native_tlbie_lock */
|
||||
.section ".toc","aw"
|
||||
toc_tlbie_lock:
|
||||
.tc native_tlbie_lock[TC],native_tlbie_lock
|
||||
.previous
|
||||
ld r3,toc_tlbie_lock@toc(r2)
|
||||
#ifdef __BIG_ENDIAN__
|
||||
lwz r8,PACA_LOCK_TOKEN(r13)
|
||||
#else
|
||||
lwz r8,PACAPACAINDEX(r13)
|
||||
#endif
|
||||
24: lwarx r0,0,r3
|
||||
cmpwi r0,0
|
||||
bne 24b
|
||||
stwcx. r8,0,r3
|
||||
bne 24b
|
||||
isync
|
||||
|
||||
ld r5,HSTATE_KVM_VCORE(r13)
|
||||
ld r7,VCORE_LPCR(r5) /* use vcore->lpcr to store HID4 */
|
||||
li r0,0x18f
|
||||
rotldi r0,r0,HID4_LPID5_SH /* all lpid bits in HID4 = 1 */
|
||||
or r0,r7,r0
|
||||
ptesync
|
||||
sync
|
||||
mtspr SPRN_HID4,r0 /* switch to reserved LPID */
|
||||
isync
|
||||
li r0,0
|
||||
stw r0,0(r3) /* drop native_tlbie_lock */
|
||||
|
||||
/* invalidate the whole TLB */
|
||||
li r0,256
|
||||
mtctr r0
|
||||
li r6,0
|
||||
25: tlbiel r6
|
||||
addi r6,r6,0x1000
|
||||
bdnz 25b
|
||||
ptesync
|
||||
|
||||
/* Take the guest's tlbie_lock */
|
||||
addi r3,r9,KVM_TLBIE_LOCK
|
||||
24: lwarx r0,0,r3
|
||||
cmpwi r0,0
|
||||
bne 24b
|
||||
stwcx. r8,0,r3
|
||||
bne 24b
|
||||
isync
|
||||
ld r6,KVM_SDR1(r9)
|
||||
mtspr SPRN_SDR1,r6 /* switch to partition page table */
|
||||
|
||||
/* Set up HID4 with the guest's LPID etc. */
|
||||
sync
|
||||
mtspr SPRN_HID4,r7
|
||||
isync
|
||||
|
||||
/* drop the guest's tlbie_lock */
|
||||
li r0,0
|
||||
stw r0,0(r3)
|
||||
|
||||
/* Check if HDEC expires soon */
|
||||
mfspr r3,SPRN_HDEC
|
||||
cmpwi r3,10
|
||||
li r12,BOOK3S_INTERRUPT_HV_DECREMENTER
|
||||
blt hdec_soon
|
||||
|
||||
/* Enable HDEC interrupts */
|
||||
mfspr r0,SPRN_HID0
|
||||
li r3,1
|
||||
rldimi r0,r3, HID0_HDICE_SH, 64-HID0_HDICE_SH-1
|
||||
sync
|
||||
mtspr SPRN_HID0,r0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
31:
|
||||
/* Do we have a guest vcpu to run? */
|
||||
cmpdi r4, 0
|
||||
beq kvmppc_primary_no_guest
|
||||
|
@ -625,7 +521,6 @@ kvmppc_got_guest:
|
|||
stb r6, VCPU_VPA_DIRTY(r4)
|
||||
25:
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
/* Save purr/spurr */
|
||||
mfspr r5,SPRN_PURR
|
||||
mfspr r6,SPRN_SPURR
|
||||
|
@ -635,7 +530,6 @@ BEGIN_FTR_SECTION
|
|||
ld r8,VCPU_SPURR(r4)
|
||||
mtspr SPRN_PURR,r7
|
||||
mtspr SPRN_SPURR,r8
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
/* Set partition DABR */
|
||||
|
@ -644,9 +538,7 @@ BEGIN_FTR_SECTION
|
|||
ld r6,VCPU_DABR(r4)
|
||||
mtspr SPRN_DABRX,r5
|
||||
mtspr SPRN_DABR,r6
|
||||
BEGIN_FTR_SECTION_NESTED(89)
|
||||
isync
|
||||
END_FTR_SECTION_NESTED(CPU_FTR_ARCH_206, CPU_FTR_ARCH_206, 89)
|
||||
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
|
||||
|
||||
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
|
||||
|
@ -777,20 +669,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_PMAO_BUG)
|
|||
lwz r7, VCPU_PMC + 12(r4)
|
||||
lwz r8, VCPU_PMC + 16(r4)
|
||||
lwz r9, VCPU_PMC + 20(r4)
|
||||
BEGIN_FTR_SECTION
|
||||
lwz r10, VCPU_PMC + 24(r4)
|
||||
lwz r11, VCPU_PMC + 28(r4)
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
mtspr SPRN_PMC1, r3
|
||||
mtspr SPRN_PMC2, r5
|
||||
mtspr SPRN_PMC3, r6
|
||||
mtspr SPRN_PMC4, r7
|
||||
mtspr SPRN_PMC5, r8
|
||||
mtspr SPRN_PMC6, r9
|
||||
BEGIN_FTR_SECTION
|
||||
mtspr SPRN_PMC7, r10
|
||||
mtspr SPRN_PMC8, r11
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
ld r3, VCPU_MMCR(r4)
|
||||
ld r5, VCPU_MMCR + 8(r4)
|
||||
ld r6, VCPU_MMCR + 16(r4)
|
||||
|
@ -837,14 +721,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
ld r30, VCPU_GPR(R30)(r4)
|
||||
ld r31, VCPU_GPR(R31)(r4)
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
/* Switch DSCR to guest value */
|
||||
ld r5, VCPU_DSCR(r4)
|
||||
mtspr SPRN_DSCR, r5
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
/* Skip next section on POWER7 or PPC970 */
|
||||
/* Skip next section on POWER7 */
|
||||
b 8f
|
||||
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
|
||||
/* Turn on TM so we can access TFHAR/TFIAR/TEXASR */
|
||||
|
@ -920,7 +802,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
|
|||
mtspr SPRN_DAR, r5
|
||||
mtspr SPRN_DSISR, r6
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
/* Restore AMR and UAMOR, set AMOR to all 1s */
|
||||
ld r5,VCPU_AMR(r4)
|
||||
ld r6,VCPU_UAMOR(r4)
|
||||
|
@ -928,7 +809,6 @@ BEGIN_FTR_SECTION
|
|||
mtspr SPRN_AMR,r5
|
||||
mtspr SPRN_UAMOR,r6
|
||||
mtspr SPRN_AMOR,r7
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
/* Restore state of CTRL run bit; assume 1 on entry */
|
||||
lwz r5,VCPU_CTRL(r4)
|
||||
|
@ -963,13 +843,11 @@ deliver_guest_interrupt:
|
|||
rldicl r0, r0, 64 - BOOK3S_IRQPRIO_EXTERNAL_LEVEL, 63
|
||||
cmpdi cr1, r0, 0
|
||||
andi. r8, r11, MSR_EE
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r8, SPRN_LPCR
|
||||
/* Insert EXTERNAL_LEVEL bit into LPCR at the MER bit position */
|
||||
rldimi r8, r0, LPCR_MER_SH, 63 - LPCR_MER_SH
|
||||
mtspr SPRN_LPCR, r8
|
||||
isync
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
beq 5f
|
||||
li r0, BOOK3S_INTERRUPT_EXTERNAL
|
||||
bne cr1, 12f
|
||||
|
@ -1124,15 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
|
|||
|
||||
stw r12,VCPU_TRAP(r9)
|
||||
|
||||
/* Save HEIR (HV emulation assist reg) in last_inst
|
||||
/* Save HEIR (HV emulation assist reg) in emul_inst
|
||||
if this is an HEI (HV emulation interrupt, e40) */
|
||||
li r3,KVM_INST_FETCH_FAILED
|
||||
BEGIN_FTR_SECTION
|
||||
cmpwi r12,BOOK3S_INTERRUPT_H_EMUL_ASSIST
|
||||
bne 11f
|
||||
mfspr r3,SPRN_HEIR
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
11: stw r3,VCPU_LAST_INST(r9)
|
||||
11: stw r3,VCPU_HEIR(r9)
|
||||
|
||||
/* these are volatile across C function calls */
|
||||
mfctr r3
|
||||
|
@ -1140,13 +1016,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
|||
std r3, VCPU_CTR(r9)
|
||||
stw r4, VCPU_XER(r9)
|
||||
|
||||
BEGIN_FTR_SECTION
|
||||
/* If this is a page table miss then see if it's theirs or ours */
|
||||
cmpwi r12, BOOK3S_INTERRUPT_H_DATA_STORAGE
|
||||
beq kvmppc_hdsi
|
||||
cmpwi r12, BOOK3S_INTERRUPT_H_INST_STORAGE
|
||||
beq kvmppc_hisi
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
/* See if this is a leftover HDEC interrupt */
|
||||
cmpwi r12,BOOK3S_INTERRUPT_HV_DECREMENTER
|
||||
|
@ -1159,11 +1033,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
|||
cmpwi r12,BOOK3S_INTERRUPT_SYSCALL
|
||||
beq hcall_try_real_mode
|
||||
|
||||
/* Only handle external interrupts here on arch 206 and later */
|
||||
BEGIN_FTR_SECTION
|
||||
b ext_interrupt_to_host
|
||||
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_206)
|
||||
|
||||
/* External interrupt ? */
|
||||
cmpwi r12, BOOK3S_INTERRUPT_EXTERNAL
|
||||
bne+ ext_interrupt_to_host
|
||||
|
@ -1193,11 +1062,9 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
|
|||
mfdsisr r7
|
||||
std r6, VCPU_DAR(r9)
|
||||
stw r7, VCPU_DSISR(r9)
|
||||
BEGIN_FTR_SECTION
|
||||
/* don't overwrite fault_dar/fault_dsisr if HDSI */
|
||||
cmpwi r12,BOOK3S_INTERRUPT_H_DATA_STORAGE
|
||||
beq 6f
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
std r6, VCPU_FAULT_DAR(r9)
|
||||
stw r7, VCPU_FAULT_DSISR(r9)
|
||||
|
||||
|
@ -1236,7 +1103,6 @@ mc_cont:
|
|||
/*
|
||||
* Save the guest PURR/SPURR
|
||||
*/
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r5,SPRN_PURR
|
||||
mfspr r6,SPRN_SPURR
|
||||
ld r7,VCPU_PURR(r9)
|
||||
|
@ -1256,7 +1122,6 @@ BEGIN_FTR_SECTION
|
|||
add r4,r4,r6
|
||||
mtspr SPRN_PURR,r3
|
||||
mtspr SPRN_SPURR,r4
|
||||
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_201)
|
||||
|
||||
/* Save DEC */
|
||||
mfspr r5,SPRN_DEC
|
||||
|
@ -1306,22 +1171,18 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
|
|||
8:
|
||||
|
||||
/* Save and reset AMR and UAMOR before turning on the MMU */
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r5,SPRN_AMR
|
||||
mfspr r6,SPRN_UAMOR
|
||||
std r5,VCPU_AMR(r9)
|
||||
std r6,VCPU_UAMOR(r9)
|
||||
li r6,0
|
||||
mtspr SPRN_AMR,r6
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
/* Switch DSCR back to host value */
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r8, SPRN_DSCR
|
||||
ld r7, HSTATE_DSCR(r13)
|
||||
std r8, VCPU_DSCR(r9)
|
||||
mtspr SPRN_DSCR, r7
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
|
||||
/* Save non-volatile GPRs */
|
||||
std r14, VCPU_GPR(R14)(r9)
|
||||
|
@ -1503,11 +1364,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
mfspr r4, SPRN_MMCR0 /* save MMCR0 */
|
||||
mtspr SPRN_MMCR0, r3 /* freeze all counters, disable ints */
|
||||
mfspr r6, SPRN_MMCRA
|
||||
BEGIN_FTR_SECTION
|
||||
/* On P7, clear MMCRA in order to disable SDAR updates */
|
||||
/* Clear MMCRA in order to disable SDAR updates */
|
||||
li r7, 0
|
||||
mtspr SPRN_MMCRA, r7
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
|
||||
isync
|
||||
beq 21f /* if no VPA, save PMU stuff anyway */
|
||||
lbz r7, LPPACA_PMCINUSE(r8)
|
||||
|
@ -1532,20 +1391,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
mfspr r6, SPRN_PMC4
|
||||
mfspr r7, SPRN_PMC5
|
||||
mfspr r8, SPRN_PMC6
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r10, SPRN_PMC7
|
||||
mfspr r11, SPRN_PMC8
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
stw r3, VCPU_PMC(r9)
|
||||
stw r4, VCPU_PMC + 4(r9)
|
||||
stw r5, VCPU_PMC + 8(r9)
|
||||
stw r6, VCPU_PMC + 12(r9)
|
||||
stw r7, VCPU_PMC + 16(r9)
|
||||
stw r8, VCPU_PMC + 20(r9)
|
||||
BEGIN_FTR_SECTION
|
||||
stw r10, VCPU_PMC + 24(r9)
|
||||
stw r11, VCPU_PMC + 28(r9)
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
BEGIN_FTR_SECTION
|
||||
mfspr r5, SPRN_SIER
|
||||
mfspr r6, SPRN_SPMC1
|
||||
|
@ -1566,11 +1417,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
ptesync
|
||||
|
||||
hdec_soon: /* r12 = trap, r13 = paca */
|
||||
BEGIN_FTR_SECTION
|
||||
b 32f
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
|
||||
/*
|
||||
* POWER7 guest -> host partition switch code.
|
||||
* POWER7/POWER8 guest -> host partition switch code.
|
||||
* We don't have to lock against tlbies but we do
|
||||
* have to coordinate the hardware threads.
|
||||
*/
|
||||
|
@ -1698,87 +1546,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
16: ld r8,KVM_HOST_LPCR(r4)
|
||||
mtspr SPRN_LPCR,r8
|
||||
isync
|
||||
b 33f
|
||||
|
||||
/*
|
||||
* PPC970 guest -> host partition switch code.
|
||||
* We have to lock against concurrent tlbies, and
|
||||
* we have to flush the whole TLB.
|
||||
*/
|
||||
32: ld r5,HSTATE_KVM_VCORE(r13)
|
||||
ld r4,VCORE_KVM(r5) /* pointer to struct kvm */
|
||||
|
||||
/* Take the guest's tlbie_lock */
|
||||
#ifdef __BIG_ENDIAN__
|
||||
lwz r8,PACA_LOCK_TOKEN(r13)
|
||||
#else
|
||||
lwz r8,PACAPACAINDEX(r13)
|
||||
#endif
|
||||
addi r3,r4,KVM_TLBIE_LOCK
|
||||
24: lwarx r0,0,r3
|
||||
cmpwi r0,0
|
||||
bne 24b
|
||||
stwcx. r8,0,r3
|
||||
bne 24b
|
||||
isync
|
||||
|
||||
ld r7,KVM_HOST_LPCR(r4) /* use kvm->arch.host_lpcr for HID4 */
|
||||
li r0,0x18f
|
||||
rotldi r0,r0,HID4_LPID5_SH /* all lpid bits in HID4 = 1 */
|
||||
or r0,r7,r0
|
||||
ptesync
|
||||
sync
|
||||
mtspr SPRN_HID4,r0 /* switch to reserved LPID */
|
||||
isync
|
||||
li r0,0
|
||||
stw r0,0(r3) /* drop guest tlbie_lock */
|
||||
|
||||
/* invalidate the whole TLB */
|
||||
li r0,256
|
||||
mtctr r0
|
||||
li r6,0
|
||||
25: tlbiel r6
|
||||
addi r6,r6,0x1000
|
||||
bdnz 25b
|
||||
ptesync
|
||||
|
||||
/* take native_tlbie_lock */
|
||||
ld r3,toc_tlbie_lock@toc(2)
|
||||
24: lwarx r0,0,r3
|
||||
cmpwi r0,0
|
||||
bne 24b
|
||||
stwcx. r8,0,r3
|
||||
bne 24b
|
||||
isync
|
||||
|
||||
ld r6,KVM_HOST_SDR1(r4)
|
||||
mtspr SPRN_SDR1,r6 /* switch to host page table */
|
||||
|
||||
/* Set up host HID4 value */
|
||||
sync
|
||||
mtspr SPRN_HID4,r7
|
||||
isync
|
||||
li r0,0
|
||||
stw r0,0(r3) /* drop native_tlbie_lock */
|
||||
|
||||
lis r8,0x7fff /* MAX_INT@h */
|
||||
mtspr SPRN_HDEC,r8
|
||||
|
||||
/* Disable HDEC interrupts */
|
||||
mfspr r0,SPRN_HID0
|
||||
li r3,0
|
||||
rldimi r0,r3, HID0_HDICE_SH, 64-HID0_HDICE_SH-1
|
||||
sync
|
||||
mtspr SPRN_HID0,r0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
mfspr r0,SPRN_HID0
|
||||
|
||||
/* load host SLB entries */
|
||||
33: ld r8,PACA_SLBSHADOWPTR(r13)
|
||||
ld r8,PACA_SLBSHADOWPTR(r13)
|
||||
|
||||
.rept SLB_NUM_BOLTED
|
||||
li r3, SLBSHADOW_SAVEAREA
|
||||
|
@ -2047,7 +1817,7 @@ hcall_real_table:
|
|||
.long 0 /* 0xd8 */
|
||||
.long 0 /* 0xdc */
|
||||
.long DOTSYM(kvmppc_h_cede) - hcall_real_table
|
||||
.long 0 /* 0xe4 */
|
||||
.long DOTSYM(kvmppc_rm_h_confer) - hcall_real_table
|
||||
.long 0 /* 0xe8 */
|
||||
.long 0 /* 0xec */
|
||||
.long 0 /* 0xf0 */
|
||||
|
@ -2126,9 +1896,6 @@ _GLOBAL(kvmppc_h_cede)
|
|||
stw r0,VCPU_TRAP(r3)
|
||||
li r0,H_SUCCESS
|
||||
std r0,VCPU_GPR(R3)(r3)
|
||||
BEGIN_FTR_SECTION
|
||||
b kvm_cede_exit /* just send it up to host on 970 */
|
||||
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_206)
|
||||
|
||||
/*
|
||||
* Set our bit in the bitmask of napping threads unless all the
|
||||
|
@ -2455,7 +2222,6 @@ BEGIN_FTR_SECTION
|
|||
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
|
||||
#endif
|
||||
mtmsrd r8
|
||||
isync
|
||||
addi r3,r3,VCPU_FPRS
|
||||
bl store_fp_state
|
||||
#ifdef CONFIG_ALTIVEC
|
||||
|
@ -2491,7 +2257,6 @@ BEGIN_FTR_SECTION
|
|||
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
|
||||
#endif
|
||||
mtmsrd r8
|
||||
isync
|
||||
addi r3,r4,VCPU_FPRS
|
||||
bl load_fp_state
|
||||
#ifdef CONFIG_ALTIVEC
|
||||
|
|
|
@ -352,14 +352,6 @@ static inline u32 inst_get_field(u32 inst, int msb, int lsb)
|
|||
return kvmppc_get_field(inst, msb + 32, lsb + 32);
|
||||
}
|
||||
|
||||
/*
|
||||
* Replaces inst bits with ordering according to spec.
|
||||
*/
|
||||
static inline u32 inst_set_field(u32 inst, int msb, int lsb, int value)
|
||||
{
|
||||
return kvmppc_set_field(inst, msb + 32, lsb + 32, value);
|
||||
}
|
||||
|
||||
bool kvmppc_inst_is_paired_single(struct kvm_vcpu *vcpu, u32 inst)
|
||||
{
|
||||
if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE))
|
||||
|
|
|
@ -644,11 +644,6 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
|
|||
return r;
|
||||
}
|
||||
|
||||
static inline int get_fpr_index(int i)
|
||||
{
|
||||
return i * TS_FPRWIDTH;
|
||||
}
|
||||
|
||||
/* Give up external provider (FPU, Altivec, VSX) */
|
||||
void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
|
||||
{
|
||||
|
|
|
@ -613,10 +613,25 @@ static noinline int kvmppc_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
|
|||
* there might be a previously-rejected interrupt needing
|
||||
* to be resent.
|
||||
*
|
||||
* If the CPPR is less favored, then we might be replacing
|
||||
* an interrupt, and thus need to possibly reject it as in
|
||||
*
|
||||
* ICP state: Check_IPI
|
||||
*
|
||||
* If the CPPR is less favored, then we might be replacing
|
||||
* an interrupt, and thus need to possibly reject it.
|
||||
*
|
||||
* ICP State: IPI
|
||||
*
|
||||
* Besides rejecting any pending interrupts, we also
|
||||
* update XISR and pending_pri to mark IPI as pending.
|
||||
*
|
||||
* PAPR does not describe this state, but if the MFRR is being
|
||||
* made less favored than its earlier value, there might be
|
||||
* a previously-rejected interrupt needing to be resent.
|
||||
* Ideally, we would want to resend only if
|
||||
* prio(pending_interrupt) < mfrr &&
|
||||
* prio(pending_interrupt) < cppr
|
||||
* where pending interrupt is the one that was rejected. But
|
||||
* we don't have that state, so we simply trigger a resend
|
||||
* whenever the MFRR is made less favored.
|
||||
*/
|
||||
do {
|
||||
old_state = new_state = ACCESS_ONCE(icp->state);
|
||||
|
@ -629,13 +644,14 @@ static noinline int kvmppc_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
|
|||
resend = false;
|
||||
if (mfrr < new_state.cppr) {
|
||||
/* Reject a pending interrupt if not an IPI */
|
||||
if (mfrr <= new_state.pending_pri)
|
||||
if (mfrr <= new_state.pending_pri) {
|
||||
reject = new_state.xisr;
|
||||
new_state.pending_pri = mfrr;
|
||||
new_state.xisr = XICS_IPI;
|
||||
new_state.pending_pri = mfrr;
|
||||
new_state.xisr = XICS_IPI;
|
||||
}
|
||||
}
|
||||
|
||||
if (mfrr > old_state.mfrr && mfrr > new_state.cppr) {
|
||||
if (mfrr > old_state.mfrr) {
|
||||
resend = new_state.need_resend;
|
||||
new_state.need_resend = 0;
|
||||
}
|
||||
|
@ -789,7 +805,7 @@ static noinline int kvmppc_xics_rm_complete(struct kvm_vcpu *vcpu, u32 hcall)
|
|||
if (icp->rm_action & XICS_RM_KICK_VCPU)
|
||||
kvmppc_fast_vcpu_kick(icp->rm_kick_target);
|
||||
if (icp->rm_action & XICS_RM_CHECK_RESEND)
|
||||
icp_check_resend(xics, icp);
|
||||
icp_check_resend(xics, icp->rm_resend_icp);
|
||||
if (icp->rm_action & XICS_RM_REJECT)
|
||||
icp_deliver_irq(xics, icp, icp->rm_reject);
|
||||
if (icp->rm_action & XICS_RM_NOTIFY_EOI)
|
||||
|
|
|
@ -74,6 +74,7 @@ struct kvmppc_icp {
|
|||
#define XICS_RM_NOTIFY_EOI 0x8
|
||||
u32 rm_action;
|
||||
struct kvm_vcpu *rm_kick_target;
|
||||
struct kvmppc_icp *rm_resend_icp;
|
||||
u32 rm_reject;
|
||||
u32 rm_eoied_irq;
|
||||
|
||||
|
|
|
@ -299,14 +299,6 @@ void kvmppc_mmu_msr_notify(struct kvm_vcpu *vcpu, u32 old_msr)
|
|||
kvmppc_e500_recalc_shadow_pid(to_e500(vcpu));
|
||||
}
|
||||
|
||||
void kvmppc_core_load_host_debugstate(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
}
|
||||
|
||||
void kvmppc_core_load_guest_debugstate(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
}
|
||||
|
||||
static void kvmppc_core_vcpu_load_e500(struct kvm_vcpu *vcpu, int cpu)
|
||||
{
|
||||
kvmppc_booke_vcpu_load(vcpu, cpu);
|
||||
|
|
|
@ -527,18 +527,12 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
|
|||
r = 0;
|
||||
break;
|
||||
case KVM_CAP_PPC_RMA:
|
||||
r = hv_enabled;
|
||||
/* PPC970 requires an RMA */
|
||||
if (r && cpu_has_feature(CPU_FTR_ARCH_201))
|
||||
r = 2;
|
||||
r = 0;
|
||||
break;
|
||||
#endif
|
||||
case KVM_CAP_SYNC_MMU:
|
||||
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
|
||||
if (hv_enabled)
|
||||
r = cpu_has_feature(CPU_FTR_ARCH_206) ? 1 : 0;
|
||||
else
|
||||
r = 0;
|
||||
r = hv_enabled;
|
||||
#elif defined(KVM_ARCH_WANT_MMU_NOTIFIER)
|
||||
r = 1;
|
||||
#else
|
||||
|
|
|
@ -0,0 +1,32 @@
|
|||
#if !defined(_TRACE_KVM_BOOK3S_H)
|
||||
#define _TRACE_KVM_BOOK3S_H
|
||||
|
||||
/*
|
||||
* Common defines used by the trace macros in trace_pr.h and trace_hv.h
|
||||
*/
|
||||
|
||||
#define kvm_trace_symbol_exit \
|
||||
{0x100, "SYSTEM_RESET"}, \
|
||||
{0x200, "MACHINE_CHECK"}, \
|
||||
{0x300, "DATA_STORAGE"}, \
|
||||
{0x380, "DATA_SEGMENT"}, \
|
||||
{0x400, "INST_STORAGE"}, \
|
||||
{0x480, "INST_SEGMENT"}, \
|
||||
{0x500, "EXTERNAL"}, \
|
||||
{0x501, "EXTERNAL_LEVEL"}, \
|
||||
{0x502, "EXTERNAL_HV"}, \
|
||||
{0x600, "ALIGNMENT"}, \
|
||||
{0x700, "PROGRAM"}, \
|
||||
{0x800, "FP_UNAVAIL"}, \
|
||||
{0x900, "DECREMENTER"}, \
|
||||
{0x980, "HV_DECREMENTER"}, \
|
||||
{0xc00, "SYSCALL"}, \
|
||||
{0xd00, "TRACE"}, \
|
||||
{0xe00, "H_DATA_STORAGE"}, \
|
||||
{0xe20, "H_INST_STORAGE"}, \
|
||||
{0xe40, "H_EMUL_ASSIST"}, \
|
||||
{0xf00, "PERFMON"}, \
|
||||
{0xf20, "ALTIVEC"}, \
|
||||
{0xf40, "VSX"}
|
||||
|
||||
#endif
|
|
@ -151,6 +151,47 @@ TRACE_EVENT(kvm_booke206_ref_release,
|
|||
__entry->pfn, __entry->flags)
|
||||
);
|
||||
|
||||
#ifdef CONFIG_SPE_POSSIBLE
|
||||
#define kvm_trace_symbol_irqprio_spe \
|
||||
{BOOKE_IRQPRIO_SPE_UNAVAIL, "SPE_UNAVAIL"}, \
|
||||
{BOOKE_IRQPRIO_SPE_FP_DATA, "SPE_FP_DATA"}, \
|
||||
{BOOKE_IRQPRIO_SPE_FP_ROUND, "SPE_FP_ROUND"},
|
||||
#else
|
||||
#define kvm_trace_symbol_irqprio_spe
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_PPC_E500MC
|
||||
#define kvm_trace_symbol_irqprio_e500mc \
|
||||
{BOOKE_IRQPRIO_ALTIVEC_UNAVAIL, "ALTIVEC_UNAVAIL"}, \
|
||||
{BOOKE_IRQPRIO_ALTIVEC_ASSIST, "ALTIVEC_ASSIST"},
|
||||
#else
|
||||
#define kvm_trace_symbol_irqprio_e500mc
|
||||
#endif
|
||||
|
||||
#define kvm_trace_symbol_irqprio \
|
||||
kvm_trace_symbol_irqprio_spe \
|
||||
kvm_trace_symbol_irqprio_e500mc \
|
||||
{BOOKE_IRQPRIO_DATA_STORAGE, "DATA_STORAGE"}, \
|
||||
{BOOKE_IRQPRIO_INST_STORAGE, "INST_STORAGE"}, \
|
||||
{BOOKE_IRQPRIO_ALIGNMENT, "ALIGNMENT"}, \
|
||||
{BOOKE_IRQPRIO_PROGRAM, "PROGRAM"}, \
|
||||
{BOOKE_IRQPRIO_FP_UNAVAIL, "FP_UNAVAIL"}, \
|
||||
{BOOKE_IRQPRIO_SYSCALL, "SYSCALL"}, \
|
||||
{BOOKE_IRQPRIO_AP_UNAVAIL, "AP_UNAVAIL"}, \
|
||||
{BOOKE_IRQPRIO_DTLB_MISS, "DTLB_MISS"}, \
|
||||
{BOOKE_IRQPRIO_ITLB_MISS, "ITLB_MISS"}, \
|
||||
{BOOKE_IRQPRIO_MACHINE_CHECK, "MACHINE_CHECK"}, \
|
||||
{BOOKE_IRQPRIO_DEBUG, "DEBUG"}, \
|
||||
{BOOKE_IRQPRIO_CRITICAL, "CRITICAL"}, \
|
||||
{BOOKE_IRQPRIO_WATCHDOG, "WATCHDOG"}, \
|
||||
{BOOKE_IRQPRIO_EXTERNAL, "EXTERNAL"}, \
|
||||
{BOOKE_IRQPRIO_FIT, "FIT"}, \
|
||||
{BOOKE_IRQPRIO_DECREMENTER, "DECREMENTER"}, \
|
||||
{BOOKE_IRQPRIO_PERFORMANCE_MONITOR, "PERFORMANCE_MONITOR"}, \
|
||||
{BOOKE_IRQPRIO_EXTERNAL_LEVEL, "EXTERNAL_LEVEL"}, \
|
||||
{BOOKE_IRQPRIO_DBELL, "DBELL"}, \
|
||||
{BOOKE_IRQPRIO_DBELL_CRIT, "DBELL_CRIT"} \
|
||||
|
||||
TRACE_EVENT(kvm_booke_queue_irqprio,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int priority),
|
||||
TP_ARGS(vcpu, priority),
|
||||
|
@ -167,8 +208,10 @@ TRACE_EVENT(kvm_booke_queue_irqprio,
|
|||
__entry->pending = vcpu->arch.pending_exceptions;
|
||||
),
|
||||
|
||||
TP_printk("vcpu=%x prio=%x pending=%lx",
|
||||
__entry->cpu_nr, __entry->priority, __entry->pending)
|
||||
TP_printk("vcpu=%x prio=%s pending=%lx",
|
||||
__entry->cpu_nr,
|
||||
__print_symbolic(__entry->priority, kvm_trace_symbol_irqprio),
|
||||
__entry->pending)
|
||||
);
|
||||
|
||||
#endif
|
||||
|
|
|
@ -0,0 +1,477 @@
|
|||
#if !defined(_TRACE_KVM_HV_H) || defined(TRACE_HEADER_MULTI_READ)
|
||||
#define _TRACE_KVM_HV_H
|
||||
|
||||
#include <linux/tracepoint.h>
|
||||
#include "trace_book3s.h"
|
||||
#include <asm/hvcall.h>
|
||||
#include <asm/kvm_asm.h>
|
||||
|
||||
#undef TRACE_SYSTEM
|
||||
#define TRACE_SYSTEM kvm_hv
|
||||
#define TRACE_INCLUDE_PATH .
|
||||
#define TRACE_INCLUDE_FILE trace_hv
|
||||
|
||||
#define kvm_trace_symbol_hcall \
|
||||
{H_REMOVE, "H_REMOVE"}, \
|
||||
{H_ENTER, "H_ENTER"}, \
|
||||
{H_READ, "H_READ"}, \
|
||||
{H_CLEAR_MOD, "H_CLEAR_MOD"}, \
|
||||
{H_CLEAR_REF, "H_CLEAR_REF"}, \
|
||||
{H_PROTECT, "H_PROTECT"}, \
|
||||
{H_GET_TCE, "H_GET_TCE"}, \
|
||||
{H_PUT_TCE, "H_PUT_TCE"}, \
|
||||
{H_SET_SPRG0, "H_SET_SPRG0"}, \
|
||||
{H_SET_DABR, "H_SET_DABR"}, \
|
||||
{H_PAGE_INIT, "H_PAGE_INIT"}, \
|
||||
{H_SET_ASR, "H_SET_ASR"}, \
|
||||
{H_ASR_ON, "H_ASR_ON"}, \
|
||||
{H_ASR_OFF, "H_ASR_OFF"}, \
|
||||
{H_LOGICAL_CI_LOAD, "H_LOGICAL_CI_LOAD"}, \
|
||||
{H_LOGICAL_CI_STORE, "H_LOGICAL_CI_STORE"}, \
|
||||
{H_LOGICAL_CACHE_LOAD, "H_LOGICAL_CACHE_LOAD"}, \
|
||||
{H_LOGICAL_CACHE_STORE, "H_LOGICAL_CACHE_STORE"}, \
|
||||
{H_LOGICAL_ICBI, "H_LOGICAL_ICBI"}, \
|
||||
{H_LOGICAL_DCBF, "H_LOGICAL_DCBF"}, \
|
||||
{H_GET_TERM_CHAR, "H_GET_TERM_CHAR"}, \
|
||||
{H_PUT_TERM_CHAR, "H_PUT_TERM_CHAR"}, \
|
||||
{H_REAL_TO_LOGICAL, "H_REAL_TO_LOGICAL"}, \
|
||||
{H_HYPERVISOR_DATA, "H_HYPERVISOR_DATA"}, \
|
||||
{H_EOI, "H_EOI"}, \
|
||||
{H_CPPR, "H_CPPR"}, \
|
||||
{H_IPI, "H_IPI"}, \
|
||||
{H_IPOLL, "H_IPOLL"}, \
|
||||
{H_XIRR, "H_XIRR"}, \
|
||||
{H_PERFMON, "H_PERFMON"}, \
|
||||
{H_MIGRATE_DMA, "H_MIGRATE_DMA"}, \
|
||||
{H_REGISTER_VPA, "H_REGISTER_VPA"}, \
|
||||
{H_CEDE, "H_CEDE"}, \
|
||||
{H_CONFER, "H_CONFER"}, \
|
||||
{H_PROD, "H_PROD"}, \
|
||||
{H_GET_PPP, "H_GET_PPP"}, \
|
||||
{H_SET_PPP, "H_SET_PPP"}, \
|
||||
{H_PURR, "H_PURR"}, \
|
||||
{H_PIC, "H_PIC"}, \
|
||||
{H_REG_CRQ, "H_REG_CRQ"}, \
|
||||
{H_FREE_CRQ, "H_FREE_CRQ"}, \
|
||||
{H_VIO_SIGNAL, "H_VIO_SIGNAL"}, \
|
||||
{H_SEND_CRQ, "H_SEND_CRQ"}, \
|
||||
{H_COPY_RDMA, "H_COPY_RDMA"}, \
|
||||
{H_REGISTER_LOGICAL_LAN, "H_REGISTER_LOGICAL_LAN"}, \
|
||||
{H_FREE_LOGICAL_LAN, "H_FREE_LOGICAL_LAN"}, \
|
||||
{H_ADD_LOGICAL_LAN_BUFFER, "H_ADD_LOGICAL_LAN_BUFFER"}, \
|
||||
{H_SEND_LOGICAL_LAN, "H_SEND_LOGICAL_LAN"}, \
|
||||
{H_BULK_REMOVE, "H_BULK_REMOVE"}, \
|
||||
{H_MULTICAST_CTRL, "H_MULTICAST_CTRL"}, \
|
||||
{H_SET_XDABR, "H_SET_XDABR"}, \
|
||||
{H_STUFF_TCE, "H_STUFF_TCE"}, \
|
||||
{H_PUT_TCE_INDIRECT, "H_PUT_TCE_INDIRECT"}, \
|
||||
{H_CHANGE_LOGICAL_LAN_MAC, "H_CHANGE_LOGICAL_LAN_MAC"}, \
|
||||
{H_VTERM_PARTNER_INFO, "H_VTERM_PARTNER_INFO"}, \
|
||||
{H_REGISTER_VTERM, "H_REGISTER_VTERM"}, \
|
||||
{H_FREE_VTERM, "H_FREE_VTERM"}, \
|
||||
{H_RESET_EVENTS, "H_RESET_EVENTS"}, \
|
||||
{H_ALLOC_RESOURCE, "H_ALLOC_RESOURCE"}, \
|
||||
{H_FREE_RESOURCE, "H_FREE_RESOURCE"}, \
|
||||
{H_MODIFY_QP, "H_MODIFY_QP"}, \
|
||||
{H_QUERY_QP, "H_QUERY_QP"}, \
|
||||
{H_REREGISTER_PMR, "H_REREGISTER_PMR"}, \
|
||||
{H_REGISTER_SMR, "H_REGISTER_SMR"}, \
|
||||
{H_QUERY_MR, "H_QUERY_MR"}, \
|
||||
{H_QUERY_MW, "H_QUERY_MW"}, \
|
||||
{H_QUERY_HCA, "H_QUERY_HCA"}, \
|
||||
{H_QUERY_PORT, "H_QUERY_PORT"}, \
|
||||
{H_MODIFY_PORT, "H_MODIFY_PORT"}, \
|
||||
{H_DEFINE_AQP1, "H_DEFINE_AQP1"}, \
|
||||
{H_GET_TRACE_BUFFER, "H_GET_TRACE_BUFFER"}, \
|
||||
{H_DEFINE_AQP0, "H_DEFINE_AQP0"}, \
|
||||
{H_RESIZE_MR, "H_RESIZE_MR"}, \
|
||||
{H_ATTACH_MCQP, "H_ATTACH_MCQP"}, \
|
||||
{H_DETACH_MCQP, "H_DETACH_MCQP"}, \
|
||||
{H_CREATE_RPT, "H_CREATE_RPT"}, \
|
||||
{H_REMOVE_RPT, "H_REMOVE_RPT"}, \
|
||||
{H_REGISTER_RPAGES, "H_REGISTER_RPAGES"}, \
|
||||
{H_DISABLE_AND_GETC, "H_DISABLE_AND_GETC"}, \
|
||||
{H_ERROR_DATA, "H_ERROR_DATA"}, \
|
||||
{H_GET_HCA_INFO, "H_GET_HCA_INFO"}, \
|
||||
{H_GET_PERF_COUNT, "H_GET_PERF_COUNT"}, \
|
||||
{H_MANAGE_TRACE, "H_MANAGE_TRACE"}, \
|
||||
{H_FREE_LOGICAL_LAN_BUFFER, "H_FREE_LOGICAL_LAN_BUFFER"}, \
|
||||
{H_QUERY_INT_STATE, "H_QUERY_INT_STATE"}, \
|
||||
{H_POLL_PENDING, "H_POLL_PENDING"}, \
|
||||
{H_ILLAN_ATTRIBUTES, "H_ILLAN_ATTRIBUTES"}, \
|
||||
{H_MODIFY_HEA_QP, "H_MODIFY_HEA_QP"}, \
|
||||
{H_QUERY_HEA_QP, "H_QUERY_HEA_QP"}, \
|
||||
{H_QUERY_HEA, "H_QUERY_HEA"}, \
|
||||
{H_QUERY_HEA_PORT, "H_QUERY_HEA_PORT"}, \
|
||||
{H_MODIFY_HEA_PORT, "H_MODIFY_HEA_PORT"}, \
|
||||
{H_REG_BCMC, "H_REG_BCMC"}, \
|
||||
{H_DEREG_BCMC, "H_DEREG_BCMC"}, \
|
||||
{H_REGISTER_HEA_RPAGES, "H_REGISTER_HEA_RPAGES"}, \
|
||||
{H_DISABLE_AND_GET_HEA, "H_DISABLE_AND_GET_HEA"}, \
|
||||
{H_GET_HEA_INFO, "H_GET_HEA_INFO"}, \
|
||||
{H_ALLOC_HEA_RESOURCE, "H_ALLOC_HEA_RESOURCE"}, \
|
||||
{H_ADD_CONN, "H_ADD_CONN"}, \
|
||||
{H_DEL_CONN, "H_DEL_CONN"}, \
|
||||
{H_JOIN, "H_JOIN"}, \
|
||||
{H_VASI_STATE, "H_VASI_STATE"}, \
|
||||
{H_ENABLE_CRQ, "H_ENABLE_CRQ"}, \
|
||||
{H_GET_EM_PARMS, "H_GET_EM_PARMS"}, \
|
||||
{H_SET_MPP, "H_SET_MPP"}, \
|
||||
{H_GET_MPP, "H_GET_MPP"}, \
|
||||
{H_HOME_NODE_ASSOCIATIVITY, "H_HOME_NODE_ASSOCIATIVITY"}, \
|
||||
{H_BEST_ENERGY, "H_BEST_ENERGY"}, \
|
||||
{H_XIRR_X, "H_XIRR_X"}, \
|
||||
{H_RANDOM, "H_RANDOM"}, \
|
||||
{H_COP, "H_COP"}, \
|
||||
{H_GET_MPP_X, "H_GET_MPP_X"}, \
|
||||
{H_SET_MODE, "H_SET_MODE"}, \
|
||||
{H_RTAS, "H_RTAS"}
|
||||
|
||||
#define kvm_trace_symbol_kvmret \
|
||||
{RESUME_GUEST, "RESUME_GUEST"}, \
|
||||
{RESUME_GUEST_NV, "RESUME_GUEST_NV"}, \
|
||||
{RESUME_HOST, "RESUME_HOST"}, \
|
||||
{RESUME_HOST_NV, "RESUME_HOST_NV"}
|
||||
|
||||
#define kvm_trace_symbol_hcall_rc \
|
||||
{H_SUCCESS, "H_SUCCESS"}, \
|
||||
{H_BUSY, "H_BUSY"}, \
|
||||
{H_CLOSED, "H_CLOSED"}, \
|
||||
{H_NOT_AVAILABLE, "H_NOT_AVAILABLE"}, \
|
||||
{H_CONSTRAINED, "H_CONSTRAINED"}, \
|
||||
{H_PARTIAL, "H_PARTIAL"}, \
|
||||
{H_IN_PROGRESS, "H_IN_PROGRESS"}, \
|
||||
{H_PAGE_REGISTERED, "H_PAGE_REGISTERED"}, \
|
||||
{H_PARTIAL_STORE, "H_PARTIAL_STORE"}, \
|
||||
{H_PENDING, "H_PENDING"}, \
|
||||
{H_CONTINUE, "H_CONTINUE"}, \
|
||||
{H_LONG_BUSY_START_RANGE, "H_LONG_BUSY_START_RANGE"}, \
|
||||
{H_LONG_BUSY_ORDER_1_MSEC, "H_LONG_BUSY_ORDER_1_MSEC"}, \
|
||||
{H_LONG_BUSY_ORDER_10_MSEC, "H_LONG_BUSY_ORDER_10_MSEC"}, \
|
||||
{H_LONG_BUSY_ORDER_100_MSEC, "H_LONG_BUSY_ORDER_100_MSEC"}, \
|
||||
{H_LONG_BUSY_ORDER_1_SEC, "H_LONG_BUSY_ORDER_1_SEC"}, \
|
||||
{H_LONG_BUSY_ORDER_10_SEC, "H_LONG_BUSY_ORDER_10_SEC"}, \
|
||||
{H_LONG_BUSY_ORDER_100_SEC, "H_LONG_BUSY_ORDER_100_SEC"}, \
|
||||
{H_LONG_BUSY_END_RANGE, "H_LONG_BUSY_END_RANGE"}, \
|
||||
{H_TOO_HARD, "H_TOO_HARD"}, \
|
||||
{H_HARDWARE, "H_HARDWARE"}, \
|
||||
{H_FUNCTION, "H_FUNCTION"}, \
|
||||
{H_PRIVILEGE, "H_PRIVILEGE"}, \
|
||||
{H_PARAMETER, "H_PARAMETER"}, \
|
||||
{H_BAD_MODE, "H_BAD_MODE"}, \
|
||||
{H_PTEG_FULL, "H_PTEG_FULL"}, \
|
||||
{H_NOT_FOUND, "H_NOT_FOUND"}, \
|
||||
{H_RESERVED_DABR, "H_RESERVED_DABR"}, \
|
||||
{H_NO_MEM, "H_NO_MEM"}, \
|
||||
{H_AUTHORITY, "H_AUTHORITY"}, \
|
||||
{H_PERMISSION, "H_PERMISSION"}, \
|
||||
{H_DROPPED, "H_DROPPED"}, \
|
||||
{H_SOURCE_PARM, "H_SOURCE_PARM"}, \
|
||||
{H_DEST_PARM, "H_DEST_PARM"}, \
|
||||
{H_REMOTE_PARM, "H_REMOTE_PARM"}, \
|
||||
{H_RESOURCE, "H_RESOURCE"}, \
|
||||
{H_ADAPTER_PARM, "H_ADAPTER_PARM"}, \
|
||||
{H_RH_PARM, "H_RH_PARM"}, \
|
||||
{H_RCQ_PARM, "H_RCQ_PARM"}, \
|
||||
{H_SCQ_PARM, "H_SCQ_PARM"}, \
|
||||
{H_EQ_PARM, "H_EQ_PARM"}, \
|
||||
{H_RT_PARM, "H_RT_PARM"}, \
|
||||
{H_ST_PARM, "H_ST_PARM"}, \
|
||||
{H_SIGT_PARM, "H_SIGT_PARM"}, \
|
||||
{H_TOKEN_PARM, "H_TOKEN_PARM"}, \
|
||||
{H_MLENGTH_PARM, "H_MLENGTH_PARM"}, \
|
||||
{H_MEM_PARM, "H_MEM_PARM"}, \
|
||||
{H_MEM_ACCESS_PARM, "H_MEM_ACCESS_PARM"}, \
|
||||
{H_ATTR_PARM, "H_ATTR_PARM"}, \
|
||||
{H_PORT_PARM, "H_PORT_PARM"}, \
|
||||
{H_MCG_PARM, "H_MCG_PARM"}, \
|
||||
{H_VL_PARM, "H_VL_PARM"}, \
|
||||
{H_TSIZE_PARM, "H_TSIZE_PARM"}, \
|
||||
{H_TRACE_PARM, "H_TRACE_PARM"}, \
|
||||
{H_MASK_PARM, "H_MASK_PARM"}, \
|
||||
{H_MCG_FULL, "H_MCG_FULL"}, \
|
||||
{H_ALIAS_EXIST, "H_ALIAS_EXIST"}, \
|
||||
{H_P_COUNTER, "H_P_COUNTER"}, \
|
||||
{H_TABLE_FULL, "H_TABLE_FULL"}, \
|
||||
{H_ALT_TABLE, "H_ALT_TABLE"}, \
|
||||
{H_MR_CONDITION, "H_MR_CONDITION"}, \
|
||||
{H_NOT_ENOUGH_RESOURCES, "H_NOT_ENOUGH_RESOURCES"}, \
|
||||
{H_R_STATE, "H_R_STATE"}, \
|
||||
{H_RESCINDED, "H_RESCINDED"}, \
|
||||
{H_P2, "H_P2"}, \
|
||||
{H_P3, "H_P3"}, \
|
||||
{H_P4, "H_P4"}, \
|
||||
{H_P5, "H_P5"}, \
|
||||
{H_P6, "H_P6"}, \
|
||||
{H_P7, "H_P7"}, \
|
||||
{H_P8, "H_P8"}, \
|
||||
{H_P9, "H_P9"}, \
|
||||
{H_TOO_BIG, "H_TOO_BIG"}, \
|
||||
{H_OVERLAP, "H_OVERLAP"}, \
|
||||
{H_INTERRUPT, "H_INTERRUPT"}, \
|
||||
{H_BAD_DATA, "H_BAD_DATA"}, \
|
||||
{H_NOT_ACTIVE, "H_NOT_ACTIVE"}, \
|
||||
{H_SG_LIST, "H_SG_LIST"}, \
|
||||
{H_OP_MODE, "H_OP_MODE"}, \
|
||||
{H_COP_HW, "H_COP_HW"}, \
|
||||
{H_UNSUPPORTED_FLAG_START, "H_UNSUPPORTED_FLAG_START"}, \
|
||||
{H_UNSUPPORTED_FLAG_END, "H_UNSUPPORTED_FLAG_END"}, \
|
||||
{H_MULTI_THREADS_ACTIVE, "H_MULTI_THREADS_ACTIVE"}, \
|
||||
{H_OUTSTANDING_COP_OPS, "H_OUTSTANDING_COP_OPS"}
|
||||
|
||||
TRACE_EVENT(kvm_guest_enter,
    TP_PROTO(struct kvm_vcpu *vcpu),
    TP_ARGS(vcpu),

    TP_STRUCT__entry(
        __field(int, vcpu_id)
        __field(unsigned long, pc)
        __field(unsigned long, pending_exceptions)
        __field(u8, ceded)
    ),

    TP_fast_assign(
        __entry->vcpu_id = vcpu->vcpu_id;
        __entry->pc = kvmppc_get_pc(vcpu);
        __entry->ceded = vcpu->arch.ceded;
        __entry->pending_exceptions = vcpu->arch.pending_exceptions;
    ),

    TP_printk("VCPU %d: pc=0x%lx pexcp=0x%lx ceded=%d",
        __entry->vcpu_id,
        __entry->pc,
        __entry->pending_exceptions, __entry->ceded)
);

TRACE_EVENT(kvm_guest_exit,
    TP_PROTO(struct kvm_vcpu *vcpu),
    TP_ARGS(vcpu),

    TP_STRUCT__entry(
        __field(int, vcpu_id)
        __field(int, trap)
        __field(unsigned long, pc)
        __field(unsigned long, msr)
        __field(u8, ceded)
    ),

    TP_fast_assign(
        __entry->vcpu_id = vcpu->vcpu_id;
        __entry->trap = vcpu->arch.trap;
        __entry->ceded = vcpu->arch.ceded;
        __entry->pc = kvmppc_get_pc(vcpu);
        __entry->msr = vcpu->arch.shregs.msr;
    ),

    TP_printk("VCPU %d: trap=%s pc=0x%lx msr=0x%lx, ceded=%d",
        __entry->vcpu_id,
        __print_symbolic(__entry->trap, kvm_trace_symbol_exit),
        __entry->pc, __entry->msr, __entry->ceded
    )
);

TRACE_EVENT(kvm_page_fault_enter,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu, unsigned long *hptep,
|
||||
struct kvm_memory_slot *memslot, unsigned long ea,
|
||||
unsigned long dsisr),
|
||||
|
||||
TP_ARGS(vcpu, hptep, memslot, ea, dsisr),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, vcpu_id)
|
||||
__field(unsigned long, hpte_v)
|
||||
__field(unsigned long, hpte_r)
|
||||
__field(unsigned long, gpte_r)
|
||||
__field(unsigned long, ea)
|
||||
__field(u64, base_gfn)
|
||||
__field(u32, slot_flags)
|
||||
__field(u32, dsisr)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu_id = vcpu->vcpu_id;
|
||||
__entry->hpte_v = hptep[0];
|
||||
__entry->hpte_r = hptep[1];
|
||||
__entry->gpte_r = hptep[2];
|
||||
__entry->ea = ea;
|
||||
__entry->dsisr = dsisr;
|
||||
__entry->base_gfn = memslot ? memslot->base_gfn : -1UL;
|
||||
__entry->slot_flags = memslot ? memslot->flags : 0;
|
||||
),
|
||||
|
||||
TP_printk("VCPU %d: hpte=0x%lx:0x%lx guest=0x%lx ea=0x%lx,%x slot=0x%llx,0x%x",
|
||||
__entry->vcpu_id,
|
||||
__entry->hpte_v, __entry->hpte_r, __entry->gpte_r,
|
||||
__entry->ea, __entry->dsisr,
|
||||
__entry->base_gfn, __entry->slot_flags)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvm_page_fault_exit,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu, unsigned long *hptep, long ret),
|
||||
|
||||
TP_ARGS(vcpu, hptep, ret),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, vcpu_id)
|
||||
__field(unsigned long, hpte_v)
|
||||
__field(unsigned long, hpte_r)
|
||||
__field(long, ret)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu_id = vcpu->vcpu_id;
|
||||
__entry->hpte_v = hptep[0];
|
||||
__entry->hpte_r = hptep[1];
|
||||
__entry->ret = ret;
|
||||
),
|
||||
|
||||
TP_printk("VCPU %d: hpte=0x%lx:0x%lx ret=0x%lx",
|
||||
__entry->vcpu_id,
|
||||
__entry->hpte_v, __entry->hpte_r, __entry->ret)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvm_hcall_enter,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu),
|
||||
|
||||
TP_ARGS(vcpu),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, vcpu_id)
|
||||
__field(unsigned long, req)
|
||||
__field(unsigned long, gpr4)
|
||||
__field(unsigned long, gpr5)
|
||||
__field(unsigned long, gpr6)
|
||||
__field(unsigned long, gpr7)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu_id = vcpu->vcpu_id;
|
||||
__entry->req = kvmppc_get_gpr(vcpu, 3);
|
||||
__entry->gpr4 = kvmppc_get_gpr(vcpu, 4);
|
||||
__entry->gpr5 = kvmppc_get_gpr(vcpu, 5);
|
||||
__entry->gpr6 = kvmppc_get_gpr(vcpu, 6);
|
||||
__entry->gpr7 = kvmppc_get_gpr(vcpu, 7);
|
||||
),
|
||||
|
||||
TP_printk("VCPU %d: hcall=%s GPR4-7=0x%lx,0x%lx,0x%lx,0x%lx",
|
||||
__entry->vcpu_id,
|
||||
__print_symbolic(__entry->req, kvm_trace_symbol_hcall),
|
||||
__entry->gpr4, __entry->gpr5, __entry->gpr6, __entry->gpr7)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvm_hcall_exit,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu, int ret),
|
||||
|
||||
TP_ARGS(vcpu, ret),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, vcpu_id)
|
||||
__field(unsigned long, ret)
|
||||
__field(unsigned long, hcall_rc)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu_id = vcpu->vcpu_id;
|
||||
__entry->ret = ret;
|
||||
__entry->hcall_rc = kvmppc_get_gpr(vcpu, 3);
|
||||
),
|
||||
|
||||
TP_printk("VCPU %d: ret=%s hcall_rc=%s",
|
||||
__entry->vcpu_id,
|
||||
__print_symbolic(__entry->ret, kvm_trace_symbol_kvmret),
|
||||
__print_symbolic(__entry->ret & RESUME_FLAG_HOST ?
|
||||
H_TOO_HARD : __entry->hcall_rc,
|
||||
kvm_trace_symbol_hcall_rc))
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvmppc_run_core,
|
||||
TP_PROTO(struct kvmppc_vcore *vc, int where),
|
||||
|
||||
TP_ARGS(vc, where),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, n_runnable)
|
||||
__field(int, runner_vcpu)
|
||||
__field(int, where)
|
||||
__field(pid_t, tgid)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->runner_vcpu = vc->runner->vcpu_id;
|
||||
__entry->n_runnable = vc->n_runnable;
|
||||
__entry->where = where;
|
||||
__entry->tgid = current->tgid;
|
||||
),
|
||||
|
||||
TP_printk("%s runner_vcpu==%d runnable=%d tgid=%d",
|
||||
__entry->where ? "Exit" : "Enter",
|
||||
__entry->runner_vcpu, __entry->n_runnable, __entry->tgid)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvmppc_vcore_blocked,
|
||||
TP_PROTO(struct kvmppc_vcore *vc, int where),
|
||||
|
||||
TP_ARGS(vc, where),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, n_runnable)
|
||||
__field(int, runner_vcpu)
|
||||
__field(int, where)
|
||||
__field(pid_t, tgid)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->runner_vcpu = vc->runner->vcpu_id;
|
||||
__entry->n_runnable = vc->n_runnable;
|
||||
__entry->where = where;
|
||||
__entry->tgid = current->tgid;
|
||||
),
|
||||
|
||||
TP_printk("%s runner_vcpu=%d runnable=%d tgid=%d",
|
||||
__entry->where ? "Exit" : "Enter",
|
||||
__entry->runner_vcpu, __entry->n_runnable, __entry->tgid)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvmppc_run_vcpu_enter,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu),
|
||||
|
||||
TP_ARGS(vcpu),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, vcpu_id)
|
||||
__field(pid_t, tgid)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu_id = vcpu->vcpu_id;
|
||||
__entry->tgid = current->tgid;
|
||||
),
|
||||
|
||||
TP_printk("VCPU %d: tgid=%d", __entry->vcpu_id, __entry->tgid)
|
||||
);
|
||||
|
||||
TRACE_EVENT(kvmppc_run_vcpu_exit,
|
||||
TP_PROTO(struct kvm_vcpu *vcpu, struct kvm_run *run),
|
||||
|
||||
TP_ARGS(vcpu, run),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, vcpu_id)
|
||||
__field(int, exit)
|
||||
__field(int, ret)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->vcpu_id = vcpu->vcpu_id;
|
||||
__entry->exit = run->exit_reason;
|
||||
__entry->ret = vcpu->arch.ret;
|
||||
),
|
||||
|
||||
TP_printk("VCPU %d: exit=%d, ret=%d",
|
||||
__entry->vcpu_id, __entry->exit, __entry->ret)
|
||||
);
|
||||
|
||||
#endif /* _TRACE_KVM_HV_H */

/* This part must be outside protection */
#include <trace/define_trace.h>

@@ -3,36 +3,13 @@
#define _TRACE_KVM_PR_H

#include <linux/tracepoint.h>
#include "trace_book3s.h"

#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm_pr
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE trace_pr

#define kvm_trace_symbol_exit \
{0x100, "SYSTEM_RESET"}, \
{0x200, "MACHINE_CHECK"}, \
{0x300, "DATA_STORAGE"}, \
{0x380, "DATA_SEGMENT"}, \
{0x400, "INST_STORAGE"}, \
{0x480, "INST_SEGMENT"}, \
{0x500, "EXTERNAL"}, \
{0x501, "EXTERNAL_LEVEL"}, \
{0x502, "EXTERNAL_HV"}, \
{0x600, "ALIGNMENT"}, \
{0x700, "PROGRAM"}, \
{0x800, "FP_UNAVAIL"}, \
{0x900, "DECREMENTER"}, \
{0x980, "HV_DECREMENTER"}, \
{0xc00, "SYSCALL"}, \
{0xd00, "TRACE"}, \
{0xe00, "H_DATA_STORAGE"}, \
{0xe20, "H_INST_STORAGE"}, \
{0xe40, "H_EMUL_ASSIST"}, \
{0xf00, "PERFMON"}, \
{0xf20, "ALTIVEC"}, \
{0xf40, "VSX"}

TRACE_EVENT(kvm_book3s_reenter,
    TP_PROTO(int r, struct kvm_vcpu *vcpu),
    TP_ARGS(r, vcpu),

@@ -123,7 +123,7 @@ struct kvm_s390_sie_block {
#define ICPT_PARTEXEC 0x38
#define ICPT_IOINST 0x40
    __u8 icptcode; /* 0x0050 */
    __u8 reserved51; /* 0x0051 */
    __u8 icptstatus; /* 0x0051 */
    __u16 ihcpu; /* 0x0052 */
    __u8 reserved54[2]; /* 0x0054 */
    __u16 ipa; /* 0x0056 */

@ -226,10 +226,17 @@ struct kvm_vcpu_stat {
|
|||
u32 instruction_sigp_sense_running;
|
||||
u32 instruction_sigp_external_call;
|
||||
u32 instruction_sigp_emergency;
|
||||
u32 instruction_sigp_cond_emergency;
|
||||
u32 instruction_sigp_start;
|
||||
u32 instruction_sigp_stop;
|
||||
u32 instruction_sigp_stop_store_status;
|
||||
u32 instruction_sigp_store_status;
|
||||
u32 instruction_sigp_arch;
|
||||
u32 instruction_sigp_prefix;
|
||||
u32 instruction_sigp_restart;
|
||||
u32 instruction_sigp_init_cpu_reset;
|
||||
u32 instruction_sigp_cpu_reset;
|
||||
u32 instruction_sigp_unknown;
|
||||
u32 diagnose_10;
|
||||
u32 diagnose_44;
|
||||
u32 diagnose_9c;
|
||||
|
@ -288,6 +295,79 @@ struct kvm_vcpu_stat {
|
|||
#define PGM_PER 0x80
|
||||
#define PGM_CRYPTO_OPERATION 0x119
|
||||
|
||||
/* irq types in order of priority */
|
||||
enum irq_types {
|
||||
IRQ_PEND_MCHK_EX = 0,
|
||||
IRQ_PEND_SVC,
|
||||
IRQ_PEND_PROG,
|
||||
IRQ_PEND_MCHK_REP,
|
||||
IRQ_PEND_EXT_IRQ_KEY,
|
||||
IRQ_PEND_EXT_MALFUNC,
|
||||
IRQ_PEND_EXT_EMERGENCY,
|
||||
IRQ_PEND_EXT_EXTERNAL,
|
||||
IRQ_PEND_EXT_CLOCK_COMP,
|
||||
IRQ_PEND_EXT_CPU_TIMER,
|
||||
IRQ_PEND_EXT_TIMING,
|
||||
IRQ_PEND_EXT_SERVICE,
|
||||
IRQ_PEND_EXT_HOST,
|
||||
IRQ_PEND_PFAULT_INIT,
|
||||
IRQ_PEND_PFAULT_DONE,
|
||||
IRQ_PEND_VIRTIO,
|
||||
IRQ_PEND_IO_ISC_0,
|
||||
IRQ_PEND_IO_ISC_1,
|
||||
IRQ_PEND_IO_ISC_2,
|
||||
IRQ_PEND_IO_ISC_3,
|
||||
IRQ_PEND_IO_ISC_4,
|
||||
IRQ_PEND_IO_ISC_5,
|
||||
IRQ_PEND_IO_ISC_6,
|
||||
IRQ_PEND_IO_ISC_7,
|
||||
IRQ_PEND_SIGP_STOP,
|
||||
IRQ_PEND_RESTART,
|
||||
IRQ_PEND_SET_PREFIX,
|
||||
IRQ_PEND_COUNT
|
||||
};
|
||||
|
||||
/*
|
||||
* Repressible (non-floating) machine check interrupts
|
||||
* subclass bits in MCIC
|
||||
*/
|
||||
#define MCHK_EXTD_BIT 58
|
||||
#define MCHK_DEGR_BIT 56
|
||||
#define MCHK_WARN_BIT 55
|
||||
#define MCHK_REP_MASK ((1UL << MCHK_DEGR_BIT) | \
|
||||
(1UL << MCHK_EXTD_BIT) | \
|
||||
(1UL << MCHK_WARN_BIT))
|
||||
|
||||
/* Exigent machine check interrupts subclass bits in MCIC */
|
||||
#define MCHK_SD_BIT 63
|
||||
#define MCHK_PD_BIT 62
|
||||
#define MCHK_EX_MASK ((1UL << MCHK_SD_BIT) | (1UL << MCHK_PD_BIT))
|
||||
|
||||
#define IRQ_PEND_EXT_MASK ((1UL << IRQ_PEND_EXT_IRQ_KEY) | \
|
||||
(1UL << IRQ_PEND_EXT_CLOCK_COMP) | \
|
||||
(1UL << IRQ_PEND_EXT_CPU_TIMER) | \
|
||||
(1UL << IRQ_PEND_EXT_MALFUNC) | \
|
||||
(1UL << IRQ_PEND_EXT_EMERGENCY) | \
|
||||
(1UL << IRQ_PEND_EXT_EXTERNAL) | \
|
||||
(1UL << IRQ_PEND_EXT_TIMING) | \
|
||||
(1UL << IRQ_PEND_EXT_HOST) | \
|
||||
(1UL << IRQ_PEND_EXT_SERVICE) | \
|
||||
(1UL << IRQ_PEND_VIRTIO) | \
|
||||
(1UL << IRQ_PEND_PFAULT_INIT) | \
|
||||
(1UL << IRQ_PEND_PFAULT_DONE))
|
||||
|
||||
#define IRQ_PEND_IO_MASK ((1UL << IRQ_PEND_IO_ISC_0) | \
|
||||
(1UL << IRQ_PEND_IO_ISC_1) | \
|
||||
(1UL << IRQ_PEND_IO_ISC_2) | \
|
||||
(1UL << IRQ_PEND_IO_ISC_3) | \
|
||||
(1UL << IRQ_PEND_IO_ISC_4) | \
|
||||
(1UL << IRQ_PEND_IO_ISC_5) | \
|
||||
(1UL << IRQ_PEND_IO_ISC_6) | \
|
||||
(1UL << IRQ_PEND_IO_ISC_7))
|
||||
|
||||
#define IRQ_PEND_MCHK_MASK ((1UL << IRQ_PEND_MCHK_REP) | \
|
||||
(1UL << IRQ_PEND_MCHK_EX))
|
||||
|
||||
struct kvm_s390_interrupt_info {
|
||||
struct list_head list;
|
||||
u64 type;
|
||||
|
@ -306,14 +386,25 @@ struct kvm_s390_interrupt_info {
|
|||
#define ACTION_STORE_ON_STOP (1<<0)
|
||||
#define ACTION_STOP_ON_STOP (1<<1)
|
||||
|
||||
struct kvm_s390_irq_payload {
|
||||
struct kvm_s390_io_info io;
|
||||
struct kvm_s390_ext_info ext;
|
||||
struct kvm_s390_pgm_info pgm;
|
||||
struct kvm_s390_emerg_info emerg;
|
||||
struct kvm_s390_extcall_info extcall;
|
||||
struct kvm_s390_prefix_info prefix;
|
||||
struct kvm_s390_mchk_info mchk;
|
||||
};
|
||||
|
||||
struct kvm_s390_local_interrupt {
|
||||
spinlock_t lock;
|
||||
struct list_head list;
|
||||
atomic_t active;
|
||||
struct kvm_s390_float_interrupt *float_int;
|
||||
wait_queue_head_t *wq;
|
||||
atomic_t *cpuflags;
|
||||
unsigned int action_bits;
|
||||
DECLARE_BITMAP(sigp_emerg_pending, KVM_MAX_VCPUS);
|
||||
struct kvm_s390_irq_payload irq;
|
||||
unsigned long pending_irqs;
|
||||
};
|
||||
|
||||
struct kvm_s390_float_interrupt {
|
||||
|
@ -434,6 +525,8 @@ struct kvm_arch{
|
|||
int user_cpu_state_ctrl;
|
||||
struct s390_io_adapter *adapters[MAX_S390_IO_ADAPTERS];
|
||||
wait_queue_head_t ipte_wq;
|
||||
int ipte_lock_count;
|
||||
struct mutex ipte_mutex;
|
||||
spinlock_t start_stop_lock;
|
||||
struct kvm_s390_crypto crypto;
|
||||
};
|
||||
|
|
|
@ -24,6 +24,7 @@ void page_table_free_rcu(struct mmu_gather *, unsigned long *, unsigned long);
|
|||
|
||||
int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
|
||||
unsigned long key, bool nq);
|
||||
unsigned long get_guest_storage_key(struct mm_struct *mm, unsigned long addr);
|
||||
|
||||
static inline void clear_table(unsigned long *s, unsigned long val, size_t n)
|
||||
{
|
||||
|
|
|
@ -10,6 +10,7 @@
|
|||
#define SIGP_RESTART 6
|
||||
#define SIGP_STOP_AND_STORE_STATUS 9
|
||||
#define SIGP_INITIAL_CPU_RESET 11
|
||||
#define SIGP_CPU_RESET 12
|
||||
#define SIGP_SET_PREFIX 13
|
||||
#define SIGP_STORE_STATUS_AT_ADDRESS 14
|
||||
#define SIGP_SET_ARCHITECTURE 18
|
||||
|
|
|
@ -207,8 +207,6 @@ union raddress {
|
|||
unsigned long pfra : 52; /* Page-Frame Real Address */
|
||||
};
|
||||
|
||||
static int ipte_lock_count;
|
||||
static DEFINE_MUTEX(ipte_mutex);
|
||||
|
||||
int ipte_lock_held(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
|
@ -216,47 +214,51 @@ int ipte_lock_held(struct kvm_vcpu *vcpu)
|
|||
|
||||
if (vcpu->arch.sie_block->eca & 1)
|
||||
return ic->kh != 0;
|
||||
return ipte_lock_count != 0;
|
||||
return vcpu->kvm->arch.ipte_lock_count != 0;
|
||||
}
|
||||
|
||||
static void ipte_lock_simple(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
union ipte_control old, new, *ic;
|
||||
|
||||
mutex_lock(&ipte_mutex);
|
||||
ipte_lock_count++;
|
||||
if (ipte_lock_count > 1)
|
||||
mutex_lock(&vcpu->kvm->arch.ipte_mutex);
|
||||
vcpu->kvm->arch.ipte_lock_count++;
|
||||
if (vcpu->kvm->arch.ipte_lock_count > 1)
|
||||
goto out;
|
||||
ic = &vcpu->kvm->arch.sca->ipte_control;
|
||||
do {
|
||||
old = ACCESS_ONCE(*ic);
|
||||
old = *ic;
|
||||
barrier();
|
||||
while (old.k) {
|
||||
cond_resched();
|
||||
old = ACCESS_ONCE(*ic);
|
||||
old = *ic;
|
||||
barrier();
|
||||
}
|
||||
new = old;
|
||||
new.k = 1;
|
||||
} while (cmpxchg(&ic->val, old.val, new.val) != old.val);
|
||||
out:
|
||||
mutex_unlock(&ipte_mutex);
|
||||
mutex_unlock(&vcpu->kvm->arch.ipte_mutex);
|
||||
}
|
||||
|
||||
static void ipte_unlock_simple(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
union ipte_control old, new, *ic;
|
||||
|
||||
mutex_lock(&ipte_mutex);
|
||||
ipte_lock_count--;
|
||||
if (ipte_lock_count)
|
||||
mutex_lock(&vcpu->kvm->arch.ipte_mutex);
|
||||
vcpu->kvm->arch.ipte_lock_count--;
|
||||
if (vcpu->kvm->arch.ipte_lock_count)
|
||||
goto out;
|
||||
ic = &vcpu->kvm->arch.sca->ipte_control;
|
||||
do {
|
||||
new = old = ACCESS_ONCE(*ic);
|
||||
old = *ic;
|
||||
barrier();
|
||||
new = old;
|
||||
new.k = 0;
|
||||
} while (cmpxchg(&ic->val, old.val, new.val) != old.val);
|
||||
wake_up(&vcpu->kvm->arch.ipte_wq);
|
||||
out:
|
||||
mutex_unlock(&ipte_mutex);
|
||||
mutex_unlock(&vcpu->kvm->arch.ipte_mutex);
|
||||
}
|
||||
|
||||
static void ipte_lock_siif(struct kvm_vcpu *vcpu)
|
||||
|
@ -265,10 +267,12 @@ static void ipte_lock_siif(struct kvm_vcpu *vcpu)
|
|||
|
||||
ic = &vcpu->kvm->arch.sca->ipte_control;
|
||||
do {
|
||||
old = ACCESS_ONCE(*ic);
|
||||
old = *ic;
|
||||
barrier();
|
||||
while (old.kg) {
|
||||
cond_resched();
|
||||
old = ACCESS_ONCE(*ic);
|
||||
old = *ic;
|
||||
barrier();
|
||||
}
|
||||
new = old;
|
||||
new.k = 1;
|
||||
|
@ -282,7 +286,9 @@ static void ipte_unlock_siif(struct kvm_vcpu *vcpu)
|
|||
|
||||
ic = &vcpu->kvm->arch.sca->ipte_control;
|
||||
do {
|
||||
new = old = ACCESS_ONCE(*ic);
|
||||
old = *ic;
|
||||
barrier();
|
||||
new = old;
|
||||
new.kh--;
|
||||
if (!new.kh)
|
||||
new.k = 0;
|
||||
|
|
|
@@ -38,6 +38,19 @@ static const intercept_handler_t instruction_handlers[256] = {
    [0xeb] = kvm_s390_handle_eb,
};

void kvm_s390_rewind_psw(struct kvm_vcpu *vcpu, int ilc)
{
    struct kvm_s390_sie_block *sie_block = vcpu->arch.sie_block;

    /* Use the length of the EXECUTE instruction if necessary */
    if (sie_block->icptstatus & 1) {
        ilc = (sie_block->icptstatus >> 4) & 0x6;
        if (!ilc)
            ilc = 4;
    }
    sie_block->gpsw.addr = __rewind_psw(sie_block->gpsw, ilc);
}

static int handle_noop(struct kvm_vcpu *vcpu)
{
    switch (vcpu->arch.sie_block->icptcode) {

@ -244,7 +257,7 @@ static int handle_instruction_and_prog(struct kvm_vcpu *vcpu)
|
|||
static int handle_external_interrupt(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
u16 eic = vcpu->arch.sie_block->eic;
|
||||
struct kvm_s390_interrupt irq;
|
||||
struct kvm_s390_irq irq;
|
||||
psw_t newpsw;
|
||||
int rc;
|
||||
|
||||
|
@ -269,7 +282,7 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
|
|||
if (kvm_s390_si_ext_call_pending(vcpu))
|
||||
return 0;
|
||||
irq.type = KVM_S390_INT_EXTERNAL_CALL;
|
||||
irq.parm = vcpu->arch.sie_block->extcpuaddr;
|
||||
irq.u.extcall.code = vcpu->arch.sie_block->extcpuaddr;
|
||||
break;
|
||||
default:
|
||||
return -EOPNOTSUPP;
|
||||
|
@ -288,7 +301,6 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
|
|||
*/
|
||||
static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
psw_t *psw = &vcpu->arch.sie_block->gpsw;
|
||||
unsigned long srcaddr, dstaddr;
|
||||
int reg1, reg2, rc;
|
||||
|
||||
|
@ -310,7 +322,7 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
|
|||
if (rc != 0)
|
||||
return rc;
|
||||
|
||||
psw->addr = __rewind_psw(*psw, 4);
|
||||
kvm_s390_rewind_psw(vcpu, 4);
|
||||
|
||||
return 0;
|
||||
}
|
||||

File diff suppressed because it is too large

@ -81,10 +81,17 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
|
|||
{ "instruction_sigp_sense_running", VCPU_STAT(instruction_sigp_sense_running) },
|
||||
{ "instruction_sigp_external_call", VCPU_STAT(instruction_sigp_external_call) },
|
||||
{ "instruction_sigp_emergency", VCPU_STAT(instruction_sigp_emergency) },
|
||||
{ "instruction_sigp_cond_emergency", VCPU_STAT(instruction_sigp_cond_emergency) },
|
||||
{ "instruction_sigp_start", VCPU_STAT(instruction_sigp_start) },
|
||||
{ "instruction_sigp_stop", VCPU_STAT(instruction_sigp_stop) },
|
||||
{ "instruction_sigp_stop_store_status", VCPU_STAT(instruction_sigp_stop_store_status) },
|
||||
{ "instruction_sigp_store_status", VCPU_STAT(instruction_sigp_store_status) },
|
||||
{ "instruction_sigp_set_arch", VCPU_STAT(instruction_sigp_arch) },
|
||||
{ "instruction_sigp_set_prefix", VCPU_STAT(instruction_sigp_prefix) },
|
||||
{ "instruction_sigp_restart", VCPU_STAT(instruction_sigp_restart) },
|
||||
{ "instruction_sigp_cpu_reset", VCPU_STAT(instruction_sigp_cpu_reset) },
|
||||
{ "instruction_sigp_init_cpu_reset", VCPU_STAT(instruction_sigp_init_cpu_reset) },
|
||||
{ "instruction_sigp_unknown", VCPU_STAT(instruction_sigp_unknown) },
|
||||
{ "diagnose_10", VCPU_STAT(diagnose_10) },
|
||||
{ "diagnose_44", VCPU_STAT(diagnose_44) },
|
||||
{ "diagnose_9c", VCPU_STAT(diagnose_9c) },
|
||||
|
@ -453,6 +460,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
|
|||
spin_lock_init(&kvm->arch.float_int.lock);
|
||||
INIT_LIST_HEAD(&kvm->arch.float_int.list);
|
||||
init_waitqueue_head(&kvm->arch.ipte_wq);
|
||||
mutex_init(&kvm->arch.ipte_mutex);
|
||||
|
||||
debug_register_view(kvm->arch.dbf, &debug_sprintf_view);
|
||||
VM_EVENT(kvm, 3, "%s", "vm created");
|
||||
|
@ -711,7 +719,6 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
|
|||
}
|
||||
|
||||
spin_lock_init(&vcpu->arch.local_int.lock);
|
||||
INIT_LIST_HEAD(&vcpu->arch.local_int.list);
|
||||
vcpu->arch.local_int.float_int = &kvm->arch.float_int;
|
||||
vcpu->arch.local_int.wq = &vcpu->wq;
|
||||
vcpu->arch.local_int.cpuflags = &vcpu->arch.sie_block->cpuflags;
|
||||
|
@ -1114,13 +1121,15 @@ static void __kvm_inject_pfault_token(struct kvm_vcpu *vcpu, bool start_token,
|
|||
unsigned long token)
|
||||
{
|
||||
struct kvm_s390_interrupt inti;
|
||||
inti.parm64 = token;
|
||||
struct kvm_s390_irq irq;
|
||||
|
||||
if (start_token) {
|
||||
inti.type = KVM_S390_INT_PFAULT_INIT;
|
||||
WARN_ON_ONCE(kvm_s390_inject_vcpu(vcpu, &inti));
|
||||
irq.u.ext.ext_params2 = token;
|
||||
irq.type = KVM_S390_INT_PFAULT_INIT;
|
||||
WARN_ON_ONCE(kvm_s390_inject_vcpu(vcpu, &irq));
|
||||
} else {
|
||||
inti.type = KVM_S390_INT_PFAULT_DONE;
|
||||
inti.parm64 = token;
|
||||
WARN_ON_ONCE(kvm_s390_inject_vm(vcpu->kvm, &inti));
|
||||
}
|
||||
}
|
||||
|
@ -1614,11 +1623,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
|
|||
switch (ioctl) {
|
||||
case KVM_S390_INTERRUPT: {
|
||||
struct kvm_s390_interrupt s390int;
|
||||
struct kvm_s390_irq s390irq;
|
||||
|
||||
r = -EFAULT;
|
||||
if (copy_from_user(&s390int, argp, sizeof(s390int)))
|
||||
break;
|
||||
r = kvm_s390_inject_vcpu(vcpu, &s390int);
|
||||
if (s390int_to_s390irq(&s390int, &s390irq))
|
||||
return -EINVAL;
|
||||
r = kvm_s390_inject_vcpu(vcpu, &s390irq);
|
||||
break;
|
||||
}
|
||||
case KVM_S390_STORE_STATUS:
|
||||
|
|
|
@ -24,8 +24,6 @@ typedef int (*intercept_handler_t)(struct kvm_vcpu *vcpu);
|
|||
/* declare vfacilities extern */
|
||||
extern unsigned long *vfacilities;
|
||||
|
||||
int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu);
|
||||
|
||||
/* Transactional Memory Execution related macros */
|
||||
#define IS_TE_ENABLED(vcpu) ((vcpu->arch.sie_block->ecb & 0x10))
|
||||
#define TDB_FORMAT1 1
|
||||
|
@ -144,7 +142,7 @@ void kvm_s390_clear_float_irqs(struct kvm *kvm);
|
|||
int __must_check kvm_s390_inject_vm(struct kvm *kvm,
|
||||
struct kvm_s390_interrupt *s390int);
|
||||
int __must_check kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
|
||||
struct kvm_s390_interrupt *s390int);
|
||||
struct kvm_s390_irq *irq);
|
||||
int __must_check kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
|
||||
struct kvm_s390_interrupt_info *kvm_s390_get_io_int(struct kvm *kvm,
|
||||
u64 cr6, u64 schid);
|
||||
|
@ -152,6 +150,10 @@ void kvm_s390_reinject_io_int(struct kvm *kvm,
|
|||
struct kvm_s390_interrupt_info *inti);
|
||||
int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked);
|
||||
|
||||
/* implemented in intercept.c */
|
||||
void kvm_s390_rewind_psw(struct kvm_vcpu *vcpu, int ilc);
|
||||
int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu);
|
||||
|
||||
/* implemented in priv.c */
|
||||
int is_valid_psw(psw_t *psw);
|
||||
int kvm_s390_handle_b2(struct kvm_vcpu *vcpu);
|
||||
|
@ -222,6 +224,9 @@ static inline int kvm_s390_inject_prog_cond(struct kvm_vcpu *vcpu, int rc)
|
|||
return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm);
|
||||
}
|
||||
|
||||
int s390int_to_s390irq(struct kvm_s390_interrupt *s390int,
|
||||
struct kvm_s390_irq *s390irq);
|
||||
|
||||
/* implemented in interrupt.c */
|
||||
int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
|
||||
int psw_extint_disabled(struct kvm_vcpu *vcpu);
|
||||
|
|
|
@ -180,21 +180,18 @@ static int handle_skey(struct kvm_vcpu *vcpu)
|
|||
if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
|
||||
return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
|
||||
|
||||
vcpu->arch.sie_block->gpsw.addr =
|
||||
__rewind_psw(vcpu->arch.sie_block->gpsw, 4);
|
||||
kvm_s390_rewind_psw(vcpu, 4);
|
||||
VCPU_EVENT(vcpu, 4, "%s", "retrying storage key operation");
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int handle_ipte_interlock(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
psw_t *psw = &vcpu->arch.sie_block->gpsw;
|
||||
|
||||
vcpu->stat.instruction_ipte_interlock++;
|
||||
if (psw_bits(*psw).p)
|
||||
if (psw_bits(vcpu->arch.sie_block->gpsw).p)
|
||||
return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
|
||||
wait_event(vcpu->kvm->arch.ipte_wq, !ipte_lock_held(vcpu));
|
||||
psw->addr = __rewind_psw(*psw, 4);
|
||||
kvm_s390_rewind_psw(vcpu, 4);
|
||||
VCPU_EVENT(vcpu, 4, "%s", "retrying ipte interlock operation");
|
||||
return 0;
|
||||
}
|
||||
|
@ -650,10 +647,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
|
|||
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
|
||||
|
||||
start = vcpu->run->s.regs.gprs[reg2] & PAGE_MASK;
|
||||
if (vcpu->run->s.regs.gprs[reg1] & PFMF_CF) {
|
||||
if (kvm_s390_check_low_addr_protection(vcpu, start))
|
||||
return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm);
|
||||
}
|
||||
start = kvm_s390_logical_to_effective(vcpu, start);
|
||||
|
||||
switch (vcpu->run->s.regs.gprs[reg1] & PFMF_FSC) {
|
||||
case 0x00000000:
|
||||
|
@ -669,6 +663,12 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
|
|||
default:
|
||||
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
|
||||
}
|
||||
|
||||
if (vcpu->run->s.regs.gprs[reg1] & PFMF_CF) {
|
||||
if (kvm_s390_check_low_addr_protection(vcpu, start))
|
||||
return kvm_s390_inject_prog_irq(vcpu, &vcpu->arch.pgm);
|
||||
}
|
||||
|
||||
while (start < end) {
|
||||
unsigned long useraddr, abs_addr;
|
||||
|
||||
|
@ -725,8 +725,7 @@ static int handle_essa(struct kvm_vcpu *vcpu)
|
|||
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
|
||||
|
||||
/* Rewind PSW to repeat the ESSA instruction */
|
||||
vcpu->arch.sie_block->gpsw.addr =
|
||||
__rewind_psw(vcpu->arch.sie_block->gpsw, 4);
|
||||
kvm_s390_rewind_psw(vcpu, 4);
|
||||
vcpu->arch.sie_block->cbrlo &= PAGE_MASK; /* reset nceo */
|
||||
cbrlo = phys_to_virt(vcpu->arch.sie_block->cbrlo);
|
||||
down_read(&gmap->mm->mmap_sem);
|
||||
|
@ -769,8 +768,8 @@ int kvm_s390_handle_lctl(struct kvm_vcpu *vcpu)
|
|||
{
|
||||
int reg1 = (vcpu->arch.sie_block->ipa & 0x00f0) >> 4;
|
||||
int reg3 = vcpu->arch.sie_block->ipa & 0x000f;
|
||||
u32 val = 0;
|
||||
int reg, rc;
|
||||
int reg, rc, nr_regs;
|
||||
u32 ctl_array[16];
|
||||
u64 ga;
|
||||
|
||||
vcpu->stat.instruction_lctl++;
|
||||
|
@ -786,19 +785,20 @@ int kvm_s390_handle_lctl(struct kvm_vcpu *vcpu)
|
|||
VCPU_EVENT(vcpu, 5, "lctl r1:%x, r3:%x, addr:%llx", reg1, reg3, ga);
|
||||
trace_kvm_s390_handle_lctl(vcpu, 0, reg1, reg3, ga);
|
||||
|
||||
nr_regs = ((reg3 - reg1) & 0xf) + 1;
|
||||
rc = read_guest(vcpu, ga, ctl_array, nr_regs * sizeof(u32));
|
||||
if (rc)
|
||||
return kvm_s390_inject_prog_cond(vcpu, rc);
|
||||
reg = reg1;
|
||||
nr_regs = 0;
|
||||
do {
|
||||
rc = read_guest(vcpu, ga, &val, sizeof(val));
|
||||
if (rc)
|
||||
return kvm_s390_inject_prog_cond(vcpu, rc);
|
||||
vcpu->arch.sie_block->gcr[reg] &= 0xffffffff00000000ul;
|
||||
vcpu->arch.sie_block->gcr[reg] |= val;
|
||||
ga += 4;
|
||||
vcpu->arch.sie_block->gcr[reg] |= ctl_array[nr_regs++];
|
||||
if (reg == reg3)
|
||||
break;
|
||||
reg = (reg + 1) % 16;
|
||||
} while (1);
|
||||
|
||||
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -806,9 +806,9 @@ int kvm_s390_handle_stctl(struct kvm_vcpu *vcpu)
|
|||
{
|
||||
int reg1 = (vcpu->arch.sie_block->ipa & 0x00f0) >> 4;
|
||||
int reg3 = vcpu->arch.sie_block->ipa & 0x000f;
|
||||
int reg, rc, nr_regs;
|
||||
u32 ctl_array[16];
|
||||
u64 ga;
|
||||
u32 val;
|
||||
int reg, rc;
|
||||
|
||||
vcpu->stat.instruction_stctl++;
|
||||
|
||||
|
@ -824,26 +824,24 @@ int kvm_s390_handle_stctl(struct kvm_vcpu *vcpu)
|
|||
trace_kvm_s390_handle_stctl(vcpu, 0, reg1, reg3, ga);
|
||||
|
||||
reg = reg1;
|
||||
nr_regs = 0;
|
||||
do {
|
||||
val = vcpu->arch.sie_block->gcr[reg] & 0x00000000fffffffful;
|
||||
rc = write_guest(vcpu, ga, &val, sizeof(val));
|
||||
if (rc)
|
||||
return kvm_s390_inject_prog_cond(vcpu, rc);
|
||||
ga += 4;
|
||||
ctl_array[nr_regs++] = vcpu->arch.sie_block->gcr[reg];
|
||||
if (reg == reg3)
|
||||
break;
|
||||
reg = (reg + 1) % 16;
|
||||
} while (1);
|
||||
|
||||
return 0;
|
||||
rc = write_guest(vcpu, ga, ctl_array, nr_regs * sizeof(u32));
|
||||
return rc ? kvm_s390_inject_prog_cond(vcpu, rc) : 0;
|
||||
}
|
||||
|
||||
static int handle_lctlg(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
int reg1 = (vcpu->arch.sie_block->ipa & 0x00f0) >> 4;
|
||||
int reg3 = vcpu->arch.sie_block->ipa & 0x000f;
|
||||
u64 ga, val;
|
||||
int reg, rc;
|
||||
int reg, rc, nr_regs;
|
||||
u64 ctl_array[16];
|
||||
u64 ga;
|
||||
|
||||
vcpu->stat.instruction_lctlg++;
|
||||
|
||||
|
@ -855,22 +853,22 @@ static int handle_lctlg(struct kvm_vcpu *vcpu)
|
|||
if (ga & 7)
|
||||
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
|
||||
|
||||
reg = reg1;
|
||||
|
||||
VCPU_EVENT(vcpu, 5, "lctlg r1:%x, r3:%x, addr:%llx", reg1, reg3, ga);
|
||||
trace_kvm_s390_handle_lctl(vcpu, 1, reg1, reg3, ga);
|
||||
|
||||
nr_regs = ((reg3 - reg1) & 0xf) + 1;
|
||||
rc = read_guest(vcpu, ga, ctl_array, nr_regs * sizeof(u64));
|
||||
if (rc)
|
||||
return kvm_s390_inject_prog_cond(vcpu, rc);
|
||||
reg = reg1;
|
||||
nr_regs = 0;
|
||||
do {
|
||||
rc = read_guest(vcpu, ga, &val, sizeof(val));
|
||||
if (rc)
|
||||
return kvm_s390_inject_prog_cond(vcpu, rc);
|
||||
vcpu->arch.sie_block->gcr[reg] = val;
|
||||
ga += 8;
|
||||
vcpu->arch.sie_block->gcr[reg] = ctl_array[nr_regs++];
|
||||
if (reg == reg3)
|
||||
break;
|
||||
reg = (reg + 1) % 16;
|
||||
} while (1);
|
||||
|
||||
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -878,8 +876,9 @@ static int handle_stctg(struct kvm_vcpu *vcpu)
|
|||
{
|
||||
int reg1 = (vcpu->arch.sie_block->ipa & 0x00f0) >> 4;
|
||||
int reg3 = vcpu->arch.sie_block->ipa & 0x000f;
|
||||
u64 ga, val;
|
||||
int reg, rc;
|
||||
int reg, rc, nr_regs;
|
||||
u64 ctl_array[16];
|
||||
u64 ga;
|
||||
|
||||
vcpu->stat.instruction_stctg++;
|
||||
|
||||
|
@ -891,23 +890,19 @@ static int handle_stctg(struct kvm_vcpu *vcpu)
|
|||
if (ga & 7)
|
||||
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
|
||||
|
||||
reg = reg1;
|
||||
|
||||
VCPU_EVENT(vcpu, 5, "stctg r1:%x, r3:%x, addr:%llx", reg1, reg3, ga);
|
||||
trace_kvm_s390_handle_stctl(vcpu, 1, reg1, reg3, ga);
|
||||
|
||||
reg = reg1;
|
||||
nr_regs = 0;
|
||||
do {
|
||||
val = vcpu->arch.sie_block->gcr[reg];
|
||||
rc = write_guest(vcpu, ga, &val, sizeof(val));
|
||||
if (rc)
|
||||
return kvm_s390_inject_prog_cond(vcpu, rc);
|
||||
ga += 8;
|
||||
ctl_array[nr_regs++] = vcpu->arch.sie_block->gcr[reg];
|
||||
if (reg == reg3)
|
||||
break;
|
||||
reg = (reg + 1) % 16;
|
||||
} while (1);
|
||||
|
||||
return 0;
|
||||
rc = write_guest(vcpu, ga, ctl_array, nr_regs * sizeof(u64));
|
||||
return rc ? kvm_s390_inject_prog_cond(vcpu, rc) : 0;
|
||||
}
|
||||
|
||||
static const intercept_handler_t eb_handlers[256] = {
|
||||
|
|
|
@ -20,20 +20,13 @@
|
|||
#include "kvm-s390.h"
|
||||
#include "trace.h"
|
||||
|
||||
static int __sigp_sense(struct kvm_vcpu *vcpu, u16 cpu_addr,
|
||||
static int __sigp_sense(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu,
|
||||
u64 *reg)
|
||||
{
|
||||
struct kvm_s390_local_interrupt *li;
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
int cpuflags;
|
||||
int rc;
|
||||
|
||||
if (cpu_addr >= KVM_MAX_VCPUS)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
li = &dst_vcpu->arch.local_int;
|
||||
|
||||
cpuflags = atomic_read(li->cpuflags);
|
||||
|
@ -48,55 +41,53 @@ static int __sigp_sense(struct kvm_vcpu *vcpu, u16 cpu_addr,
|
|||
rc = SIGP_CC_STATUS_STORED;
|
||||
}
|
||||
|
||||
VCPU_EVENT(vcpu, 4, "sensed status of cpu %x rc %x", cpu_addr, rc);
|
||||
VCPU_EVENT(vcpu, 4, "sensed status of cpu %x rc %x", dst_vcpu->vcpu_id,
|
||||
rc);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int __sigp_emergency(struct kvm_vcpu *vcpu, u16 cpu_addr)
|
||||
static int __inject_sigp_emergency(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu)
|
||||
{
|
||||
struct kvm_s390_interrupt s390int = {
|
||||
struct kvm_s390_irq irq = {
|
||||
.type = KVM_S390_INT_EMERGENCY,
|
||||
.parm = vcpu->vcpu_id,
|
||||
.u.emerg.code = vcpu->vcpu_id,
|
||||
};
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
int rc = 0;
|
||||
|
||||
if (cpu_addr < KVM_MAX_VCPUS)
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
|
||||
rc = kvm_s390_inject_vcpu(dst_vcpu, &s390int);
|
||||
rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);
|
||||
if (!rc)
|
||||
VCPU_EVENT(vcpu, 4, "sent sigp emerg to cpu %x", cpu_addr);
|
||||
VCPU_EVENT(vcpu, 4, "sent sigp emerg to cpu %x",
|
||||
dst_vcpu->vcpu_id);
|
||||
|
||||
return rc ? rc : SIGP_CC_ORDER_CODE_ACCEPTED;
|
||||
}
|
||||
|
||||
static int __sigp_conditional_emergency(struct kvm_vcpu *vcpu, u16 cpu_addr,
|
||||
static int __sigp_emergency(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu)
|
||||
{
|
||||
return __inject_sigp_emergency(vcpu, dst_vcpu);
|
||||
}
|
||||
|
||||
static int __sigp_conditional_emergency(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu,
|
||||
u16 asn, u64 *reg)
|
||||
{
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
const u64 psw_int_mask = PSW_MASK_IO | PSW_MASK_EXT;
|
||||
u16 p_asn, s_asn;
|
||||
psw_t *psw;
|
||||
u32 flags;
|
||||
|
||||
if (cpu_addr < KVM_MAX_VCPUS)
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
flags = atomic_read(&dst_vcpu->arch.sie_block->cpuflags);
|
||||
psw = &dst_vcpu->arch.sie_block->gpsw;
|
||||
p_asn = dst_vcpu->arch.sie_block->gcr[4] & 0xffff; /* Primary ASN */
|
||||
s_asn = dst_vcpu->arch.sie_block->gcr[3] & 0xffff; /* Secondary ASN */
|
||||
|
||||
/* Deliver the emergency signal? */
|
||||
/* Inject the emergency signal? */
|
||||
if (!(flags & CPUSTAT_STOPPED)
|
||||
|| (psw->mask & psw_int_mask) != psw_int_mask
|
||||
|| ((flags & CPUSTAT_WAIT) && psw->addr != 0)
|
||||
|| (!(flags & CPUSTAT_WAIT) && (asn == p_asn || asn == s_asn))) {
|
||||
return __sigp_emergency(vcpu, cpu_addr);
|
||||
return __inject_sigp_emergency(vcpu, dst_vcpu);
|
||||
} else {
|
||||
*reg &= 0xffffffff00000000UL;
|
||||
*reg |= SIGP_STATUS_INCORRECT_STATE;
|
||||
|
@ -104,23 +95,19 @@ static int __sigp_conditional_emergency(struct kvm_vcpu *vcpu, u16 cpu_addr,
|
|||
}
|
||||
}
|
||||
|
||||
static int __sigp_external_call(struct kvm_vcpu *vcpu, u16 cpu_addr)
|
||||
static int __sigp_external_call(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu)
|
||||
{
|
||||
struct kvm_s390_interrupt s390int = {
|
||||
struct kvm_s390_irq irq = {
|
||||
.type = KVM_S390_INT_EXTERNAL_CALL,
|
||||
.parm = vcpu->vcpu_id,
|
||||
.u.extcall.code = vcpu->vcpu_id,
|
||||
};
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
int rc;
|
||||
|
||||
if (cpu_addr < KVM_MAX_VCPUS)
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
|
||||
rc = kvm_s390_inject_vcpu(dst_vcpu, &s390int);
|
||||
rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);
|
||||
if (!rc)
|
||||
VCPU_EVENT(vcpu, 4, "sent sigp ext call to cpu %x", cpu_addr);
|
||||
VCPU_EVENT(vcpu, 4, "sent sigp ext call to cpu %x",
|
||||
dst_vcpu->vcpu_id);
|
||||
|
||||
return rc ? rc : SIGP_CC_ORDER_CODE_ACCEPTED;
|
||||
}
|
||||
|
@ -128,29 +115,20 @@ static int __sigp_external_call(struct kvm_vcpu *vcpu, u16 cpu_addr)
|
|||
static int __inject_sigp_stop(struct kvm_vcpu *dst_vcpu, int action)
|
||||
{
|
||||
struct kvm_s390_local_interrupt *li = &dst_vcpu->arch.local_int;
|
||||
struct kvm_s390_interrupt_info *inti;
|
||||
int rc = SIGP_CC_ORDER_CODE_ACCEPTED;
|
||||
|
||||
inti = kzalloc(sizeof(*inti), GFP_ATOMIC);
|
||||
if (!inti)
|
||||
return -ENOMEM;
|
||||
inti->type = KVM_S390_SIGP_STOP;
|
||||
|
||||
spin_lock(&li->lock);
|
||||
if (li->action_bits & ACTION_STOP_ON_STOP) {
|
||||
/* another SIGP STOP is pending */
|
||||
kfree(inti);
|
||||
rc = SIGP_CC_BUSY;
|
||||
goto out;
|
||||
}
|
||||
if ((atomic_read(li->cpuflags) & CPUSTAT_STOPPED)) {
|
||||
kfree(inti);
|
||||
if ((action & ACTION_STORE_ON_STOP) != 0)
|
||||
rc = -ESHUTDOWN;
|
||||
goto out;
|
||||
}
|
||||
list_add_tail(&inti->list, &li->list);
|
||||
atomic_set(&li->active, 1);
|
||||
set_bit(IRQ_PEND_SIGP_STOP, &li->pending_irqs);
|
||||
li->action_bits |= action;
|
||||
atomic_set_mask(CPUSTAT_STOP_INT, li->cpuflags);
|
||||
kvm_s390_vcpu_wakeup(dst_vcpu);
|
||||
|
@ -160,23 +138,27 @@ out:
|
|||
return rc;
|
||||
}
|
||||
|
||||
static int __sigp_stop(struct kvm_vcpu *vcpu, u16 cpu_addr, int action)
|
||||
static int __sigp_stop(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu)
|
||||
{
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
int rc;
|
||||
|
||||
if (cpu_addr >= KVM_MAX_VCPUS)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
rc = __inject_sigp_stop(dst_vcpu, ACTION_STOP_ON_STOP);
|
||||
VCPU_EVENT(vcpu, 4, "sent sigp stop to cpu %x", dst_vcpu->vcpu_id);
|
||||
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
return rc;
|
||||
}
|
||||
|
||||
rc = __inject_sigp_stop(dst_vcpu, action);
|
||||
static int __sigp_stop_and_store_status(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu, u64 *reg)
|
||||
{
|
||||
int rc;
|
||||
|
||||
VCPU_EVENT(vcpu, 4, "sent sigp stop to cpu %x", cpu_addr);
|
||||
rc = __inject_sigp_stop(dst_vcpu, ACTION_STOP_ON_STOP |
|
||||
ACTION_STORE_ON_STOP);
|
||||
VCPU_EVENT(vcpu, 4, "sent sigp stop and store status to cpu %x",
|
||||
dst_vcpu->vcpu_id);
|
||||
|
||||
if ((action & ACTION_STORE_ON_STOP) != 0 && rc == -ESHUTDOWN) {
|
||||
if (rc == -ESHUTDOWN) {
|
||||
/* If the CPU has already been stopped, we still have
|
||||
* to save the status when doing stop-and-store. This
|
||||
* has to be done after unlocking all spinlocks. */
|
||||
|
@ -212,18 +194,12 @@ static int __sigp_set_arch(struct kvm_vcpu *vcpu, u32 parameter)
|
|||
return rc;
|
||||
}
|
||||
|
||||
static int __sigp_set_prefix(struct kvm_vcpu *vcpu, u16 cpu_addr, u32 address,
|
||||
u64 *reg)
|
||||
static int __sigp_set_prefix(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu,
|
||||
u32 address, u64 *reg)
|
||||
{
|
||||
struct kvm_s390_local_interrupt *li;
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
struct kvm_s390_interrupt_info *inti;
|
||||
int rc;
|
||||
|
||||
if (cpu_addr < KVM_MAX_VCPUS)
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
li = &dst_vcpu->arch.local_int;
|
||||
|
||||
/*
|
||||
|
@ -238,46 +214,34 @@ static int __sigp_set_prefix(struct kvm_vcpu *vcpu, u16 cpu_addr, u32 address,
|
|||
return SIGP_CC_STATUS_STORED;
|
||||
}
|
||||
|
||||
inti = kzalloc(sizeof(*inti), GFP_KERNEL);
|
||||
if (!inti)
|
||||
return SIGP_CC_BUSY;
|
||||
|
||||
spin_lock(&li->lock);
|
||||
/* cpu must be in stopped state */
|
||||
if (!(atomic_read(li->cpuflags) & CPUSTAT_STOPPED)) {
|
||||
*reg &= 0xffffffff00000000UL;
|
||||
*reg |= SIGP_STATUS_INCORRECT_STATE;
|
||||
rc = SIGP_CC_STATUS_STORED;
|
||||
kfree(inti);
|
||||
goto out_li;
|
||||
}
|
||||
|
||||
inti->type = KVM_S390_SIGP_SET_PREFIX;
|
||||
inti->prefix.address = address;
|
||||
|
||||
list_add_tail(&inti->list, &li->list);
|
||||
atomic_set(&li->active, 1);
|
||||
li->irq.prefix.address = address;
|
||||
set_bit(IRQ_PEND_SET_PREFIX, &li->pending_irqs);
|
||||
kvm_s390_vcpu_wakeup(dst_vcpu);
|
||||
rc = SIGP_CC_ORDER_CODE_ACCEPTED;
|
||||
|
||||
VCPU_EVENT(vcpu, 4, "set prefix of cpu %02x to %x", cpu_addr, address);
|
||||
VCPU_EVENT(vcpu, 4, "set prefix of cpu %02x to %x", dst_vcpu->vcpu_id,
|
||||
address);
|
||||
out_li:
|
||||
spin_unlock(&li->lock);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int __sigp_store_status_at_addr(struct kvm_vcpu *vcpu, u16 cpu_id,
|
||||
u32 addr, u64 *reg)
|
||||
static int __sigp_store_status_at_addr(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu,
|
||||
u32 addr, u64 *reg)
|
||||
{
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
int flags;
|
||||
int rc;
|
||||
|
||||
if (cpu_id < KVM_MAX_VCPUS)
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_id);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
|
||||
spin_lock(&dst_vcpu->arch.local_int.lock);
|
||||
flags = atomic_read(dst_vcpu->arch.local_int.cpuflags);
|
||||
spin_unlock(&dst_vcpu->arch.local_int.lock);
|
||||
|
@ -297,19 +261,12 @@ static int __sigp_store_status_at_addr(struct kvm_vcpu *vcpu, u16 cpu_id,
|
|||
return rc;
|
||||
}
|
||||
|
||||
static int __sigp_sense_running(struct kvm_vcpu *vcpu, u16 cpu_addr,
|
||||
u64 *reg)
|
||||
static int __sigp_sense_running(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu, u64 *reg)
|
||||
{
|
||||
struct kvm_s390_local_interrupt *li;
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
int rc;
|
||||
|
||||
if (cpu_addr >= KVM_MAX_VCPUS)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
|
||||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
li = &dst_vcpu->arch.local_int;
|
||||
if (atomic_read(li->cpuflags) & CPUSTAT_RUNNING) {
|
||||
/* running */
|
||||
|
@ -321,18 +278,46 @@ static int __sigp_sense_running(struct kvm_vcpu *vcpu, u16 cpu_addr,
|
|||
rc = SIGP_CC_STATUS_STORED;
|
||||
}
|
||||
|
||||
VCPU_EVENT(vcpu, 4, "sensed running status of cpu %x rc %x", cpu_addr,
|
||||
rc);
|
||||
VCPU_EVENT(vcpu, 4, "sensed running status of cpu %x rc %x",
|
||||
dst_vcpu->vcpu_id, rc);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* Test whether the destination CPU is available and not busy */
|
||||
static int sigp_check_callable(struct kvm_vcpu *vcpu, u16 cpu_addr)
|
||||
static int __prepare_sigp_re_start(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu, u8 order_code)
|
||||
{
|
||||
struct kvm_s390_local_interrupt *li;
|
||||
int rc = SIGP_CC_ORDER_CODE_ACCEPTED;
|
||||
struct kvm_vcpu *dst_vcpu = NULL;
|
||||
struct kvm_s390_local_interrupt *li = &dst_vcpu->arch.local_int;
|
||||
/* handle (RE)START in user space */
|
||||
int rc = -EOPNOTSUPP;
|
||||
|
||||
spin_lock(&li->lock);
|
||||
if (li->action_bits & ACTION_STOP_ON_STOP)
|
||||
rc = SIGP_CC_BUSY;
|
||||
spin_unlock(&li->lock);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int __prepare_sigp_cpu_reset(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu, u8 order_code)
|
||||
{
|
||||
/* handle (INITIAL) CPU RESET in user space */
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
static int __prepare_sigp_unknown(struct kvm_vcpu *vcpu,
|
||||
struct kvm_vcpu *dst_vcpu)
|
||||
{
|
||||
/* handle unknown orders in user space */
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
static int handle_sigp_dst(struct kvm_vcpu *vcpu, u8 order_code,
|
||||
u16 cpu_addr, u32 parameter, u64 *status_reg)
|
||||
{
|
||||
int rc;
|
||||
struct kvm_vcpu *dst_vcpu;
|
||||
|
||||
if (cpu_addr >= KVM_MAX_VCPUS)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
|
@ -340,11 +325,71 @@ static int sigp_check_callable(struct kvm_vcpu *vcpu, u16 cpu_addr)
|
|||
dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
|
||||
if (!dst_vcpu)
|
||||
return SIGP_CC_NOT_OPERATIONAL;
|
||||
li = &dst_vcpu->arch.local_int;
|
||||
spin_lock(&li->lock);
|
||||
if (li->action_bits & ACTION_STOP_ON_STOP)
|
||||
rc = SIGP_CC_BUSY;
|
||||
spin_unlock(&li->lock);
|
||||
|
||||
switch (order_code) {
|
||||
case SIGP_SENSE:
|
||||
vcpu->stat.instruction_sigp_sense++;
|
||||
rc = __sigp_sense(vcpu, dst_vcpu, status_reg);
|
||||
break;
|
||||
case SIGP_EXTERNAL_CALL:
|
||||
vcpu->stat.instruction_sigp_external_call++;
|
||||
rc = __sigp_external_call(vcpu, dst_vcpu);
|
||||
break;
|
||||
case SIGP_EMERGENCY_SIGNAL:
|
||||
vcpu->stat.instruction_sigp_emergency++;
|
||||
rc = __sigp_emergency(vcpu, dst_vcpu);
|
||||
break;
|
||||
case SIGP_STOP:
|
||||
vcpu->stat.instruction_sigp_stop++;
|
||||
rc = __sigp_stop(vcpu, dst_vcpu);
|
||||
break;
|
||||
case SIGP_STOP_AND_STORE_STATUS:
|
||||
vcpu->stat.instruction_sigp_stop_store_status++;
|
||||
rc = __sigp_stop_and_store_status(vcpu, dst_vcpu, status_reg);
|
||||
break;
|
||||
case SIGP_STORE_STATUS_AT_ADDRESS:
|
||||
vcpu->stat.instruction_sigp_store_status++;
|
||||
rc = __sigp_store_status_at_addr(vcpu, dst_vcpu, parameter,
|
||||
status_reg);
|
||||
break;
|
||||
case SIGP_SET_PREFIX:
|
||||
vcpu->stat.instruction_sigp_prefix++;
|
||||
rc = __sigp_set_prefix(vcpu, dst_vcpu, parameter, status_reg);
|
||||
break;
|
||||
case SIGP_COND_EMERGENCY_SIGNAL:
|
||||
vcpu->stat.instruction_sigp_cond_emergency++;
|
||||
rc = __sigp_conditional_emergency(vcpu, dst_vcpu, parameter,
|
||||
status_reg);
|
||||
break;
|
||||
case SIGP_SENSE_RUNNING:
|
||||
vcpu->stat.instruction_sigp_sense_running++;
|
||||
rc = __sigp_sense_running(vcpu, dst_vcpu, status_reg);
|
||||
break;
|
||||
case SIGP_START:
|
||||
vcpu->stat.instruction_sigp_start++;
|
||||
rc = __prepare_sigp_re_start(vcpu, dst_vcpu, order_code);
|
||||
break;
|
||||
case SIGP_RESTART:
|
||||
vcpu->stat.instruction_sigp_restart++;
|
||||
rc = __prepare_sigp_re_start(vcpu, dst_vcpu, order_code);
|
||||
break;
|
||||
case SIGP_INITIAL_CPU_RESET:
|
||||
vcpu->stat.instruction_sigp_init_cpu_reset++;
|
||||
rc = __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code);
|
||||
break;
|
||||
case SIGP_CPU_RESET:
|
||||
vcpu->stat.instruction_sigp_cpu_reset++;
|
||||
rc = __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code);
|
||||
break;
|
||||
default:
|
||||
vcpu->stat.instruction_sigp_unknown++;
|
||||
rc = __prepare_sigp_unknown(vcpu, dst_vcpu);
|
||||
}
|
||||
|
||||
if (rc == -EOPNOTSUPP)
|
||||
VCPU_EVENT(vcpu, 4,
|
||||
"sigp order %u -> cpu %x: handled in user space",
|
||||
order_code, dst_vcpu->vcpu_id);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
@ -371,68 +416,14 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu)
|
|||
|
||||
trace_kvm_s390_handle_sigp(vcpu, order_code, cpu_addr, parameter);
|
||||
switch (order_code) {
|
||||
case SIGP_SENSE:
|
||||
vcpu->stat.instruction_sigp_sense++;
|
||||
rc = __sigp_sense(vcpu, cpu_addr,
|
||||
&vcpu->run->s.regs.gprs[r1]);
|
||||
break;
|
||||
case SIGP_EXTERNAL_CALL:
|
||||
vcpu->stat.instruction_sigp_external_call++;
|
||||
rc = __sigp_external_call(vcpu, cpu_addr);
|
||||
break;
|
||||
case SIGP_EMERGENCY_SIGNAL:
|
||||
vcpu->stat.instruction_sigp_emergency++;
|
||||
rc = __sigp_emergency(vcpu, cpu_addr);
|
||||
break;
|
||||
case SIGP_STOP:
|
||||
vcpu->stat.instruction_sigp_stop++;
|
||||
rc = __sigp_stop(vcpu, cpu_addr, ACTION_STOP_ON_STOP);
|
||||
break;
|
||||
case SIGP_STOP_AND_STORE_STATUS:
|
||||
vcpu->stat.instruction_sigp_stop++;
|
||||
rc = __sigp_stop(vcpu, cpu_addr, ACTION_STORE_ON_STOP |
|
||||
ACTION_STOP_ON_STOP);
|
||||
break;
|
||||
case SIGP_STORE_STATUS_AT_ADDRESS:
|
||||
rc = __sigp_store_status_at_addr(vcpu, cpu_addr, parameter,
|
||||
&vcpu->run->s.regs.gprs[r1]);
|
||||
break;
|
||||
case SIGP_SET_ARCHITECTURE:
|
||||
vcpu->stat.instruction_sigp_arch++;
|
||||
rc = __sigp_set_arch(vcpu, parameter);
|
||||
break;
|
||||
case SIGP_SET_PREFIX:
|
||||
vcpu->stat.instruction_sigp_prefix++;
|
||||
rc = __sigp_set_prefix(vcpu, cpu_addr, parameter,
|
||||
&vcpu->run->s.regs.gprs[r1]);
|
||||
break;
|
||||
case SIGP_COND_EMERGENCY_SIGNAL:
|
||||
rc = __sigp_conditional_emergency(vcpu, cpu_addr, parameter,
|
||||
&vcpu->run->s.regs.gprs[r1]);
|
||||
break;
|
||||
case SIGP_SENSE_RUNNING:
|
||||
vcpu->stat.instruction_sigp_sense_running++;
|
||||
rc = __sigp_sense_running(vcpu, cpu_addr,
|
||||
&vcpu->run->s.regs.gprs[r1]);
|
||||
break;
|
||||
case SIGP_START:
|
||||
rc = sigp_check_callable(vcpu, cpu_addr);
|
||||
if (rc == SIGP_CC_ORDER_CODE_ACCEPTED)
|
||||
rc = -EOPNOTSUPP; /* Handle START in user space */
|
||||
break;
|
||||
case SIGP_RESTART:
|
||||
vcpu->stat.instruction_sigp_restart++;
|
||||
rc = sigp_check_callable(vcpu, cpu_addr);
|
||||
if (rc == SIGP_CC_ORDER_CODE_ACCEPTED) {
|
||||
VCPU_EVENT(vcpu, 4,
|
||||
"sigp restart %x to handle userspace",
|
||||
cpu_addr);
|
||||
/* user space must know about restart */
|
||||
rc = -EOPNOTSUPP;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
return -EOPNOTSUPP;
|
||||
rc = handle_sigp_dst(vcpu, order_code, cpu_addr,
|
||||
parameter,
|
||||
&vcpu->run->s.regs.gprs[r1]);
|
||||
}
|
||||
|
||||
if (rc < 0)
|
||||
|
|
|
@ -844,7 +844,7 @@ int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
|
|||
|
||||
down_read(&mm->mmap_sem);
|
||||
retry:
|
||||
ptep = get_locked_pte(current->mm, addr, &ptl);
|
||||
ptep = get_locked_pte(mm, addr, &ptl);
|
||||
if (unlikely(!ptep)) {
|
||||
up_read(&mm->mmap_sem);
|
||||
return -EFAULT;
|
||||
|
@ -888,6 +888,45 @@ retry:
|
|||
}
|
||||
EXPORT_SYMBOL(set_guest_storage_key);
|
||||
|
||||
unsigned long get_guest_storage_key(struct mm_struct *mm, unsigned long addr)
|
||||
{
|
||||
spinlock_t *ptl;
|
||||
pgste_t pgste;
|
||||
pte_t *ptep;
|
||||
uint64_t physaddr;
|
||||
unsigned long key = 0;
|
||||
|
||||
down_read(&mm->mmap_sem);
|
||||
ptep = get_locked_pte(mm, addr, &ptl);
|
||||
if (unlikely(!ptep)) {
|
||||
up_read(&mm->mmap_sem);
|
||||
return -EFAULT;
|
||||
}
|
||||
pgste = pgste_get_lock(ptep);
|
||||
|
||||
if (pte_val(*ptep) & _PAGE_INVALID) {
|
||||
key |= (pgste_val(pgste) & PGSTE_ACC_BITS) >> 56;
|
||||
key |= (pgste_val(pgste) & PGSTE_FP_BIT) >> 56;
|
||||
key |= (pgste_val(pgste) & PGSTE_GR_BIT) >> 48;
|
||||
key |= (pgste_val(pgste) & PGSTE_GC_BIT) >> 48;
|
||||
} else {
|
||||
physaddr = pte_val(*ptep) & PAGE_MASK;
|
||||
key = page_get_storage_key(physaddr);
|
||||
|
||||
/* Reflect guest's logical view, not physical */
|
||||
if (pgste_val(pgste) & PGSTE_GR_BIT)
|
||||
key |= _PAGE_REFERENCED;
|
||||
if (pgste_val(pgste) & PGSTE_GC_BIT)
|
||||
key |= _PAGE_CHANGED;
|
||||
}
|
||||
|
||||
pgste_set_unlock(ptep, pgste);
|
||||
pte_unmap_unlock(ptep, ptl);
|
||||
up_read(&mm->mmap_sem);
|
||||
return key;
|
||||
}
|
||||
EXPORT_SYMBOL(get_guest_storage_key);
|
||||
|
||||
#else /* CONFIG_PGSTE */
|
||||
|
||||
static inline int page_table_with_pgste(struct page *page)
|
||||
|
|
|
@@ -33,7 +33,7 @@

#define KVM_MAX_VCPUS 255
#define KVM_SOFT_MAX_VCPUS 160
#define KVM_USER_MEM_SLOTS 125
#define KVM_USER_MEM_SLOTS 509
/* memory slots that are not exposed to userspace */
#define KVM_PRIVATE_MEM_SLOTS 3
#define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)

@ -51,6 +51,7 @@
|
|||
| X86_CR0_NW | X86_CR0_CD | X86_CR0_PG))
|
||||
|
||||
#define CR3_L_MODE_RESERVED_BITS 0xFFFFFF0000000000ULL
|
||||
#define CR3_PCID_INVD (1UL << 63)
|
||||
#define CR4_RESERVED_BITS \
|
||||
(~(unsigned long)(X86_CR4_VME | X86_CR4_PVI | X86_CR4_TSD | X86_CR4_DE\
|
||||
| X86_CR4_PSE | X86_CR4_PAE | X86_CR4_MCE \
|
||||
|
@ -361,6 +362,7 @@ struct kvm_vcpu_arch {
|
|||
int mp_state;
|
||||
u64 ia32_misc_enable_msr;
|
||||
bool tpr_access_reporting;
|
||||
u64 ia32_xss;
|
||||
|
||||
/*
|
||||
* Paging state of the vcpu
|
||||
|
@ -542,7 +544,7 @@ struct kvm_apic_map {
|
|||
struct rcu_head rcu;
|
||||
u8 ldr_bits;
|
||||
/* fields bellow are used to decode ldr values in different modes */
|
||||
u32 cid_shift, cid_mask, lid_mask;
|
||||
u32 cid_shift, cid_mask, lid_mask, broadcast;
|
||||
struct kvm_lapic *phys_map[256];
|
||||
/* first index is cluster id second is cpu id in a cluster */
|
||||
struct kvm_lapic *logical_map[16][16];
|
||||
|
@ -602,6 +604,9 @@ struct kvm_arch {
|
|||
|
||||
struct kvm_xen_hvm_config xen_hvm_config;
|
||||
|
||||
/* reads protected by irq_srcu, writes by irq_lock */
|
||||
struct hlist_head mask_notifier_list;
|
||||
|
||||
/* fields used by HYPER-V emulation */
|
||||
u64 hv_guest_os_id;
|
||||
u64 hv_hypercall;
|
||||
|
@ -659,6 +664,16 @@ struct msr_data {
|
|||
u64 data;
|
||||
};
|
||||
|
||||
struct kvm_lapic_irq {
|
||||
u32 vector;
|
||||
u32 delivery_mode;
|
||||
u32 dest_mode;
|
||||
u32 level;
|
||||
u32 trig_mode;
|
||||
u32 shorthand;
|
||||
u32 dest_id;
|
||||
};
|
||||
|
||||
struct kvm_x86_ops {
|
||||
int (*cpu_has_kvm_support)(void); /* __init */
|
||||
int (*disabled_by_bios)(void); /* __init */
|
||||
|
@ -767,6 +782,7 @@ struct kvm_x86_ops {
|
|||
enum x86_intercept_stage stage);
|
||||
void (*handle_external_intr)(struct kvm_vcpu *vcpu);
|
||||
bool (*mpx_supported)(void);
|
||||
bool (*xsaves_supported)(void);
|
||||
|
||||
int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
|
||||
|
||||
|
@ -818,6 +834,19 @@ int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
|
|||
const void *val, int bytes);
|
||||
u8 kvm_get_guest_memory_type(struct kvm_vcpu *vcpu, gfn_t gfn);
|
||||
|
||||
struct kvm_irq_mask_notifier {
|
||||
void (*func)(struct kvm_irq_mask_notifier *kimn, bool masked);
|
||||
int irq;
|
||||
struct hlist_node link;
|
||||
};
|
||||
|
||||
void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
|
||||
struct kvm_irq_mask_notifier *kimn);
|
||||
void kvm_unregister_irq_mask_notifier(struct kvm *kvm, int irq,
|
||||
struct kvm_irq_mask_notifier *kimn);
|
||||
void kvm_fire_mask_notifiers(struct kvm *kvm, unsigned irqchip, unsigned pin,
|
||||
bool mask);
|
||||
|
||||
extern bool tdp_enabled;
|
||||
|
||||
u64 vcpu_tsc_khz(struct kvm_vcpu *vcpu);
|
||||
|
@ -863,7 +892,7 @@ int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);
|
|||
|
||||
void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
|
||||
int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg);
|
||||
void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, unsigned int vector);
|
||||
void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
|
||||
|
||||
int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
|
||||
int reason, bool has_error_code, u32 error_code);
|
||||
|
@ -895,6 +924,7 @@ int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
|
|||
gfn_t gfn, void *data, int offset, int len,
|
||||
u32 access);
|
||||
bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl);
|
||||
bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr);
|
||||
|
||||
static inline int __kvm_irq_line_state(unsigned long *irq_state,
|
||||
int irq_source_id, int level)
|
||||
|
@ -1066,6 +1096,7 @@ void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
|
|||
void kvm_define_shared_msr(unsigned index, u32 msr);
|
||||
int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
|
||||
|
||||
unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu);
|
||||
bool kvm_is_linear_rip(struct kvm_vcpu *vcpu, unsigned long linear_rip);
|
||||
|
||||
void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
|
||||
|
|
|
@ -69,6 +69,7 @@
|
|||
#define SECONDARY_EXEC_PAUSE_LOOP_EXITING 0x00000400
|
||||
#define SECONDARY_EXEC_ENABLE_INVPCID 0x00001000
|
||||
#define SECONDARY_EXEC_SHADOW_VMCS 0x00004000
|
||||
#define SECONDARY_EXEC_XSAVES 0x00100000
|
||||
|
||||
|
||||
#define PIN_BASED_EXT_INTR_MASK 0x00000001
|
||||
|
@ -159,6 +160,8 @@ enum vmcs_field {
|
|||
EOI_EXIT_BITMAP3_HIGH = 0x00002023,
|
||||
VMREAD_BITMAP = 0x00002026,
|
||||
VMWRITE_BITMAP = 0x00002028,
|
||||
XSS_EXIT_BITMAP = 0x0000202C,
|
||||
XSS_EXIT_BITMAP_HIGH = 0x0000202D,
|
||||
GUEST_PHYSICAL_ADDRESS = 0x00002400,
|
||||
GUEST_PHYSICAL_ADDRESS_HIGH = 0x00002401,
|
||||
VMCS_LINK_POINTER = 0x00002800,
|
||||
|
|
|
@ -16,6 +16,7 @@
|
|||
#define XSTATE_Hi16_ZMM 0x80
|
||||
|
||||
#define XSTATE_FPSSE (XSTATE_FP | XSTATE_SSE)
|
||||
#define XSTATE_AVX512 (XSTATE_OPMASK | XSTATE_ZMM_Hi256 | XSTATE_Hi16_ZMM)
|
||||
/* Bit 63 of XCR0 is reserved for future expansion */
|
||||
#define XSTATE_EXTEND_MASK (~(XSTATE_FPSSE | (1ULL << 63)))
|
||||
|
||||
|
|
|
@@ -72,6 +72,8 @@
#define EXIT_REASON_XSETBV 55
#define EXIT_REASON_APIC_WRITE 56
#define EXIT_REASON_INVPCID 58
#define EXIT_REASON_XSAVES 63
#define EXIT_REASON_XRSTORS 64

#define VMX_EXIT_REASONS \
{ EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \
@@ -116,6 +118,8 @@
{ EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \
{ EXIT_REASON_INVD, "INVD" }, \
{ EXIT_REASON_INVVPID, "INVVPID" }, \
{ EXIT_REASON_INVPCID, "INVPCID" }
{ EXIT_REASON_INVPCID, "INVPCID" }, \
{ EXIT_REASON_XSAVES, "XSAVES" }, \
{ EXIT_REASON_XRSTORS, "XRSTORS" }

#endif /* _UAPIVMX_H */

@@ -283,7 +283,14 @@ NOKPROBE_SYMBOL(do_async_page_fault);
static void __init paravirt_ops_setup(void)
{
    pv_info.name = "KVM";
    pv_info.paravirt_enabled = 1;

    /*
     * KVM isn't paravirt in the sense of paravirt_enabled. A KVM
     * guest kernel works like a bare metal kernel with additional
     * features, and paravirt_enabled is about features that are
     * missing.
     */
    pv_info.paravirt_enabled = 0;

    if (kvm_para_has_feature(KVM_FEATURE_NOP_IO_DELAY))
        pv_cpu_ops.io_delay = kvm_io_delay;

@@ -59,13 +59,12 @@ static void kvm_get_wallclock(struct timespec *now)
 
 	native_write_msr(msr_kvm_wall_clock, low, high);
 
-	preempt_disable();
-	cpu = smp_processor_id();
+	cpu = get_cpu();
 
 	vcpu_time = &hv_clock[cpu].pvti;
 	pvclock_read_wallclock(&wall_clock, vcpu_time, now);
 
-	preempt_enable();
+	put_cpu();
 }
 
 static int kvm_set_wallclock(const struct timespec *now)
@@ -107,11 +106,10 @@ static unsigned long kvm_get_tsc_khz(void)
 	int cpu;
 	unsigned long tsc_khz;
 
-	preempt_disable();
-	cpu = smp_processor_id();
+	cpu = get_cpu();
 	src = &hv_clock[cpu].pvti;
 	tsc_khz = pvclock_tsc_khz(src);
-	preempt_enable();
+	put_cpu();
 	return tsc_khz;
 }
 
@@ -263,7 +261,6 @@ void __init kvmclock_init(void)
 #endif
 	kvm_get_preset_lpj();
 	clocksource_register_hz(&kvm_clock, NSEC_PER_SEC);
-	pv_info.paravirt_enabled = 1;
 	pv_info.name = "KVM";
 
 	if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE_STABLE_BIT))
@@ -284,23 +281,22 @@ int __init kvm_setup_vsyscall_timeinfo(void)
 
 	size = PAGE_ALIGN(sizeof(struct pvclock_vsyscall_time_info)*NR_CPUS);
 
-	preempt_disable();
-	cpu = smp_processor_id();
+	cpu = get_cpu();
 
 	vcpu_time = &hv_clock[cpu].pvti;
 	flags = pvclock_read_flags(vcpu_time);
 
 	if (!(flags & PVCLOCK_TSC_STABLE_BIT)) {
-		preempt_enable();
+		put_cpu();
 		return 1;
 	}
 
 	if ((ret = pvclock_init_vsyscall(hv_clock, size))) {
-		preempt_enable();
+		put_cpu();
 		return ret;
 	}
 
-	preempt_enable();
+	put_cpu();
 
 	kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK;
 #endif

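The kvmclock hunks above are mechanical: get_cpu() already disables preemption
and returns the current CPU id, and put_cpu() re-enables preemption, so the
open-coded preempt_disable()/smp_processor_id()/preempt_enable() sequences can
be collapsed.  A minimal illustrative sketch of the pattern, not taken from
this patch:

    #include <linux/smp.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, demo_counter);

    static void bump_this_cpu(void)
    {
        int cpu;

        cpu = get_cpu();                /* preempt_disable() + smp_processor_id() */
        per_cpu(demo_counter, cpu)++;   /* safe: the task cannot migrate here */
        put_cpu();                      /* preempt_enable() */
    }
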
@@ -738,3 +738,4 @@ void *get_xsave_addr(struct xsave_struct *xsave, int xstate)
 
 	return (void *)xsave + xstate_comp_offsets[feature];
 }
+EXPORT_SYMBOL_GPL(get_xsave_addr);

@@ -7,14 +7,13 @@ CFLAGS_vmx.o := -I.
 
 KVM := ../../../virt/kvm
 
-kvm-y			+= $(KVM)/kvm_main.o $(KVM)/ioapic.o \
-				$(KVM)/coalesced_mmio.o $(KVM)/irq_comm.o \
+kvm-y			+= $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o \
 				$(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o
-kvm-$(CONFIG_KVM_DEVICE_ASSIGNMENT)	+= $(KVM)/assigned-dev.o $(KVM)/iommu.o
 kvm-$(CONFIG_KVM_ASYNC_PF)	+= $(KVM)/async_pf.o
 
 kvm-y			+= x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
-			   i8254.o cpuid.o pmu.o
+			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o
+kvm-$(CONFIG_KVM_DEVICE_ASSIGNMENT)	+= assigned-dev.o iommu.o
 kvm-intel-y		+= vmx.o
 kvm-amd-y		+= svm.o
 

@@ -20,6 +20,32 @@
 #include <linux/namei.h>
 #include <linux/fs.h>
 #include "irq.h"
+#include "assigned-dev.h"
+
+struct kvm_assigned_dev_kernel {
+	struct kvm_irq_ack_notifier ack_notifier;
+	struct list_head list;
+	int assigned_dev_id;
+	int host_segnr;
+	int host_busnr;
+	int host_devfn;
+	unsigned int entries_nr;
+	int host_irq;
+	bool host_irq_disabled;
+	bool pci_2_3;
+	struct msix_entry *host_msix_entries;
+	int guest_irq;
+	struct msix_entry *guest_msix_entries;
+	unsigned long irq_requested_type;
+	int irq_source_id;
+	int flags;
+	struct pci_dev *dev;
+	struct kvm *kvm;
+	spinlock_t intx_lock;
+	spinlock_t intx_mask_lock;
+	char irq_name[32];
+	struct pci_saved_state *pci_saved_state;
+};
 
 static struct kvm_assigned_dev_kernel *kvm_find_assigned_dev(struct list_head *head,
						      int assigned_dev_id)
@@ -748,7 +774,7 @@ static int kvm_vm_ioctl_assign_device(struct kvm *kvm,
 		if (r)
 			goto out_list_del;
 	}
-	r = kvm_assign_device(kvm, match);
+	r = kvm_assign_device(kvm, match->dev);
 	if (r)
 		goto out_list_del;
 
@@ -790,7 +816,7 @@ static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
 		goto out;
 	}
 
-	kvm_deassign_device(kvm, match);
+	kvm_deassign_device(kvm, match->dev);
 
 	kvm_free_assigned_device(kvm, match);
 

@@ -0,0 +1,32 @@
+#ifndef ARCH_X86_KVM_ASSIGNED_DEV_H
+#define ARCH_X86_KVM_ASSIGNED_DEV_H
+
+#include <linux/kvm_host.h>
+
+#ifdef CONFIG_KVM_DEVICE_ASSIGNMENT
+int kvm_assign_device(struct kvm *kvm, struct pci_dev *pdev);
+int kvm_deassign_device(struct kvm *kvm, struct pci_dev *pdev);
+
+int kvm_iommu_map_guest(struct kvm *kvm);
+int kvm_iommu_unmap_guest(struct kvm *kvm);
+
+long kvm_vm_ioctl_assigned_device(struct kvm *kvm, unsigned ioctl,
+				  unsigned long arg);
+
+void kvm_free_all_assigned_devices(struct kvm *kvm);
+#else
+static inline int kvm_iommu_unmap_guest(struct kvm *kvm)
+{
+	return 0;
+}
+
+static inline long kvm_vm_ioctl_assigned_device(struct kvm *kvm, unsigned ioctl,
+						unsigned long arg)
+{
+	return -ENOTTY;
+}
+
+static inline void kvm_free_all_assigned_devices(struct kvm *kvm) {}
+#endif /* CONFIG_KVM_DEVICE_ASSIGNMENT */
+
+#endif /* ARCH_X86_KVM_ASSIGNED_DEV_H */

@@ -23,7 +23,7 @@
 #include "mmu.h"
 #include "trace.h"
 
-static u32 xstate_required_size(u64 xstate_bv)
+static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 {
 	int feature_bit = 0;
 	u32 ret = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
@@ -31,9 +31,10 @@ static u32 xstate_required_size(u64 xstate_bv)
 	xstate_bv &= XSTATE_EXTEND_MASK;
 	while (xstate_bv) {
 		if (xstate_bv & 0x1) {
-			u32 eax, ebx, ecx, edx;
+			u32 eax, ebx, ecx, edx, offset;
 			cpuid_count(0xD, feature_bit, &eax, &ebx, &ecx, &edx);
-			ret = max(ret, eax + ebx);
+			offset = compacted ? ret : ebx;
+			ret = max(ret, offset + eax);
 		}
 
 		xstate_bv >>= 1;
@ -53,6 +54,8 @@ u64 kvm_supported_xcr0(void)
|
|||
return xcr0;
|
||||
}
|
||||
|
||||
#define F(x) bit(X86_FEATURE_##x)
|
||||
|
||||
int kvm_update_cpuid(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct kvm_cpuid_entry2 *best;
|
||||
|
@ -64,13 +67,13 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
|
|||
|
||||
/* Update OSXSAVE bit */
|
||||
if (cpu_has_xsave && best->function == 0x1) {
|
||||
best->ecx &= ~(bit(X86_FEATURE_OSXSAVE));
|
||||
best->ecx &= ~F(OSXSAVE);
|
||||
if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE))
|
||||
best->ecx |= bit(X86_FEATURE_OSXSAVE);
|
||||
best->ecx |= F(OSXSAVE);
|
||||
}
|
||||
|
||||
if (apic) {
|
||||
if (best->ecx & bit(X86_FEATURE_TSC_DEADLINE_TIMER))
|
||||
if (best->ecx & F(TSC_DEADLINE_TIMER))
|
||||
apic->lapic_timer.timer_mode_mask = 3 << 17;
|
||||
else
|
||||
apic->lapic_timer.timer_mode_mask = 1 << 17;
|
||||
|
@ -85,9 +88,13 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
|
|||
(best->eax | ((u64)best->edx << 32)) &
|
||||
kvm_supported_xcr0();
|
||||
vcpu->arch.guest_xstate_size = best->ebx =
|
||||
xstate_required_size(vcpu->arch.xcr0);
|
||||
xstate_required_size(vcpu->arch.xcr0, false);
|
||||
}
|
||||
|
||||
best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
|
||||
if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
|
||||
best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
|
||||
|
||||
/*
|
||||
* The existing code assumes virtual address is 48-bit in the canonical
|
||||
* address checks; exit if it is ever changed.
|
||||
|
@ -122,8 +129,8 @@ static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
|
|||
break;
|
||||
}
|
||||
}
|
||||
if (entry && (entry->edx & bit(X86_FEATURE_NX)) && !is_efer_nx()) {
|
||||
entry->edx &= ~bit(X86_FEATURE_NX);
|
||||
if (entry && (entry->edx & F(NX)) && !is_efer_nx()) {
|
||||
entry->edx &= ~F(NX);
|
||||
printk(KERN_INFO "kvm: guest NX capability removed\n");
|
||||
}
|
||||
}
|
||||
|
@ -227,8 +234,6 @@ static void do_cpuid_1_ent(struct kvm_cpuid_entry2 *entry, u32 function,
|
|||
entry->flags = 0;
|
||||
}
|
||||
|
||||
#define F(x) bit(X86_FEATURE_##x)
|
||||
|
||||
static int __do_cpuid_ent_emulated(struct kvm_cpuid_entry2 *entry,
|
||||
u32 func, u32 index, int *nent, int maxnent)
|
||||
{
|
||||
|
@ -267,6 +272,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
|
|||
unsigned f_rdtscp = kvm_x86_ops->rdtscp_supported() ? F(RDTSCP) : 0;
|
||||
unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
|
||||
unsigned f_mpx = kvm_x86_ops->mpx_supported() ? F(MPX) : 0;
|
||||
unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
|
||||
|
||||
/* cpuid 1.edx */
|
||||
const u32 kvm_supported_word0_x86_features =
|
||||
|
@ -317,7 +323,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
|
|||
const u32 kvm_supported_word9_x86_features =
|
||||
F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
|
||||
F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | f_mpx | F(RDSEED) |
|
||||
F(ADX) | F(SMAP);
|
||||
F(ADX) | F(SMAP) | F(AVX512F) | F(AVX512PF) | F(AVX512ER) |
|
||||
F(AVX512CD);
|
||||
|
||||
/* cpuid 0xD.1.eax */
|
||||
const u32 kvm_supported_word10_x86_features =
|
||||
F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | f_xsaves;
|
||||
|
||||
/* all calls to cpuid_count() should be made on the same cpu */
|
||||
get_cpu();
|
||||
|
@ -453,16 +464,34 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
|
|||
u64 supported = kvm_supported_xcr0();
|
||||
|
||||
entry->eax &= supported;
|
||||
entry->ebx = xstate_required_size(supported, false);
|
||||
entry->ecx = entry->ebx;
|
||||
entry->edx &= supported >> 32;
|
||||
entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
|
||||
if (!supported)
|
||||
break;
|
||||
|
||||
for (idx = 1, i = 1; idx < 64; ++idx) {
|
||||
u64 mask = ((u64)1 << idx);
|
||||
if (*nent >= maxnent)
|
||||
goto out;
|
||||
|
||||
do_cpuid_1_ent(&entry[i], function, idx);
|
||||
if (entry[i].eax == 0 || !(supported & mask))
|
||||
continue;
|
||||
if (idx == 1) {
|
||||
entry[i].eax &= kvm_supported_word10_x86_features;
|
||||
entry[i].ebx = 0;
|
||||
if (entry[i].eax & (F(XSAVES)|F(XSAVEC)))
|
||||
entry[i].ebx =
|
||||
xstate_required_size(supported,
|
||||
true);
|
||||
} else {
|
||||
if (entry[i].eax == 0 || !(supported & mask))
|
||||
continue;
|
||||
if (WARN_ON_ONCE(entry[i].ecx & 1))
|
||||
continue;
|
||||
}
|
||||
entry[i].ecx = 0;
|
||||
entry[i].edx = 0;
|
||||
entry[i].flags |=
|
||||
KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
|
||||
++*nent;
|
||||
|
|
|
@ -123,6 +123,7 @@
|
|||
#define Prefix (3<<15) /* Instruction varies with 66/f2/f3 prefix */
|
||||
#define RMExt (4<<15) /* Opcode extension in ModRM r/m if mod == 3 */
|
||||
#define Escape (5<<15) /* Escape to coprocessor instruction */
|
||||
#define InstrDual (6<<15) /* Alternate instruction decoding of mod == 3 */
|
||||
#define Sse (1<<18) /* SSE Vector instruction */
|
||||
/* Generic ModRM decode. */
|
||||
#define ModRM (1<<19)
|
||||
|
@ -166,6 +167,8 @@
|
|||
#define CheckPerm ((u64)1 << 49) /* Has valid check_perm field */
|
||||
#define NoBigReal ((u64)1 << 50) /* No big real mode */
|
||||
#define PrivUD ((u64)1 << 51) /* #UD instead of #GP on CPL > 0 */
|
||||
#define NearBranch ((u64)1 << 52) /* Near branches */
|
||||
#define No16 ((u64)1 << 53) /* No 16 bit operand */
|
||||
|
||||
#define DstXacc (DstAccLo | SrcAccHi | SrcWrite)
|
||||
|
||||
|
@ -209,6 +212,7 @@ struct opcode {
|
|||
const struct group_dual *gdual;
|
||||
const struct gprefix *gprefix;
|
||||
const struct escape *esc;
|
||||
const struct instr_dual *idual;
|
||||
void (*fastop)(struct fastop *fake);
|
||||
} u;
|
||||
int (*check_perm)(struct x86_emulate_ctxt *ctxt);
|
||||
|
@ -231,6 +235,11 @@ struct escape {
|
|||
struct opcode high[64];
|
||||
};
|
||||
|
||||
struct instr_dual {
|
||||
struct opcode mod012;
|
||||
struct opcode mod3;
|
||||
};
|
||||
|
||||
/* EFLAGS bit definitions. */
|
||||
#define EFLG_ID (1<<21)
|
||||
#define EFLG_VIP (1<<20)
|
||||
|
@ -379,6 +388,15 @@ static int fastop(struct x86_emulate_ctxt *ctxt, void (*fop)(struct fastop *));
|
|||
ON64(FOP2E(op##q, rax, cl)) \
|
||||
FOP_END
|
||||
|
||||
/* 2 operand, src and dest are reversed */
|
||||
#define FASTOP2R(op, name) \
|
||||
FOP_START(name) \
|
||||
FOP2E(op##b, dl, al) \
|
||||
FOP2E(op##w, dx, ax) \
|
||||
FOP2E(op##l, edx, eax) \
|
||||
ON64(FOP2E(op##q, rdx, rax)) \
|
||||
FOP_END
|
||||
|
||||
#define FOP3E(op, dst, src, src2) \
|
||||
FOP_ALIGN #op " %" #src2 ", %" #src ", %" #dst " \n\t" FOP_RET
|
||||
|
||||
|
@ -477,9 +495,9 @@ address_mask(struct x86_emulate_ctxt *ctxt, unsigned long reg)
|
|||
}
|
||||
|
||||
static inline unsigned long
|
||||
register_address(struct x86_emulate_ctxt *ctxt, unsigned long reg)
|
||||
register_address(struct x86_emulate_ctxt *ctxt, int reg)
|
||||
{
|
||||
return address_mask(ctxt, reg);
|
||||
return address_mask(ctxt, reg_read(ctxt, reg));
|
||||
}
|
||||
|
||||
static void masked_increment(ulong *reg, ulong mask, int inc)
|
||||
|
@ -488,7 +506,7 @@ static void masked_increment(ulong *reg, ulong mask, int inc)
|
|||
}
|
||||
|
||||
static inline void
|
||||
register_address_increment(struct x86_emulate_ctxt *ctxt, unsigned long *reg, int inc)
|
||||
register_address_increment(struct x86_emulate_ctxt *ctxt, int reg, int inc)
|
||||
{
|
||||
ulong mask;
|
||||
|
||||
|
@ -496,7 +514,7 @@ register_address_increment(struct x86_emulate_ctxt *ctxt, unsigned long *reg, in
|
|||
mask = ~0UL;
|
||||
else
|
||||
mask = ad_mask(ctxt);
|
||||
masked_increment(reg, mask, inc);
|
||||
masked_increment(reg_rmw(ctxt, reg), mask, inc);
|
||||
}
|
||||
|
||||
static void rsp_increment(struct x86_emulate_ctxt *ctxt, int inc)
|
||||
|
@ -564,40 +582,6 @@ static int emulate_nm(struct x86_emulate_ctxt *ctxt)
|
|||
return emulate_exception(ctxt, NM_VECTOR, 0, false);
|
||||
}
|
||||
|
||||
static inline int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
|
||||
int cs_l)
|
||||
{
|
||||
switch (ctxt->op_bytes) {
|
||||
case 2:
|
||||
ctxt->_eip = (u16)dst;
|
||||
break;
|
||||
case 4:
|
||||
ctxt->_eip = (u32)dst;
|
||||
break;
|
||||
#ifdef CONFIG_X86_64
|
||||
case 8:
|
||||
if ((cs_l && is_noncanonical_address(dst)) ||
|
||||
(!cs_l && (dst >> 32) != 0))
|
||||
return emulate_gp(ctxt, 0);
|
||||
ctxt->_eip = dst;
|
||||
break;
|
||||
#endif
|
||||
default:
|
||||
WARN(1, "unsupported eip assignment size\n");
|
||||
}
|
||||
return X86EMUL_CONTINUE;
|
||||
}
|
||||
|
||||
static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
|
||||
{
|
||||
return assign_eip_far(ctxt, dst, ctxt->mode == X86EMUL_MODE_PROT64);
|
||||
}
|
||||
|
||||
static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
|
||||
{
|
||||
return assign_eip_near(ctxt, ctxt->_eip + rel);
|
||||
}
|
||||
|
||||
static u16 get_segment_selector(struct x86_emulate_ctxt *ctxt, unsigned seg)
|
||||
{
|
||||
u16 selector;
|
||||
|
@ -641,25 +625,24 @@ static bool insn_aligned(struct x86_emulate_ctxt *ctxt, unsigned size)
|
|||
return true;
|
||||
}
|
||||
|
||||
static int __linearize(struct x86_emulate_ctxt *ctxt,
|
||||
struct segmented_address addr,
|
||||
unsigned *max_size, unsigned size,
|
||||
bool write, bool fetch,
|
||||
ulong *linear)
|
||||
static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
|
||||
struct segmented_address addr,
|
||||
unsigned *max_size, unsigned size,
|
||||
bool write, bool fetch,
|
||||
enum x86emul_mode mode, ulong *linear)
|
||||
{
|
||||
struct desc_struct desc;
|
||||
bool usable;
|
||||
ulong la;
|
||||
u32 lim;
|
||||
u16 sel;
|
||||
unsigned cpl;
|
||||
|
||||
la = seg_base(ctxt, addr.seg) + addr.ea;
|
||||
*max_size = 0;
|
||||
switch (ctxt->mode) {
|
||||
switch (mode) {
|
||||
case X86EMUL_MODE_PROT64:
|
||||
if (((signed long)la << 16) >> 16 != la)
|
||||
return emulate_gp(ctxt, 0);
|
||||
if (is_noncanonical_address(la))
|
||||
goto bad;
|
||||
|
||||
*max_size = min_t(u64, ~0u, (1ull << 48) - la);
|
||||
if (size > *max_size)
|
||||
|
@ -678,46 +661,20 @@ static int __linearize(struct x86_emulate_ctxt *ctxt,
|
|||
if (!fetch && (desc.type & 8) && !(desc.type & 2))
|
||||
goto bad;
|
||||
lim = desc_limit_scaled(&desc);
|
||||
if ((ctxt->mode == X86EMUL_MODE_REAL) && !fetch &&
|
||||
(ctxt->d & NoBigReal)) {
|
||||
/* la is between zero and 0xffff */
|
||||
if (la > 0xffff)
|
||||
goto bad;
|
||||
*max_size = 0x10000 - la;
|
||||
} else if ((desc.type & 8) || !(desc.type & 4)) {
|
||||
/* expand-up segment */
|
||||
if (addr.ea > lim)
|
||||
goto bad;
|
||||
*max_size = min_t(u64, ~0u, (u64)lim + 1 - addr.ea);
|
||||
} else {
|
||||
if (!(desc.type & 8) && (desc.type & 4)) {
|
||||
/* expand-down segment */
|
||||
if (addr.ea <= lim)
|
||||
goto bad;
|
||||
lim = desc.d ? 0xffffffff : 0xffff;
|
||||
if (addr.ea > lim)
|
||||
goto bad;
|
||||
*max_size = min_t(u64, ~0u, (u64)lim + 1 - addr.ea);
|
||||
}
|
||||
if (addr.ea > lim)
|
||||
goto bad;
|
||||
*max_size = min_t(u64, ~0u, (u64)lim + 1 - addr.ea);
|
||||
if (size > *max_size)
|
||||
goto bad;
|
||||
cpl = ctxt->ops->cpl(ctxt);
|
||||
if (!(desc.type & 8)) {
|
||||
/* data segment */
|
||||
if (cpl > desc.dpl)
|
||||
goto bad;
|
||||
} else if ((desc.type & 8) && !(desc.type & 4)) {
|
||||
/* nonconforming code segment */
|
||||
if (cpl != desc.dpl)
|
||||
goto bad;
|
||||
} else if ((desc.type & 8) && (desc.type & 4)) {
|
||||
/* conforming code segment */
|
||||
if (cpl < desc.dpl)
|
||||
goto bad;
|
||||
}
|
||||
la &= (u32)-1;
|
||||
break;
|
||||
}
|
||||
if (fetch ? ctxt->mode != X86EMUL_MODE_PROT64 : ctxt->ad_bytes != 8)
|
||||
la &= (u32)-1;
|
||||
if (insn_aligned(ctxt, size) && ((la & (size - 1)) != 0))
|
||||
return emulate_gp(ctxt, 0);
|
||||
*linear = la;
|
||||
|
@ -735,9 +692,55 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
|
|||
ulong *linear)
|
||||
{
|
||||
unsigned max_size;
|
||||
return __linearize(ctxt, addr, &max_size, size, write, false, linear);
|
||||
return __linearize(ctxt, addr, &max_size, size, write, false,
|
||||
ctxt->mode, linear);
|
||||
}
|
||||
|
||||
static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
|
||||
enum x86emul_mode mode)
|
||||
{
|
||||
ulong linear;
|
||||
int rc;
|
||||
unsigned max_size;
|
||||
struct segmented_address addr = { .seg = VCPU_SREG_CS,
|
||||
.ea = dst };
|
||||
|
||||
if (ctxt->op_bytes != sizeof(unsigned long))
|
||||
addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
|
||||
rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
|
||||
if (rc == X86EMUL_CONTINUE)
|
||||
ctxt->_eip = addr.ea;
|
||||
return rc;
|
||||
}
|
||||
|
||||
static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
|
||||
{
|
||||
return assign_eip(ctxt, dst, ctxt->mode);
|
||||
}
|
||||
|
||||
static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
|
||||
const struct desc_struct *cs_desc)
|
||||
{
|
||||
enum x86emul_mode mode = ctxt->mode;
|
||||
|
||||
#ifdef CONFIG_X86_64
|
||||
if (ctxt->mode >= X86EMUL_MODE_PROT32 && cs_desc->l) {
|
||||
u64 efer = 0;
|
||||
|
||||
ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
|
||||
if (efer & EFER_LMA)
|
||||
mode = X86EMUL_MODE_PROT64;
|
||||
}
|
||||
#endif
|
||||
if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
|
||||
mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
|
||||
return assign_eip(ctxt, dst, mode);
|
||||
}
|
||||
|
||||
static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
|
||||
{
|
||||
return assign_eip_near(ctxt, ctxt->_eip + rel);
|
||||
}
|
||||
|
||||
static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
|
||||
struct segmented_address addr,
|
||||
|
@ -776,7 +779,8 @@ static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size)
|
|||
* boundary check itself. Instead, we use max_size to check
|
||||
* against op_size.
|
||||
*/
|
||||
rc = __linearize(ctxt, addr, &max_size, 0, false, true, &linear);
|
||||
rc = __linearize(ctxt, addr, &max_size, 0, false, true, ctxt->mode,
|
||||
&linear);
|
||||
if (unlikely(rc != X86EMUL_CONTINUE))
|
||||
return rc;
|
||||
|
||||
|
@ -911,6 +915,8 @@ FASTOP2W(btc);
|
|||
|
||||
FASTOP2(xadd);
|
||||
|
||||
FASTOP2R(cmp, cmp_r);
|
||||
|
||||
static u8 test_cc(unsigned int condition, unsigned long flags)
|
||||
{
|
||||
u8 rc;
|
||||
|
@ -1221,6 +1227,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
|
|||
if (index_reg != 4)
|
||||
modrm_ea += reg_read(ctxt, index_reg) << scale;
|
||||
} else if ((ctxt->modrm_rm & 7) == 5 && ctxt->modrm_mod == 0) {
|
||||
modrm_ea += insn_fetch(s32, ctxt);
|
||||
if (ctxt->mode == X86EMUL_MODE_PROT64)
|
||||
ctxt->rip_relative = 1;
|
||||
} else {
|
||||
|
@ -1229,10 +1236,6 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
|
|||
adjust_modrm_seg(ctxt, base_reg);
|
||||
}
|
||||
switch (ctxt->modrm_mod) {
|
||||
case 0:
|
||||
if (ctxt->modrm_rm == 5)
|
||||
modrm_ea += insn_fetch(s32, ctxt);
|
||||
break;
|
||||
case 1:
|
||||
modrm_ea += insn_fetch(s8, ctxt);
|
||||
break;
|
||||
|
@ -1284,7 +1287,8 @@ static void fetch_bit_operand(struct x86_emulate_ctxt *ctxt)
|
|||
else
|
||||
sv = (s64)ctxt->src.val & (s64)mask;
|
||||
|
||||
ctxt->dst.addr.mem.ea += (sv >> 3);
|
||||
ctxt->dst.addr.mem.ea = address_mask(ctxt,
|
||||
ctxt->dst.addr.mem.ea + (sv >> 3));
|
||||
}
|
||||
|
||||
/* only subword offset */
|
||||
|
@ -1610,6 +1614,9 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
|
|||
sizeof(base3), &ctxt->exception);
|
||||
if (ret != X86EMUL_CONTINUE)
|
||||
return ret;
|
||||
if (is_noncanonical_address(get_desc_base(&seg_desc) |
|
||||
((u64)base3 << 32)))
|
||||
return emulate_gp(ctxt, 0);
|
||||
}
|
||||
load:
|
||||
ctxt->ops->set_segment(ctxt, selector, &seg_desc, base3, seg);
|
||||
|
@ -1807,6 +1814,10 @@ static int em_push_sreg(struct x86_emulate_ctxt *ctxt)
|
|||
int seg = ctxt->src2.val;
|
||||
|
||||
ctxt->src.val = get_segment_selector(ctxt, seg);
|
||||
if (ctxt->op_bytes == 4) {
|
||||
rsp_increment(ctxt, -2);
|
||||
ctxt->op_bytes = 2;
|
||||
}
|
||||
|
||||
return em_push(ctxt);
|
||||
}
|
||||
|
@ -1850,7 +1861,7 @@ static int em_pusha(struct x86_emulate_ctxt *ctxt)
|
|||
|
||||
static int em_pushf(struct x86_emulate_ctxt *ctxt)
|
||||
{
|
||||
ctxt->src.val = (unsigned long)ctxt->eflags;
|
||||
ctxt->src.val = (unsigned long)ctxt->eflags & ~EFLG_VM;
|
||||
return em_push(ctxt);
|
||||
}
|
||||
|
||||
|
@ -2035,7 +2046,7 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
|
|||
if (rc != X86EMUL_CONTINUE)
|
||||
return rc;
|
||||
|
||||
rc = assign_eip_far(ctxt, ctxt->src.val, new_desc.l);
|
||||
rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
|
||||
if (rc != X86EMUL_CONTINUE) {
|
||||
WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64);
|
||||
/* assigning eip failed; restore the old cs */
|
||||
|
@ -2045,31 +2056,22 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
|
|||
return rc;
|
||||
}
|
||||
|
||||
static int em_grp45(struct x86_emulate_ctxt *ctxt)
|
||||
static int em_jmp_abs(struct x86_emulate_ctxt *ctxt)
|
||||
{
|
||||
int rc = X86EMUL_CONTINUE;
|
||||
return assign_eip_near(ctxt, ctxt->src.val);
|
||||
}
|
||||
|
||||
switch (ctxt->modrm_reg) {
|
||||
case 2: /* call near abs */ {
|
||||
long int old_eip;
|
||||
old_eip = ctxt->_eip;
|
||||
rc = assign_eip_near(ctxt, ctxt->src.val);
|
||||
if (rc != X86EMUL_CONTINUE)
|
||||
break;
|
||||
ctxt->src.val = old_eip;
|
||||
rc = em_push(ctxt);
|
||||
break;
|
||||
}
|
||||
case 4: /* jmp abs */
|
||||
rc = assign_eip_near(ctxt, ctxt->src.val);
|
||||
break;
|
||||
case 5: /* jmp far */
|
||||
rc = em_jmp_far(ctxt);
|
||||
break;
|
||||
case 6: /* push */
|
||||
rc = em_push(ctxt);
|
||||
break;
|
||||
}
|
||||
static int em_call_near_abs(struct x86_emulate_ctxt *ctxt)
|
||||
{
|
||||
int rc;
|
||||
long int old_eip;
|
||||
|
||||
old_eip = ctxt->_eip;
|
||||
rc = assign_eip_near(ctxt, ctxt->src.val);
|
||||
if (rc != X86EMUL_CONTINUE)
|
||||
return rc;
|
||||
ctxt->src.val = old_eip;
|
||||
rc = em_push(ctxt);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
@ -2128,11 +2130,11 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
|
|||
/* Outer-privilege level return is not implemented */
|
||||
if (ctxt->mode >= X86EMUL_MODE_PROT16 && (cs & 3) > cpl)
|
||||
return X86EMUL_UNHANDLEABLE;
|
||||
rc = __load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS, 0, false,
|
||||
rc = __load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS, cpl, false,
|
||||
&new_desc);
|
||||
if (rc != X86EMUL_CONTINUE)
|
||||
return rc;
|
||||
rc = assign_eip_far(ctxt, eip, new_desc.l);
|
||||
rc = assign_eip_far(ctxt, eip, &new_desc);
|
||||
if (rc != X86EMUL_CONTINUE) {
|
||||
WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64);
|
||||
ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS);
|
||||
|
@ -2316,6 +2318,7 @@ static int em_syscall(struct x86_emulate_ctxt *ctxt)
|
|||
|
||||
ops->get_msr(ctxt, MSR_SYSCALL_MASK, &msr_data);
|
||||
ctxt->eflags &= ~msr_data;
|
||||
ctxt->eflags |= EFLG_RESERVED_ONE_MASK;
|
||||
#endif
|
||||
} else {
|
||||
/* legacy mode */
|
||||
|
@ -2349,11 +2352,9 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
|
|||
&& !vendor_intel(ctxt))
|
||||
return emulate_ud(ctxt);
|
||||
|
||||
/* XXX sysenter/sysexit have not been tested in 64bit mode.
|
||||
* Therefore, we inject an #UD.
|
||||
*/
|
||||
/* sysenter/sysexit have not been tested in 64bit mode. */
|
||||
if (ctxt->mode == X86EMUL_MODE_PROT64)
|
||||
return emulate_ud(ctxt);
|
||||
return X86EMUL_UNHANDLEABLE;
|
||||
|
||||
setup_syscalls_segments(ctxt, &cs, &ss);
|
||||
|
||||
|
@ -2425,6 +2426,8 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
|
|||
if ((msr_data & 0xfffc) == 0x0)
|
||||
return emulate_gp(ctxt, 0);
|
||||
ss_sel = (u16)(msr_data + 24);
|
||||
rcx = (u32)rcx;
|
||||
rdx = (u32)rdx;
|
||||
break;
|
||||
case X86EMUL_MODE_PROT64:
|
||||
cs_sel = (u16)(msr_data + 32);
|
||||
|
@ -2599,7 +2602,6 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
|
|||
ret = ops->read_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
|
||||
&ctxt->exception);
|
||||
if (ret != X86EMUL_CONTINUE)
|
||||
/* FIXME: need to provide precise fault address */
|
||||
return ret;
|
||||
|
||||
save_state_to_tss16(ctxt, &tss_seg);
|
||||
|
@ -2607,13 +2609,11 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
|
|||
ret = ops->write_std(ctxt, old_tss_base, &tss_seg, sizeof tss_seg,
|
||||
&ctxt->exception);
|
||||
if (ret != X86EMUL_CONTINUE)
|
||||
/* FIXME: need to provide precise fault address */
|
||||
return ret;
|
||||
|
||||
ret = ops->read_std(ctxt, new_tss_base, &tss_seg, sizeof tss_seg,
|
||||
&ctxt->exception);
|
||||
if (ret != X86EMUL_CONTINUE)
|
||||
/* FIXME: need to provide precise fault address */
|
||||
return ret;
|
||||
|
||||
if (old_tss_sel != 0xffff) {
|
||||
|
@ -2624,7 +2624,6 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
|
|||
sizeof tss_seg.prev_task_link,
|
||||
&ctxt->exception);
|
||||
if (ret != X86EMUL_CONTINUE)
|
||||
/* FIXME: need to provide precise fault address */
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -2813,7 +2812,8 @@ static int emulator_do_task_switch(struct x86_emulate_ctxt *ctxt,
|
|||
*
|
||||
* 1. jmp/call/int to task gate: Check against DPL of the task gate
|
||||
* 2. Exception/IRQ/iret: No check is performed
|
||||
* 3. jmp/call to TSS: Check against DPL of the TSS
|
||||
* 3. jmp/call to TSS/task-gate: No check is performed since the
|
||||
* hardware checks it before exiting.
|
||||
*/
|
||||
if (reason == TASK_SWITCH_GATE) {
|
||||
if (idt_index != -1) {
|
||||
|
@ -2830,13 +2830,8 @@ static int emulator_do_task_switch(struct x86_emulate_ctxt *ctxt,
|
|||
if ((tss_selector & 3) > dpl || ops->cpl(ctxt) > dpl)
|
||||
return emulate_gp(ctxt, (idt_index << 3) | 0x2);
|
||||
}
|
||||
} else if (reason != TASK_SWITCH_IRET) {
|
||||
int dpl = next_tss_desc.dpl;
|
||||
if ((tss_selector & 3) > dpl || ops->cpl(ctxt) > dpl)
|
||||
return emulate_gp(ctxt, tss_selector);
|
||||
}
|
||||
|
||||
|
||||
desc_limit = desc_limit_scaled(&next_tss_desc);
|
||||
if (!next_tss_desc.p ||
|
||||
((desc_limit < 0x67 && (next_tss_desc.type & 8)) ||
|
||||
|
@ -2913,8 +2908,8 @@ static void string_addr_inc(struct x86_emulate_ctxt *ctxt, int reg,
|
|||
{
|
||||
int df = (ctxt->eflags & EFLG_DF) ? -op->count : op->count;
|
||||
|
||||
register_address_increment(ctxt, reg_rmw(ctxt, reg), df * op->bytes);
|
||||
op->addr.mem.ea = register_address(ctxt, reg_read(ctxt, reg));
|
||||
register_address_increment(ctxt, reg, df * op->bytes);
|
||||
op->addr.mem.ea = register_address(ctxt, reg);
|
||||
}
|
||||
|
||||
static int em_das(struct x86_emulate_ctxt *ctxt)
|
||||
|
@ -3025,7 +3020,7 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt)
|
|||
if (rc != X86EMUL_CONTINUE)
|
||||
return X86EMUL_CONTINUE;
|
||||
|
||||
rc = assign_eip_far(ctxt, ctxt->src.val, new_desc.l);
|
||||
rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
|
||||
if (rc != X86EMUL_CONTINUE)
|
||||
goto fail;
|
||||
|
||||
|
@ -3215,6 +3210,8 @@ static int em_mov_rm_sreg(struct x86_emulate_ctxt *ctxt)
|
|||
return emulate_ud(ctxt);
|
||||
|
||||
ctxt->dst.val = get_segment_selector(ctxt, ctxt->modrm_reg);
|
||||
if (ctxt->dst.bytes == 4 && ctxt->dst.type == OP_MEM)
|
||||
ctxt->dst.bytes = 2;
|
||||
return X86EMUL_CONTINUE;
|
||||
}
|
||||
|
||||
|
@ -3317,7 +3314,7 @@ static int em_sidt(struct x86_emulate_ctxt *ctxt)
|
|||
return emulate_store_desc_ptr(ctxt, ctxt->ops->get_idt);
|
||||
}
|
||||
|
||||
static int em_lgdt(struct x86_emulate_ctxt *ctxt)
|
||||
static int em_lgdt_lidt(struct x86_emulate_ctxt *ctxt, bool lgdt)
|
||||
{
|
||||
struct desc_ptr desc_ptr;
|
||||
int rc;
|
||||
|
@ -3329,12 +3326,23 @@ static int em_lgdt(struct x86_emulate_ctxt *ctxt)
|
|||
ctxt->op_bytes);
|
||||
if (rc != X86EMUL_CONTINUE)
|
||||
return rc;
|
||||
ctxt->ops->set_gdt(ctxt, &desc_ptr);
|
||||
if (ctxt->mode == X86EMUL_MODE_PROT64 &&
|
||||
is_noncanonical_address(desc_ptr.address))
|
||||
return emulate_gp(ctxt, 0);
|
||||
if (lgdt)
|
||||
ctxt->ops->set_gdt(ctxt, &desc_ptr);
|
||||
else
|
||||
ctxt->ops->set_idt(ctxt, &desc_ptr);
|
||||
/* Disable writeback. */
|
||||
ctxt->dst.type = OP_NONE;
|
||||
return X86EMUL_CONTINUE;
|
||||
}
|
||||
|
||||
static int em_lgdt(struct x86_emulate_ctxt *ctxt)
|
||||
{
|
||||
return em_lgdt_lidt(ctxt, true);
|
||||
}
|
||||
|
||||
static int em_vmmcall(struct x86_emulate_ctxt *ctxt)
|
||||
{
|
||||
int rc;
|
||||
|
@ -3348,20 +3356,7 @@ static int em_vmmcall(struct x86_emulate_ctxt *ctxt)
|
|||
|
||||
static int em_lidt(struct x86_emulate_ctxt *ctxt)
|
||||
{
|
||||
struct desc_ptr desc_ptr;
|
||||
int rc;
|
||||
|
||||
if (ctxt->mode == X86EMUL_MODE_PROT64)
|
||||
ctxt->op_bytes = 8;
|
||||
rc = read_descriptor(ctxt, ctxt->src.addr.mem,
|
||||
&desc_ptr.size, &desc_ptr.address,
|
||||
ctxt->op_bytes);
|
||||
if (rc != X86EMUL_CONTINUE)
|
||||
return rc;
|
||||
ctxt->ops->set_idt(ctxt, &desc_ptr);
|
||||
/* Disable writeback. */
|
||||
ctxt->dst.type = OP_NONE;
|
||||
return X86EMUL_CONTINUE;
|
||||
return em_lgdt_lidt(ctxt, false);
|
||||
}
|
||||
|
||||
static int em_smsw(struct x86_emulate_ctxt *ctxt)
|
||||
|
@ -3384,7 +3379,7 @@ static int em_loop(struct x86_emulate_ctxt *ctxt)
|
|||
{
|
||||
int rc = X86EMUL_CONTINUE;
|
||||
|
||||
register_address_increment(ctxt, reg_rmw(ctxt, VCPU_REGS_RCX), -1);
|
||||
register_address_increment(ctxt, VCPU_REGS_RCX, -1);
|
||||
if ((address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) != 0) &&
|
||||
(ctxt->b == 0xe2 || test_cc(ctxt->b ^ 0x5, ctxt->eflags)))
|
||||
rc = jmp_rel(ctxt, ctxt->src.val);
|
||||
|
@ -3554,7 +3549,7 @@ static int check_cr_write(struct x86_emulate_ctxt *ctxt)
|
|||
|
||||
ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
|
||||
if (efer & EFER_LMA)
|
||||
rsvd = CR3_L_MODE_RESERVED_BITS;
|
||||
rsvd = CR3_L_MODE_RESERVED_BITS & ~CR3_PCID_INVD;
|
||||
|
||||
if (new_val & rsvd)
|
||||
return emulate_gp(ctxt, 0);
|
||||
|
@ -3596,8 +3591,15 @@ static int check_dr_read(struct x86_emulate_ctxt *ctxt)
|
|||
if ((cr4 & X86_CR4_DE) && (dr == 4 || dr == 5))
|
||||
return emulate_ud(ctxt);
|
||||
|
||||
if (check_dr7_gd(ctxt))
|
||||
if (check_dr7_gd(ctxt)) {
|
||||
ulong dr6;
|
||||
|
||||
ctxt->ops->get_dr(ctxt, 6, &dr6);
|
||||
dr6 &= ~15;
|
||||
dr6 |= DR6_BD | DR6_RTM;
|
||||
ctxt->ops->set_dr(ctxt, 6, dr6);
|
||||
return emulate_db(ctxt);
|
||||
}
|
||||
|
||||
return X86EMUL_CONTINUE;
|
||||
}
|
||||
|
@ -3684,6 +3686,7 @@ static int check_perm_out(struct x86_emulate_ctxt *ctxt)
|
|||
#define EXT(_f, _e) { .flags = ((_f) | RMExt), .u.group = (_e) }
|
||||
#define G(_f, _g) { .flags = ((_f) | Group | ModRM), .u.group = (_g) }
|
||||
#define GD(_f, _g) { .flags = ((_f) | GroupDual | ModRM), .u.gdual = (_g) }
|
||||
#define ID(_f, _i) { .flags = ((_f) | InstrDual | ModRM), .u.idual = (_i) }
|
||||
#define E(_f, _e) { .flags = ((_f) | Escape | ModRM), .u.esc = (_e) }
|
||||
#define I(_f, _e) { .flags = (_f), .u.execute = (_e) }
|
||||
#define F(_f, _e) { .flags = (_f) | Fastop, .u.fastop = (_e) }
|
||||
|
@ -3780,11 +3783,11 @@ static const struct opcode group4[] = {
|
|||
static const struct opcode group5[] = {
|
||||
F(DstMem | SrcNone | Lock, em_inc),
|
||||
F(DstMem | SrcNone | Lock, em_dec),
|
||||
I(SrcMem | Stack, em_grp45),
|
||||
I(SrcMem | NearBranch, em_call_near_abs),
|
||||
I(SrcMemFAddr | ImplicitOps | Stack, em_call_far),
|
||||
I(SrcMem | Stack, em_grp45),
|
||||
I(SrcMemFAddr | ImplicitOps, em_grp45),
|
||||
I(SrcMem | Stack, em_grp45), D(Undefined),
|
||||
I(SrcMem | NearBranch, em_jmp_abs),
|
||||
I(SrcMemFAddr | ImplicitOps, em_jmp_far),
|
||||
I(SrcMem | Stack, em_push), D(Undefined),
|
||||
};
|
||||
|
||||
static const struct opcode group6[] = {
|
||||
|
@ -3845,8 +3848,12 @@ static const struct gprefix pfx_0f_6f_0f_7f = {
|
|||
I(Mmx, em_mov), I(Sse | Aligned, em_mov), N, I(Sse | Unaligned, em_mov),
|
||||
};
|
||||
|
||||
static const struct instr_dual instr_dual_0f_2b = {
|
||||
I(0, em_mov), N
|
||||
};
|
||||
|
||||
static const struct gprefix pfx_0f_2b = {
|
||||
I(0, em_mov), I(0, em_mov), N, N,
|
||||
ID(0, &instr_dual_0f_2b), ID(0, &instr_dual_0f_2b), N, N,
|
||||
};
|
||||
|
||||
static const struct gprefix pfx_0f_28_0f_29 = {
|
||||
|
@ -3920,6 +3927,10 @@ static const struct escape escape_dd = { {
|
|||
N, N, N, N, N, N, N, N,
|
||||
} };
|
||||
|
||||
static const struct instr_dual instr_dual_0f_c3 = {
|
||||
I(DstMem | SrcReg | ModRM | No16 | Mov, em_mov), N
|
||||
};
|
||||
|
||||
static const struct opcode opcode_table[256] = {
|
||||
/* 0x00 - 0x07 */
|
||||
F6ALU(Lock, em_add),
|
||||
|
@ -3964,7 +3975,7 @@ static const struct opcode opcode_table[256] = {
|
|||
I2bvIP(DstDI | SrcDX | Mov | String | Unaligned, em_in, ins, check_perm_in), /* insb, insw/insd */
|
||||
I2bvIP(SrcSI | DstDX | String, em_out, outs, check_perm_out), /* outsb, outsw/outsd */
|
||||
/* 0x70 - 0x7F */
|
||||
X16(D(SrcImmByte)),
|
||||
X16(D(SrcImmByte | NearBranch)),
|
||||
/* 0x80 - 0x87 */
|
||||
G(ByteOp | DstMem | SrcImm, group1),
|
||||
G(DstMem | SrcImm, group1),
|
||||
|
@ -3991,20 +4002,20 @@ static const struct opcode opcode_table[256] = {
|
|||
I2bv(DstAcc | SrcMem | Mov | MemAbs, em_mov),
|
||||
I2bv(DstMem | SrcAcc | Mov | MemAbs | PageTable, em_mov),
|
||||
I2bv(SrcSI | DstDI | Mov | String, em_mov),
|
||||
F2bv(SrcSI | DstDI | String | NoWrite, em_cmp),
|
||||
F2bv(SrcSI | DstDI | String | NoWrite, em_cmp_r),
|
||||
/* 0xA8 - 0xAF */
|
||||
F2bv(DstAcc | SrcImm | NoWrite, em_test),
|
||||
I2bv(SrcAcc | DstDI | Mov | String, em_mov),
|
||||
I2bv(SrcSI | DstAcc | Mov | String, em_mov),
|
||||
F2bv(SrcAcc | DstDI | String | NoWrite, em_cmp),
|
||||
F2bv(SrcAcc | DstDI | String | NoWrite, em_cmp_r),
|
||||
/* 0xB0 - 0xB7 */
|
||||
X8(I(ByteOp | DstReg | SrcImm | Mov, em_mov)),
|
||||
/* 0xB8 - 0xBF */
|
||||
X8(I(DstReg | SrcImm64 | Mov, em_mov)),
|
||||
/* 0xC0 - 0xC7 */
|
||||
G(ByteOp | Src2ImmByte, group2), G(Src2ImmByte, group2),
|
||||
I(ImplicitOps | Stack | SrcImmU16, em_ret_near_imm),
|
||||
I(ImplicitOps | Stack, em_ret),
|
||||
I(ImplicitOps | NearBranch | SrcImmU16, em_ret_near_imm),
|
||||
I(ImplicitOps | NearBranch, em_ret),
|
||||
I(DstReg | SrcMemFAddr | ModRM | No64 | Src2ES, em_lseg),
|
||||
I(DstReg | SrcMemFAddr | ModRM | No64 | Src2DS, em_lseg),
|
||||
G(ByteOp, group11), G(0, group11),
|
||||
|
@ -4024,13 +4035,14 @@ static const struct opcode opcode_table[256] = {
|
|||
/* 0xD8 - 0xDF */
|
||||
N, E(0, &escape_d9), N, E(0, &escape_db), N, E(0, &escape_dd), N, N,
|
||||
/* 0xE0 - 0xE7 */
|
||||
X3(I(SrcImmByte, em_loop)),
|
||||
I(SrcImmByte, em_jcxz),
|
||||
X3(I(SrcImmByte | NearBranch, em_loop)),
|
||||
I(SrcImmByte | NearBranch, em_jcxz),
|
||||
I2bvIP(SrcImmUByte | DstAcc, em_in, in, check_perm_in),
|
||||
I2bvIP(SrcAcc | DstImmUByte, em_out, out, check_perm_out),
|
||||
/* 0xE8 - 0xEF */
|
||||
I(SrcImm | Stack, em_call), D(SrcImm | ImplicitOps),
|
||||
I(SrcImmFAddr | No64, em_jmp_far), D(SrcImmByte | ImplicitOps),
|
||||
I(SrcImm | NearBranch, em_call), D(SrcImm | ImplicitOps | NearBranch),
|
||||
I(SrcImmFAddr | No64, em_jmp_far),
|
||||
D(SrcImmByte | ImplicitOps | NearBranch),
|
||||
I2bvIP(SrcDX | DstAcc, em_in, in, check_perm_in),
|
||||
I2bvIP(SrcAcc | DstDX, em_out, out, check_perm_out),
|
||||
/* 0xF0 - 0xF7 */
|
||||
|
@ -4090,7 +4102,7 @@ static const struct opcode twobyte_table[256] = {
|
|||
N, N, N, N,
|
||||
N, N, N, GP(SrcReg | DstMem | ModRM | Mov, &pfx_0f_6f_0f_7f),
|
||||
/* 0x80 - 0x8F */
|
||||
X16(D(SrcImm)),
|
||||
X16(D(SrcImm | NearBranch)),
|
||||
/* 0x90 - 0x9F */
|
||||
X16(D(ByteOp | DstMem | SrcNone | ModRM| Mov)),
|
||||
/* 0xA0 - 0xA7 */
|
||||
|
@ -4121,7 +4133,7 @@ static const struct opcode twobyte_table[256] = {
|
|||
D(DstReg | SrcMem8 | ModRM | Mov), D(DstReg | SrcMem16 | ModRM | Mov),
|
||||
/* 0xC0 - 0xC7 */
|
||||
F2bv(DstMem | SrcReg | ModRM | SrcWrite | Lock, em_xadd),
|
||||
N, D(DstMem | SrcReg | ModRM | Mov),
|
||||
N, ID(0, &instr_dual_0f_c3),
|
||||
N, N, N, GD(0, &group9),
|
||||
/* 0xC8 - 0xCF */
|
||||
X8(I(DstReg, em_bswap)),
|
||||
|
@ -4134,12 +4146,20 @@ static const struct opcode twobyte_table[256] = {
|
|||
N, N, N, N, N, N, N, N, N, N, N, N, N, N, N, N
|
||||
};
|
||||
|
||||
static const struct instr_dual instr_dual_0f_38_f0 = {
|
||||
I(DstReg | SrcMem | Mov, em_movbe), N
|
||||
};
|
||||
|
||||
static const struct instr_dual instr_dual_0f_38_f1 = {
|
||||
I(DstMem | SrcReg | Mov, em_movbe), N
|
||||
};
|
||||
|
||||
static const struct gprefix three_byte_0f_38_f0 = {
|
||||
I(DstReg | SrcMem | Mov, em_movbe), N, N, N
|
||||
ID(0, &instr_dual_0f_38_f0), N, N, N
|
||||
};
|
||||
|
||||
static const struct gprefix three_byte_0f_38_f1 = {
|
||||
I(DstMem | SrcReg | Mov, em_movbe), N, N, N
|
||||
ID(0, &instr_dual_0f_38_f1), N, N, N
|
||||
};
|
||||
|
||||
/*
|
||||
|
@ -4152,8 +4172,8 @@ static const struct opcode opcode_map_0f_38[256] = {
|
|||
/* 0x80 - 0xef */
|
||||
X16(N), X16(N), X16(N), X16(N), X16(N), X16(N), X16(N),
|
||||
/* 0xf0 - 0xf1 */
|
||||
GP(EmulateOnUD | ModRM | Prefix, &three_byte_0f_38_f0),
|
||||
GP(EmulateOnUD | ModRM | Prefix, &three_byte_0f_38_f1),
|
||||
GP(EmulateOnUD | ModRM, &three_byte_0f_38_f0),
|
||||
GP(EmulateOnUD | ModRM, &three_byte_0f_38_f1),
|
||||
/* 0xf2 - 0xff */
|
||||
N, N, X4(N), X8(N)
|
||||
};
|
||||
|
@ -4275,7 +4295,7 @@ static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op,
|
|||
op->type = OP_MEM;
|
||||
op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
|
||||
op->addr.mem.ea =
|
||||
register_address(ctxt, reg_read(ctxt, VCPU_REGS_RDI));
|
||||
register_address(ctxt, VCPU_REGS_RDI);
|
||||
op->addr.mem.seg = VCPU_SREG_ES;
|
||||
op->val = 0;
|
||||
op->count = 1;
|
||||
|
@ -4329,7 +4349,7 @@ static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op,
|
|||
op->type = OP_MEM;
|
||||
op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
|
||||
op->addr.mem.ea =
|
||||
register_address(ctxt, reg_read(ctxt, VCPU_REGS_RSI));
|
||||
register_address(ctxt, VCPU_REGS_RSI);
|
||||
op->addr.mem.seg = ctxt->seg_override;
|
||||
op->val = 0;
|
||||
op->count = 1;
|
||||
|
@ -4338,7 +4358,7 @@ static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op,
|
|||
op->type = OP_MEM;
|
||||
op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
|
||||
op->addr.mem.ea =
|
||||
register_address(ctxt,
|
||||
address_mask(ctxt,
|
||||
reg_read(ctxt, VCPU_REGS_RBX) +
|
||||
(reg_read(ctxt, VCPU_REGS_RAX) & 0xff));
|
||||
op->addr.mem.seg = ctxt->seg_override;
|
||||
|
@ -4510,8 +4530,7 @@ done_prefixes:
|
|||
|
||||
/* vex-prefix instructions are not implemented */
|
||||
if (ctxt->opcode_len == 1 && (ctxt->b == 0xc5 || ctxt->b == 0xc4) &&
|
||||
(mode == X86EMUL_MODE_PROT64 ||
|
||||
(mode >= X86EMUL_MODE_PROT16 && (ctxt->modrm & 0x80)))) {
|
||||
(mode == X86EMUL_MODE_PROT64 || (ctxt->modrm & 0xc0) == 0xc0)) {
|
||||
ctxt->d = NotImpl;
|
||||
}
|
||||
|
||||
|
@ -4549,6 +4568,12 @@ done_prefixes:
|
|||
else
|
||||
opcode = opcode.u.esc->op[(ctxt->modrm >> 3) & 7];
|
||||
break;
|
||||
case InstrDual:
|
||||
if ((ctxt->modrm >> 6) == 3)
|
||||
opcode = opcode.u.idual->mod3;
|
||||
else
|
||||
opcode = opcode.u.idual->mod012;
|
||||
break;
|
||||
default:
|
||||
return EMULATION_FAILED;
|
||||
}
|
||||
|
@ -4567,7 +4592,8 @@ done_prefixes:
|
|||
return EMULATION_FAILED;
|
||||
|
||||
if (unlikely(ctxt->d &
|
||||
(NotImpl|Stack|Op3264|Sse|Mmx|Intercept|CheckPerm))) {
|
||||
(NotImpl|Stack|Op3264|Sse|Mmx|Intercept|CheckPerm|NearBranch|
|
||||
No16))) {
|
||||
/*
|
||||
* These are copied unconditionally here, and checked unconditionally
|
||||
* in x86_emulate_insn.
|
||||
|
@ -4578,8 +4604,12 @@ done_prefixes:
|
|||
if (ctxt->d & NotImpl)
|
||||
return EMULATION_FAILED;
|
||||
|
||||
if (mode == X86EMUL_MODE_PROT64 && (ctxt->d & Stack))
|
||||
ctxt->op_bytes = 8;
|
||||
if (mode == X86EMUL_MODE_PROT64) {
|
||||
if (ctxt->op_bytes == 4 && (ctxt->d & Stack))
|
||||
ctxt->op_bytes = 8;
|
||||
else if (ctxt->d & NearBranch)
|
||||
ctxt->op_bytes = 8;
|
||||
}
|
||||
|
||||
if (ctxt->d & Op3264) {
|
||||
if (mode == X86EMUL_MODE_PROT64)
|
||||
|
@ -4588,6 +4618,9 @@ done_prefixes:
|
|||
ctxt->op_bytes = 4;
|
||||
}
|
||||
|
||||
if ((ctxt->d & No16) && ctxt->op_bytes == 2)
|
||||
ctxt->op_bytes = 4;
|
||||
|
||||
if (ctxt->d & Sse)
|
||||
ctxt->op_bytes = 16;
|
||||
else if (ctxt->d & Mmx)
|
||||
|
@ -4631,7 +4664,8 @@ done_prefixes:
|
|||
rc = decode_operand(ctxt, &ctxt->dst, (ctxt->d >> DstShift) & OpMask);
|
||||
|
||||
if (ctxt->rip_relative)
|
||||
ctxt->memopp->addr.mem.ea += ctxt->_eip;
|
||||
ctxt->memopp->addr.mem.ea = address_mask(ctxt,
|
||||
ctxt->memopp->addr.mem.ea + ctxt->_eip);
|
||||
|
||||
done:
|
||||
return (rc != X86EMUL_CONTINUE) ? EMULATION_FAILED : EMULATION_OK;
|
||||
|
@ -4775,6 +4809,12 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
|
|||
goto done;
|
||||
}
|
||||
|
||||
/* Instruction can only be executed in protected mode */
|
||||
if ((ctxt->d & Prot) && ctxt->mode < X86EMUL_MODE_PROT16) {
|
||||
rc = emulate_ud(ctxt);
|
||||
goto done;
|
||||
}
|
||||
|
||||
/* Privileged instruction can be executed only in CPL=0 */
|
||||
if ((ctxt->d & Priv) && ops->cpl(ctxt)) {
|
||||
if (ctxt->d & PrivUD)
|
||||
|
@ -4784,12 +4824,6 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
|
|||
goto done;
|
||||
}
|
||||
|
||||
/* Instruction can only be executed in protected mode */
|
||||
if ((ctxt->d & Prot) && ctxt->mode < X86EMUL_MODE_PROT16) {
|
||||
rc = emulate_ud(ctxt);
|
||||
goto done;
|
||||
}
|
||||
|
||||
/* Do instruction specific permission checks */
|
||||
if (ctxt->d & CheckPerm) {
|
||||
rc = ctxt->check_perm(ctxt);
|
||||
|
@ -4974,8 +5008,7 @@ writeback:
|
|||
count = ctxt->src.count;
|
||||
else
|
||||
count = ctxt->dst.count;
|
||||
register_address_increment(ctxt, reg_rmw(ctxt, VCPU_REGS_RCX),
|
||||
-count);
|
||||
register_address_increment(ctxt, VCPU_REGS_RCX, -count);
|
||||
|
||||
if (!string_insn_completed(ctxt)) {
|
||||
/*
|
||||
|
@ -5053,11 +5086,6 @@ twobyte_insn:
|
|||
ctxt->dst.val = (ctxt->src.bytes == 1) ? (s8) ctxt->src.val :
|
||||
(s16) ctxt->src.val;
|
||||
break;
|
||||
case 0xc3: /* movnti */
|
||||
ctxt->dst.bytes = ctxt->op_bytes;
|
||||
ctxt->dst.val = (ctxt->op_bytes == 8) ? (u64) ctxt->src.val :
|
||||
(u32) ctxt->src.val;
|
||||
break;
|
||||
default:
|
||||
goto cannot_emulate;
|
||||
}
|
||||
|
|
|
@ -270,7 +270,6 @@ void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap,
|
|||
spin_unlock(&ioapic->lock);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_X86
|
||||
void kvm_vcpu_request_scan_ioapic(struct kvm *kvm)
|
||||
{
|
||||
struct kvm_ioapic *ioapic = kvm->arch.vioapic;
|
||||
|
@ -279,12 +278,6 @@ void kvm_vcpu_request_scan_ioapic(struct kvm *kvm)
|
|||
return;
|
||||
kvm_make_scan_ioapic_request(kvm);
|
||||
}
|
||||
#else
|
||||
void kvm_vcpu_request_scan_ioapic(struct kvm *kvm)
|
||||
{
|
||||
return;
|
||||
}
|
||||
#endif
|
||||
|
||||
static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
|
||||
{
|
||||
|
@ -586,11 +579,6 @@ static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
|
|||
case IOAPIC_REG_WINDOW:
|
||||
ioapic_write_indirect(ioapic, data);
|
||||
break;
|
||||
#ifdef CONFIG_IA64
|
||||
case IOAPIC_REG_EOI:
|
||||
__kvm_ioapic_update_eoi(NULL, ioapic, data, IOAPIC_LEVEL_TRIG);
|
||||
break;
|
||||
#endif
|
||||
|
||||
default:
|
||||
break;
|
|
@ -19,7 +19,6 @@ struct kvm_vcpu;
|
|||
/* Direct registers. */
|
||||
#define IOAPIC_REG_SELECT 0x00
|
||||
#define IOAPIC_REG_WINDOW 0x10
|
||||
#define IOAPIC_REG_EOI 0x40 /* IA64 IOSAPIC only */
|
||||
|
||||
/* Indirect registers. */
|
||||
#define IOAPIC_REG_APIC_ID 0x00 /* x86 IOAPIC only */
|
||||
|
@ -45,6 +44,23 @@ struct rtc_status {
|
|||
DECLARE_BITMAP(dest_map, KVM_MAX_VCPUS);
|
||||
};
|
||||
|
||||
union kvm_ioapic_redirect_entry {
|
||||
u64 bits;
|
||||
struct {
|
||||
u8 vector;
|
||||
u8 delivery_mode:3;
|
||||
u8 dest_mode:1;
|
||||
u8 delivery_status:1;
|
||||
u8 polarity:1;
|
||||
u8 remote_irr:1;
|
||||
u8 trig_mode:1;
|
||||
u8 mask:1;
|
||||
u8 reserve:7;
|
||||
u8 reserved[4];
|
||||
u8 dest_id;
|
||||
} fields;
|
||||
};
|
||||
|
||||
struct kvm_ioapic {
|
||||
u64 base_address;
|
||||
u32 ioregsel;
|
||||
|
@ -83,7 +99,7 @@ static inline struct kvm_ioapic *ioapic_irqchip(struct kvm *kvm)
|
|||
|
||||
void kvm_rtc_eoi_tracking_restore_one(struct kvm_vcpu *vcpu);
|
||||
int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
|
||||
int short_hand, int dest, int dest_mode);
|
||||
int short_hand, unsigned int dest, int dest_mode);
|
||||
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2);
|
||||
void kvm_ioapic_update_eoi(struct kvm_vcpu *vcpu, int vector,
|
||||
int trigger_mode);
|
||||
|
@ -97,7 +113,6 @@ int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
|
|||
struct kvm_lapic_irq *irq, unsigned long *dest_map);
|
||||
int kvm_get_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
|
||||
int kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
|
||||
void kvm_vcpu_request_scan_ioapic(struct kvm *kvm);
|
||||
void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap,
|
||||
u32 *tmr);
|
||||
|
|
@ -31,6 +31,7 @@
|
|||
#include <linux/dmar.h>
|
||||
#include <linux/iommu.h>
|
||||
#include <linux/intel-iommu.h>
|
||||
#include "assigned-dev.h"
|
||||
|
||||
static bool allow_unsafe_assigned_interrupts;
|
||||
module_param_named(allow_unsafe_assigned_interrupts,
|
||||
|
@ -169,10 +170,8 @@ static int kvm_iommu_map_memslots(struct kvm *kvm)
|
|||
return r;
|
||||
}
|
||||
|
||||
int kvm_assign_device(struct kvm *kvm,
|
||||
struct kvm_assigned_dev_kernel *assigned_dev)
|
||||
int kvm_assign_device(struct kvm *kvm, struct pci_dev *pdev)
|
||||
{
|
||||
struct pci_dev *pdev = NULL;
|
||||
struct iommu_domain *domain = kvm->arch.iommu_domain;
|
||||
int r;
|
||||
bool noncoherent;
|
||||
|
@ -181,7 +180,6 @@ int kvm_assign_device(struct kvm *kvm,
|
|||
if (!domain)
|
||||
return 0;
|
||||
|
||||
pdev = assigned_dev->dev;
|
||||
if (pdev == NULL)
|
||||
return -ENODEV;
|
||||
|
||||
|
@ -212,17 +210,14 @@ out_unmap:
|
|||
return r;
|
||||
}
|
||||
|
||||
int kvm_deassign_device(struct kvm *kvm,
|
||||
struct kvm_assigned_dev_kernel *assigned_dev)
|
||||
int kvm_deassign_device(struct kvm *kvm, struct pci_dev *pdev)
|
||||
{
|
||||
struct iommu_domain *domain = kvm->arch.iommu_domain;
|
||||
struct pci_dev *pdev = NULL;
|
||||
|
||||
/* check if iommu exists and in use */
|
||||
if (!domain)
|
||||
return 0;
|
||||
|
||||
pdev = assigned_dev->dev;
|
||||
if (pdev == NULL)
|
||||
return -ENODEV;
|
||||
|
|
@ -26,9 +26,6 @@
|
|||
#include <trace/events/kvm.h>
|
||||
|
||||
#include <asm/msidef.h>
|
||||
#ifdef CONFIG_IA64
|
||||
#include <asm/iosapic.h>
|
||||
#endif
|
||||
|
||||
#include "irq.h"
|
||||
|
||||
|
@ -38,12 +35,8 @@ static int kvm_set_pic_irq(struct kvm_kernel_irq_routing_entry *e,
|
|||
struct kvm *kvm, int irq_source_id, int level,
|
||||
bool line_status)
|
||||
{
|
||||
#ifdef CONFIG_X86
|
||||
struct kvm_pic *pic = pic_irqchip(kvm);
|
||||
return kvm_pic_set_irq(pic, e->irqchip.pin, irq_source_id, level);
|
||||
#else
|
||||
return -1;
|
||||
#endif
|
||||
}
|
||||
|
||||
static int kvm_set_ioapic_irq(struct kvm_kernel_irq_routing_entry *e,
|
||||
|
@ -57,12 +50,7 @@ static int kvm_set_ioapic_irq(struct kvm_kernel_irq_routing_entry *e,
|
|||
|
||||
inline static bool kvm_is_dm_lowest_prio(struct kvm_lapic_irq *irq)
|
||||
{
|
||||
#ifdef CONFIG_IA64
|
||||
return irq->delivery_mode ==
|
||||
(IOSAPIC_LOWEST_PRIORITY << IOSAPIC_DELIVERY_SHIFT);
|
||||
#else
|
||||
return irq->delivery_mode == APIC_DM_LOWEST;
|
||||
#endif
|
||||
}
|
||||
|
||||
int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
|
||||
|
@ -202,9 +190,7 @@ int kvm_request_irq_source_id(struct kvm *kvm)
|
|||
}
|
||||
|
||||
ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
|
||||
#ifdef CONFIG_X86
|
||||
ASSERT(irq_source_id != KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID);
|
||||
#endif
|
||||
set_bit(irq_source_id, bitmap);
|
||||
unlock:
|
||||
mutex_unlock(&kvm->irq_lock);
|
||||
|
@ -215,9 +201,7 @@ unlock:
|
|||
void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
|
||||
{
|
||||
ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
|
||||
#ifdef CONFIG_X86
|
||||
ASSERT(irq_source_id != KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID);
|
||||
#endif
|
||||
|
||||
mutex_lock(&kvm->irq_lock);
|
||||
if (irq_source_id < 0 ||
|
||||
|
@ -230,9 +214,7 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
|
|||
goto unlock;
|
||||
|
||||
kvm_ioapic_clear_all(kvm->arch.vioapic, irq_source_id);
|
||||
#ifdef CONFIG_X86
|
||||
kvm_pic_clear_all(pic_irqchip(kvm), irq_source_id);
|
||||
#endif
|
||||
unlock:
|
||||
mutex_unlock(&kvm->irq_lock);
|
||||
}
|
||||
|
@ -242,7 +224,7 @@ void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
|
|||
{
|
||||
mutex_lock(&kvm->irq_lock);
|
||||
kimn->irq = irq;
|
||||
hlist_add_head_rcu(&kimn->link, &kvm->mask_notifier_list);
|
||||
hlist_add_head_rcu(&kimn->link, &kvm->arch.mask_notifier_list);
|
||||
mutex_unlock(&kvm->irq_lock);
|
||||
}
|
||||
|
||||
|
@ -264,7 +246,7 @@ void kvm_fire_mask_notifiers(struct kvm *kvm, unsigned irqchip, unsigned pin,
|
|||
idx = srcu_read_lock(&kvm->irq_srcu);
|
||||
gsi = kvm_irq_map_chip_pin(kvm, irqchip, pin);
|
||||
if (gsi != -1)
|
||||
hlist_for_each_entry_rcu(kimn, &kvm->mask_notifier_list, link)
|
||||
hlist_for_each_entry_rcu(kimn, &kvm->arch.mask_notifier_list, link)
|
||||
if (kimn->irq == gsi)
|
||||
kimn->func(kimn, mask);
|
||||
srcu_read_unlock(&kvm->irq_srcu, idx);
|
||||
|
@ -322,16 +304,11 @@ out:
|
|||
.u.irqchip = { .irqchip = KVM_IRQCHIP_IOAPIC, .pin = (irq) } }
|
||||
#define ROUTING_ENTRY1(irq) IOAPIC_ROUTING_ENTRY(irq)
|
||||
|
||||
#ifdef CONFIG_X86
|
||||
# define PIC_ROUTING_ENTRY(irq) \
|
||||
#define PIC_ROUTING_ENTRY(irq) \
|
||||
{ .gsi = irq, .type = KVM_IRQ_ROUTING_IRQCHIP, \
|
||||
.u.irqchip = { .irqchip = SELECT_PIC(irq), .pin = (irq) % 8 } }
|
||||
# define ROUTING_ENTRY2(irq) \
|
||||
#define ROUTING_ENTRY2(irq) \
|
||||
IOAPIC_ROUTING_ENTRY(irq), PIC_ROUTING_ENTRY(irq)
|
||||
#else
|
||||
# define ROUTING_ENTRY2(irq) \
|
||||
IOAPIC_ROUTING_ENTRY(irq)
|
||||
#endif
|
||||
|
||||
static const struct kvm_irq_routing_entry default_routing[] = {
|
||||
ROUTING_ENTRY2(0), ROUTING_ENTRY2(1),
|
||||
|
@ -346,20 +323,6 @@ static const struct kvm_irq_routing_entry default_routing[] = {
|
|||
ROUTING_ENTRY1(18), ROUTING_ENTRY1(19),
|
||||
ROUTING_ENTRY1(20), ROUTING_ENTRY1(21),
|
||||
ROUTING_ENTRY1(22), ROUTING_ENTRY1(23),
|
||||
#ifdef CONFIG_IA64
|
||||
ROUTING_ENTRY1(24), ROUTING_ENTRY1(25),
|
||||
ROUTING_ENTRY1(26), ROUTING_ENTRY1(27),
|
||||
ROUTING_ENTRY1(28), ROUTING_ENTRY1(29),
|
||||
ROUTING_ENTRY1(30), ROUTING_ENTRY1(31),
|
||||
ROUTING_ENTRY1(32), ROUTING_ENTRY1(33),
|
||||
ROUTING_ENTRY1(34), ROUTING_ENTRY1(35),
|
||||
ROUTING_ENTRY1(36), ROUTING_ENTRY1(37),
|
||||
ROUTING_ENTRY1(38), ROUTING_ENTRY1(39),
|
||||
ROUTING_ENTRY1(40), ROUTING_ENTRY1(41),
|
||||
ROUTING_ENTRY1(42), ROUTING_ENTRY1(43),
|
||||
ROUTING_ENTRY1(44), ROUTING_ENTRY1(45),
|
||||
ROUTING_ENTRY1(46), ROUTING_ENTRY1(47),
|
||||
#endif
|
||||
};
|
||||
|
||||
int kvm_setup_default_irq_routing(struct kvm *kvm)
|
|
@ -68,6 +68,9 @@
|
|||
#define MAX_APIC_VECTOR 256
|
||||
#define APIC_VECTORS_PER_REG 32
|
||||
|
||||
#define APIC_BROADCAST 0xFF
|
||||
#define X2APIC_BROADCAST 0xFFFFFFFFul
|
||||
|
||||
#define VEC_POS(v) ((v) & (32 - 1))
|
||||
#define REG_POS(v) (((v) >> 5) << 4)
|
||||
|
||||
|
@ -129,8 +132,6 @@ static inline int kvm_apic_id(struct kvm_lapic *apic)
|
|||
return (kvm_apic_get_reg(apic, APIC_ID) >> 24) & 0xff;
|
||||
}
|
||||
|
||||
#define KVM_X2APIC_CID_BITS 0
|
||||
|
||||
static void recalculate_apic_map(struct kvm *kvm)
|
||||
{
|
||||
struct kvm_apic_map *new, *old = NULL;
|
||||
|
@ -149,42 +150,56 @@ static void recalculate_apic_map(struct kvm *kvm)
|
|||
new->cid_shift = 8;
|
||||
new->cid_mask = 0;
|
||||
new->lid_mask = 0xff;
|
||||
new->broadcast = APIC_BROADCAST;
|
||||
|
||||
kvm_for_each_vcpu(i, vcpu, kvm) {
|
||||
struct kvm_lapic *apic = vcpu->arch.apic;
|
||||
u16 cid, lid;
|
||||
u32 ldr;
|
||||
|
||||
if (!kvm_apic_present(vcpu))
|
||||
continue;
|
||||
|
||||
/*
|
||||
* All APICs have to be configured in the same mode by an OS.
|
||||
* We take advatage of this while building logical id loockup
|
||||
* table. After reset APICs are in xapic/flat mode, so if we
|
||||
* find apic with different setting we assume this is the mode
|
||||
* OS wants all apics to be in; build lookup table accordingly.
|
||||
*/
|
||||
if (apic_x2apic_mode(apic)) {
|
||||
new->ldr_bits = 32;
|
||||
new->cid_shift = 16;
|
||||
new->cid_mask = (1 << KVM_X2APIC_CID_BITS) - 1;
|
||||
new->lid_mask = 0xffff;
|
||||
} else if (kvm_apic_sw_enabled(apic) &&
|
||||
!new->cid_mask /* flat mode */ &&
|
||||
kvm_apic_get_reg(apic, APIC_DFR) == APIC_DFR_CLUSTER) {
|
||||
new->cid_shift = 4;
|
||||
new->cid_mask = 0xf;
|
||||
new->lid_mask = 0xf;
|
||||
new->cid_mask = new->lid_mask = 0xffff;
|
||||
new->broadcast = X2APIC_BROADCAST;
|
||||
} else if (kvm_apic_get_reg(apic, APIC_LDR)) {
|
||||
if (kvm_apic_get_reg(apic, APIC_DFR) ==
|
||||
APIC_DFR_CLUSTER) {
|
||||
new->cid_shift = 4;
|
||||
new->cid_mask = 0xf;
|
||||
new->lid_mask = 0xf;
|
||||
} else {
|
||||
new->cid_shift = 8;
|
||||
new->cid_mask = 0;
|
||||
new->lid_mask = 0xff;
|
||||
}
|
||||
}
|
||||
|
||||
new->phys_map[kvm_apic_id(apic)] = apic;
|
||||
/*
|
||||
* All APICs have to be configured in the same mode by an OS.
|
||||
* We take advatage of this while building logical id loockup
|
||||
* table. After reset APICs are in software disabled mode, so if
|
||||
* we find apic with different setting we assume this is the mode
|
||||
* OS wants all apics to be in; build lookup table accordingly.
|
||||
*/
|
||||
if (kvm_apic_sw_enabled(apic))
|
||||
break;
|
||||
}
|
||||
|
||||
kvm_for_each_vcpu(i, vcpu, kvm) {
|
||||
struct kvm_lapic *apic = vcpu->arch.apic;
|
||||
u16 cid, lid;
|
||||
u32 ldr, aid;
|
||||
|
||||
aid = kvm_apic_id(apic);
|
||||
ldr = kvm_apic_get_reg(apic, APIC_LDR);
|
||||
cid = apic_cluster_id(new, ldr);
|
||||
lid = apic_logical_id(new, ldr);
|
||||
|
||||
if (lid)
|
||||
if (aid < ARRAY_SIZE(new->phys_map))
|
||||
new->phys_map[aid] = apic;
|
||||
if (lid && cid < ARRAY_SIZE(new->logical_map))
|
||||
new->logical_map[cid][ffs(lid) - 1] = apic;
|
||||
}
|
||||
out:
|
||||
|
@@ -201,11 +216,13 @@ out:

static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
{
	u32 prev = kvm_apic_get_reg(apic, APIC_SPIV);
	bool enabled = val & APIC_SPIV_APIC_ENABLED;

	apic_set_reg(apic, APIC_SPIV, val);
	if ((prev ^ val) & APIC_SPIV_APIC_ENABLED) {
		if (val & APIC_SPIV_APIC_ENABLED) {

	if (enabled != apic->sw_enabled) {
		apic->sw_enabled = enabled;
		if (enabled) {
			static_key_slow_dec_deferred(&apic_sw_disabled);
			recalculate_apic_map(apic->vcpu->kvm);
		} else

@@ -237,21 +254,17 @@ static inline int apic_lvt_vector(struct kvm_lapic *apic, int lvt_type)

static inline int apic_lvtt_oneshot(struct kvm_lapic *apic)
{
	return ((kvm_apic_get_reg(apic, APIC_LVTT) &
		apic->lapic_timer.timer_mode_mask) == APIC_LVT_TIMER_ONESHOT);
	return apic->lapic_timer.timer_mode == APIC_LVT_TIMER_ONESHOT;
}

static inline int apic_lvtt_period(struct kvm_lapic *apic)
{
	return ((kvm_apic_get_reg(apic, APIC_LVTT) &
		apic->lapic_timer.timer_mode_mask) == APIC_LVT_TIMER_PERIODIC);
	return apic->lapic_timer.timer_mode == APIC_LVT_TIMER_PERIODIC;
}

static inline int apic_lvtt_tscdeadline(struct kvm_lapic *apic)
{
	return ((kvm_apic_get_reg(apic, APIC_LVTT) &
		apic->lapic_timer.timer_mode_mask) ==
		APIC_LVT_TIMER_TSCDEADLINE);
	return apic->lapic_timer.timer_mode == APIC_LVT_TIMER_TSCDEADLINE;
}

static inline int apic_lvt_nmi_mode(u32 lvt_val)
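These helpers now compare a cached lapic_timer.timer_mode value instead of re-reading APIC_LVTT and masking it on every call; the cache is refreshed when the guest writes LVTT (see the later APIC_LVTT hunk). A small sketch of the same cache-on-write pattern, with invented names and the standard LVTT mode encodings:

#include <stdint.h>
#include <stdbool.h>

#define TIMER_ONESHOT     0u
#define TIMER_PERIODIC    (1u << 17)
#define TIMER_TSCDEADLINE (2u << 17)

struct lapic_timer_state {
	uint32_t lvtt;            /* raw LVT Timer register */
	uint32_t timer_mode_mask; /* mode bits of LVTT */
	uint32_t timer_mode;      /* cached: lvtt & timer_mode_mask */
};

/* Writer keeps the cache coherent with the register. */
static void write_lvtt(struct lapic_timer_state *t, uint32_t val)
{
	t->lvtt = val;
	t->timer_mode = val & t->timer_mode_mask;
}

/* Readers no longer need to re-read and mask the register. */
static bool timer_is_periodic(const struct lapic_timer_state *t)
{
	return t->timer_mode == TIMER_PERIODIC;
}

int main(void)
{
	struct lapic_timer_state t = { 0, 3u << 17, 0 };
	write_lvtt(&t, TIMER_PERIODIC | 0x20);	/* periodic mode, vector 0x20 */
	return timer_is_periodic(&t) ? 0 : 1;
}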
@@ -326,8 +339,12 @@ EXPORT_SYMBOL_GPL(kvm_apic_update_irr);

static inline void apic_set_irr(int vec, struct kvm_lapic *apic)
{
	apic->irr_pending = true;
	apic_set_vector(vec, apic->regs + APIC_IRR);
	/*
	 * irr_pending must be true if any interrupt is pending; set it after
	 * APIC_IRR to avoid race with apic_clear_irr
	 */
	apic->irr_pending = true;
}

static inline int apic_search_irr(struct kvm_lapic *apic)

@@ -359,13 +376,15 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)

	vcpu = apic->vcpu;

	apic_clear_vector(vec, apic->regs + APIC_IRR);
	if (unlikely(kvm_apic_vid_enabled(vcpu->kvm)))
	if (unlikely(kvm_apic_vid_enabled(vcpu->kvm))) {
		/* try to update RVI */
		apic_clear_vector(vec, apic->regs + APIC_IRR);
		kvm_make_request(KVM_REQ_EVENT, vcpu);
	else {
		vec = apic_search_irr(apic);
		apic->irr_pending = (vec != -1);
	} else {
		apic->irr_pending = false;
		apic_clear_vector(vec, apic->regs + APIC_IRR);
		if (apic_search_irr(apic) != -1)
			apic->irr_pending = true;
	}
}

@@ -558,16 +577,25 @@ static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
	apic_update_ppr(apic);
}

int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest)
static int kvm_apic_broadcast(struct kvm_lapic *apic, u32 dest)
{
	return dest == 0xff || kvm_apic_id(apic) == dest;
	return dest == (apic_x2apic_mode(apic) ?
			X2APIC_BROADCAST : APIC_BROADCAST);
}

int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda)
int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u32 dest)
{
	return kvm_apic_id(apic) == dest || kvm_apic_broadcast(apic, dest);
}

int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u32 mda)
{
	int result = 0;
	u32 logical_id;

	if (kvm_apic_broadcast(apic, mda))
		return 1;

	if (apic_x2apic_mode(apic)) {
		logical_id = kvm_apic_get_reg(apic, APIC_LDR);
		return logical_id & mda;
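kvm_apic_broadcast() makes the broadcast value mode-dependent (0xFF for xAPIC, 0xFFFFFFFF for x2APIC), and physical matching becomes "own APIC ID or broadcast". A self-contained sketch of that check, with simplified types and invented names:

#include <stdint.h>
#include <stdbool.h>

#define APIC_BCAST   0xFFu
#define X2APIC_BCAST 0xFFFFFFFFu

struct apic_state {
	bool x2apic;
	uint32_t id;
};

static bool is_broadcast(const struct apic_state *a, uint32_t dest)
{
	return dest == (a->x2apic ? X2APIC_BCAST : APIC_BCAST);
}

static bool match_physical(const struct apic_state *a, uint32_t dest)
{
	return a->id == dest || is_broadcast(a, dest);
}

int main(void)
{
	struct apic_state a = { .x2apic = false, .id = 3 };
	return match_physical(&a, 0xFF) ? 0 : 1;	/* broadcast always matches */
}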
@@ -595,7 +623,7 @@ int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda)
}

int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
		int short_hand, int dest, int dest_mode)
		int short_hand, unsigned int dest, int dest_mode)
{
	int result = 0;
	struct kvm_lapic *target = vcpu->arch.apic;

@@ -657,15 +685,24 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
	if (!map)
		goto out;

	if (irq->dest_id == map->broadcast)
		goto out;

	ret = true;

	if (irq->dest_mode == 0) { /* physical mode */
		if (irq->delivery_mode == APIC_DM_LOWEST ||
				irq->dest_id == 0xff)
		if (irq->dest_id >= ARRAY_SIZE(map->phys_map))
			goto out;
		dst = &map->phys_map[irq->dest_id & 0xff];

		dst = &map->phys_map[irq->dest_id];
	} else {
		u32 mda = irq->dest_id << (32 - map->ldr_bits);
		u16 cid = apic_cluster_id(map, mda);

		dst = map->logical_map[apic_cluster_id(map, mda)];
		if (cid >= ARRAY_SIZE(map->logical_map))
			goto out;

		dst = map->logical_map[cid];

		bitmap = apic_logical_id(map, mda);

@@ -691,8 +728,6 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
			*r = 0;
		*r += kvm_apic_set_irq(dst[i]->vcpu, irq, dest_map);
	}

	ret = true;
out:
	rcu_read_unlock();
	return ret;

@@ -1034,6 +1069,26 @@ static void update_divide_count(struct kvm_lapic *apic)
				   apic->divide_count);
}

static void apic_timer_expired(struct kvm_lapic *apic)
{
	struct kvm_vcpu *vcpu = apic->vcpu;
	wait_queue_head_t *q = &vcpu->wq;

	/*
	 * Note: KVM_REQ_PENDING_TIMER is implicitly checked in
	 * vcpu_enter_guest.
	 */
	if (atomic_read(&apic->lapic_timer.pending))
		return;

	atomic_inc(&apic->lapic_timer.pending);
	/* FIXME: this code should not know anything about vcpus */
	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);

	if (waitqueue_active(q))
		wake_up_interruptible(q);
}

static void start_apic_timer(struct kvm_lapic *apic)
{
	ktime_t now;

@@ -1096,9 +1151,10 @@ static void start_apic_timer(struct kvm_lapic *apic)
	if (likely(tscdeadline > guest_tsc)) {
		ns = (tscdeadline - guest_tsc) * 1000000ULL;
		do_div(ns, this_tsc_khz);
	}
	hrtimer_start(&apic->lapic_timer.timer,
		ktime_add_ns(now, ns), HRTIMER_MODE_ABS);
		hrtimer_start(&apic->lapic_timer.timer,
			ktime_add_ns(now, ns), HRTIMER_MODE_ABS);
	} else
		apic_timer_expired(apic);

	local_irq_restore(flags);
}
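The tsc-deadline branch above converts the remaining TSC delta into nanoseconds before arming the hrtimer: ns = (tscdeadline - guest_tsc) * 1,000,000 / tsc_khz. A tiny worked example of that arithmetic (the function name is invented):

#include <stdint.h>
#include <stdio.h>

/* ns until a TSC deadline, given the current guest TSC and the TSC frequency in kHz. */
static uint64_t tsc_delta_to_ns(uint64_t deadline, uint64_t guest_tsc,
				uint64_t tsc_khz)
{
	if (deadline <= guest_tsc)
		return 0;	/* already expired */
	return (deadline - guest_tsc) * 1000000ULL / tsc_khz;
}

int main(void)
{
	/* 3 GHz TSC (3,000,000 kHz), deadline 6,000,000 cycles away: 2 ms. */
	printf("%llu ns\n",
	       (unsigned long long)tsc_delta_to_ns(16000000, 10000000, 3000000));
	return 0;
}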
@@ -1203,17 +1259,20 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)

		break;

	case APIC_LVTT:
		if ((kvm_apic_get_reg(apic, APIC_LVTT) &
		    apic->lapic_timer.timer_mode_mask) !=
		    (val & apic->lapic_timer.timer_mode_mask))
	case APIC_LVTT: {
		u32 timer_mode = val & apic->lapic_timer.timer_mode_mask;

		if (apic->lapic_timer.timer_mode != timer_mode) {
			apic->lapic_timer.timer_mode = timer_mode;
			hrtimer_cancel(&apic->lapic_timer.timer);
		}

		if (!kvm_apic_sw_enabled(apic))
			val |= APIC_LVT_MASKED;
		val &= (apic_lvt_mask[0] | apic->lapic_timer.timer_mode_mask);
		apic_set_reg(apic, APIC_LVTT, val);
		break;
	}

	case APIC_TMICT:
		if (apic_lvtt_tscdeadline(apic))

@@ -1320,7 +1379,7 @@ void kvm_free_lapic(struct kvm_vcpu *vcpu)
	if (!(vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE))
		static_key_slow_dec_deferred(&apic_hw_disabled);

	if (!(kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_APIC_ENABLED))
	if (!apic->sw_enabled)
		static_key_slow_dec_deferred(&apic_sw_disabled);

	if (apic->regs)

@@ -1355,9 +1414,6 @@ void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
		return;

	hrtimer_cancel(&apic->lapic_timer.timer);
	/* Inject here so clearing tscdeadline won't override new value */
	if (apic_has_pending_timer(vcpu))
		kvm_inject_apic_timer_irqs(vcpu);
	apic->lapic_timer.tscdeadline = data;
	start_apic_timer(apic);
}

@@ -1422,6 +1478,10 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
	apic->base_address = apic->vcpu->arch.apic_base &
			     MSR_IA32_APICBASE_BASE;

	if ((value & MSR_IA32_APICBASE_ENABLE) &&
	     apic->base_address != APIC_DEFAULT_PHYS_BASE)
		pr_warn_once("APIC base relocation is unsupported by KVM");

	/* with FSB delivery interrupt, we can restart APIC functionality */
	apic_debug("apic base msr is 0x%016" PRIx64 ", and base address is "
		   "0x%lx.\n", apic->vcpu->arch.apic_base, apic->base_address);

@@ -1447,6 +1507,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu)

	for (i = 0; i < APIC_LVT_NUM; i++)
		apic_set_reg(apic, APIC_LVTT + 0x10 * i, APIC_LVT_MASKED);
	apic->lapic_timer.timer_mode = 0;
	apic_set_reg(apic, APIC_LVT0,
		     SET_APIC_DELIVERY_MODE(0, APIC_MODE_EXTINT));

@@ -1538,23 +1599,8 @@ static enum hrtimer_restart apic_timer_fn(struct hrtimer *data)
{
	struct kvm_timer *ktimer = container_of(data, struct kvm_timer, timer);
	struct kvm_lapic *apic = container_of(ktimer, struct kvm_lapic, lapic_timer);
	struct kvm_vcpu *vcpu = apic->vcpu;
	wait_queue_head_t *q = &vcpu->wq;

	/*
	 * There is a race window between reading and incrementing, but we do
	 * not care about potentially losing timer events in the !reinject
	 * case anyway. Note: KVM_REQ_PENDING_TIMER is implicitly checked
	 * in vcpu_enter_guest.
	 */
	if (!atomic_read(&ktimer->pending)) {
		atomic_inc(&ktimer->pending);
		/* FIXME: this code should not know anything about vcpus */
		kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
	}

	if (waitqueue_active(q))
		wake_up_interruptible(q);
	apic_timer_expired(apic);

	if (lapic_is_periodic(apic)) {
		hrtimer_add_expires_ns(&ktimer->timer, ktimer->period);
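apic_timer_fn() is reduced to apic_timer_expired() plus the periodic re-arm, so every expiry path (hrtimer callback, tsc-deadline writes, start_apic_timer) funnels through one helper that bumps the pending count and kicks the vCPU. A loose userspace analogue of that shape, using an atomic counter and a print in place of the kernel primitives:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the per-timer state (illustrative only). */
struct vtimer {
	atomic_int pending;	/* expirations not yet injected into the guest */
	bool periodic;
};

/* Shared expiry path: every place that detects expiry funnels through here. */
static void timer_expired(struct vtimer *t)
{
	if (atomic_load(&t->pending))
		return;		/* an expiry is already waiting to be injected */
	atomic_fetch_add(&t->pending, 1);
	printf("kick vcpu / wake waiter\n");
}

/* hrtimer-style callback: expire, then decide whether to re-arm. */
static bool timer_callback(struct vtimer *t)
{
	timer_expired(t);
	return t->periodic;	/* true = restart, false = done */
}

int main(void)
{
	struct vtimer t = { .pending = 0, .periodic = true };
	timer_callback(&t);
	return 0;
}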
@@ -1693,6 +1739,9 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
	apic->isr_count = kvm_apic_vid_enabled(vcpu->kvm) ?
				1 : count_vectors(apic->regs + APIC_ISR);
	apic->highest_isr_cache = -1;
	if (kvm_x86_ops->hwapic_irr_update)
		kvm_x86_ops->hwapic_irr_update(vcpu,
				apic_find_highest_irr(apic));
	kvm_x86_ops->hwapic_isr_update(vcpu->kvm, apic_find_highest_isr(apic));
	kvm_make_request(KVM_REQ_EVENT, vcpu);
	kvm_rtc_eoi_tracking_restore_one(vcpu);

@@ -1837,8 +1886,11 @@ int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data)
	if (!irqchip_in_kernel(vcpu->kvm) || !apic_x2apic_mode(apic))
		return 1;

	if (reg == APIC_ICR2)
		return 1;

	/* if this is ICR write vector before command */
	if (msr == 0x830)
	if (reg == APIC_ICR)
		apic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
	return apic_reg_write(apic, reg, (u32)data);
}

@@ -1851,9 +1903,15 @@ int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
	if (!irqchip_in_kernel(vcpu->kvm) || !apic_x2apic_mode(apic))
		return 1;

	if (reg == APIC_DFR || reg == APIC_ICR2) {
		apic_debug("KVM_APIC_READ: read x2apic reserved register %x\n",
			   reg);
		return 1;
	}

	if (apic_reg_read(apic, reg, 4, &low))
		return 1;
	if (msr == 0x830)
	if (reg == APIC_ICR)
		apic_reg_read(apic, APIC_ICR2, 4, &high);

	*data = (((u64)high) << 32) | low;
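In x2APIC mode the APIC registers are addressed as MSRs 0x800 + (offset >> 4), so the old literal msr == 0x830 and the new reg == APIC_ICR test the same thing: the ICR at offset 0x300. A minimal sketch of the index mapping (the helper name is invented):

#include <stdint.h>
#include <assert.h>

#define X2APIC_MSR_BASE 0x800u
#define APIC_ICR_OFF    0x300u

/* x2APIC MSR index -> xAPIC MMIO register offset. */
static uint32_t x2apic_msr_to_reg(uint32_t msr)
{
	return (msr - X2APIC_MSR_BASE) << 4;
}

int main(void)
{
	assert(x2apic_msr_to_reg(0x830) == APIC_ICR_OFF);
	/* ICR2 (offset 0x310) has no separate x2APIC MSR; the 64-bit ICR MSR
	 * carries the destination in its high half, as the hunk above shows. */
	return 0;
}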
@@ -1908,7 +1966,7 @@ int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data)
void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	unsigned int sipi_vector;
	u8 sipi_vector;
	unsigned long pe;

	if (!kvm_vcpu_has_lapic(vcpu) || !apic->pending_events)

@@ -11,6 +11,7 @@
struct kvm_timer {
	struct hrtimer timer;
	s64 period; /* unit: ns */
	u32 timer_mode;
	u32 timer_mode_mask;
	u64 tscdeadline;
	atomic_t pending; /* accumulated triggered timers */

@@ -22,6 +23,7 @@ struct kvm_lapic {
	struct kvm_timer lapic_timer;
	u32 divide_count;
	struct kvm_vcpu *vcpu;
	bool sw_enabled;
	bool irr_pending;
	/* Number of bits set in ISR. */
	s16 isr_count;

@@ -55,8 +57,8 @@ void kvm_apic_set_version(struct kvm_vcpu *vcpu);

void kvm_apic_update_tmr(struct kvm_vcpu *vcpu, u32 *tmr);
void kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir);
int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest);
int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda);
int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u32 dest);
int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u32 mda);
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq,
		unsigned long *dest_map);
int kvm_apic_local_deliver(struct kvm_lapic *apic, int lvt_type);

@@ -119,11 +121,11 @@ static inline int kvm_apic_hw_enabled(struct kvm_lapic *apic)

extern struct static_key_deferred apic_sw_disabled;

static inline int kvm_apic_sw_enabled(struct kvm_lapic *apic)
static inline bool kvm_apic_sw_enabled(struct kvm_lapic *apic)
{
	if (static_key_false(&apic_sw_disabled.key))
		return kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_APIC_ENABLED;
	return APIC_SPIV_APIC_ENABLED;
		return apic->sw_enabled;
	return true;
}

static inline bool kvm_apic_present(struct kvm_vcpu *vcpu)
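The hunks above appear to come from the local APIC header. The new kvm_apic_sw_enabled() keeps the common case branch-free: while no APIC is software-disabled the apic_sw_disabled static key stays off and the helper returns true; only in the rare disabled case does it consult the cached sw_enabled flag that apic_set_spiv() maintains. A rough userspace analogue of the idea, with an atomic counter standing in for the deferred static key:

#include <stdatomic.h>
#include <stdbool.h>

/* Stands in for the apic_sw_disabled static key: non-zero while any APIC is
 * software-disabled. The kernel patches the branch instead of counting. */
static atomic_int any_apic_sw_disabled;

struct apic {
	bool sw_enabled;	/* cached from APIC_SPIV on every SPIV write */
};

static bool apic_sw_enabled(const struct apic *apic)
{
	if (atomic_load(&any_apic_sw_disabled))	/* slow path, rare */
		return apic->sw_enabled;
	return true;				/* fast path: nobody disabled */
}

/* Called when a guest toggles APIC_SPIV_APIC_ENABLED. */
static void apic_set_sw_enabled(struct apic *apic, bool enabled)
{
	if (enabled == apic->sw_enabled)
		return;
	apic->sw_enabled = enabled;
	if (enabled)
		atomic_fetch_sub(&any_apic_sw_disabled, 1);
	else
		atomic_fetch_add(&any_apic_sw_disabled, 1);
}

int main(void)
{
	struct apic a = { .sw_enabled = true };
	apic_set_sw_enabled(&a, false);	/* guest clears the enable bit */
	return apic_sw_enabled(&a) ? 1 : 0;
}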
@@ -152,8 +154,6 @@ static inline u16 apic_cluster_id(struct kvm_apic_map *map, u32 ldr)
	ldr >>= 32 - map->ldr_bits;
	cid = (ldr >> map->cid_shift) & map->cid_mask;

	BUG_ON(cid >= ARRAY_SIZE(map->logical_map));

	return cid;
}

@@ -214,13 +214,12 @@ EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
#define MMIO_GEN_LOW_SHIFT 10
#define MMIO_GEN_LOW_MASK ((1 << MMIO_GEN_LOW_SHIFT) - 2)
#define MMIO_GEN_MASK ((1 << MMIO_GEN_SHIFT) - 1)
#define MMIO_MAX_GEN ((1 << MMIO_GEN_SHIFT) - 1)

static u64 generation_mmio_spte_mask(unsigned int gen)
{
	u64 mask;

	WARN_ON(gen > MMIO_MAX_GEN);
	WARN_ON(gen & ~MMIO_GEN_MASK);

	mask = (gen & MMIO_GEN_LOW_MASK) << MMIO_SPTE_GEN_LOW_SHIFT;
	mask |= ((u64)gen >> MMIO_GEN_LOW_SHIFT) << MMIO_SPTE_GEN_HIGH_SHIFT;

@@ -263,13 +262,13 @@ static bool is_mmio_spte(u64 spte)

static gfn_t get_mmio_spte_gfn(u64 spte)
{
	u64 mask = generation_mmio_spte_mask(MMIO_MAX_GEN) | shadow_mmio_mask;
	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask;
	return (spte & ~mask) >> PAGE_SHIFT;
}

static unsigned get_mmio_spte_access(u64 spte)
{
	u64 mask = generation_mmio_spte_mask(MMIO_MAX_GEN) | shadow_mmio_mask;
	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask;
	return (spte & ~mask) & ~PAGE_MASK;
}
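generation_mmio_spte_mask() packs the memslot generation into two disjoint bit ranges of the MMIO shadow PTE: bits 1-9 of the generation (MMIO_GEN_LOW_MASK) in a low field and the remaining bits in a high field; the callers now pass MMIO_GEN_MASK instead of MMIO_MAX_GEN, which expands to the same value. A standalone sketch of the packing and unpacking; the two SPTE bit positions here are assumptions for illustration, not taken from the diff:

#include <stdint.h>
#include <assert.h>

#define GEN_SHIFT      20
#define GEN_LOW_SHIFT  10
#define GEN_LOW_MASK   ((1u << GEN_LOW_SHIFT) - 2)	/* bits 1..9 */
#define GEN_MASK       ((1u << GEN_SHIFT) - 1)

/* Assumed SPTE bit positions for the two generation halves. */
#define SPTE_GEN_LOW_SHIFT  2
#define SPTE_GEN_HIGH_SHIFT 52

static uint64_t gen_to_spte_mask(unsigned int gen)
{
	uint64_t mask;

	mask  = (uint64_t)(gen & GEN_LOW_MASK) << SPTE_GEN_LOW_SHIFT;
	mask |= ((uint64_t)gen >> GEN_LOW_SHIFT) << SPTE_GEN_HIGH_SHIFT;
	return mask;
}

static unsigned int spte_mask_to_gen(uint64_t spte)
{
	unsigned int gen;

	gen  = (spte >> SPTE_GEN_LOW_SHIFT) & GEN_LOW_MASK;
	gen |= ((spte >> SPTE_GEN_HIGH_SHIFT) << GEN_LOW_SHIFT) & GEN_MASK;
	return gen;
}

int main(void)
{
	unsigned int gen = 0x5A5A4 & GEN_MASK & ~1u;	/* bit 0 is not stored */
	assert(spte_mask_to_gen(gen_to_spte_mask(gen)) == gen);
	return 0;
}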
Some files were not shown because too many files have changed in this diff.