Commit Graph

560984 Commits

Author SHA1 Message Date
Andrey Smetanin c71acc4c74 drivers/hv: Move struct hv_timer_message_payload into UAPI Hyper-V x86 header
This struct is required for the Hyper-V SynIC timers implementation inside KVM
and for upcoming Hyper-V VMBus support by userspace (QEMU), so place it into the
Hyper-V UAPI header.

Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
CC: Gleb Natapov <gleb@kernel.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: "K. Y. Srinivasan" <kys@microsoft.com>
CC: Haiyang Zhang <haiyangz@microsoft.com>
CC: Vitaly Kuznetsov <vkuznets@redhat.com>
CC: Roman Kagan <rkagan@virtuozzo.com>
CC: Denis V. Lunev <den@openvz.org>
CC: qemu-devel@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-12-16 18:49:41 +01:00
Andrey Smetanin 5b423efe11 drivers/hv: Move struct hv_message into UAPI Hyper-V x86 header
This struct is required for the Hyper-V SynIC timers implementation inside KVM
and for upcoming Hyper-V VMBus support by userspace (QEMU), so place it into the
Hyper-V UAPI header.

Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
Acked-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
CC: Gleb Natapov <gleb@kernel.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: "K. Y. Srinivasan" <kys@microsoft.com>
CC: Haiyang Zhang <haiyangz@microsoft.com>
CC: Vitaly Kuznetsov <vkuznets@redhat.com>
CC: Roman Kagan <rkagan@virtuozzo.com>
CC: Denis V. Lunev <den@openvz.org>
CC: qemu-devel@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-12-16 18:49:40 +01:00
Andrey Smetanin 4f39bcfd1c drivers/hv: Move HV_SYNIC_STIMER_COUNT into Hyper-V UAPI x86 header
This constant is required to support the Hyper-V SynIC timer MSRs from
userspace (QEMU).

Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
Acked-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
CC: Gleb Natapov <gleb@kernel.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: "K. Y. Srinivasan" <kys@microsoft.com>
CC: Haiyang Zhang <haiyangz@microsoft.com>
CC: Vitaly Kuznetsov <vkuznets@redhat.com>
CC: Roman Kagan <rkagan@virtuozzo.com>
CC: Denis V. Lunev <den@openvz.org>
CC: qemu-devel@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-12-16 18:49:40 +01:00
Andrey Smetanin 7797dcf63f drivers/hv: replace enum hv_message_type by u32
Using enum hv_message_type inside struct hv_message and hv_post_message is not
size-portable. Replace the enum with u32.

Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
CC: Gleb Natapov <gleb@kernel.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: "K. Y. Srinivasan" <kys@microsoft.com>
CC: Haiyang Zhang <haiyangz@microsoft.com>
CC: Vitaly Kuznetsov <vkuznets@redhat.com>
CC: Roman Kagan <rkagan@virtuozzo.com>
CC: Denis V. Lunev <den@openvz.org>
CC: qemu-devel@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-12-16 18:49:39 +01:00
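
A standalone illustration of the size-portability point above (plain userspace C;
the enum, struct, and field names are invented for this example and are not the
kernel's UAPI definitions):

    #include <stdint.h>
    #include <stdio.h>

    /* The size of an enum is chosen by the compiler/ABI (and can even shrink
     * with options such as -fshort-enums), while uint32_t is always 4 bytes.
     * A structure shared with userspace therefore should not embed the enum. */
    enum example_message_type { EXAMPLE_TYPE_NONE = 0, EXAMPLE_TYPE_TIMER = 1 };

    struct msg_with_enum { enum example_message_type type; uint8_t payload[240]; };
    struct msg_with_u32  { uint32_t type;                  uint8_t payload[240]; };

    int main(void)
    {
            printf("struct with enum field: %zu bytes\n", sizeof(struct msg_with_enum));
            printf("struct with u32 field:  %zu bytes\n", sizeof(struct msg_with_u32));
            return 0;
    }

Compiled with -fshort-enums, for example, the two layouts differ, which is exactly
the kind of ABI drift a UAPI header has to avoid.
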
Paolo Bonzini da3f7ca3e8 KVM: s390 features and fixes for 4.5 (kvm/next)
Some small cleanups
 - use assignment instead of memcpy
 - use %pK for kernel pointers
 
 Changes regarding guest memory size
 - Fix an off-by-one error in our guest memory interface (we might
 use unnecessarily big page tables, e.g. 3 levels for a 2GB guest
 instead of 2 levels)
 - We now ask the machine about the max. supported guest address
   and limit accordingly.

Merge tag 'kvm-s390-next-4.5-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390 features and fixes for 4.5 (kvm/next)

Some small cleanups
- use assignment instead of memcpy
- use %pK for kernel pointers

Changes regarding guest memory size
- Fix an off-by-one error in our guest memory interface (we might
use unnecessarily big page tables, e.g. 3 levels for a 2GB guest
instead of 2 levels)
- We now ask the machine about the max. supported guest address
  and limit accordingly.
2015-12-16 18:49:29 +01:00
Guenther Hutzl 32e6b236d2 KVM: s390: consider system MHA for guest storage
Verify that the guest maximum storage address is below the MHA (maximum
host address) value allowed on the host.

Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Guenther Hutzl <hutzl@linux.vnet.ibm.com>
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
[adapt to match recent limit/size changes]

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-12-15 17:08:22 +01:00
Dominik Dingel a3a92c31bf KVM: s390: fix mismatch between user and in-kernel guest limit
While the userspace interface requests the maximum size, the gmap code
expects to get a maximum address.

This error resulted in bigger page tables than necessary for some guest
sizes, e.g. a 2GB guest used 3 levels instead of 2.

At the same time we introduce KVM_S390_NO_MEM_LIMIT, which will eventually
allow a guest to span the complete 64-bit address space.

We also switch to TASK_MAX_SIZE for the initial memory size; this is a
cosmetic change, as the previous size also resulted in the creation of a
4-level page table.

Reported-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-12-15 17:08:21 +01:00
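
The mismatch can be pictured with a trivial standalone calculation (illustrative
values only):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long size = 2ULL << 30;   /* a 2 GB guest */
            unsigned long long max_addr = size - 1; /* what the gmap code expects */

            /* Passing `size` where a maximum address is expected overshoots by
             * one, which per the commit message was enough to make a 2 GB guest
             * use 3 page table levels instead of 2. */
            printf("size        = %#llx\n", size);
            printf("max address = %#llx\n", max_addr);
            return 0;
    }
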
Christian Borntraeger 8335713ad0 KVM: s390: obey kptr_restrict in traces
The s390dbf and trace events provide a debugfs interface.
If kptr_restrict is active, we should not expose kernel
pointers. We can fence the debugfs output by using %pK
instead of %p.

Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-12-15 17:06:32 +01:00
Christian Borntraeger 7ec7c8c70b KVM: s390: use assignment instead of memcpy
Replace two memcpy calls with proper assignments.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-12-15 16:06:48 +01:00
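
In generic terms the cleanup looks like this (a standalone userspace sketch, not
the actual s390 code; the struct is invented for illustration):

    #include <stdio.h>
    #include <string.h>

    struct regs { unsigned long gprs[16]; };

    int main(void)
    {
            struct regs src = { .gprs = { 1, 2, 3 } };
            struct regs by_memcpy, by_assign;

            memcpy(&by_memcpy, &src, sizeof(src)); /* works, but bypasses type checking */
            by_assign = src;                       /* same copy, checked by the compiler */

            printf("%lu %lu\n", by_memcpy.gprs[2], by_assign.gprs[2]);
            return 0;
    }

Plain assignment lets the compiler verify that both sides have the same type and
cannot silently copy the wrong number of bytes.
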
Paolo Bonzini 4601463485 KVM: s390 features, kvm_get_vcpu_by_id and stat
Several features for s390
 1. ESCA support (up to 248 vCPUs)
 2. KVM detection: we can now detect if we support KVM (e.g. does KVM
    under KVM work?)
 
 kvm_stat:
 1. cleanup the exit path
 
 kvm_get_vcpu_by_id:
 1. Use kvm_get_vcpu_by_id where appropriate
 2. Apply a heuristic to optimize for the common case where the VCPU ID equals the VCPU number

Merge tag 'kvm-s390-next-4.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390 features, kvm_get_vcpu_by_id and stat

Several features for s390
1. ESCA support (up to 248 vCPUs)
2. KVM detection: we can now detect if we support KVM (e.g. does KVM
   under KVM work?)

kvm_stat:
1. cleanup the exit path

kvm_get_vcpu_by_id:
1. Use kvm_get_vcpu_by_id where appropriate
2. Apply a heuristic to optimize for the common case where the VCPU ID equals the VCPU number
2015-12-02 13:50:04 +01:00
Christian Borntraeger 2f8a43d45d KVM: s390: remove redundant assignment of error code
rc already contains -ENOMEM, no need to assign it twice.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
2015-11-30 12:47:13 +01:00
Heiko Carstens a6aacc3f87 KVM: s390: remove pointless test_facility(2) check
This always evaluates to 'true'.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:12 +01:00
David Hildenbrand 07197fd05f KVM: s390: don't load kvm without virtualization support
If we don't have support for virtualization (SIE), e.g. when running under
a hypervisor not supporting execution of the SIE instruction, we should
immediately abort loading the kvm module, as the SIE instruction cannot
be enabled dynamically.

Currently, the SIE instruction fails with an exception on a non-SIE
host, resulting in the guest making no progress instead of failing hard.

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:12 +01:00
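
Schematically the refusal to load looks like this (a sketch only; the capability
check and error value follow the commit description, not the exact kernel code):

    #include <errno.h>

    /* If the SIE facility is not available, fail initialization immediately
     * instead of letting the SIE instruction fault later at runtime. */
    static int kvm_init_sketch(int sie_available)
    {
            if (!sie_available)
                    return -ENODEV;
            return 0;
    }

    int main(void)
    {
            return kvm_init_sketch(1);
    }
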
David Hildenbrand 7f16d7e787 s390: show virtualization support in /proc/cpuinfo
This patch exposes the SIE capability (aka virtualization support) via
/proc/cpuinfo -> "features" as "sie".

As we don't want to expose this hwcap via elf, let's add a second,
"internal"/non-elf capability list. The content is simply concatenated
to the existing features when printing /proc/cpuinfo.

We also add the defines to elf.h to keep the hwcap stuff at a common
place.

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:11 +01:00
David Hildenbrand 8dfd523f85 s390/sclp: introduce check for SIE
This patch adds a way to check if the SIE with zArchitecture support is
available.

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:11 +01:00
David Hildenbrand 4215825eeb KVM: s390: don't switch to ESCA for ucontrol
sca_add_vcpu is not called for ucontrol guests. We must also not
apply the SCA checking of sca_can_add_vcpu, as ucontrol guests
do not have to follow the SCA limits.

As common code already checks that id < KVM_MAX_VCPUS, all other
data structures are safe as well.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:11 +01:00
David Hildenbrand eaa78f3432 KVM: s390: cleanup sca_add_vcpu
Now that we already have kvm and the VCPU id set for the VCPU, we can
convert sca_add_vcpu to look much more like sca_del_vcpu.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:10 +01:00
David Hildenbrand 10ce32d5b0 KVM: s390: always set/clear the SCA sda field
Let's always set and clear the sda field when enabling/disabling a VCPU.
Dealing with sda being set to something else makes no sense anymore,
as we now enable a VCPU in the SCA after it has been registered
with the VM.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:10 +01:00
David Hildenbrand 2550882449 KVM: s390: fix SCA related races and double use
If something goes wrong in kvm_arch_vcpu_create, the VCPU has already
been added to the sca but will never be removed. Trying to create VCPUs
with duplicate ids (e.g. after a failed attempt) is problematic.

Also, when creating multiple VCPUs in parallel, we could theoretically
forget to set the correct SCA when the switch to ESCA happens just
before the VCPU is registered.

Let's add the VCPU to the SCA in kvm_arch_vcpu_postcreate, where we can
be sure that no duplicate VCPU with the same id is around and the VCPU
has already been registered at the VM. We also have to make sure to update
ECB at that point.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:09 +01:00
David Hildenbrand 5f3fe620a5 KVM: s390: we always have a SCA
Having no sca can never happen, even when something goes wrong when
switching to ESCA. Otherwise we would have a serious bug.
Let's remove this superfluous check.

Acked-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:09 +01:00
David Hildenbrand 2c1bb2be98 KVM: s390: fast path for sca_ext_call_pending
If CPUSTAT_ECALL_PEND isn't set, we can't have an external call pending,
so we can directly avoid taking the lock.

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:09 +01:00
Eugene (jno) Dvurechenski fe0edcb731 KVM: s390: Enable up to 248 VCPUs per VM
This patch allows s390 to have more than 64 VCPUs for a guest (up to
248 for memory usage considerations), if supported by the underlying
hardware (sclp.has_esca).

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:08 +01:00
Eugene (jno) Dvurechenski 5e04431523 KVM: s390: Introduce switching code
This patch adds code that performs a transparent switch to the Extended
SCA on addition of the 65th VCPU in a VM. Disposal of the ESCA is added too.
The entire ESCA functionality, however, is still not enabled.
The enablement will be provided in a separate patch.

This patch also uses read/write lock protection of the SCA and its subfields for
possible disposal at the BSCA-to-ESCA transition. While only the Basic SCA needs
such protection (for the swap), any SCA access is now guarded.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:08 +01:00
Eugene (jno) Dvurechenski 7d43bafcff KVM: s390: Make provisions for ESCA utilization
This patch updates the routines (sca_*) to provide transparent access
to and manipulation of the data for both the Basic and Extended SCA in use.
The kvm.arch.sca pointer is generalized to (void *) to handle the BSCA/ESCA cases,
and the kvm.arch.use_esca flag is provided.
The actual functionality is kept the same.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:08 +01:00
Eugene (jno) Dvurechenski bc784ccee5 KVM: s390: Introduce new structures
This patch adds new structures and updates some existing ones to
provide the base for Extended SCA functionality.

The old sca_* structures were renamed to bsca_* to keep things uniform.

Access to the fields of the SIGP controls was turned into bitfields
instead of hardcoded bitmasks.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:07 +01:00
Eugene (jno) Dvurechenski a6e2f683e7 KVM: s390: Provide SCA-aware helpers for VCPU add/del
This patch provides SCA-aware helpers to create/delete a VCPU.
This is to prepare for upcoming introduction of Extended SCA support.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:07 +01:00
Eugene (jno) Dvurechenski a5bd764734 KVM: s390: Generalize access to SIGP controls
This patch generalizes access to the SIGP controls, which are part of the SCA.
This is to prepare for upcoming introduction of Extended SCA support.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:06 +01:00
Eugene (jno) Dvurechenski 605145103a KVM: s390: Generalize access to IPTE controls
This patch generalizes access to the IPTE controls, which are part of the SCA.
This is to prepare for upcoming introduction of Extended SCA support.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:06 +01:00
Eugene (jno) Dvurechenski f7ba1d3426 s390/sclp: introduce checks for ESCA and HVS
Introduce sclp.has_hvs and sclp.has_esca to provide a way for kvm to check
whether the extended-SCA and the home-virtual-SCA facilities are available.

Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:06 +01:00
David Hildenbrand 71f116bfed KVM: s390: rewrite vcpu_post_run and drop out early
Let's rewrite this function to better reflect how we actually handle
exit_code. By dropping out early we can save a few cycles. This
especially speeds up sie exits caused by host irqs.

Also, let's move the special -EOPNOTSUPP for intercepts to
the place where it belongs and convert it to -EREMOTE.

Reviewed-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:05 +01:00
Janosch Frank 4bd33b5688 KVM: Remove unnecessary debugfs dentry references
KVM creates debugfs files to export VM statistics to userland. To be
able to remove them on kvm exit, it tracks the files' dentries.

Since their parent directory is also tracked, and since each parent
dentry knows its children, we can easily remove them by using
debugfs_remove_recursive(kvm_debugfs_dir). Therefore we don't
need the extra tracking in the kvm_stats_debugfs_item anymore.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-By: Sascha Silbe <silbe@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:05 +01:00
David Hildenbrand c896939f7c KVM: use heuristic for fast VCPU lookup by id
Usually, VCPU ids match the array index. So let's try a fast
lookup first before falling back to the slow iteration.

Suggested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2015-11-30 12:47:04 +01:00
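
A simplified userspace sketch of the heuristic (the struct and function below are
invented for illustration and are not the kernel's kvm_get_vcpu_by_id):

    #include <stddef.h>
    #include <stdio.h>

    struct vcpu { int id; };

    static struct vcpu *lookup_vcpu_by_id(struct vcpu **vcpus, size_t n, int id)
    {
            size_t i;

            /* Fast path: in the common case the id equals the array index. */
            if (id >= 0 && (size_t)id < n && vcpus[id] && vcpus[id]->id == id)
                    return vcpus[id];

            /* Slow path: ids were assigned out of order, scan the whole array. */
            for (i = 0; i < n; i++)
                    if (vcpus[i] && vcpus[i]->id == id)
                            return vcpus[i];
            return NULL;
    }

    int main(void)
    {
            struct vcpu v0 = { 0 }, v1 = { 1 };
            struct vcpu *vcpus[] = { &v0, &v1 };

            printf("found id 1: %d\n", lookup_vcpu_by_id(vcpus, 2, 1) == &v1);
            return 0;
    }
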
David Hildenbrand e09fefdeeb KVM: Use common function for VCPU lookup by id
Let's reuse the new common function for VCPU lookup by id.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[split out the new function into a separate patch]
2015-11-30 12:47:04 +01:00
Takuya Yoshikawa bb11c6c965 KVM: x86: MMU: Remove unused parameter parent_pte from kvm_mmu_get_page()
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-26 15:31:36 +01:00
Takuya Yoshikawa 74c4e63ab9 KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()
Since kvm_mmu_get_page() was changed so that a parent pointer does not
get into the sp->parent_ptes chain before the entry it points to is
set properly, we can use the for_each_rmap_spte macro instead of
pte_list_walk().

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-26 15:31:31 +01:00
Takuya Yoshikawa 98bba23842 KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to link_shadow_page()
Every time kvm_mmu_get_page() is called with a non-NULL parent_pte
argument, link_shadow_page() follows that to set the parent entry so
that the new mapping will point to the returned page table.

Moving parent_pte handling there allows us to clean up the code, because
parent_pte is passed to kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().

In addition, the patch avoids calling mark_unsync() for parents in the
sp->parent_ptes chain other than the newly added parent_pte, because they
have been there since before the current page fault handling started.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-26 15:31:24 +01:00
Takuya Yoshikawa 4700579241 KVM: x86: MMU: Move initialization of parent_ptes out from kvm_mmu_alloc_page()
Make kvm_mmu_alloc_page() do just what its name suggests, and remove
the extra allocation error check and zero-initialization of parent_ptes:
shadow page headers allocated by kmem_cache_zalloc() are always in the
per-VCPU pools.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:27:06 +01:00
Takuya Yoshikawa 77fbbbd2f0 KVM: x86: MMU: Consolidate BUG_ON checks for reverse-mapped sptes
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.

Move this check into rmap_get_first/next() so that all call sites, not
just the users of the for_each_rmap_spte() macro, will be checked the
same way.

One thing to keep in mind is that kvm_mmu_unlink_parents() also uses
rmap_get_first() to handle parent sptes.  The change will not break it
because parent sptes are present sptes (not mmio-sptes), at least until
drop_parent_pte() actually unlinks them.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:26:47 +01:00
Takuya Yoshikawa afd28fe1c9 KVM: x86: MMU: Remove is_rmap_spte() and use is_shadow_present_pte()
is_rmap_spte(), originally named is_rmap_pte(), was introduced when the
simple reverse mapping was implemented by commit cd4a4e5374
("[PATCH] KVM: MMU: Implement simple reverse mapping").  At that point,
its role was clear and only rmap_add() and rmap_remove() were using it
to select sptes that need to be reverse-mapped.

Independently of that, is_shadow_present_pte() was first introduced by
commit c7addb9020 ("KVM: Allow not-present guest page faults to
bypass kvm") to do bypass_guest_pf optimization, which does not exist
any more.

These two seem to have changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.

Since using both of them without clear distinction just makes the code
confusing, remove is_rmap_spte().

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:26:35 +01:00
Takuya Yoshikawa 029499b477 KVM: x86: MMU: Make mmu_set_spte() return emulate value
mmu_set_spte()'s code is based on the assumption that the emulate
parameter has a valid pointer value if set_spte() returns true and
write_fault is not zero.  In other cases, emulate may be NULL, so a
NULL-check is needed.

Stop passing emulate pointer and make mmu_set_spte() return the emulate
value instead to clean up this complex interface.  Prefetch functions
can just throw away the return value.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:26:28 +01:00
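
The interface change can be sketched in isolation like this (illustrative names,
not the actual MMU code):

    #include <stdbool.h>

    static bool set_spte_stub(void) { return true; }

    /* Old style: the result is reported through an optional out-pointer, so
     * every write needs a NULL check. */
    static void set_result_old(bool *emulate, int write_fault)
    {
            if (set_spte_stub() && write_fault && emulate)
                    *emulate = true;
    }

    /* New style: return the value; callers that do not care simply ignore it. */
    static bool set_result_new(int write_fault)
    {
            return set_spte_stub() && write_fault;
    }

    int main(void)
    {
            bool emulate = false;

            set_result_old(&emulate, 1);
            return (set_result_new(1) == emulate) ? 0 : 1;
    }
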
Takuya Yoshikawa fd9514572f KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
Both __mmu_unsync_walk() and mmu_pages_clear_parents() have a three-line
piece of code that clears a bit in the unsync child bitmap; the former places
it inside a loop block and uses a few goto statements to jump to it.

A new helper function, clear_unsync_child_bit(), makes the code cleaner.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:26:15 +01:00
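
An illustrative stand-in for such a helper (the structure below is simplified and
invented for the example; it is not the kernel's kvm_mmu_page):

    #include <stdio.h>

    #define EXAMPLE_BITS  512UL
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    struct example_sp {
            unsigned long unsync_child_bitmap[EXAMPLE_BITS / BITS_PER_LONG];
            unsigned int unsync_children;
    };

    /* Clear one bit in the unsync child bitmap and keep the counter in sync. */
    static void clear_unsync_child_bit(struct example_sp *sp, unsigned long idx)
    {
            unsigned long *word = &sp->unsync_child_bitmap[idx / BITS_PER_LONG];
            unsigned long mask = 1UL << (idx % BITS_PER_LONG);

            if (*word & mask) {
                    *word &= ~mask;
                    sp->unsync_children--;
            }
    }

    int main(void)
    {
            struct example_sp sp = { .unsync_child_bitmap = { 1UL << 3 },
                                     .unsync_children = 1 };

            clear_unsync_child_bit(&sp, 3);
            printf("unsync_children = %u\n", sp.unsync_children);
            return 0;
    }
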
Takuya Yoshikawa 7ee0e5b29d KVM: x86: MMU: Remove unused parameter of __direct_map()
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:26:03 +01:00
Takuya Yoshikawa 018aabb56d KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct
New struct kvm_rmap_head makes the code type-safe to some extent.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:25:44 +01:00
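
The type-safety idea can be shown with a minimal sketch (the wrapper below follows
the commit subject; the real kernel definition may differ in detail):

    /* Wrapping the raw value in a one-member struct gives the compiler a
     * distinct type to check, so an rmap-chain head can no longer be mixed
     * up with an arbitrary unsigned long. */
    struct kvm_rmap_head {
            unsigned long val;
    };

    static unsigned long rmap_head_value(const struct kvm_rmap_head *head)
    {
            return head->val;
    }

    int main(void)
    {
            struct kvm_rmap_head head = { .val = 0 };

            return (int)rmap_head_value(&head);
    }
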
Yaowei Bai 378b417d65 KVM: powerpc: kvmppc_visible_gpa can be boolean
In another patch kvm_is_visible_gfn is made to return bool, because the
function only returns zero or one as its return value; let's also make
kvmppc_visible_gpa return bool to keep things consistent.

No functional change.

Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:24:24 +01:00
Yaowei Bai 08ff0d5e63 KVM: kvm_para_has_feature can be boolean
This patch makes kvm_para_has_feature return bool, as this particular
function only uses either one or zero as its return value.

No functional change.

Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:24:23 +01:00
Yaowei Bai 33e9415479 KVM: kvm_is_visible_gfn can be boolean
This patch makes kvm_is_visible_gfn return bool, as this particular
function only uses either one or zero as its return value.

No functional change.

Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:24:23 +01:00
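
The pattern behind the three bool-conversion patches above, as a standalone
sketch (the predicate below is invented for illustration):

    #include <stdbool.h>

    /* Old style: returns 0 or 1 as an int. */
    static int is_visible_old(unsigned long gfn, unsigned long limit)
    {
            return gfn < limit ? 1 : 0;
    }

    /* New style: same logic, but the bool return type documents the intent. */
    static bool is_visible_new(unsigned long gfn, unsigned long limit)
    {
            return gfn < limit;
    }

    int main(void)
    {
            return is_visible_old(1, 2) == (int)is_visible_new(1, 2) ? 0 : 1;
    }
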
Markus Elfring 4f52696a6c KVM-async_pf: Delete an unnecessary check before the function call "kmem_cache_destroy"
The kmem_cache_destroy() function tests whether its argument is NULL
and, if so, returns immediately. Thus the test around the call is not needed.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:24:23 +01:00
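
The same cleanup pattern can be shown with free(), which, like kmem_cache_destroy()
as described above, accepts a NULL argument:

    #include <stdlib.h>

    int main(void)
    {
            char *buf = NULL;

            /* Redundant form:
             *         if (buf)
             *                 free(buf);
             */
            free(buf);      /* free(NULL) is a no-op, so call it unconditionally */
            return 0;
    }
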
Paolo Bonzini 0e3d0648bd KVM: x86: MMU: always set accessed bit in shadow PTEs
Commit 7a1638ce42 ("nEPT: Redefine EPT-specific link_shadow_page()",
2013-08-05) says:

    Since nEPT doesn't support A/D bit, we should not set those bit
    when building the shadow page table.

but this is not necessary.  Even though nEPT doesn't support A/D
bits, and hence the vmcs12 EPT pointer will never enable them,
we always use them for shadow page tables if available (see
construct_eptp in vmx.c).  So we can set the A/D bits freely
in the shadow page table.

This patch hence basically reverts commit 7a1638ce42.

Cc: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:24:23 +01:00
Paolo Bonzini aba2f06c07 KVM: x86: correctly print #AC in traces
Poor #AC was so unimportant until a few days ago that we were
not even tracing its name correctly.  But now it's all over
the place.

Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:24:23 +01:00
Paolo Bonzini 46896c73c1 KVM: svm: add support for RDTSCP
RDTSCP was never supported for AMD CPUs, which nobody noticed because
Linux does not use it.  But exactly the fact that Linux does not
use it makes the implementation very simple; we can freely trash
MSR_TSC_AUX while running the guest.

Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-25 17:24:22 +01:00