Theoretically, the largest batch of commands in an engine's ring buffer
is likely to be the first submission, which usually contains many
commands to initialize the HW. After removing the initial allocation of
the ring scan buffer and letting krealloc() do all the work, we still
have a good chance of getting a suitably sized buffer from the first
submission.
Tested on my SKL NUC.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
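A minimal sketch of the grow-and-keep approach this describes, assuming
a hypothetical scan_buffer struct (names are illustrative, not the
actual gvt fields):

#include <linux/slab.h>

struct scan_buffer {
        void   *va;
        size_t  size;
};

/* Grow the scan buffer only when a bigger workload shows up; krealloc()
 * leaves the old buffer valid if the allocation fails. */
static void *get_scan_va(struct scan_buffer *sb, size_t need)
{
        void *p;

        if (need <= sb->size)
                return sb->va;

        p = krealloc(sb->va, need, GFP_KERNEL);
        if (!p)
                return NULL;

        sb->va = p;
        sb->size = need;        /* keep the larger buffer for later submissions */
        return p;
}

With this, the first (usually largest) submission tends to leave a
buffer that is already big enough for the rest of the vGPU's lifetime.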
Move the ring scan buffers into intel_vgpu_submission, since they are
part of the vGPU submission state.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
"reserved" means reserve something from somewhere. Actually they are
buffers used by command scanner. Rename it to ring_scan_buffer.
v2:
- Remove the usage of an extra variable. (Zhenyu)
Fixes: 0a53bc07f0 ("drm/i915/gvt: Separate cmd scan from request allocation")
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Currently the i915 request structure and shadow ring buffer are
allocated before the command scan, so we have to restore the previous
state once any error happens later in the long dispatch_workload path.
This patch introduces a reserved ring buffer created at the beginning of
vGPU initialization. The workload is copied to this reserved buffer and
scanned first; the i915 request and shadow ring buffer are only
allocated after the scan succeeds.
To balance memory usage against buffer allocation time, when a bigger
ring buffer arrives, the reserved buffer is reallocated and kept until
an even bigger one is needed.
v2:
- use kmalloc for the smaller ring buffer, realloc if required. (Zhenyu)
v3:
- remove the dynamically allocated ring buffer. (Zhenyu)
v4:
- code style polish.
- kfree the previously allocated buffer if kmalloc fails. (Zhenyu)
Signed-off-by: fred gao <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
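A rough sketch of the reworked ordering described above; the helper
names are purely illustrative, not the real gvt functions:

struct workload;

int copy_and_scan_guest_ring(struct workload *wl);      /* illustrative */
int alloc_request_and_shadow_ring(struct workload *wl); /* illustrative */
int submit_workload(struct workload *wl);               /* illustrative */

/* Scan the guest ring contents in the reserved buffer first; only when
 * the scan succeeds do we allocate the i915 request and shadow ring
 * buffer, so a scan failure needs no rollback. */
static int dispatch_workload_sketch(struct workload *wl)
{
        int ret;

        ret = copy_and_scan_guest_ring(wl);     /* uses the reserved buffer */
        if (ret)
                return ret;                     /* nothing allocated yet */

        ret = alloc_request_and_shadow_ring(wl);
        if (ret)
                return ret;

        return submit_workload(wl);
}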
For vfio-pci, if a region supports MMAP then it should support both
mmap and normal file access, and user space is free to choose which to
use. For qemu, we just need to add 'x-no-mmap=on' to the vfio-pci
option.
Currently GVTg only supports MMAP for BAR2, so GVTg does not work when
the user turns on the x-no-mmap option.
This patch adds file-style access for BAR2, aka the GPU aperture. We map
the entire aperture partition of the active vGPU into kernel space when
the guest driver tries to enable PCI Memory Space, then redirect the
file read/write operations from kvmgt to this mapped area.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1458032
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
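A simplified sketch of the read/write redirection; the struct, fields
and bounds handling here are made up for illustration, and real code
must also respect __iomem rules for the mapped aperture:

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct vgpu_aperture_sketch {
        void    *aperture_va;   /* kernel mapping of the vGPU's aperture
                                 * partition, set up when the guest
                                 * enables PCI Memory Space */
        u64      aperture_sz;
};

static ssize_t aperture_rw(struct vgpu_aperture_sketch *vgpu,
                           char __user *ubuf, size_t count, loff_t pos,
                           bool is_write)
{
        if (pos + count > vgpu->aperture_sz)
                return -EINVAL;

        if (is_write) {
                if (copy_from_user(vgpu->aperture_va + pos, ubuf, count))
                        return -EFAULT;
        } else {
                if (copy_to_user(ubuf, vgpu->aperture_va + pos, count))
                        return -EFAULT;
        }
        return count;
}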
Final pile of features for 4.14
- New ioctl to change NOA configurations, plus prep (Lionel)
- CCS (color compression) scanout support, based on the fancy new
modifier additions (Ville&Ben)
- Document i915 register macro style (Jani)
- Many more gen10/cnl patches (Rodrigo, Paulo, ...)
- More gpu reset vs. modeset duct-tape to restore the old way.
- prep work for cnl: hpd_pin reorg (Rodrigo), support for more power
wells (Imre), i2c pin reorg (Anusha)
- drm_syncobj support (Jason Ekstrand)
- forcewake vs gpu reset fix (Chris)
- execbuf speedup for the no-relocs fastpath, anv/vk low-overhead ftw (Chris)
- switch to idr/radixtree instead of the resizing ht for execbuf id->vma
lookups (Chris)
gvt:
- MMIO save/restore optimization (Changbin)
- Split workload scan vs. dispatch for more parallel exec (Ping)
- vGPU full 48bit ppgtt support (Joonas, Tina)
- vGPU hw id expose for perf (Zhenyu)
Bunch of work all over to make the igt CI runs more complete/stable.
Watch https://intel-gfx-ci.01.org/tree/drm-tip/shards-all.html for
progress in getting this ready. Next week we're going into production
mode (i.e. will send results to intel-gfx) on hsw, more platforms to
come.
Also, a new maintainer team; I'm stepping out. Huge thanks to Jani for
being an awesome co-maintainer the past few years, and all the best
for Jani, Joonas&Rodrigo as the new maintainers!
* tag 'drm-intel-next-2017-08-18' of git://anongit.freedesktop.org/git/drm-intel: (179 commits)
drm/i915: Update DRIVER_DATE to 20170818
drm/i915/bxt: use NULL for GPIO connection ID
drm/i915: Mark the GT as busy before idling the previous request
drm/i915: Trivial grammar fix s/opt of/opt out of/ in comment
drm/i915: Replace execbuf vma ht with an idr
drm/i915: Simplify eb_lookup_vmas()
drm/i915: Convert execbuf to use struct-of-array packing for critical fields
drm/i915: Check context status before looking up our obj/vma
drm/i915: Don't use MI_STORE_DWORD_IMM on Sandybridge/vcs
drm/i915: Stop touching forcewake following a gen6+ engine reset
MAINTAINERS: drm/i915 has a new maintainer team
drm/i915: Split pin mapping into per platform functions
drm/i915/opregion: let user specify override VBT via firmware load
drm/i915/cnl: Reuse skl_wm_get_hw_state on Cannonlake.
drm/i915/gen10: implement gen 10 watermarks calculations
drm/i915/cnl: Fix LSPCON support.
drm/i915/vbt: ignore extraneous child devices for a port
drm/i915/cnl: Setup PAT Index.
drm/i915/edp: Allow alternate fixed mode for eDP if available.
drm/i915: Add support for drm syncobjs
...
The current context logic only updates the context descriptor when the
context is pinned into graphics memory space, but this cannot satisfy
the requirements of the shadow context. The addressing mode of the
pinned shadow context descriptor may change according to the guest
addressing mode, and since an already pinned shadow context has no
chance to update its descriptor, the shadow context ends up being used
with the wrong descriptor, which leads to a GPU hang. This patch fixes
the issue by letting the pinned shadow context descriptor update its
addressing mode on demand.
This fixes a GPU HANG issue which happens after changing the grub
parameter i915.enable_ppgtt from 0x01 to 0x03 (or vice versa) and then
rebooting the guest.
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Kechen Lu <kechen.lu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
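Conceptually, the fix amounts to refreshing the addressing-mode field of
the already pinned descriptor before it is used; a sketch with
placeholder mask/shift values (not the real i915 macros):

#include <linux/types.h>

#define CTX_DESC_ADDR_MODE_SHIFT        3       /* placeholder value */
#define CTX_DESC_ADDR_MODE_MASK         (0x3ULL << CTX_DESC_ADDR_MODE_SHIFT)

/* Re-derive the addressing-mode bits from the guest's current mode each
 * time the shadow context is about to be used. */
static void update_shadow_ctx_desc(u64 *desc, u64 guest_addr_mode)
{
        *desc &= ~CTX_DESC_ADDR_MODE_MASK;
        *desc |= (guest_addr_mode << CTX_DESC_ADDR_MODE_SHIFT) &
                 CTX_DESC_ADDR_MODE_MASK;
}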
To perform the workload scan and shadow at ELSP write time for
performance reasons, the workload scan and shadow code should be
factored out of dispatch_workload().
v2: Put context pin before i915_add_request;
    Refine the comments;
    Rename some APIs;
v3: workload->status should be set only when an error happens.
v4: i915_add_request is a must after i915_gem_request_alloc.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
MMIO blocks with tracked MMIO were introduced to improve the
performance of searching for tracked MMIO. All tracked MMIO needs to get
its initial value from the HW state when the vGPU is created. This patch
initializes the tracked registers in MMIO blocks from the HW state.
v2: Add "Fixes:" line for this patch (Zhenyu)
Fixes: 65f9f6febf ("drm/i915/gvt: Optimize MMIO register handling for some large MMIO blocks")
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Use resetting_eng to identify which engine is being reset, so that the
other engines' workloads are not impacted.
v2:
- use ENGINE_MASK(ring_id) instead of (1 << ring_id). (Zhenyu)
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This reverts commit 62d02fd1f8.
The rwsem recursion trace should not be fixed from the kvmgt side by
using a workqueue; it is an issue that should be fixed in VFIO. So this
one should be reverted.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: stable@vger.kernel.org # v4.10+
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
On the Skylake platform, the tracked virtual MMIO registers number up
to 2039, so tune the hash table size to improve lookup performance.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
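For reference, a kernel hash table's bucket count is fixed at
declaration time via DECLARE_HASHTABLE(); a sketch of sizing it for
roughly 2039 entries (the struct and field names are illustrative):

#include <linux/hashtable.h>

struct gvt_mmio_table_sketch {
        /* 2^11 = 2048 buckets, so ~2039 tracked registers average about
         * one entry per bucket. */
        DECLARE_HASHTABLE(mmio_hash, 11);
};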
Count all the tracked virtual MMIO registers, which helps us tune the
MMIO hash table.
v2: Move num_tracked_mmio into gvt structure.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Function calls are expensive. I have seen obvious call overhead from
these wrappers in perf data, especially on the cmd parser side. So make
these simple wrappers inline to eliminate the calls entirely.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
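The change is of this general shape: a trivial accessor moves from a .c
file into a header as static inline so the compiler folds it into the
caller (illustrative example, not the actual gvt wrapper):

#include <linux/types.h>

/* Before: an out-of-line wrapper in a .c file, paid for on every call.
 * After: a static inline in the header, with no call overhead. */
static inline u32 vgpu_vreg_sketch(const u32 *vreg_base, unsigned int offset)
{
        return vreg_base[offset >> 2];  /* registers are 4 bytes apart */
}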
Type u8 is big enough to hold all the MMIO attribute flags. Since the
total MMIO space is 2MB, this saves 1.5MB of memory.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In some cases GVT-g accesses MMIO without holding runtime_pm. This
patch adds inline APIs for doing the runtime_pm get/put, to make sure
the i915 HW is really powered on when HW MMIO is accessed.
Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
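A minimal sketch of such helpers, assuming the intel_runtime_pm_get()/
intel_runtime_pm_put() interface of that era, which takes the i915
private pointer (the gvt-side wrapper names are made up):

#include "i915_drv.h"   /* intel_runtime_pm_get()/intel_runtime_pm_put() */

/* Bracket direct HW MMIO access so the device stays powered up for the
 * duration of the access. */
static inline void gvt_hw_access_pre(struct drm_i915_private *dev_priv)
{
        intel_runtime_pm_get(dev_priv);
}

static inline void gvt_hw_access_post(struct drm_i915_private *dev_priv)
{
        intel_runtime_pm_put(dev_priv);
}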
This patch decouples the time slice calculation from the scheduler, so
that other events can trigger scheduling without impacting the QoS
calculation.
v2: add only one new enum definition.
v3: fix typo.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
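Sketch of the idea: scheduling is driven by an explicit event, and only
the timer tick updates the QoS time-slice accounting (the enum and
helpers are illustrative, not the patch's actual names):

void update_time_slice(void);   /* illustrative */
void pick_next_vgpu(void);      /* illustrative */

enum gvt_sched_event_sketch {
        SCHED_EVENT_TIMER,      /* periodic tick: update time-slice accounting */
        SCHED_EVENT_OTHER,      /* e.g. workload completion: just reschedule */
};

static void try_sched(enum gvt_sched_event_sketch event)
{
        if (event == SCHED_EVENT_TIMER)
                update_time_slice();    /* QoS bookkeeping stays tick-driven */

        pick_next_vgpu();               /* any event may trigger a switch */
}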
Currently gvt dmesg output is so heavy at drm.debug=0x2 that the guest
and host can barely run on xengt.
This patch turns these repeated messages into trace events, so dmesg
stays light at drm.debug=0x2 and the user can get the desired messages
through trace events and trace filters.
Suggested-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
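As an illustration of the mechanism (this exact event is hypothetical,
not necessarily one added by the patch), a tracepoint replacing a
repeated debug print looks like this, with the usual TRACE_SYSTEM/
CREATE_TRACE_POINTS boilerplate omitted:

#include <linux/tracepoint.h>

TRACE_EVENT(gvt_mmio_access,                    /* hypothetical event name */
        TP_PROTO(int id, u32 offset, u32 value, bool write),
        TP_ARGS(id, offset, value, write),

        TP_STRUCT__entry(
                __field(int, id)
                __field(u32, offset)
                __field(u32, value)
                __field(bool, write)
        ),

        TP_fast_assign(
                __entry->id = id;
                __entry->offset = offset;
                __entry->value = value;
                __entry->write = write;
        ),

        /* Visible under tracing and filterable, instead of flooding dmesg. */
        TP_printk("vgpu%d %s reg 0x%x val 0x%x",
                  __entry->id, __entry->write ? "write" : "read",
                  __entry->offset, __entry->value)
);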
Merge tag 'v4.11-rc7' into drm-next
Backmerge Linux 4.11-rc7 from Linus tree, to fix some
conflicts that were causing problems with the rerere cache
in drm-tip.
This patch introduces two functions for activating/de-activating a vGPU
in the mdev ops.
A race condition was found between virtual vblank emulation and the
KVMGT mdev release path. Vblank emulation emulates and injects a vblank
interrupt for every active vGPU while holding gvt->lock, while the mdev
release path directly releases the hypervisor handle without changing
the vGPU status or taking gvt->lock, so a kernel oops is encountered
when vblank emulation injects an interrupt with an invalid hypervisor
handle. (Reported by Terrence)
To solve this, we factor out vGPU activation/de-activation from the vGPU
creation/destruction path and let the KVMGT mdev release ops de-activate
the vGPU before releasing the hypervisor handle. Once a vGPU is
de-activated, GVT-g will not emulate vblank for it or touch the
hypervisor handle.
Fixes: 659643f ("drm/i915/gvt/kvmgt: add vfio/mdev support to KVMGT")
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
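Schematically, the release path becomes the following; the function and
struct names are descriptive placeholders, not the exact kvmgt symbols:

struct vgpu_sketch;

void gvt_deactivate_vgpu(struct vgpu_sketch *vgpu);      /* illustrative */
void detach_hypervisor_handle(struct vgpu_sketch *vgpu); /* illustrative */

/* mdev release: stop the vGPU from being emulated first, and only then
 * drop the hypervisor handle, so vblank emulation can never see a
 * half-torn-down vGPU. */
static void vgpu_release_sketch(struct vgpu_sketch *vgpu)
{
        gvt_deactivate_vgpu(vgpu);      /* takes gvt->lock, clears active state */
        detach_hypervisor_handle(vgpu); /* safe now: no more vblank injection */
}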
vGPU resources are allocated by the scheduler. To account for
unallocated free cycles, we create an idle vGPU as a placeholder,
similar to the idle task concept; this is useful for handling some
corner cases in the scheduling policy.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The weight defines proportional control of the physical GPU resource
shared between vGPUs. So far the weight is tied to a specific vGPU type,
i.e. when creating multiple vGPUs with different types, they inherit
different weights.
E.g. the weight of type GVTg_V5_2 is 8 and the weight of type GVTg_V5_4
is 4, so a vGPU of type GVTg_V5_2 gets double the GPU resource of a vGPU
of type GVTg_V5_4.
TODO: allow the user to control the weight setting in the future.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
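The proportionality works out as in this sketch of the arithmetic
(illustration only, not the scheduler code); with only the two example
vGPUs above, weights 8 and 4 yield 2/3 and 1/3 of the GPU, i.e. the 2:1
ratio described:

#include <linux/types.h>

/* Share of GPU time for one vGPU = its weight / sum of all weights. */
static u64 vgpu_timeslice_ns(u64 period_ns, unsigned int weight,
                             unsigned int total_weight)
{
        return period_ns * weight / total_weight;
}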
Add some statistics routines to collect the time when a vGPU is
scheduled in/out and the time of the last ctx submission.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Currently the scheduler is triggered by delayed_work, which doesn't
provide precision at the microsecond level. Move to an hrtimer instead
for more accurate control.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
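A minimal sketch of the hrtimer-driven tick (the period and names are
chosen for illustration):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

#define SCHED_PERIOD_NS (1 * NSEC_PER_MSEC)     /* illustrative 1 ms tick */

static enum hrtimer_restart sched_timer_fn(struct hrtimer *timer)
{
        /* kick the scheduler here (e.g. queue work or set a request flag) */
        hrtimer_forward_now(timer, ns_to_ktime(SCHED_PERIOD_NS));
        return HRTIMER_RESTART;
}

static void sched_timer_start(struct hrtimer *timer)
{
        hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
        timer->function = sched_timer_fn;
        hrtimer_start(timer, ktime_add_ns(ktime_get(), SCHED_PERIOD_NS),
                      HRTIMER_MODE_ABS);
}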
BackMerge tag 'v4.11-rc3' into drm-next
Linux 4.11-rc3 as requested by Daniel
GVTg introduced the context status notifier to schedule GVTg workloads.
At that time, the notifier was bound to the GVTg context only, so GVTg
was not aware of host workloads.
Now we are going to improve GVTg's guest workload scheduler policy and
add GuC emulation support for new Gen graphics. Both features require
awareness of all contexts running on the hardware (but will not alter
host workloads), so make the following change.
The change is simple:
1. Move the context status notifier head from i915_gem_context to
   intel_engine_cs, which means there is a notifier head per engine
   instead of per context. The execlist driver still calls the notifier
   for each context sched-in/out event on the current engine.
2. On the GVTg side, bind a notifier_block for each physical engine at
   GVTg initialization time. Then GVTg can see all context status
   events.
In this patch GVTg does nothing for host context events, but a handler
will be added there later. In any case, the notifier callback is a no-op
if there is no active vGPU.
Since intel_gvt_init() is called at an early initialization stage and
requires the status notifier head to be initialized, initialize it in
intel_engine_setup().
v2: remove a redundant newline. (chris)
Fixes: 3c7ba6359d ("drm/i915: Introduce execlist context status change notification")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100232
Signed-off-by: Changbin Du <changbin.du@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170313024711.28591-1-changbin.du@intel.com
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
(cherry picked from commit 3fc03069bc)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170321144720.17020-1-chris@chris-wilson.co.uk
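In outline, the per-engine hookup looks like this sketch, assuming an
atomic notifier head in intel_engine_cs as described; the GVTg-side
struct and callback names are illustrative:

#include <linux/notifier.h>

struct engine_sketch {
        struct atomic_notifier_head *status_notifier_head; /* in intel_engine_cs */
        struct notifier_block nb;
};

/* One notifier_block per physical engine on the GVTg side. */
static int shadow_ctx_status_notify(struct notifier_block *nb,
                                    unsigned long action, void *data)
{
        /* no-op when there is no active vGPU; otherwise handle the
         * context sched-in/sched-out event for this engine */
        return NOTIFY_OK;
}

static void register_engine_notifiers(struct engine_sketch *engines, int num)
{
        int i;

        for (i = 0; i < num; i++) {
                engines[i].nb.notifier_call = shadow_ctx_status_notify;
                atomic_notifier_chain_register(engines[i].status_notifier_head,
                                               &engines[i].nb);
        }
}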
This assigns a resolution definition to each vGPU type. For smaller
resource types we should limit the max resolution, e.g. limit the 64M
type to 1024x768; the others still default to 1920x1200.
v2: Fix for actual 1920x1200 resolution
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Previously, vGPU type creation tried to determine the vGPU type name
(e.g. _1, _2) based on the number of mdev devices that could be created,
but different types might have very different resource sizes depending
on the physical device. We need to split the type name from the actual
mdev resource and create fixed vGPU types with determined sizes for
consistency.
With this we fix the vGPU types to _1, _2, _4 and _8 for now, each with
a fixed, defined resource size. The number of mdev instances that can be
created is determined by the physical resources, and the user should
query for that before creating one.
Cc: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
A Windows guest will notify GVT-g through the g2v interface to request
more resources when its resources are not enough.
This patch handles this case and lets the vGPU enter failsafe mode to
avoid too many error messages.
Signed-off-by: Min He <min.he@intel.com>
Signed-off-by: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
A new failsafe mode is introduced for when we detect that the guest
does not support GVT-g.
In failsafe mode, we ignore all MMIO and cfg space reads/writes from the
guest.
This patch fixes the issue that when the guest kernel or graphics driver
version is too old, there are a lot of kernel traces in the host.
V5: rebased onto latest gvt-staging
V4: changed coding style per Zhenyu's and Ping's advice
V3: modified coding style and error messages according to Zhenyu's
comments
V2: 1) implemented MMIO/GTT/WP pages read/write logic; 2) used a unified
function to enter failsafe mode
Signed-off-by: Min He <min.he@intel.com>
Signed-off-by: Pei Zhang <pei.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
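The effect on the emulation paths is roughly this (names are
illustrative; the real code also reports once when failsafe mode is
entered):

#include <linux/types.h>

struct vgpu_failsafe_sketch {
        bool failsafe;
};

int do_emulate_mmio_write(struct vgpu_failsafe_sketch *vgpu, u64 offset,
                          void *data, unsigned int bytes);     /* illustrative */

/* At the top of the MMIO / cfg space handlers: once the vGPU is in
 * failsafe mode, pretend the access succeeded and do nothing, so a
 * non-GVT-aware guest cannot flood the host with errors. */
static int emulate_mmio_write_sketch(struct vgpu_failsafe_sketch *vgpu,
                                     u64 offset, void *data,
                                     unsigned int bytes)
{
        if (vgpu->failsafe)
                return 0;

        return do_emulate_mmio_write(vgpu, offset, data, bytes);
}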
As we switched to memremap for the opregion, we shouldn't use any
__iomem annotation for it; move to memcpy instead.
This fixes the following static check errors:
CHECK drivers/gpu/drm/i915//gvt/opregion.c
drivers/gpu/drm/i915//gvt/opregion.c:142:31: warning: incorrect type in argument 1 (different address spaces)
drivers/gpu/drm/i915//gvt/opregion.c:142:31: expected void *addr
drivers/gpu/drm/i915//gvt/opregion.c:142:31: got void [noderef] <asn:2>*opregion_va
drivers/gpu/drm/i915//gvt/opregion.c:160:35: warning: incorrect type in assignment (different address spaces)
drivers/gpu/drm/i915//gvt/opregion.c:160:35: expected void [noderef] <asn:2>*opregion_va
drivers/gpu/drm/i915//gvt/opregion.c:160:35: got void *
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
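The underlying point: memremap() returns a plain kernel pointer rather
than a void __iomem one, so plain memcpy() is the right tool. A
self-contained sketch:

#include <linux/errno.h>
#include <linux/io.h>           /* memremap(), memunmap() */
#include <linux/string.h>       /* memcpy() */
#include <linux/types.h>

static int copy_opregion_sketch(resource_size_t opregion_pa,
                                void *dst, size_t len)
{
        /* memremap() gives back void *, not void __iomem *, so no
         * memcpy_fromio() or address-space casts are needed. */
        void *va = memremap(opregion_pa, len, MEMREMAP_WB);

        if (!va)
                return -ENOMEM;

        memcpy(dst, va, len);
        memunmap(va);
        return 0;
}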
Our functional tests found several issues related to reusing a vGPU
instance: qemu reboot failure, guest TDR after reboot, and host hang
when rebooting the guest. All these issues are caused by dirty state
inherited from the last VM.
This patch fixes all these issues by resetting the virtual GPU before a
VM uses it. The reset logic is put into a low-level function
_intel_gvt_reset_vgpu(), which supports Device Model Level Reset, Full
GT Reset and Per-Engine Reset.
vGPU Device Model Level Reset (DMLR) simulates a PCI reset, restoring
the whole vGPU to the default state it had when created, including GTT,
execlist, scratch pages, cfg space, MMIO space, pvinfo page, scheduler
and fence registers. The ultimate goal of vGPU DMLR is to allow a vGPU
instance to be reused by different virtual machines. When we reassign a
vGPU to a virtual machine we must issue such a reset first.
Full GT Reset and Per-Engine GT Reset are the soft reset flows for the
GPU engines (Render, Blitter, Video, Video Enhancement) defined by the
GPU spec. Unlike the FLR, a GT reset only resets the particular
resources of a vGPU named in the reset request. The guest driver can
issue a GT reset by programming the virtual GDRST register to reset a
specific virtual GPU engine or all engines.
Since vGPU DMLR and GT reset share some code, we implement both in one
single function, intel_gvt_reset_vgpu_locked(). The parameter dmlr
identifies whether we do an FLR or a GT reset. The parameter engine_mask
specifies the engines that need to be reset. If the value ALL_ENGINES is
given for engine_mask, the caller requests a full GT reset in which we
reset all virtual GPU engines.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Jike Song <jike.song@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
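In outline, the shared entry point described above behaves like this
sketch; only intel_gvt_reset_vgpu_locked(), dmlr and engine_mask come
from the text, while the helpers and struct are placeholders:

#include <linux/types.h>

struct vgpu_sketch;

void reset_execlist_and_scheduler(struct vgpu_sketch *vgpu,
                                  unsigned int engine_mask);    /* illustrative */
void reset_gtt_cfg_mmio_and_fences(struct vgpu_sketch *vgpu);   /* illustrative */

/* dmlr selects a Device Model Level Reset; otherwise only the engines in
 * engine_mask are reset (ALL_ENGINES means a full GT reset). */
static void intel_gvt_reset_vgpu_locked_sketch(struct vgpu_sketch *vgpu,
                                               bool dmlr,
                                               unsigned int engine_mask)
{
        reset_execlist_and_scheduler(vgpu, engine_mask);        /* shared part */

        if (dmlr)
                reset_gtt_cfg_mmio_and_fences(vgpu);    /* full device-model state */
}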
Move the MMIO space initialization function setup_vgpu_mmio() and the
cleanup function clean_vgpu_mmio() from vgpu.c to the dedicated source
file mmio.c, and rename them intel_vgpu_init_mmio() and
intel_vgpu_clean_mmio() respectively.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a new function intel_vgpu_reset_cfg_space() to
reset the vGPU configuration space. The function unmaps GTTMMIO and the
aperture if they were mapped before; the entire cfg space is then
restored to default values.
Currently we only do such a reset when the vGPU is not owned by any VM,
so we simply restore the entire cfg space to default values rather than
following the PCIe FLR spec, which says some fields should remain
unchanged.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Move the configuration space initialization function
setup_vgpu_cfg_space() from vgpu.c to the dedicated source file
cfg_space.c, and rename the function intel_vgpu_init_cfg_space().
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a new function intel_vgpu_reset_resource() to
reset the vGPU resources allocated by intel_vgpu_alloc_resource(). So
far we only need to clear the fence registers. The function
_clear_vgpu_fence() resets both the virtual and physical fence registers
to 0.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The release action might be triggered either by the user closing the
mdev or by the detach event of kvm and the vfio_group, so this patch
introduces an atomic to prevent a double release.
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
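The usual pattern for such a guard, as a sketch (the struct and field
names are illustrative):

#include <linux/atomic.h>

struct vgpu_release_sketch {
        atomic_t released;      /* 0 = still open, 1 = already released */
};

static void vgpu_release_once(struct vgpu_release_sketch *vgpu)
{
        /* Only the first caller sees 0 and proceeds; the second release
         * trigger (mdev close vs. kvm/vfio_group detach) becomes a no-op. */
        if (atomic_cmpxchg(&vgpu->released, 0, 1))
                return;

        /* ... actual teardown goes here ... */
}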
KVMGT leverages vfio/mdev to mediate device accesses from the guest.
This patch adds the vfio/mdev support and thereby completes the
functionality. An intel_vgpu is presented as an mdev device, and full
userspace API compatibility with vfio-pci is kept.
An intel_vgpu_ops is provided to the mdev framework; its methods get
called to create/remove a vGPU, to open/close it, and to access it.
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Xiaoguang Chen <xiaoguang.chen@intel.com>
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
For a 64-bit BAR, when reading the higher 32 bits the value should be
returned directly.
In the current implementation the higher 32-bit value was discarded and
not written to the vGPU's cfg space, which led to an incorrect BAR
size.
Signed-off-by: Xiaoguang Chen <xiaoguang.chen@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
KVMGT is the MPT implementation based on VFIO/KVM. It provides
kvmgt_mpt ops to gvt for vGPU access mediation, e.g. to mediate and
emulate MMIO accesses, to inject interrupts into the vGPU user, to
intercept GTT writes and replace them with DMA-able addresses, to
write-protect guest PPGTT tables for shadowing synchronization, etc.
This patch provides the MPT implementation for GVT, not yet functional
due to the absence of mdev.
It is built as kvmgt.ko, depends on vfio.ko, kvm.ko and mdev.ko, and is
required by i915.ko. To avoid introducing a hard dependency in i915.ko,
we use an indirect symbol reference. But that means users have to
include kvmgt.ko in their init ramdisk if i915.ko is included.
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Xiaoguang Chen <xiaoguang.chen@intel.com>
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There are currently 4 methods in intel_gvt_io_emulation_ops to emulate
CFG/MMIO reads/writes for an Intel vGPU. A possibly better scope is to
add 3 more methods for vGPU create/destroy/reset respectively, rename
the ops to 'intel_gvt_ops', and then pass it to the MPT module (say, the
future kvmgt) to use: they are all methods for external usage.
Signed-off-by: Jike Song <jike.song@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
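Shape-wise, the resulting ops table looks like this sketch; the member
names and prototypes are approximations, not the exact gvt declarations:

#include <linux/types.h>

/* Function-pointer table handed to the MPT module (e.g. kvmgt), covering
 * both the emulation entry points and vGPU lifecycle management. */
struct intel_gvt_ops_sketch {
        int (*emulate_cfg_read)(void *vgpu, unsigned int offset,
                                void *buf, unsigned int bytes);
        int (*emulate_cfg_write)(void *vgpu, unsigned int offset,
                                 void *buf, unsigned int bytes);
        int (*emulate_mmio_read)(void *vgpu, u64 gpa,
                                 void *buf, unsigned int bytes);
        int (*emulate_mmio_write)(void *vgpu, u64 gpa,
                                  void *buf, unsigned int bytes);
        void *(*vgpu_create)(void *gvt, void *type);
        void (*vgpu_destroy)(void *vgpu);
        void (*vgpu_reset)(void *vgpu);
};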
By providing predefined vGPU types, users can choose which type of vGPU
to create and use, without specifying detailed parameters.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Jike Song <jike.song@intel.com>