Only reset the vGPU execlist state of the exact engine which gets a reset
request from the VM. Once reading context status from HWSP is enabled, KMD
uses the saved CSB read pointer instead of always reading from MMIO. When
one engine is reset, only the read pointer of that engine is reset, so the
GVT-g host side needs to align with this policy as well; otherwise the VM
may get wrong CSB status after an engine reset completes.
v2: Split refinement and fix into separate patches; code refinement (Zhenyu)
v3: Move the active flag of the vgpu scheduler into sched_data (Zhenyu)
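A minimal sketch of the per-engine reset described above, assuming
hypothetical vgpu->execlist[] and vgpu->hws_csb_read_ptr[] fields (the
exact upstream structures differ):

static void vgpu_reset_execlist(struct intel_vgpu *vgpu,
				unsigned long engine_mask)
{
	unsigned int bit;

	/* touch only the engines named in the VM's reset request */
	for_each_set_bit(bit, &engine_mask, I915_NUM_ENGINES) {
		/* drop this engine's pending virtual execlist state... */
		memset(&vgpu->execlist[bit], 0, sizeof(vgpu->execlist[bit]));
		/* ...and reset only this engine's saved CSB read pointer */
		vgpu->hws_csb_read_ptr[bit] = 0;
	}
}

Other engines keep their saved CSB read pointers, matching what the guest
KMD expects after a single-engine reset.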
Cc: Fred Gao <fred.gao@intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Stop the gvt scheduler timer if no vGPU exists; otherwise it keeps the
gvt service thread busy handling schedule request events even though no
actual scheduling activity is required.
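A hedged sketch of the idea, assuming an active-vGPU counter and an
hrtimer-based tick (names are illustrative):

static void gvt_sched_start(struct gvt_sched_data *data)
{
	/* first active vGPU: start ticking */
	if (data->active_vgpus++ == 0)
		hrtimer_start(&data->timer,
			      ktime_add(ktime_get(), data->period),
			      HRTIMER_MODE_ABS);
}

static void gvt_sched_stop(struct gvt_sched_data *data)
{
	/* last active vGPU gone: stop the tick entirely */
	if (--data->active_vgpus == 0)
		hrtimer_cancel(&data->timer);
}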
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
The current schedule policy relies on a 1ms timer to execute workloads.
This can introduce up to 1ms of unnecessary latency, which is especially
bad for small media workloads.
And I don't think we need this timer for QoS, but the change is not as
simple as removing the code. So I made a new API intel_gvt_kick_schedule()
for future changes.
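A plausible minimal form of the new API (the request-service plumbing
shown is an assumption based on the surrounding description):

void intel_gvt_kick_schedule(struct intel_gvt *gvt)
{
	/* poke the scheduler now instead of waiting for the 1ms tick */
	intel_gvt_request_service(gvt, INTEL_GVT_REQUEST_EVENT_SCHED);
}

Callers such as the workload submission path can then kick the scheduler
as soon as a workload is queued, removing the up-to-1ms wait.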
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We have implemented a delayed ring mmio switch mechanism to reduce
unnecessary mmio switches. When a vGPU is being destroyed or detached
from a VM, we need to force the ring to switch to the host context.
This latter deadline was missed, so there was a chance that a workload
from VM2 could execute under the ring context of VM1 when VM2 was
attached to the same vGPU instance. Eventually the GPU hangs.
This patch guarantees both deadlines are honored.
v2: Remove unused variable 'scheduler'
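A sketch of the forced switch on the destroy/detach path (the
engine_owner tracking and helper names are illustrative):

static void gvt_switch_rings_to_host(struct intel_gvt *gvt,
				     struct intel_vgpu *vgpu)
{
	struct intel_gvt_workload_scheduler *sched = &gvt->scheduler;
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

	for_each_engine(engine, gvt->dev_priv, id) {
		if (sched->engine_owner[id] == vgpu) {
			/* NULL next owner means the host context */
			intel_gvt_switch_mmio(vgpu, NULL, id);
			sched->engine_owner[id] = NULL;
		}
	}
}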
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
gvt-next-2017-06-08
First gvt-next pull for 4.13:
- optimization for per-VM mmio save/restore (Changbin)
- optimization for mmio hash table (Changbin)
- scheduler optimization with event (Ping)
- vGPU reset refinement (Fred)
- other misc refactors and cleanups, etc.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170608093547.bjgs436e3iokrzdm@zhen-hp.sh.intel.com
This patch decouples the time slice calculation from the scheduler,
letting other events trigger scheduling without impacting the QoS
calculation.
v2: add only one new enum definition.
v3: fix typo.
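A hedged sketch of the split (the enum value for event-triggered
scheduling and the helper names are approximations):

enum intel_gvt_request {
	INTEL_GVT_REQUEST_SCHED,	/* timer tick: account + schedule */
	INTEL_GVT_REQUEST_EVENT_SCHED,	/* other events: schedule only */
};

static void gvt_handle_request(struct intel_gvt *gvt, int request)
{
	/* only the timer tick feeds the QoS time slice accounting */
	if (request == INTEL_GVT_REQUEST_SCHED)
		update_time_slice(gvt);
	try_to_schedule_next_vgpu(gvt);
}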
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Commit ab9da627906a ("drm/i915: make context status notifier head be
per engine") gives us a chance to inspect every single request, so we
can eliminate unnecessary mmio switches for the same vGPU. We only need
an mmio switch between different VMs (including the host).
This patch introduces a new general API, intel_gvt_switch_mmio(), to
replace the old intel_gvt_load/restore_render_mmio(). The new function
can be further optimized for vGPU-to-vGPU switching.
To support switching individual rings, we track the owner occupying
each ring. When another VM or the host requests a ring, we do the mmio
context switch; otherwise there is no need to switch the ring.
This optimization is very useful when only one guest has plenty of
workloads and the host is mostly idle. In the best case no mmio
switching happens at all.
v2:
o fix missing ring switch issue. (chuanxiao)
o support individual ring switch.
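A sketch of the owner-tracking switch (the save/restore helper names
are illustrative):

void intel_gvt_switch_mmio(struct intel_vgpu *pre,
			   struct intel_vgpu *next, int ring_id)
{
	/* same owner keeps the ring: nothing to do */
	if (pre == next)
		return;

	/* NULL on either side stands for the host context */
	if (pre)
		save_render_mmio(pre, ring_id);
	if (next)
		restore_render_mmio(next, ring_id);
}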
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There is no need to switch vGPUs if the next vGPU is the same as the
current one; an unnecessary switch causes a performance drop in some
cases.
v2: correct the comments.
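The check is a one-liner in the scheduling path; a sketch (function and
field names approximate):

static void try_to_schedule_next_vgpu(struct intel_gvt *gvt)
{
	struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;

	/* skip the whole switch when the next vGPU is already running */
	if (scheduler->next_vgpu == scheduler->current_vgpu)
		return;

	/* ... perform the actual vGPU context switch ... */
}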
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
As these debug messages can appear on every scheduler timer call, they
are too noisy, consume too much log space, and aren't meaningful. So
remove them.
Cc: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
It's too chatty to have three places telling us which vGPU is next to
be scheduled. My log file bloated until it ate all my disk space.
Cc: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The timeslice usage determines whether a vGPU gets a chance to be
scheduled at every vGPU switch checkpoint.
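A minimal sketch of the gating check, assuming a per-vGPU left_ts
budget (names illustrative):

static bool vgpu_schedulable(struct vgpu_sched_data *vgpu_data)
{
	/* exhausted vGPUs wait for the next time slice refill */
	return vgpu_data->left_ts > 0 &&
	       vgpu_has_pending_workload(vgpu_data->vgpu);
}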
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This method tries to guarantee precision at the second level, with the
adjustment conducted every 100ms. At the end of each vGPU switch,
calculate the sched time and subtract it from the allocated time slice;
the time slice allocated for each 100ms period, together with the
remaining timeslice, is then used to decide how much timeslice to
allocate to this vGPU in the next 100ms period, with the end goal of
guaranteeing the weight ratio at the second level.
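A sketch of the per-100ms accounting described above (field names are
illustrative):

#define GVT_TS_BALANCE_PERIOD_MS 100

static void gvt_balance_timeslice(struct gvt_sched_data *sched)
{
	struct vgpu_sched_data *v;
	s64 period_ns = GVT_TS_BALANCE_PERIOD_MS * NSEC_PER_MSEC;

	list_for_each_entry(v, &sched->lru_runq_head, lru_list) {
		/* fair share of this 100ms period, scaled by weight */
		s64 fair_ns = div64_s64(period_ns * v->weight,
					sched->total_weight);

		/* carry unused (or overdrawn) slice into the next period */
		v->left_ts = fair_ns + v->left_ts - v->used_ts;
		v->used_ts = 0;
	}
}

Over ten such periods the carried remainder pushes each vGPU's share
toward weight/total_weight of every second.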
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The weight defines the proportional share of physical GPU resource
between vGPUs. So far the weight is tied to a specific vGPU type, i.e.
when creating multiple vGPUs with different types, they inherit
different weights.
E.g. the weight of type GVTg_V5_2 is 8 and the weight of type GVTg_V5_4
is 4, so a vGPU of type GVTg_V5_2 gets double the GPU resource of a
GVTg_V5_4 vGPU.
TODO: allow the user to control the weight setting in the future.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Factor out the scheduler into a clearer structure; the basic logic is
to find the next vGPU first and then schedule it. vGPUs are kept in an
LRU list: the scheduler scans from the list head and chooses the first
vGPU that has a pending workload.
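A sketch of the find-next step (list and field names illustrative):

static struct intel_vgpu *find_busy_vgpu(struct gvt_sched_data *sched)
{
	struct vgpu_sched_data *v;

	/* head of the LRU list = least recently scheduled */
	list_for_each_entry(v, &sched->lru_runq_head, lru_list) {
		if (vgpu_has_pending_workload(v->vgpu))
			return v->vgpu;
	}
	return NULL;
}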
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add statistics routines to collect the times when a vGPU is scheduled
in/out and the time of its last ctx submission.
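An illustrative shape for the stat hooks (field names approximate):

static void vgpu_update_sched_time(struct vgpu_sched_data *d, bool sched_in)
{
	ktime_t now = ktime_get();

	if (sched_in)
		d->sched_in_time = now;	/* stamp on the way in */
	else	/* accumulate on-GPU time on the way out */
		d->sched_time = ktime_add(d->sched_time,
					  ktime_sub(now, d->sched_in_time));
}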
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Currently the scheduler is triggered by delayed_work, which doesn't
provide precision at the microsecond level. Move to an hrtimer instead
for more accurate control.
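A minimal sketch of the hrtimer-based tick (period and field names
illustrative); unlike delayed_work, hrtimer is not limited to jiffies
granularity:

static enum hrtimer_restart tbs_timer_fn(struct hrtimer *t)
{
	struct gvt_sched_data *data =
		container_of(t, struct gvt_sched_data, timer);

	intel_gvt_request_service(data->gvt, INTEL_GVT_REQUEST_SCHED);
	hrtimer_forward_now(t, data->period);	/* e.g. 1ms, in ns */
	return HRTIMER_RESTART;
}

static void gvt_sched_timer_init(struct gvt_sched_data *data)
{
	hrtimer_init(&data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
	data->timer.function = tbs_timer_fn;
	data->period = ms_to_ktime(1);
}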
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fix to correctly assign 1ms as the gvt scheduler interval time, as the
previous code using HZ was pretty broken. Also start the gvt scheduler
function with no delay.
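An illustration of the bug class, not the exact patch (the 'work' item
stands in for the scheduler's delayed work):

static struct delayed_work work;

static void gvt_start_sched(void)
{
	/* broken: HZ is jiffies-per-second, so this waits ~1s, not 1ms */
	/* schedule_delayed_work(&work, HZ); */

	/* fixed: kick off the scheduler immediately... */
	schedule_delayed_work(&work, 0);
}

static void gvt_rearm_sched(void)
{
	/* ...and re-arm with an explicit 1ms converted to jiffies */
	schedule_delayed_work(&work, msecs_to_jiffies(1));
}

Note msecs_to_jiffies(1) still rounds up to one jiffy on low-HZ
kernels, which is presumably part of why the scheduler later moved to
an hrtimer.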
Fixes: 4b63960ebd ("drm/i915/gvt: vGPU schedule policy framework")
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: stable@vger.kernel.org # v4.10+
Acked-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Remove the unimportant log message below, which is too noisy:
'no current vgpu search from q head'
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Remove the variable 'execlist' as it's unused in the function
vgpu_has_pending_workload().
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Mark all local functions & variables as static.
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Switch to the new for_each_engine() helper to properly access enabled
intel_engine_cs, as i915 core now manages the engine list dynamically.
GVT-g init time still depends on the ring mask to determine the engine
list, as it runs earlier.
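A sketch of the helper in use (the callee is illustrative):

static void gvt_setup_engines(struct intel_vgpu *vgpu,
			      struct drm_i915_private *dev_priv)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

	/* visits only the engines i915 actually instantiated */
	for_each_engine(engine, dev_priv, id)
		init_vgpu_engine_state(vgpu, id);
}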
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
i915 core should only call functions and structures exposed through
intel_gvt.h, so remove internal gvt.h and i915_pvinfo.h from it.
Change the internal intel_gvt structure into a private handle so that
gvt internal structures need not be exposed to i915 core.
v2: Fix per Chris's comment
- carefully handle dev_priv->gvt assignment
- add necessary bracket for macro helper
- forward declaration of struct intel_gvt
- keep free operation within same file handling alloc
v3: fix use after free and remove intel_gvt.initialized
v4: change to_gvt() to an inline
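A sketch of the private-handle pattern described above (per v4, with
to_gvt() as an inline):

/* intel_gvt.h: visible to i915 core */
struct intel_gvt;	/* forward declaration only; layout stays hidden */

/* gvt-internal header */
static inline struct intel_gvt *to_gvt(struct drm_i915_private *i915)
{
	return i915->gvt;
}

i915 core can carry the dev_priv->gvt pointer around without ever
seeing the structure layout.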
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch introduces a vGPU schedule policy framework, with a
timer-based schedule policy module for now.
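A hedged sketch of the framework shape: a small ops table that a policy
module (the timer-based one for now) implements. Callback names
approximate the idea:

struct intel_gvt_sched_policy_ops {
	int  (*init)(struct intel_gvt *gvt);
	void (*clean)(struct intel_gvt *gvt);
	int  (*init_vgpu)(struct intel_vgpu *vgpu);
	void (*clean_vgpu)(struct intel_vgpu *vgpu);
	void (*start_schedule)(struct intel_vgpu *vgpu);
	void (*stop_schedule)(struct intel_vgpu *vgpu);
};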
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>