2nd round of 4.14 features:
- prep for deferred fbdev setup
- refactor fixed 16.16 computations and skl+ wm code (Mahesh Kumar)
- more cnl patches (Rodrigo, Imre et al.)
- tighten context cleanup and handling (Chris Wilson)
- fix interlaced handling on skl+ (Mahesh Kumar)
- small bits as usual
* tag 'drm-intel-next-2017-07-17' of git://anongit.freedesktop.org/git/drm-intel: (84 commits)
drm/i915: Update DRIVER_DATE to 20170717
drm/i915: Protect against deferred fbdev setup
drm/i915/fbdev: Always forward hotplug events
drm/i915/skl+: unify cpp value in WM calculation
drm/i915/skl+: WM calculation don't require height
drm/i915: Addition wrapper for fixed16.16 operation
drm/i915: cleanup fixed-point wrappers naming
drm/i915: Always perform internal fixed16 division in 64 bits
drm/i915: take-out common clamping code of fixed16 wrappers
drm/i915/cnl: Add missing type case.
drm/i915/cnl: Add max allowed Cannonlake DC.
drm/i915: Make DP-MST connector info work
drm/i915/cnl: Get DDI clock based on PLLs.
drm/i915/cnl: Inherit RPS stuff from previous platforms.
drm/i915/cnl: Gen10 render context size.
drm/i915/cnl: Don't trust VBT's alternate pin for port D for now.
drm/i915: Fix the kernel panic when using aliasing ppgtt
drm/i915/cnl: Cannonlake color init.
drm/i915/cnl: Add force wake for gen10+.
x86/gpu: CNL uses the same GMS values as SKL
...
req->fence.error will be set if this request caused a GPU hang, so we
can propagate this value into workload->status to indicate whether this
GVT request caused any problem. If it caused a GPU hang, we shouldn't
trigger any context switch back to the guest.
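A minimal sketch of the idea (not the exact patch; the workload layout
is assumed from the surrounding GVT scheduler code):

  	/* in complete_current_workload(), roughly: only a GPU hang (-EIO)
  	 * on the request's fence counts as a workload failure */
  	if (workload->req->fence.error == -EIO)
  		workload->status = -EIO;
  	else
  		workload->status = 0;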
v2:
- only take -EIO from fence->error. (Zhenyu)
Fixes: 8f1117abb4 ("drm/i915/gvt: handle workload lifecycle properly")
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
GVT-g now uses a per-vGPU scheduler for vGPU workloads, so each
per-ring work_thread only picks workloads that belong to the current
vGPU. And with the time-slice based scheduler, all engines have to
become idle before a vGPU switch is done. So we can dispatch freely in
the per-ring work_thread; different rings running in different vGPUs
won't happen.
As for workloads split between a vGPU and the host, this
scheduler_mutex can't block the host from dispatching workloads onto
other ring engines anyway.
Remove this mutex, since it hurts performance when applications use
more than one ring engine in one vGPU.
ring0 running in vGPU1, ring1 running in Host: will happen.
ring0 running in vGPU1, ring1 running in vGPU2: won't happen.
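The change itself is small; roughly (illustrative fragment, not the
exact diff):

  	/* before: every per-ring work_thread serialized on one global mutex */
  	mutex_lock(&scheduler_mutex);
  	ret = dispatch_workload(workload);
  	mutex_unlock(&scheduler_mutex);

  	/* after: dispatch directly; the per-vGPU scheduler already
  	 * guarantees that different rings never run workloads of
  	 * different vGPUs at the same time */
  	ret = dispatch_workload(workload);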
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This reverts commit 62d02fd1f8.
The rwsem recursion trace should not be fixed on the kvmgt side by
using a workqueue; it is an issue that should be fixed in VFIO. So this
one should be reverted.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: stable@vger.kernel.org # v4.10+
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Command buffer addresses in the context, such as the ring buffer base
address and the wa_ctx address, need to be audited to make sure they
fall within the valid GGTT range.
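An illustrative shape of such a check (helper and function names
assumed, not the exact patch):

  static int audit_context_addr(struct intel_vgpu *vgpu, u64 gma, u32 size)
  {
  	/* both ends of the buffer must sit inside the vGPU's GGTT range */
  	if (!vgpu_gmadr_is_valid(vgpu, gma) ||
  	    !vgpu_gmadr_is_valid(vgpu, gma + size - 1))
  		return -EINVAL;
  	return 0;
  }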
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
If setup_spt_oos() fails in intel_gvt_init_gtt(), the page that was
allocated by get_zeroed_page() and mapped by dma_map_page() is leaked.
Unmap and free the page when SPT oos initialization fails to fix this
issue.
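The fix amounts to an error path like this in intel_gvt_init_gtt()
(sketch; local variable names assumed):

  	ret = setup_spt_oos(gvt);
  	if (ret) {
  		gvt_err("fail to initialize SPT oos\n");
  		/* undo the earlier dma_map_page()/get_zeroed_page() */
  		dma_unmap_page(dev, daddr, 4096, PCI_DMA_BIDIRECTIONAL);
  		__free_page(virt_to_page(page));
  		return ret;
  	}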
Signed-off-by: Zhou, Wenjia <zhiyuan_zhu@htc.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Merge tag 'drm-for-v4.13' into drm-intel-next-queued
Resync with the main drm-next pull request for 4.13. What we really
need is to fully resync with pending drm-misc, but that's not yet
possible due to the still ongoing merge window.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The dpy_reg_mmio_read_x functions directly copy 4 bytes of data to the
target address without considering the length. This may corrupt the
target memory if the requested length is less than 4 bytes. Fix it for
safety, even though existing checks should already prevent this from
happening. For convenience, the 3 functions are merged.
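A sketch of the merged handler (illustrative; the signature follows the
GVT mmio handler convention):

  static int dpy_reg_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
  			     void *p_data, unsigned int bytes)
  {
  	u32 data = vgpu_vreg(vgpu, offset);

  	/* copy at most 4 bytes, and never more than the caller asked for */
  	memcpy(p_data, &data, min_t(unsigned int, bytes, sizeof(data)));
  	return 0;
  }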
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
When the host has a CRT screen connected, a Linux guest will detect two
screens: CRT and DP. This is wrong, since the guest has only one DP.
To keep the guest from seeing the host CRT screen, we should set
ADPA_CRT_HOTPLUG_MONITOR to none, but MMIO_RO(PCH_ADPA) prevents that.
So MMIO_DH should be used instead of MMIO_RO.
v2: Clear the status to none at initialization, so the guest doesn't
see the host CRT. (Zhangyu)
v3: SKL doesn't have this register; limit the handler to pre-SKL.
(Xiong)
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
On BDW, when the host physical screen and the guest virtual screen
aren't on the same DDI port, the guest i915 driver prints the following
error and stops running.
[ 6.775873] BUG: unable to handle kernel NULL pointer dereference
at 0000000000000068
[ 6.775928] IP: intel_ddi_clock_get+0x81/0x430 [i915]
[ 6.776206] Call Trace:
[ 6.776233] ? vgpu_read32+0x4f/0x100 [i915]
[ 6.776264] intel_ddi_get_config+0x11c/0x230 [i915]
[ 6.776298] intel_modeset_setup_hw_state+0x313/0xd40 [i915]
[ 6.776334] intel_modeset_init+0xe49/0x18d0 [i915]
[ 6.776368] ? vgpu_write32+0x53/0x100 [i915]
[ 6.776731] ? intel_i2c_reset+0x42/0x50 [i915]
[ 6.777085] ? intel_setup_gmbus+0x32a/0x350 [i915]
[ 6.777427] i915_driver_load+0xabc/0x14d0 [i915]
[ 6.777768] i915_pci_probe+0x4f/0x70 [i915]
The NULL pointer is the guest intel_crtc_state->shared_dpll, which is
set in haswell_get_ddi_pll(). When the guest and host screens are on
different DDI ports, the host driver won't program
PORT_CLK_SEL(guest_port), so haswell_get_ddi_pll() returns NULL and
doesn't set pipe_config->shared_dpll; as soon as later code
dereferences this structure, it hits the above error.
This patch sets the initial value of the guest PORT_CLK_SEL(guest_port)
to LCPLL_810. The guest i915 driver will then reprogram this value
according to the guest screen mode.
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There are two locking sequences.
One is in the thread started by the vfio ioctl to do the iommu
unmapping. Its locking sequence is:
	down_read(&group_lock) ----> mutex_lock(&cache_lock)
The other is in the vfio release thread, which unpins all the cached
pages. Its locking sequence is:
	mutex_lock(&cache_lock) ----> down_read(&group_lock)
The cache_lock only protects the rb tree of cache nodes, and the vfio
unpin itself doesn't require this lock. Move the vfio unpin out of the
cache_lock protected region to break the inverted lock ordering.
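A sketch of the reworked removal path (helper and field names assumed;
only the locking structure matters here):

  	mutex_lock(&vgpu->vdev.cache_lock);
  	node = __gvt_cache_find(vgpu, gfn);	/* rb-tree lookup */
  	if (node)
  		rb_erase(&node->node, &vgpu->vdev.cache);
  	mutex_unlock(&vgpu->vdev.cache_lock);

  	/* the unpin no longer nests inside cache_lock, so it can take
  	 * group_lock without inverting the lock order */
  	if (node) {
  		gvt_unpin_guest_page(vgpu, gfn);	/* assumed helper */
  		kfree(node);
  	}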
v2:
- use for style instead of do{}while(1). (Zhenyu)
Fixes: f30437c5e7 ("drm/i915/gvt: add KVMGT support")
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: stable@vger.kernel.org # v4.10+
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Final pile of features for 4.13
New uabi:
- batch bo in first slot, for faster execbuf assembly in userspace
(Chris Wilson)
- (sub)slice getparam, needed for mesa perf support (Robert Bragg)
First pile of patches for cnl/cfl support, maintained by Rodrigo but
with lots of contributions from others. Still incomplete since public
review is still ongoing.
Features/refactoring:
- Make execbuf faster (Chris Wilson), a pile of series to make execbuf
buffer handling have fewer passes, use less list walking, postpone
more work to async workers and shuffle buffers less, all to make the
common case much faster (in some cases at least).
- cold boot support for glk dsi (Madhav Chauhan)
- Clean up pipe A quirk and related old platform hacks (Ville)
- perf sampling support for kbl/glk (Lionel)
- perf cleanups (Robert Bragg)
- wire atomic state to backlight code, to avoid pipe lookup hacks
(Maarten)
- reduce request waiting latency/overhead to remove the spinning and
associated cpu cycle wasting (Chris)
- fix 90/270 rotation wm computation (Ville)
- new ddb allocation algo for skl (Kumar Mahesh)
- fix regression due to system suspend optimization (Imre)
- the usual pile of small cleanups and refactors all over
GVT updates contained in this tag:
- optimization for per-VM mmio save/restore (Changbin)
- optimization for mmio hash table (Changbin)
- scheduler optimization with event (Ping)
- vGPU reset refinement (Fred)
- other misc refactor and cleanups, etc.
* tag 'drm-intel-next-2017-06-19' of git://anongit.freedesktop.org/git/drm-intel: (170 commits)
drm/i915: Update DRIVER_DATE to 20170619
drm/i915/cfl: Introduce Coffee Lake workarounds.
drm/i915: Store 9 bits of PCI Device ID for platforms with a LP PCH
drm/i915: Stash a pointer to the obj's resv in the vma
drm/i915: Async GPU relocation processing
drm/i915: Allow execbuffer to use the first object as the batch
drm/i915: Wait upon userptr get-user-pages within execbuffer
drm/i915: First try the previous execbuffer location
drm/i915: Store a persistent reference for an object in the execbuffer cache
drm/i915: Eliminate lots of iterations over the execobjects array
drm/i915: Disable EXEC_OBJECT_ASYNC when doing relocations
drm/i915: Pass vma to relocate entry
drm/i915: Store a direct lookup from object handle to vma
drm/i915: Fix retrieval of hangcheck stats
drm/i915: Store i915_gem_object_is_coherent() as a bit next to cache-dirty
drm/i915: Mark CPU cache as dirty on every transition for CPU writes
drm/i915: Make i915_vma_destroy() static
drm/i915: Actually attach the tv_format property to the SDVO connector
Revert "drm/i915/skl: New ddb allocation algorithm"
drm/i915/glk: Add cold boot sequence for GLK DSI
...
If we move the actual cleanup of the context to a worker, we can allow
the final free to be called from any context and avoid undue latency in
the caller.
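A condensed sketch of the pattern (field names assumed; the real code
also takes struct_mutex around the frees and handles the kernel
context separately):

  static void contexts_free_worker(struct work_struct *work)
  {
  	struct drm_i915_private *i915 =
  		container_of(work, typeof(*i915), contexts.free_work);
  	struct i915_gem_context *ctx, *cn;

  	/* drain the lockless list and do the heavyweight cleanup here,
  	 * instead of in whatever context dropped the last reference */
  	llist_for_each_entry_safe(ctx, cn,
  				  llist_del_all(&i915->contexts.free_list),
  				  free_link)
  		i915_gem_context_free(ctx);
  }

  void i915_gem_context_release(struct kref *ref)
  {
  	struct i915_gem_context *ctx = container_of(ref, typeof(*ctx), ref);
  	struct drm_i915_private *i915 = ctx->i915;

  	/* the first entry on the list kicks the worker */
  	if (llist_add(&ctx->free_link, &i915->contexts.free_list))
  		queue_work(i915->wq, &i915->contexts.free_work);
  }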
v2: Handle the deferred context frees by flushing the workqueue before
calling i915_gem_context_fini() and performing the final free of the
kernel context directly
v3: Flush deferred frees before new context allocations
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170620110547.15947-2-chris@chris-wilson.co.uk
gvt-next-2017-06-08
First gvt-next pull for 4.13:
- optimization for per-VM mmio save/restore (Changbin)
- optimization for mmio hash table (Changbin)
- scheduler optimization with event (Ping)
- vGPU reset refinement (Fred)
- other misc refactor and cleanups, etc.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170608093547.bjgs436e3iokrzdm@zhen-hp.sh.intel.com
BackMerge tag 'v4.12-rc5' into drm-next
Linux 4.12-rc5 for nouveau fixes
During the emulation of a virtual reset:
1. only reset the engine-related mmio, ending at MMIO offset
Master_IRQ; display stuff is not included.
2. fences likewise do not need to be set to their default value, which
prevents screen flickering.
This fixes the issue of the guest screen hanging while running Force
tdr in a Linux guest.
v2:
- only reset the engine related mmio. (Zhenyu & Zhiyuan)
v3:
- IMR/Ring mode registers are not save/restored. (Changbin)
v4:
- redefine the MMIO reset offset for easy understanding. (Zhenyu)
- pvinfo can be reset. (Zhenyu)
v5:
- add more comments for mmio reset. (Zhenyu)
Cc: Changbin Du <changbin.du@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Lv zhiyuan <zhiyuan.lv@intel.com>
Cc: Zhang Yulei <yulei.zhang@intel.com>
Signed-off-by: fred gao <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Emulate the GDRST read behavior correctly to ack the guest reset
request.
v2:
- split the original patch into two:
GDRST read handler and virtual gpu reset. (Zhenyu)
v3:
- emulate the GDRST read right after write. (Zhenyu)
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhang Yulei <yulei.zhang@intel.com>
Signed-off-by: fred gao <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
On the Skylake platform the number of traced virtual MMIO registers is
up to 2039, so tune the hash table size to improve lookup performance.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We count all the tracked virtual MMIO registers, which can help us to
tune the MMIO hash table.
v2: Move num_tracked_mmio into gvt structure.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Function calls are expensive. I have seen obvious call overhead for
these wrappers in perf data, especially from the cmd parser side. So
make these simple wrappers inline to kill the calls entirely.
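Illustrative only (field and flag names assumed from the neighbouring
commits): moving such a trivial accessor into the header as static
inline removes the call from the hot cmd-parser path.

  static inline bool intel_gvt_mmio_is_cmd_access(struct intel_gvt *gvt,
  						unsigned int offset)
  {
  	return gvt->mmio.mmio_attribute[offset >> 2] & F_CMD_ACCESS;
  }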
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Type u8 is big enough to contain all MMIO attribute flags. As the total
MMIO size is 2MB, this saves 1.5MB of memory.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The size, length, addr_mask fields actually are not necessary. Every
tracked mmio has DWORD size, and addr_mask is a legacy field.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Some of the traced MMIO registers form large contiguous sections. These
stuff the MMIO lookup hash table, wasting lots of memory and degrading
lookup performance.
Here we picked out these sections by special handling. These sections
include:
o Display pipe registers, total 768.
o The PVINFO page, total 1024.
o MCHBAR_MIRROR, total 65536.
o CSR_MMIO, total 3072.
So we removed 70,400 items from the hash table, and speed up guest
boot time by ~500ms.
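The lookup then becomes a two-step affair, roughly as below (structure
and field names assumed; with only a handful of blocks a linear scan is
fine):

  static struct gvt_mmio_block *find_mmio_block(struct intel_gvt *gvt,
  					      unsigned int offset)
  {
  	struct gvt_mmio_block *block = gvt->mmio.mmio_block;
  	int i;

  	for (i = 0; i < gvt->mmio.num_mmio_block; i++, block++) {
  		if (offset >= block->offset &&
  		    offset < block->offset + block->size)
  			return block;
  	}
  	return NULL;	/* fall back to the per-register hash table */
  }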
v2:
o add a local function find_mmio_block().
o fix comments.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a gtt_invalidate API to handle the GTT TLB flush instead of hiding
it in the write_pte64 function. This avoids flushing more often than
necessary when using write_pte64.
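A sketch of the new helper, assuming the flush is the GFX_FLSH_CNTL_GEN6
write that write_pte64 used to issue on every call:

  static void gtt_invalidate(struct drm_i915_private *dev_priv)
  {
  	/* one explicit GTT TLB flush, issued by callers only when needed */
  	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
  }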
Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In some cases GVT-g accesses MMIO without holding runtime_pm. This
patch adds inline helpers that do the runtime_pm get/put, to make sure
the i915 hardware is really powered on when its MMIO is accessed.
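The helpers are essentially thin inline wrappers like these (names
assumed):

  static inline void mmio_hw_access_pre(struct drm_i915_private *dev_priv)
  {
  	intel_runtime_pm_get(dev_priv);
  }

  static inline void mmio_hw_access_post(struct drm_i915_private *dev_priv)
  {
  	intel_runtime_pm_put(dev_priv);
  }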
Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This flag is already set in the top level Makefile of the kernel.
Also, with CONFIG_DRM_I915_GVT set, appending -Wall to ccflags undoes
all the -Wno-* cflags previously set in the Make variable
KBUILD_CFLAGS.
For example:
cc foo.c -Wall -Wno-format -Wall
resets -Wformat.
Signed-off-by: Nick Desaulniers <nick.desaulniers@gmail.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Remove all the legacy pre-BDW mmio handlers and the corresponding
usage/definitions, since pre-BDW platforms are not supported in the GVT
environment.
v2:
- clean up all the leftover pre-BDW code, e.g. all D_HSW usage and its
definition, D_IVB, D_PRE_BDW. (Zhenyu)
v3:
- change is based on gvt-staging. (Zhenyu)
Signed-off-by: fred gao <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The time-based scheduler polls the context busy status every
microsecond during a vGPU switch. This leaves the GPU idle for a while
when the context is very small and completes before the next
microsecond arrives. Triggering scheduling immediately after the
context completes eliminates this GPU idle time and improves
performance.
Create two vGPU with same type, run Heaven simultaneously:
Before this patch:
+---------+----------+----------+
| | vGPU1 | vGPU2 |
+---------+----------+----------+
| Heaven | 357 | 354 |
+-------------------------------+
After this patch:
+---------+----------+----------+
| | vGPU1 | vGPU2 |
+---------+----------+----------+
| Heaven | 397 | 398 |
+-------------------------------+
v2: Protect need_reschedule with the gvt lock.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This patch decouples the time slice calculation from the scheduler,
letting other events trigger scheduling without impacting the QoS
calculation.
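The decoupling boils down to one extra scheduling-request type, roughly
(names assumed):

  enum intel_gvt_request {
  	INTEL_GVT_REQUEST_SCHED,	/* periodic time-slice tick (QoS) */
  	INTEL_GVT_REQUEST_EVENT_SCHED,	/* event-triggered reschedule */
  };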
v2: add only one new enum definition.
v3: fix typo.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Since cmd messages have been recorded in trace, gvt_dbg_cmd isn't
necessary. This reduces dmesg output a lot, as gvt_dbg_cmd is repeated
for each workload.
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Currently gvt dmesg output is so heavy at drm.debug=0x2 that guest and
host can barely run on xengt.
This patch turns these repeated messages into trace events, so dmesg
stays light at drm.debug=0x2 and users can get the messages they want
through trace events and trace filters.
Suggested-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Kernel hangcheck needs to check the state of the RING_INSTDONE and
SC_INSTDONE registers to know whether the hardware is still running. In
the GVT-g environment we need to emulate these registers changing for
all guests, even when they are not the render owner. Here we return the
physical state to all guests, so that if INSTDONE is changing a guest
knows the hardware is still running even though its own workload is
pending.
Reading INSTDONE alone isn't a correct way to tell whether a guest is
about to trigger a gfx reset. A Linux guest reads ACTHD first, then
checks the INSTDONE and SUBSLICE registers, and only triggers a gfx
reset when it finds all of these registers frozen. In a Windows guest,
the INSTDONE read usually happens when the OS detects a TDR.
Given this difference between Windows and Linux guests,
"disable_warn_untrack" could drive the debug log into the wrong state
(a Linux guest runs hangcheck with no ACTHD change and then checks
INSTDONE) even though no TDR actually happened.
The new policy is to always WARN on untracked MMIO r/w. The downside is
a lot of noisy untracked-mmio warnings when a real TDR happens; even
so, the log output can still be controlled via the debug mask bit.
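A sketch of the read handler this implies (the handler shape follows
the GVT mmio convention; details assumed):

  static int instdone_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
  			      void *p_data, unsigned int bytes)
  {
  	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;

  	/* mirror the physical value so a non-render-owner guest still
  	 * sees INSTDONE ticking while the hardware is busy */
  	vgpu_vreg(vgpu, offset) = I915_READ(_MMIO(offset));
  	return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes);
  }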
v2: remove log in instdone_mmio_read
Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Weinan Li <weinan.z.li@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Commit ab9da627906a ("drm/i915: make context status notifier head be
per engine") gives us a chance to inspect every single request, so we
can eliminate unnecessary mmio switching for the same vGPU. We only
need mmio switching between different VMs (including the host).
This patch introduces a new general API, intel_gvt_switch_mmio(), to
replace the old intel_gvt_load/restore_render_mmio(). This function can
be further optimized for vGPU-to-vGPU switching.
To support switching individual rings, we track the owner occupying
each ring. When another VM or the host requests a ring, we do the mmio
context switch; otherwise there is no need to switch the ring.
This optimization is very useful when only one guest has plenty of
workloads and the host is mostly idle. In the best case no mmio
switching happens at all.
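The core of the new API is an early-out when the ring owner does not
change (sketch; the save/restore helper name is assumed):

  void intel_gvt_switch_mmio(struct intel_vgpu *pre,
  			   struct intel_vgpu *next, int ring_id)
  {
  	if (WARN_ON(!pre && !next))
  		return;

  	/* same owner keeps the ring: no mmio save/restore needed */
  	if (pre == next)
  		return;

  	switch_mmio_context(pre, next, ring_id);	/* assumed helper */
  }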
v2:
o fix missing ring switch issue. (chuanxiao)
o support individual ring switch.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The function intel_vgpu_submit_execlist() could be simpler. It actually
does:
1) validate the submission: the first context must be valid, and both
must have privilege_access set.
2) submit the valid contexts: the first one needs schedule_in
emulation.
We do not need a bitmap or a copy of the valid descriptors
(valid_desc), and the local variable emulate_schedule_in can also be
optimized out.
v2: dump desc content in err msg (Zhi Wang)
Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The gvt:gvt_command trace involves unnecessary overhead even when the
trace is not enabled. We need to improve it.
The kernel trace infrastructure provides a full API to define a trace
event; we should leverage it where possible. One important point is
that a trace point should store raw data, not a formatted string.
This patch includes two parts of work:
1) Refactor the gvt_command trace definition, including:
o only store raw trace data.
o use __dynamic_array() to declare a variable size buffer.
o use __print_array() to format raw cmd data.
o rename vm_id as vgpu_id.
2) Improve the trace invoking, including:
o remove the cycles calculation for handler. We can get this data
by any perf tool.
o do not make a backup for raw cmd data which just doesn't make sense.
With this patch, the trace has no overhead when it is not enabled, and
we follow the standard trace style now.
The final output example:
gvt workload 0-211 [000] ...1 120.555964: gvt_command: vgpu1 ring 0: buf_type 0, ip_gma e161e880, raw cmd {0x4000000}
gvt workload 0-211 [000] ...1 120.556014: gvt_command: vgpu1 ring 0: buf_type 0, ip_gma e161e884, raw cmd {0x7a000004,0x1004000,0xe1511018,0x0,0x7d,0x0}
gvt workload 0-211 [000] ...1 120.556062: gvt_command: vgpu1 ring 0: buf_type 0, ip_gma e161e89c, raw cmd {0x7a000004,0x140000,0x0,0x0,0x0,0x0}
gvt workload 0-211 [000] ...1 120.556110: gvt_command: vgpu1 ring 0: buf_type 0, ip_gma e161e8b4, raw cmd {0x10400002,0xe1511018,0x0,0x7d}
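For reference, a condensed sketch of a trace event in this style (field
names assumed; the point is raw data plus __dynamic_array/__print_array,
with all formatting done at read-out time):

  TRACE_EVENT(gvt_command,
  	TP_PROTO(u8 vgpu_id, u8 ring_id, u32 ip_gma, u32 *cmd_va,
  		 u32 cmd_len, u32 buf_type),
  	TP_ARGS(vgpu_id, ring_id, ip_gma, cmd_va, cmd_len, buf_type),

  	TP_STRUCT__entry(
  		__field(u8, vgpu_id)
  		__field(u8, ring_id)
  		__field(u32, ip_gma)
  		__field(u32, buf_type)
  		__field(u32, cmd_len)
  		__dynamic_array(u32, raw_cmd, cmd_len)
  	),

  	TP_fast_assign(
  		__entry->vgpu_id = vgpu_id;
  		__entry->ring_id = ring_id;
  		__entry->ip_gma = ip_gma;
  		__entry->buf_type = buf_type;
  		__entry->cmd_len = cmd_len;
  		/* store the raw command words, no formatting here */
  		memcpy(__get_dynamic_array(raw_cmd), cmd_va,
  		       cmd_len * sizeof(*cmd_va));
  	),

  	TP_printk("vgpu%d ring %d: buf_type %u, ip_gma %08x, raw cmd %s",
  		  __entry->vgpu_id, __entry->ring_id, __entry->buf_type,
  		  __entry->ip_gma,
  		  __print_array(__get_dynamic_array(raw_cmd),
  				__entry->cmd_len, 4))
  );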
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Backmerge tag 'v4.12-rc3' into drm-next
Linux 4.12-rc3
Daniel has requested this for some drm-intel-next work.
With this workaround enabled, a GPU hang can be observed on Gen9. Since
the host side currently doesn't have this workaround, disable it on the
GVT side.
v2:
- Fix indent error.(Zhenyu)
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There is no need to switch vGPU if the next vGPU is the same as the
current one; switching anyway causes a performance drop in some cases.
v2: correct the comments.
Signed-off-by: Ping Gao <ping.a.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
There is no need to restore the in-context MMIO on SCHEDULE_OUT.
Sometimes restoring the in-context MMIO leads to an observable GPU
hang, so remove the in-context MMIO restore.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Since unifying ringbuffer/execlist submission to use
engine->pin_context, we ensure that the intel_ring is available before
we start constructing the request. We can therefore move the assignment
of the request->ring to the central i915_gem_request_alloc() and not
require it in every engine->request_alloc() callback. Another small step
towards simplification (of the core, but at a cost of handling error
pointers in less important callers of engine->pin_context).
v2: Rearrange a few branches to reduce impact of PTR_ERR() on gcc's code
generation.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170504093308.4137-1-chris@chris-wilson.co.uk
Merge tag 'tags/drm-for-v4.12' into drm-intel-next-queued
Backmerge the main drm-next pull to sync up.
Chris also pointed out that
commit ade0b0c965
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Sat Apr 22 09:15:37 2017 +0100
drm/i915: Confirm the request is still active before adding it to the await
is double-applied in the git merge, so make sure we get this right.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
drm/i915 and gvt fixes for drm-next/v4.12
* tag 'drm-intel-next-fixes-2017-04-27' of git://anongit.freedesktop.org/git/drm-intel:
drm/i915: Confirm the request is still active before adding it to the await
drm/i915: Avoid busy-spinning on VLV_GLTC_PW_STATUS mmio
drm/i915/selftests: Allocate inode/file dynamically
drm/i915: Fix system hang with EI UP masked on Haswell
drm/i915: checking for NULL instead of IS_ERR() in mock selftests
drm/i915: Perform link quality check unconditionally during long pulse
drm/i915: Fix use after free in lpe_audio_platdev_destroy()
drm/i915: Use the right mapping_gfp_mask for final shmem allocation
drm/i915: Make legacy cursor updates more unsynced
drm/i915: Apply a cond_resched() to the saturated signaler
drm/i915: Park the signaler before sleeping
drm/i915/gvt: fix a bounds check in ring_id_to_context_switch_event()
drm/i915/gvt: Fix PTE write flush for taking runtime pm properly
drm/i915/gvt: remove some debug messages in scheduler timer handler
drm/i915/gvt: add mmio init for virtual display
drm/i915/gvt: use directly assignment for structure copying
drm/i915/gvt: remove redundant ring id check which cause significant CPU misprediction
drm/i915/gvt: remove redundant platform check for mocs load/restore
drm/i915/gvt: Align render mmio list to cacheline
drm/i915/gvt: cleanup some too chatty scheduler message
Pre-calculate engine context size based on engine class and device
generation and store it in the engine instance.
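The shape of the helper is roughly as follows (sketch; the size defines
and the class handling are illustrative, not the exact table):

  static u32 __intel_engine_context_size(struct drm_i915_private *dev_priv,
  				       u8 class)
  {
  	switch (class) {
  	case RENDER_CLASS:
  		/* render context size varies with the hardware generation */
  		switch (INTEL_GEN(dev_priv)) {
  		case 9:
  			return GEN9_LR_CONTEXT_RENDER_SIZE;
  		case 8:
  			return GEN8_LR_CONTEXT_RENDER_SIZE;
  		default:
  			return HSW_CXT_TOTAL_SIZE;
  		}
  	default:
  		/* non-render engines share one execlist context size */
  		return GEN8_LR_CONTEXT_OTHER_SIZE;
  	}
  }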
v2:
- Squash and get rid of hw_context_size (Chris)
v3:
- Move after MMIO init for probing on Gen7 and 8 (Chris)
- Retained rounding (Tvrtko)
v4:
- Rebase for deferred legacy context allocation
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: intel-gvt-dev@lists.freedesktop.org
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
gvt-next-fixes-2017-04-20
- some code optimization from Changbin
- debug message cleanup after QoS merge
- misc fixes for display mmio init, reset vgpu warning, etc.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>