LRC lifetime is well defined, so we can cache the page pointing
to the object backing store in the context, in order to avoid
walking over the object's SG page list from interrupt context
without the big lock held.
v2: Also cache the mapping. (Chris Wilson)
v3: Unmap on the error path.
v4: No need to cache the page. (Chris Wilson)
v5: No need to dirty the page on unpin. (Chris Wilson)
v6: kmap() cannot fail and use kmap_to_page to simplify unpin.
(Chris Wilson)
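For illustration, the resulting pin/unpin flow looks roughly like
this (a sketch only; field names such as lrc_reg_state are
illustrative and may not match the final patch exactly):
/* pin: map the context state page once and cache the mapping */
vaddr = kmap(i915_gem_object_get_page(ctx_obj, LRC_STATE_PN));
ctx->engine[ring->id].lrc_reg_state = vaddr;
/* irq/submission paths then use the cached mapping directly,
 * without walking the SG page list or taking struct_mutex */
/* unpin: kmap() cannot fail, and kmap_to_page() recovers the page */
kunmap(kmap_to_page(ctx->engine[ring->id].lrc_reg_state));
ctx->engine[ring->id].lrc_reg_state = NULL;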
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1452877965-32042-1-git-send-email-tvrtko.ursulin@linux.intel.com
The LRC code was calling GEM APIs like i915_gem_obj_ggtt_offset
from places where struct_mutex cannot be grabbed (irq handlers).
To avoid that, this patch caches some interesting bits and values
in the engine and context structures.
Some usages are also removed where they are not needed, like a
few asserts which are either impossible or have already been
checked during engine initialization.
A side benefit is that interrupt handlers and command submission
stop evaluating invariant conditionals, like which Gen we are
running on, on every interrupt and every command submitted.
This patch deals with logical ring context id and descriptors
while subsequent patches will deal with the remaining issues.
v2:
* Cache the VMA instead of the address. (Chris Wilson)
* Incorporate Dave Gordon's good comments and function name.
v3:
* Extract ctx descriptor template to a function and group
functions dealing with ctx descriptor & co together near
top of the file. (Dave Gordon)
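Roughly, the cached descriptor ends up being built like this (a
sketch; names are illustrative and the template bits are
abbreviated):
/* engine init: evaluate the Gen-dependent, invariant bits exactly once */
ring->ctx_desc_template = GEN8_CTX_VALID | GEN8_CTX_PRIVILEGE;
/* (plus the remaining platform-invariant descriptor bits) */
/* context pin: combine with per-context values and cache the result */
ctx->engine[ring->id].lrc_desc = ring->ctx_desc_template |
                                 i915_gem_obj_ggtt_offset(ctx_obj) |
                                 (u64)ctx_id << GEN8_CTX_ID_SHIFT;
/* interrupt handlers and command submission only read the cached value */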
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Dave Gordon <david.s.gordon@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1452870629-13830-1-git-send-email-tvrtko.ursulin@linux.intel.com
Add a common function to return "on" or "off" string based on the
argument, and drop the local versions of it.
This is the onoff version of
commit 42a8ca4cb4
Author: Jani Nikula <jani.nikula@intel.com>
Date: Thu Aug 27 16:23:30 2015 +0300
drm/i915: add yesno utility function
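The helper mirrors yesno() and is trivial; sketched:
static inline const char *onoff(bool v)
{
        return v ? "on" : "off";
}
/* callers then use e.g. DRM_DEBUG_KMS("... %s\n", onoff(enabled)); */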
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1452768814-29787-1-git-send-email-jani.nikula@intel.com
If we go into suspend with an unclaimed access already detected,
it would be nice to catch the offending access on the next
suspend path. So instead of just notifying about it, arm the
unclaimed mmio checks on the suspend side.
We want to keep the asymmetry on resume: if the access happened
on the resume path, it was not the driver that was responsible,
so there is no point in arming the mmio debug checks there.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1452261080-6979-2-git-send-email-mika.kuoppala@intel.com
We have been doing the unclaimed register access check in normal
(mmio_debug=0) mode once per write. This increases the probability
of finding the exact sequence where the bad access happened, but
it also adds a burden to each write.
As we have mmio_debug available for more fine-grained analysis,
give up the accuracy of catching the bad access at its first
occurrence: do one-shot detection and arming of mmio_debug in
hangcheck and in modeset instead. This removes the write path
performance burden.
v2: Remove gratuitous DRM_DEBUG and return value, comments (Chris)
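A sketch of what the hangcheck/modeset hook does (function and
variable names are illustrative):
/* one-shot: report any unclaimed access seen since the last check and
 * arm mmio_debug so the following accesses get checked precisely */
if (check_for_unclaimed_mmio(dev_priv) && !i915.mmio_debug) {
        DRM_DEBUG("Unclaimed register detected, "
                  "enabling oneshot unclaimed register reporting\n");
        i915.mmio_debug++;
}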
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450250808-14864-1-git-send-email-mika.kuoppala@intel.com
Currently the interrupt code is the only place checking for
unclaimed register access before the actual register macros that
use the same functionality. Rename the function and make it
return bool so that the context of the possible error message is
clear on the caller side. The motivation is to allow the use of
unclaimed detection in arbitrary places.
v2: rebase, s/access/mmio, s/dev/dev_priv
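A sketch of the renamed helper:
static bool check_for_unclaimed_mmio(struct drm_i915_private *dev_priv)
{
        if (__raw_i915_read32(dev_priv, FPGA_DBG) & FPGA_DBG_RM_NOCLAIM) {
                /* clear the sticky bit; the caller decides what (if
                 * anything) to print about it */
                __raw_i915_write32(dev_priv, FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
                return true;
        }
        return false;
}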
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450189512-30360-2-git-send-email-mika.kuoppala@intel.com
In addition to calculating final watermarks, let's also pre-calculate a
set of intermediate watermark values at atomic check time. These
intermediate watermarks are a combination of the watermarks for the old
state and the new state; they should satisfy the requirements of both
states which means they can be programmed immediately when we commit the
atomic state (without waiting for a vblank). Once the vblank does
happen, we can then re-program watermarks to the more optimal final
value.
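The core of the intermediate calculation, sketched with
illustrative field names: each level must be safe for both the old
(active) and new (optimal) configuration, so take the per-level
maxima and only keep a level enabled if both agree:
for (level = 0; level <= max_level; level++) {
        intermediate->wm[level].pri_val = max(active->wm[level].pri_val,
                                              optimal->wm[level].pri_val);
        intermediate->wm[level].spr_val = max(active->wm[level].spr_val,
                                              optimal->wm[level].spr_val);
        intermediate->wm[level].enable = active->wm[level].enable &&
                                         optimal->wm[level].enable;
}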
v2: Significant rebasing/rewriting.
v3:
- Move 'need_postvbl_update' flag to CRTC state (Daniel)
- Don't forget to check intermediate watermark values for validity
(Maarten)
- Don't do async watermark optimization; just do it at the end of the
atomic transaction, after waiting for vblanks. We do want it to be
async eventually, but adding that now will cause more trouble for
Maarten's in-progress work. (Maarten)
- Don't allocate space in crtc_state for intermediate watermarks on
platforms that don't need it (gen9+).
- Move WaCxSRDisabledForSpriteScaling:ivb into intel_begin_crtc_commit
now that ilk_update_wm is gone.
v4:
- Add a wm_mutex to cover updates to intel_crtc->active and the
need_postvbl_update flag. Since we don't have async yet it isn't
terribly important yet, but might as well add it now.
- Change interface to program watermarks. Platforms will now expose
.initial_watermarks() and .optimize_watermarks() functions to do
watermark programming. These should lock wm_mutex, copy the
appropriate state values into intel_crtc->active, and then call
the internal program watermarks function.
v5:
- Skip intermediate watermark calculation/check during initial hardware
readout since we don't trust the existing HW values (and don't have
valid values of our own yet).
- Don't try to call .optimize_watermarks() on platforms that don't have
atomic watermarks yet. (Maarten)
v6:
- Rebase
v7:
- Further rebase
v8:
- A few minor indentation and line length fixes
v9:
- Yet another rebase since Maarten's patches reworked a bunch of the
code (wm_pre, wm_post, etc.) that this was previously based on.
v10:
- Move wm_mutex to dev_priv to protect against racing commits against
disjoint CRTC sets. (Maarten)
- Drop unnecessary clearing of cstate->wm.need_postvbl_update (Maarten)
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1452108870-24204-1-git-send-email-matthew.d.roper@intel.com
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Although we can do a good job of reading out hardware state, the
graphics firmware may have programmed the watermarks in a creative way
that doesn't match how i915 would have chosen to program them. We
shouldn't trust the firmware's watermark programming, but should rather
re-calculate how we think WM's should be programmed and then shove those
values into the hardware.
We can do this pretty easily by creating a dummy top-level state,
running it through the check process to calculate all the values, and
then just programming the watermarks for each CRTC.
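In outline (a sketch assuming the atomic helpers and the i915
check entry point):
drm_modeset_lock(&dev->mode_config.connection_mutex, ctx);
state = drm_atomic_helper_duplicate_state(dev, ctx);
ret = intel_atomic_check(dev, state);
if (WARN_ON(ret))       /* readout or this platform's wm code is broken */
        goto fail;
for_each_crtc_in_state(state, crtc, cstate, i)
        dev_priv->display.initial_watermarks(to_intel_crtc_state(cstate));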
v2: Move watermark sanitization after our BIOS fb reconstruction; the
watermark calculations that we do here need to look at pstate->fb,
which isn't setup yet in intel_modeset_setup_hw_state(), even
though we have an enabled & visible plane.
v3:
- Don't move 'active = optimal' watermark assignment; we just undo
that change in the next patch anyway. (Ville)
- Move atomic helper locking fix to separate patch. (Maarten)
v4:
- Grab connection_mutex before calling atomic helper to duplicate
state. The connector loop inside the helper will throw a WARN
if we don't hold something to protect the connector list (and the
helper itself doesn't try to lock the list).
- Make failure to calculate watermarks for inherited state a WARN()
since it probably indicates a serious problem in either our state
readout code or our watermark code for this platform.
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
On Skylake, when calculating plane visibility with the crtc in
DPMS off mode, the real cdclk may be different from what it would
be if the crtc were active. This may result in a
WARN_ON(cdclk < crtc_clock) from skl_max_scale. The fix is to keep
an atomic_cdclk that holds the value cdclk would have if all
crtcs were active.
This is required to get the same calculations done correctly regardless
of dpms mode.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1447945645-32005-12-git-send-email-maarten.lankhorst@linux.intel.com
Parallel modesets are still not allowed, but this will allow updating
a different crtc during a modeset if the clock is not changed.
Additionally when all pipes are DPMS off the cdclk will be lowered
to the minimum allowed.
Changes since v1:
- Add dev_priv->active_crtcs for tracking which crtcs are active.
- Rename min_cdclk to min_pixclk and move to dev_priv.
- Add an active_crtcs mask which is updated atomically.
- Add intel_atomic_state->modeset which is set on modesets.
- Commit new pixclk/active_crtcs right after state swap.
Changes since v2:
- Make the changes related to max_pixel_rate calculations more readable.
Changes since v3:
- Add cherryview and missing WARN_ON to readout.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Using __stringify(x) instead of #x adds support for macros as
a parameter and compile-time concatenation reduces the runtime
overhead.
Slightly increases the .text size but should not matter.
v2:
- Define I915_STATE_WARN_ON through I915_STATE_WARN
(Bikeshed inspiration by Chris)
v3:
- More specific commit message
v4:
- Do not directly pass an arbitrary string as the format; instead
guard with "%s" (Dave)
Cc: Rob Clark <robdclark@gmail.com>
Cc: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v3)
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450441647-23924-3-git-send-email-joonas.lahtinen@linux.intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Otherwise usage in the i915 debug macros yields problems due to
i915_drv.h <-> i915_trace.h <-> intel_drv.h include loops.
v2:
- Document not-so-obvious need for linux/cache.h (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450436898-20408-2-git-send-email-joonas.lahtinen@linux.intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
commit 344df9809f ("drm/i915/skl: Disable coarse power gating up until F0")
failed to take into account that the same workaround is used in guc
when forcewake is sampled.
Wrap the condition check inside a macro and use it in both places
to fix the guc side scope.
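The shape of the fix, sketched (the platform/revision condition of
the workaround is abbreviated here):
#define NEEDS_WaRsDisableCoarsePowerGating(dev) \
        (IS_SKL_GT3(dev) || IS_SKL_GT4(dev))    /* abbreviated condition */
/* used when programming coarse power gating ... */
if (!NEEDS_WaRsDisableCoarsePowerGating(dev))
        I915_WRITE(GEN9_PG_ENABLE,
                   GEN9_RENDER_PG_ENABLE | GEN9_MEDIA_PG_ENABLE);
/* ... and in the GuC setup code that samples forcewake, so both
 * sites now use the exact same condition */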
Cc: Dave Gordon <david.s.gordon@intel.com>
Cc: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450286318-6854-1-git-send-email-mika.kuoppala@intel.com
Limit busywaiting only to the request currently being processed by the
GPU. If the request is not currently being processed by the GPU, there
is a very low likelihood of it being completed within the 2 microsecond
spin timeout and so we will just be wasting CPU cycles.
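Sketch of the gate added in front of the busywait (the helper name
is illustrative):
/* only spin if the GPU has actually started processing this request;
 * otherwise the 2us spin is almost certainly wasted */
if (!i915_gem_request_started(req))
        return -EAGAIN;         /* fall back to sleeping on the interrupt */
/* ... existing busywait loop polling for seqno completion ... */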
v2: Check for logical inversion when rebasing - we were incorrectly
checking for this request being active, and instead busywaiting for
when the GPU was not yet processing the request of interest.
v3: Try another colour for the seqno names.
v4: Another colour for the function names.
v5: Remove the forced coherency when checking for the active request. On
reflection and plenty of recent experimentation, the issue is not a
cache coherency problem - but an irq/seqno ordering problem (timing issue).
Here, we do not need the w/a to force ordering of the read with an
interrupt.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Eero Tamminen <eero.t.tamminen@intel.com>
Cc: "Rantala, Valtteri" <valtteri.rantala@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1449833608-22125-4-git-send-email-chris@chris-wilson.co.uk
As we mark the preallocated objects as bound, we should also flag them
correctly as being map-and-fenceable (if appropriate!) so that later
users do not get confused and try and rebind the pinned vma in order to
get a map-and-fenceable binding.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Goel, Akash" <akash.goel@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: drm-intel-fixes@lists.freedesktop.org
Link: http://patchwork.freedesktop.org/patch/msgid/1448029000-10616-1-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In some cases we want to check whether we hold an RPM wakelock reference
for the whole duration of a sequence. To achieve this add a new RPM
atomic sequence counter that we increment any time the wakelock refcount
drops to zero. Check whether the sequence number stays the same during
the atomic section and that we hold the wakelock at the beginning of the
section.
Motivated by Chris.
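A sketch of the helper pair (field names are illustrative); note
the sequence number is sampled before the wakelock assert, per v4,
to avoid the race:
static inline int assert_rpm_atomic_begin(struct drm_i915_private *dev_priv)
{
        int seq = atomic_read(&dev_priv->pm.atomic_seq);
        assert_rpm_wakelock_held(dev_priv);
        return seq;
}
static inline void
assert_rpm_atomic_end(struct drm_i915_private *dev_priv, int begin_seq)
{
        WARN_ONCE(atomic_read(&dev_priv->pm.atomic_seq) != begin_seq,
                  "HW access outside of an RPM atomic section\n");
}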
v2-v3:
- unchanged
v4:
- swap the order of atomic_read() and assert_rpm_wakelock_held() in
assert_rpm_atomic_begin() to avoid race
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v3)
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450203038-5150-10-git-send-email-imre.deak@intel.com
Atm, we assert that the device is not suspended until the point when the
device is truly put to a suspended state. This is fine, but we can catch
more problems if we check that RPM refcount is non-zero. After that one
drops to zero we shouldn't access the device any more, even if the actual
device suspend may be delayed. Change assert_rpm_wakelock_held()
accordingly to check for a non-zero RPM refcount in addition to the
current device-not-suspended check.
For the new asserts to work we need to annotate every place explicitly in
the code where we expect that the device is powered. The places where we
only assume this, but may not hold an RPM reference:
- driver load
We assume the device to be powered until we enable RPM. Make this
explicit by taking an RPM reference around the load function.
- system and runtime suspend/resume handlers
These handlers are called when the RPM reference becomes 0 and know the
exact point after which the device can get powered off. Disable the
RPM-reference-held check for their duration.
- the IRQ, hangcheck and RPS work handlers
These handlers are flushed in the system/runtime suspend handler
before the device is powered off, so it's guaranteed that they won't
run while the device is powered off even though they don't hold any
RPM reference. Disable the RPM-reference-held check for their duration.
In all these cases we still check that the device is not suspended.
These explicit annotations also have the positive side effect of
documenting our assumptions better.
This caught additional WARNs from the atomic modeset path, those should
be fixed separately.
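A simplified sketch of the assert and the annotation helpers
(field names are illustrative):
static inline void
assert_rpm_wakelock_held(struct drm_i915_private *dev_priv)
{
        assert_rpm_device_not_suspended(dev_priv);
        WARN_ONCE(!atomic_read(&dev_priv->pm.wakeref_count),
                  "RPM wakelock ref not held during HW access");
}
/* for suspend/resume handlers and the irq/hangcheck/RPS works that
 * legitimately run without holding their own wakeref */
static inline void
disable_rpm_wakeref_asserts(struct drm_i915_private *dev_priv)
{
        atomic_inc(&dev_priv->pm.wakeref_count);
}
static inline void
enable_rpm_wakeref_asserts(struct drm_i915_private *dev_priv)
{
        atomic_dec(&dev_priv->pm.wakeref_count);
}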
v2:
- remove the redundant HAS_RUNTIME_PM check (moved to patch 1) (Ville)
v3:
- use a new dedicated RPM wakelock refcount to also catch cases where
our own RPM get/put functions were not called (Chris)
- assert also that the new RPM wakelock refcount is 0 in the RPM
suspend handler (Chris)
- change the assert error message to be more meaningful (Chris)
- prevent false assert errors and check that the RPM wakelock is 0 in
the RPM resume handler too
- prevent false assert errors in the hangcheck work too
- add a device not suspended assert check to the hangcheck work
v4:
- rename disable/enable_rpm_asserts to disable/enable_rpm_wakeref_asserts
and wakelock_count to wakeref_count
- disable the wakeref asserts in the IRQ handlers and RPS work too
- update/clarify commit message
v5:
- mark places we plan to change to use proper RPM refcounting with
separate DISABLE/ENABLE_RPM_WAKEREF_ASSERTS aliases (Chris)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1450227139-13471-1-git-send-email-imre.deak@intel.com
The RVDA and RVDS (raw VBT data address and size) fields of the ASLE
mailbox may specify an alternate location for VBT instead of mailbox #4.
Use the alternate location if available and valid, falling back to
mailbox #4 otherwise.
v2: Update debug logging (Ville)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Mika Kahola <mika.kahola@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450178280-28020-1-git-send-email-jani.nikula@intel.com
In the future the VBT might not be in mailbox #4 of the ACPI OpRegion,
thus unavailable in i915_opregion, so add a separate file for the VBT.
v2: Drop the locking as unneeded (Chris)
v3: Rebase
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450178232-27780-1-git-send-email-jani.nikula@intel.com
Make the validation function a boolean operating on a buffer of given
size, removing the extra pointer dances.
Move the OpRegion based VBT validation to intel_opregion_setup(), only
initializing opregion->vbt if it's valid.
v2: move logging about valid VBT to opregion setup too (Ville)
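The validation boils down to something like this (a sketch; the
real checks also cover the BDB header and the sizes embedded in
the headers):
static bool intel_bios_is_valid_vbt(const void *buf, size_t size)
{
        const struct vbt_header *vbt = buf;
        if (!vbt || size < sizeof(struct vbt_header))
                return false;
        if (memcmp(vbt->signature, "$VBT", 4))
                return false;
        if (vbt->bdb_offset + sizeof(struct bdb_header) > size)
                return false;
        return true;
}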
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1450178175-27420-1-git-send-email-jani.nikula@intel.com
Here are the patchset to add get_eld op to audio component for
communicating more directly between i915 and HD-audio.
Currently, the HDMI/DP audio status and ELD are notified and obtained
via the hardware-level communication over HD-audio unsolicited event
and verbs, although the graphics driver holds exactly the same
information. As we already have a notification via audio component,
this is another step forward; namely, the audio driver may fetch
directly the audio status and ELD via the new component op.
The commits are based on Dave's latest drm-next branch.
Merge tag 'drm-i915-get-eld' of tiwai/sound into drm-intel-next-queued
Add get_eld audio component for i915/HD-audio
Currently, the HDMI/DP audio status and ELD are notified and obtained
via the hardware-level communication over HD-audio unsolicited event
and verbs, although the graphics driver holds exactly the same
information. As we already have a notification via audio component,
this is another step forward; namely, the audio driver may fetch
directly the audio status and ELD via the new component op.
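The new op in the audio component ops table has roughly this shape
(sketch):
/*
 * get_eld: copy the ELD bytes for the given port into buf and report
 * via *enabled whether the audio output is currently active; returns
 * the ELD size (or a negative error code).
 */
int (*get_eld)(struct device *dev, int port, bool *enabled,
               unsigned char *buf, int max_bytes);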
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
In various places, a single page of a (regular) GEM object is mapped into
CPU address space and updated. In each such case, either the page or
the object should be marked dirty, to ensure that the modifications are
not discarded if the object is evicted under memory pressure.
The typical sequence is:
va = kmap_atomic(i915_gem_object_get_page(obj, pageno));
*(va+offset) = ...
kunmap_atomic(va);
Here we introduce i915_gem_object_get_dirty_page(), which performs the
same operation as i915_gem_object_get_page() but with the side-effect
of marking the returned page dirty in the pagecache. This will ensure
that if the object is subsequently evicted (due to memory pressure),
the changes are written to backing store rather than discarded.
Note that it works only for regular (shmfs-backed) GEM objects, but (at
least for now) those are the only ones that are updated in this way --
the objects in question are contexts and batchbuffers, which are always
shmfs-backed.
Separate patches deal with the cases where whole objects are (or may
be) dirtied.
v3: Mark two more pages dirty in the page-boundary-crossing
cases of the execbuffer relocation code [Chris Wilson]
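Sketch of the new helper:
struct page *
i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, int n)
{
        struct page *page;
        /* only default (shmfs-backed) objects have per-page dirty tracking */
        if (WARN_ON(obj->ops != &i915_gem_object_ops))
                return NULL;
        page = i915_gem_object_get_page(obj, n);
        set_page_dirty(page);
        return page;
}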
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1449773486-30822-2-git-send-email-david.s.gordon@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch adds a reverse mapping from a digital port number to
intel_encoder object containing the corresponding intel_digital_port.
It simplifies the query of the encoder a lot.
Note that, even if it's a valid digital port, dig_port_map[] might
still point to NULL -- usually this implies a DP MST port. Due to
this fact, the NULL check in each place has no WARN_ON() and just
skips the port. If the situation changes in the future, we might
introduce a WARN_ON() for a stricter check.
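A typical lookup on the notification path then becomes (sketch):
struct intel_encoder *encoder;
/* reverse map: digital port number -> encoder owning that port */
encoder = dev_priv->dig_port_map[port];
if (!encoder)
        return;         /* most likely a DP MST port, just skip it */
intel_dig_port = enc_to_dig_port(&encoder->base);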
Signed-off-by: Takashi Iwai <tiwai@suse.de>
The cherryview device shares many characteristics with the valleyview
device. When support was added to the driver for cherryview, the
corresponding device info structure included .is_valleyview = 1.
This is not correct and leads to some confusion.
This patch changes .is_valleyview to .is_cherryview in the cherryview
device info structure and simplifies the IS_CHERRYVIEW macro.
Then where appropriate, instances of IS_VALLEYVIEW are replaced with
IS_VALLEYVIEW || IS_CHERRYVIEW or equivalent.
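The macro side of the change, roughly:
/* before: CHV was flagged as VLV and distinguished by gen */
#define IS_CHERRYVIEW(dev)      (INTEL_INFO(dev)->is_valleyview && IS_GEN8(dev))
/* after: each platform carries its own flag */
#define IS_CHERRYVIEW(dev)      (INTEL_INFO(dev)->is_cherryview)
/* shared paths now spell out both platforms explicitly */
if (IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev))
        setup_vlv_chv_specifics(dev);   /* hypothetical shared path */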
v2: Use IS_VALLEYVIEW || IS_CHERRYVIEW instead of defining a new macro.
Also add followup patches to fix issues discovered during the first
review. (Ville)
v3: Fix some style issues and one gen check. Remove CRT related changes
as CRT is not supported on CHV. (Imre, Ville)
v4: Make a few more optimizations. (Ville)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Wayne Boyer <wayne.boyer@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1449692975-14803-1-git-send-email-wayne.boyer@intel.com
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Userspace can pass in an offset that it presumes the object is located
at. The kernel will then do its utmost to fit the object into that
location. The assumption is that userspace is handling its own object
locations (for example along with full-ppgtt) and that the kernel will
rarely have to make space for the user's requests.
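From userspace the new ABI is used roughly like this (sketch;
availability can be probed via I915_PARAM_HAS_EXEC_SOFTPIN):
struct drm_i915_gem_exec_object2 obj = {};
obj.handle = bo_handle;
obj.offset = desired_gtt_offset;        /* where userspace wants the object */
obj.flags  = EXEC_OBJECT_PINNED;
/* offsets above 4GiB additionally require
 * EXEC_OBJECT_SUPPORTS_48B_ADDRESS, otherwise the kernel returns -EINVAL */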
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
v2: Fixed incorrect eviction found by Michal Winiarski - fix suggested by Chris
Wilson. Fixed incorrect error paths causing crash found by Michal Winiarski.
(Not published externally)
v3: Rebased because of trivial conflict in object_bind_to_vm. Fixed eviction
to allow eviction of soft-pinned objects when another soft-pinned object used
by a subsequent execbuffer overlaps reported by Michal Winiarski.
(Not published externally)
v4: Moved soft-pinned objects to the front of ordered_vmas so that they are
pinned first after an address conflict happens to avoid repeated conflicts in
rare cases (Suggested by Chris Wilson). Expanded comment on
drm_i915_gem_exec_object2.offset to cover this new API.
v5: Added I915_PARAM_HAS_EXEC_SOFTPIN parameter for detecting this capability
(Kristian). Added check for multiple pinnings on eviction (Akash). Made sure
buffers are not considered misplaced without the user specifying
EXEC_OBJECT_SUPPORTS_48B_ADDRESS. User must assume responsibility for any
addressing workarounds. Updated object2.offset field comment again to clarify
NO_RELOC case (Chris). checkpatch cleanup.
v6: Trivial rebase on latest drm-intel-nightly
v7: Catch attempts to pin above the max virtual address size and return
EINVAL (Tvrtko). Decouple EXEC_OBJECT_SUPPORTS_48B_ADDRESS and
EXEC_OBJECT_PINNED flags, user must pass both flags in any attempt to pin
something at an offset above 4GB (Chris, Daniel Vetter).
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Akash Goel <akash.goel@intel.com>
Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Cc: Michal Winiarski <michal.winiarski@intel.com>
Cc: Zou Nanhai <nanhai.zou@intel.com>
Cc: Kristian Høgsberg <hoegsberg@gmail.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Acked-by: PDT
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449575707-20933-1-git-send-email-thomas.daniel@intel.com
Let's introduce ULT and ULX Kabylake definitions and start
using them for proper DDI buffer translations.
v2: Remove extra white space. (Paulo)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
This reverts commit 6d65ba943a.
Mika Kuoppala traced down a use-after-free crash in module unload to
this commit, because ring->last_context is leaked beyond when the
context gets destroyed. Mika submitted a quick fix to patch that up in
the context destruction code, but that's too much of a hack.
The right fix is instead for the ring to hold a full reference onto
its last context, like we do for legacy contexts.
Since this is causing a regression in BAT it gets reverted before we
can close this.
Cc: Nick Hoath <nicholas.hoath@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Dai <yu.dai@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93248
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Use the first retired request on a new context to unpin
the old context. This ensures that the hw context remains
bound until it has been written back to by the GPU.
Now that the context is pinned until later in the request/context
lifecycle, it no longer needs to be pinned from context_queue to
retire_requests.
This fixes an issue with GuC submission where the GPU might not
have finished writing back the context before it is unpinned. This
results in a GPU hang.
v2: Moved the new pin to cover GuC submission (Alex Dai)
Moved the new unpin to request_retire to fix coverage leak
v3: Added switch to default context if freeing a still pinned
context just in case the hw was actually still using it
v4: Unwrapped context unpin to allow calling without a request
v5: Only create a switch to idle context if the ring doesn't
already have a request pending on it (Alex Dai)
Rename unsaved to dirty to avoid double negatives (Dave Gordon)
Changed _no_req postfix to __ prefix for consistency (Dave Gordon)
Split out per engine cleanup from context_free as it
was getting unwieldy
Corrected locking (Dave Gordon)
v6: Removed some bikeshedding (Mika Kuoppala)
Added explanation of the GuC hang that this fixes (Daniel Vetter)
v7: Removed extra per request pinning from ring reset code (Alex Dai)
Added forced ring unpin/clean in error case in context free (Alex Dai)
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Issue: VIZ-4277
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Dai <yu.dai@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Alex Dai <yu.dai@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Directly call intel_fbc_calculate_cfb_size() in the only place that
actually needs it, and use the proper check before removing the stolen
node. IMHO, this change makes our code easier to understand.
v2: Use drm_mm_node_allocated() (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
This was already on my TODO list, and was requested both by Chris and
Ville, for different reasons. The advantages are avoiding a frequent
malloc/free pair, and the locality of having the work structure
embedded in dev_priv. The maximum used memory is also smaller since
previously we could have multiple allocated intel_fbc_work structs at
the same time, and now we'll always have a single one - the one
embedded in dev_priv. Of course, we're now using a little more memory
in the cases where there's nothing scheduled.
The biggest challenge here is to keep everything synchronized the way
it was before.
Currently, when we try to activate FBC, we allocate a new
intel_fbc_work structure. Then later when we conclude we must delay
the FBC activation a little more, we allocate a new intel_fbc_work
struct, and then adjust dev_priv->fbc.fbc_work to point to the new
struct. So when the old work runs - at intel_fbc_work_fn() - it will
check that dev_priv->fbc.fbc_work points to something else, so it does
nothing. Everything is also protected by fbc.lock.
Just cancelling the old delayed work doesn't work because we might
just cancel it after the work function already started to run, but
while it is still waiting to grab fbc.lock. That's why we use the
"dev_priv->fbc.fbc_work == work" check described in the paragraph
above.
So now that we have a single work struct we have to introduce a new
way to synchronize everything. So we're making the work function a
normal work instead of a delayed work, and it will be responsible for
sleeping the appropriate amount of time itself. This way, after it
wakes up it can grab the lock, ask "were we delayed or cancelled?" and
then go back to sleep, enable FBC or give up.
v2:
- Spelling fixes.
- Rebase after changing the patch order.
- Fix ms/jiffies confusion.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
The goal is to call FBC enable/disable only once per modeset, while
activate/deactivate/update will be called multiple times.
The enable() function will be responsible for deciding if a CRTC will
have FBC on it and then it will "lock" FBC on this CRTC: it won't be
possible to change FBC's CRTC until disable(). With this, all checks
and resource acquisition that only need to be done once per modeset
can be moved from update() to enable(). And then the update(),
activate() and deactivate() code will also get simpler since they
won't need to worry about the CRTC being changed.
The disable() function will do the reverse operation of enable(). One
of its features is that it should only be called while the pipe is
already off. This guarantees that FBC is stopped and nothing is
using the CFB.
With this, the activate() and deactivate() functions just start and
temporarily stop FBC. They are the ones touching the hardware enable
bit, so HW state reflects dev_priv->crtc.active.
The last function remaining is update(). A lot of times I thought
about renaming update() to activate() or try_to_activate() since it's
called when we want to activate FBC. The thing is that update() may
not only decide to activate FBC, but also deactivate or keep it on the
same state, so I'll leave this name for now.
Moving code to enable() and disable() will also help in case we decide
to move FBC to pipe_config or something else later.
The current patch only puts the very basic code on enable() and
disable(). The next commits will take care of moving more stuff from
update() to the new functions.
v2:
- Rebase.
- Improve commit message (Chris).
v3: Rebase after changing the patch order.
v4: Rebase again after upstream changes.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
The long term goal is to have enable/disable as the higher level
functions and activate/deactivate as the lower level functions, just
like we do for PSR and for the CRTC. This way, we'll run enable and
disable once per modeset, while update, activate and deactivate will
be run many times. With this, we can move the checks and code that
need to run only once per modeset to enable(), making the code simpler
and possibly a little faster.
This patch is just the first step on the conversion: it starts by
converting the current low level functions from enable/disable to
activate/deactivate. This patch by itself has no benefits other than
making review and rebase easier. Please see the next patches for more
details on the conversion.
v2:
- Rebase.
- Improve commit message (Chris).
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
This thing where we need to get the crtc either from the work
structure or the fbc structure itself is confusing and unnecessary.
Set fbc.crtc right when scheduling the enable work so we can always
use it.
The problem is not what gets passed and how to retrieve it. The
problem is that when we're in the other parts of the code we always
have to keep in mind that if FBC is already enabled we have to get the
CRTC from place A, if FBC is scheduled we have to get the CRTC from
place B, and if it's disabled there's no CRTC. Having a single place
to retrieve the CRTC from allows us to treat the "is enabled" and "is
scheduled" cases as the same case, reducing the mistake surface. I
guess I should add this to the commit message.
Besides the immediate advantages, this is also going to make one of
the next commits much simpler. And even later, when we introduce
enable/disable + activate/deactivate, this will be even simpler as
we'll set the CRTC at enable time. So all the
activate/deactivate/update code can just look at the single CRTC
variable regardless of the current state.
v2: Improve commit message (Chris).
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
drm-intel-next-2015-11-20-rebased:
4 weeks because of my vacation, so a bit more:
- final bits of the typesafe register mmio functions (Ville)
- power domain fix for hdmi detection (Imre)
- tons of fixes and improvements to the psr code (Rodrigo)
- refactoring of the dp detection code (Ander)
- complete rework of the dmc loader and dc5/dc6 handling (Imre, Patrik and
others)
- dp compliance improvements from Shubhangi Shrivastava
- stop_machine hack from Chris to fix corruptions when updating GTT ptes on bsw
- lots of fifo underrun fixes from Ville
- big pile of fbc fixes and improvements from Paulo
- fix fbdev failures paths (Tvrtko and Lukas Wunner)
- dp link training refactoring (Ander)
- interruptible prepare_plane for atomic (Maarten)
- basic kabylake support (Deepak&Rodrigo)
- don't leak ringspace on resets (Chris)
drm-intel-next-2015-10-23:
- 2nd attempt at atomic watermarks from Matt, but just prep for now
- fixes all over
* tag 'drm-intel-next-2015-11-20-merged' of git://anongit.freedesktop.org/drm-intel: (209 commits)
drm/i915: Update DRIVER_DATE to 20151120
drm/i915: take a power domain reference while checking the HDMI live status
drm/i915: take a power domain ref only when needed during HDMI detect
drm/i915: Tear down fbdev if initialization fails
async: export current_is_async()
Revert "drm/i915: Initialize HWS page address after GPU reset"
drm/i915: Fix oops caused by fbdev initialization failure
drm/i915: Fix i915_ggtt_view_equal to handle rotation correctly
drm/i915: Stuff rotation params into view union
drm/i915: Drop return value from intel_fill_fb_ggtt_view
drm/i915 : Fix to remove unnecsessary checks in postclose function.
drm/i915: add MISSING_CASE to a few port/aux power domain helpers
drm/i915/ddi: fix intel_display_port_aux_power_domain() after HDMI detect
drm/i915: Remove platform specific *_dp_detect() functions
drm/i915: Don't do edp panel detection in g4x_dp_detect()
drm/i915: Send TP1 TP2/3 even when panel claims no NO_TRAIN_ON_EXIT.
drm/i915: PSR: Don't Skip aux handshake on DP_PSR_NO_TRAIN_ON_EXIT.
drm/i915: Reduce PSR re-activation time for VLV/CHV.
drm/i915: Delay first PSR activation.
drm/i915: Type safe register read/write
...
ironlake_{enable,disable}_display_irq() each just call
ilk_update_display_irq() so let's make them static inlines.
While at it s/ironlake/ilk/ to make things shorter, and a bit more
consistent with the ibx functions.
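I.e. (sketch):
static inline void
ilk_enable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
        ilk_update_display_irq(dev_priv, bits, bits);
}
static inline void
ilk_disable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
        ilk_update_display_irq(dev_priv, bits, 0);
}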
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1448294777-13722-3-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Commit "30c964a drm/i915: Detect virtual south bridge" detects and
handles the southbridge emulated by VMware ESX. Add the ICH9 south
bridge emulated by 'qemu -M q35'.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>