A few odd cases:
- mgag200 somehow had a totally unused drm_dma_handle_t. Remove it.
- i915 still uses the legacy pci dma alloc api, so grows an include.
Everything else fairly standard.
v2: Include "drm_legacy.h" in drm.ko source files for consistency.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
And move a few legacy functions over there to get things started.
It compiles ...
Inspired by a patch from Dave Airlie, but with a split between drm.ko
private legacy functions and stuff used by drivers.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Dave asked me to do the backmerge before sending him the revised pull
request, so here we go. Nothing fancy in the conflicts, just a few
things changed right next to each other.
Conflicts:
drivers/gpu/drm/drm_irq.c
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
I've read INVBL as "invalid backlight" and got mightily confused.
The #defines are already fairly long and we can afford to extend
them a bit more without resulting in ugly code all over.
I'm not sure how useful the complicated bitmask return values of these
functions really are, since no one checks them. But for now let's keep
things as is.
Cc: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
One step closer to dropping all the drm_bus_* code:
Add a driver->set_busid() callback and make all drivers use the generic
helpers. Nouveau is the only driver that uses two different bus-types with
the same drm_driver. This is totally broken if both buses are available on
the same machine (unlikely, but let's be safe). Therefore, we create two
different drivers for each platform during module_init() and set the
set_busid() callback respectively.
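As a rough illustration of the new hook (not the actual diff): a PCI driver
simply points the callback at the generic helper. The .set_busid field and
drm_pci_set_busid are the names introduced by this series; the surrounding
driver struct is a placeholder.

#include <drm/drmP.h>

/* Illustrative only: wiring the generic PCI helper into the new callback. */
static struct drm_driver example_pci_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET,
	.set_busid	 = drm_pci_set_busid,
	/* ... fops, ioctls, load/unload, etc. ... */
};

A platform driver would do the same with the corresponding platform helper,
which is exactly how the nouveau split into two drivers works.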
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Move internal declarations to drm_legacy.h and add drm_legacy_*() prefix
to all legacy functions.
[airlied: add a bit of an explanation to drm_legacy.h]
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
drm-intel-next-2014-08-22:
- basic code for execlist, which is the fancy new cmd submission on gen8. Still
disabled by default (Ben, Oscar Mateo, Thomas Daniel et al)
- remove the useless usage of console_lock for I915_FBDEV=n (Chris)
- clean up relations between ctx and ppgtt
- clean up ppgtt lifetime handling (Michel Thierry)
- various cursor code improvements from Ville
- execbuffer code cleanups and secure batch fixes (Chris)
- prep work for dev -> dev_priv transition (Chris)
- some of the prep patches for the seqno -> request object transition (Chris)
- various small improvements all over
* tag 'drm-intel-next-2014-09-01' of git://anongit.freedesktop.org/drm-intel: (86 commits)
drm/i915: fix suspend/resume for GENs w/o runtime PM support
drm/i915: Update DRIVER_DATE to 20140822
drm: fix plane rotation when restoring fbdev configuration
drm/i915/bdw: Disable execlists by default
drm/i915/bdw: Enable Logical Ring Contexts (hence, Execlists)
drm/i915/bdw: Document Logical Rings, LR contexts and Execlists
drm/i915/bdw: Print context state in debugfs
drm/i915/bdw: Display context backing obj & ringbuffer info in debugfs
drm/i915/bdw: Display execlists info in debugfs
drm/i915/bdw: Disable semaphores for Execlists
drm/i915/bdw: Make sure gpu reset still works with Execlists
drm/i915/bdw: Don't write PDP in the legacy way when using LRCs
drm/i915: Track cursor changes as frontbuffer tracking flushes
drm/i915/bdw: Help out the ctx switch interrupt handler
drm/i915/bdw: Avoid non-lite-restore preemptions
drm/i915/bdw: Handle context switch events
drm/i915/bdw: Two-stage execlist submit process
drm/i915/bdw: Write the tail pointer, LRC style
drm/i915/bdw: Implement context switching (somewhat)
drm/i915/bdw: Emission of requests with logical rings
...
Conflicts:
drivers/gpu/drm/i915/i915_drv.c
Before sharing common parts between the system and runtime s/r
handlers we WARNed if the runtime s/r handlers were called on GENs that
didn't support RPM. But this WARN is not correct if the same handler is
called from the system s/r path, since that can happen on any platform.
This also broke system s/r on old platforms.
The issue was introduced in
commit 016970beb0
Author: Sagar Kamble <sagar.a.kamble@intel.com>
Date: Wed Aug 13 23:07:06 2014 +0530
v2:
- remove the WARN and depend on the HAS_RUNTIME_PM check in
runtime_suspend/resume instead (Daniel)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82751
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
- Setting dp M2/N2 values plus state checker support (Vandana Kannan)
- chv power well support (Ville)
- DP training pattern 3 support for chv (Ville)
- cleanup of the hsw/bdw ddi pll code, prep work for skl (Damien)
- dsi video burst mode support (Shobhit)
- piles of other chv fixes all over (Ville et al.)
- cleanup of the ddi translation tables setup code (Damien)
- 180 deg rotation support (Ville & Sonika Jindal)
* tag 'drm-intel-next-2014-08-08' of git://anongit.freedesktop.org/drm-intel: (59 commits)
drm/i915: Update DRIVER_DATE to 20140808
drm/i915: No busy-loop wait_for in the ring init code
drm/i915: Add sprite watermark programming for VLV and CHV
drm/i915: Round-up clock and limit drain latency
drm/i915: Generalize drain latency computation
drm/i915: Free pending page flip events at .preclose()
drm/i915: clean up PPGTT checking logic
drm/i915: Polish the chv cmnlane resrt macros
drm/i915: Hack to tie both common lanes together on chv
drm/i915: Add cherryview_update_wm()
drm/i915: Update DDL only for current CRTC
drm/i915: Parametrize VLV_DDL registers
drm/i915: Fill out the FWx watermark register defines
drm: Resetting rotation property
drm/i915: Add rotation property for sprites
drm: Add rotation_property to mode_config
drm/i915: Make intel_plane_restore() return an error
drm/i915: Add 180 degree sprite rotation support
drm/i915: Introduce a for_each_intel_encoder() macro
drm/i915: Demote the DRRS messages to debug messages
...
We still have a few missing bits and pieces to have execlists enabled by
default, e.g. the error capture or the render state initialization, and so
it wouldn't be wise to enable it by default on BDW just yet.
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82740
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The time has come, the Walrus said, to talk of many things.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add theory of operation notes to intel_lrc.c and comments to externally
visible functions.
v2: Add notes on logical ring context creation.
v3: Use kerneldoc.
v4: Integrate it in the DocBook template.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2, v3)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Drop hunk about render ring init function since that's not
yet merged.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This has turned out to be really handy in debug so far.
Update:
Since writing this patch, I've gotten similar code upstream for error
state. I've used it quite a bit in debugfs however, and I'd like to keep
it here at least until preemption is working.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
This patch was accidentally dropped in the first Execlists version, and
it has been very useful indeed. Put it back again, but as a standalone
debugfs file.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
v2: Take the device struct_mutex rather than mode_config mutex for
atomic state capture.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Warn and return if LRCs are not enabled.
v3: Grab the Execlists spinlock (noticed by Daniel Vetter).
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
v4: Lock the struct mutex for atomic state capture
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Up until recently, semaphores weren't enabled in BDW so we didn't care
about them. But then Rodrigo came and enabled them:
commit 521e62e49a
Author: Rodrigo Vivi <rodrigo.vivi@intel.com>
drm/i915: Enable semaphores on BDW
So now we have to explicitly disable them for Execlists until both
features play nicely.
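A minimal sketch of the idea, assuming the usual i915_drv.h context and the
i915.enable_execlists / i915.semaphores module options referenced in this
series (the in-tree function handles more cases, e.g. the "auto" value):

bool i915_semaphore_is_enabled(struct drm_device *dev)
{
	if (INTEL_INFO(dev)->gen < 6)
		return false;

	/* Execlists and semaphores don't play nicely yet. */
	if (i915.enable_execlists)
		return false;

	return i915.semaphores > 0;	/* simplified default handling */
}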
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we reset a ring after a hang, we have to make sure that we clear
out all queued Execlists requests.
v2: The ring is, at this point, already being correctly re-programmed
for Execlists, and the hangcheck counters cleared.
v3: Daniel suggests to drop the "if (execlists)" because the Execlists
queue should be empty in legacy mode (which is true, if we do the
INIT_LIST_HEAD).
v4: Do the pending intel_runtime_pm_put
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is mostly for correctness so that we know we are running the LR
context correctly (that is, the PDPs are contained inside the context
object).
v2: Move the check to inside the enable PPGTT function. The switch
happens in two places: the legacy context switch (that we won't hit
when Execlists are enabled) and the PPGTT enable, which unfortunately
we need. This would look much nicer if the ppgtt->enable was part of
the ring init, where it logically belongs.
v3: Move the check to the start of the enable PPGTT function. None
of the legacy PPGTT enabling is required when using LRCs as the
PPGTT is enabled in the context descriptor and the PDPs are written
in the LRC.
v4: Clarify comment based on review feedback.
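A hedged sketch of the v3 shape of the check, assuming the i915_drv.h context
and the i915.enable_execlists option: with LRCs the PPGTT is enabled through
the context descriptor and the PDPs live in the context image, so the legacy
enable path can bail out immediately.

static int ppgtt_enable_sketch(struct i915_hw_ppgtt *ppgtt)
{
	if (i915.enable_execlists)
		return 0;	/* handled via the context descriptor/LRC */

	/* ... legacy, ring-based PPGTT enabling continues here ... */
	return 0;
}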
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Resolve conflicts with ppgtt_enable rework.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm we may retrain the DP link even if the CRTC is inactive through
HPD work->intel_dp_check_link_status(). This in turn can lock up the PHY
(at least on BYT), since the DP port is disabled.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=81948
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: stable@vger.kernel.org (3.16+)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Atm we may leave eDP VDD enabled during system suspend after the CRTCs
are disabled through an HPD->DPCD read event. So disable VDD during
suspend at a point when no HPDs can occur.
Note that runtime suspend doesn't have the same problem, since there the
RPM ref held by VDD already provides the needed serialization.
v2:
- add note to commit message about the runtime suspend path (Ville)
- use edp_panel_vdd_off_sync(), so we can keep the WARN in
edp_panel_vdd_off() (Ville)
v3:
- rebased on -fixes (for_each_intel_encoder()->list_for_each_entry())
(Imre)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v2)
Cc: stable@vger.kernel.org (3.16+)
[Jani: fix sparse warning reported by Fengguang Wu]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Make sure these work handlers don't run after we system suspend or
unload the driver. Note that we don't cancel the handlers during runtime
suspend. That could lead to a lockup, since we take a runtime PM ref
from the handlers themselves. Fortunately canceling there is not needed
since the RPM ref itself provides for the needed serialization.
v2:
- fix the order of canceling dig_port_work wrt. hotplug_work (Ville)
- zero out {long,short}_hpd_port_mask and hpd_event_bits for speed
(Ville)
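A rough sketch of the cancellation helper described above; the two work items
and the mask/bits fields echo the commit text, but the exact dev_priv layout
is illustrative.

static void intel_hpd_cancel_work(struct drm_i915_private *dev_priv)
{
	spin_lock_irq(&dev_priv->irq_lock);

	dev_priv->long_hpd_port_mask = 0;
	dev_priv->short_hpd_port_mask = 0;
	dev_priv->hpd_event_bits = 0;

	spin_unlock_irq(&dev_priv->irq_lock);

	/* Synchronous cancel is safe here: we are not on the runtime suspend
	 * path, so the RPM ref taken inside the handlers cannot deadlock
	 * against us. */
	cancel_work_sync(&dev_priv->dig_port_work);
	cancel_work_sync(&dev_priv->hotplug_work);
}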
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: stable@vger.kernel.org (3.16+)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Atm, the HPD IRQ reenable timer can get rearmed right after it's
canceled. Also to access the HPD IRQ mask registers we need to wake up
the HW.
Solve both issues by converting the reenable timer to a delayed work and
grabbing a runtime PM reference in the work. By this we can also forgo
canceling the timer during runtime suspend, since the only important
thing there is that the HW is awake when we write the registers and
that's ensured by the RPM ref. So do the cancelation only during driver
unload time; this is also a requirement for an upcoming patch where we
want to cancel all HPD related works only during system suspend and
driver unload time, but not during runtime suspend.
Note that there is still a race between the HPD IRQ reenable work and
drm_irq_uninstall() during driver unload, where the work can reenable
the HPD IRQs disabled by drm_irq_uninstall(). This isn't a problem since
the HPD IRQs will still be effectively masked by the first level
interrupt mask.
v2-3:
- unchanged
v4:
- use proper API for changing the expiration time for an already pending
delayed work (Jani)
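A hedged sketch of the resulting delayed-work handler: take a runtime PM
reference so the HW is awake while the HPD mask registers are rewritten.
intel_runtime_pm_get/put are the real i915 helpers; the surrounding field
names are illustrative.

static void intel_hpd_irq_reenable_work(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, struct drm_i915_private,
			     hotplug_reenable_work.work);

	intel_runtime_pm_get(dev_priv);

	spin_lock_irq(&dev_priv->irq_lock);
	/* ... re-enable the HPD interrupts that were storm-disabled ... */
	spin_unlock_irq(&dev_priv->irq_lock);

	intel_runtime_pm_put(dev_priv);
}

Changing the expiration of an already-pending instance then goes through
mod_delayed_work() instead of re-arming a timer by hand.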
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v2)
Cc: stable@vger.kernel.org (3.16+)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Ville noticed that we can call ibx_digital_port_connected() which accesses
the HW without holding any power well/runtime pm reference. Fix this by
holding a display port power domain reference around the whole hpd_pulse
handler.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Cc: stable@vger.kernel.org (3.16+)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Make sure the cursor gets fully clipped when enabling it on a disabled
crtc via setplane. This will prevent the lower level code from
attempting to enable the cursor in hardware.
Cc: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
During suspend we turn off the crtcs, but leave the staged config in
place so that we can restore the display(s) to their previous state on
resume.
During resume when we attempt to apply the force pipe A quirk we use the
load detect mechanism. That doesn't check whether there was an already
staged configuration for the crtc since that's not even possible during
normal runtime load detection. But during resume it is possible, and if
we just blindly go and overwrite the staged crtc configuration for the
load detection we can no longer restore the display to the correct
state.
Even worse, we don't even clear all the staged connector->encoder->crtc
links so we may end up using a cloned setup for the load detection, and
after we're done we just clear the links related to the VGA output
leaving the links for the other outputs in place. This will eventually
result in calling intel_set_mode() with mode==NULL but with valid
connector->encoder->crtc links which will result in dereferencing the
NULL mode since the code thinks it will have to do a modeset.
To avoid these problems don't use any crtc with new_enabled==true for
load detection.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org (for 3.16)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
intel_enable_pipe_a() gets called with all the modeset locks already
held (by drm_modeset_lock_all()), so trying to grab the same
locks using another drm_modeset_acquire_ctx is going to fail miserably.
Move most of the drm_modeset_acquire_ctx handling (init/drop/fini)
out from intel_{get,release}_load_detect_pipe() into the callers
(intel_{crt,tv}_detect()). Only the actual locking and backoff
handling is left in intel_get_load_detect_pipe(). And in
intel_enable_pipe_a() we just share the mode_config.acquire_ctx from
drm_modeset_lock_all() which is already holding all the relevant locks.
It's perfectly legal to lock the same ww_mutex multiple times using the
same ww_acquire_ctx. drm_modeset_lock() will convert the returned
-EALREADY into 0, so the caller doesn't need to do anything special.
Fixes a hang on resume on my 830.
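A minimal sketch of the locking rule being relied on: re-acquire a crtc lock
with the acquire context installed by drm_modeset_lock_all(); -EALREADY is
folded into 0, while -EDEADLK would signal a needed backoff. The helper name
here is illustrative.

static int relock_crtc_with_global_ctx(struct drm_crtc *crtc)
{
	struct drm_modeset_acquire_ctx *ctx =
		crtc->dev->mode_config.acquire_ctx; /* set by drm_modeset_lock_all() */

	return drm_modeset_lock(&crtc->mutex, ctx);
}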
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
We treat other plane updates in the same fashion. Spotted because
Rodrigo kept reporting a bug in the PSR code where the frontbuffer was
eternally stuck with a dirty cursor bit set.
The psr testcase should have caught this, but that i-g-t is kaputt.
Rodrigo is signed up to fix that.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Tested-by-and-Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we receive a storm of requests for the same context (see gem_storedw_loop_*)
we might end up iterating over too many elements at interrupt time, looking for
contexts to squash together. Instead, share the burden by giving more
intelligence to the queue function. At most, the interrupt will iterate over
three elements.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the current Execlists feeding mechanism, full preemption is not
supported yet: only lite-restores are allowed (that is, the GPU
simply samples a new tail pointer for the context currently in
execution).
But we have identified a scenario in which a full preemption occurs:
1) We submit two contexts for execution (A & B).
2) The GPU finishes with the first one (A), switches to the second one
(B) and informs us.
3) We submit B again (hoping to cause a lite restore) together with C,
but in the time we spend writing to the ELSP, the GPU finishes B.
4) The GPU starts executing B again (since we told it so).
5) We receive a B finished interrupt and, mistakenly, we submit C (again)
and D, causing a full preemption of B.
The race is avoided by keeping track of how many times a context has been
submitted to the hardware and by better discriminating the received context
switch interrupts: in the example, when we have submitted B twice, we won't
submit C and D as soon as we receive the notification that B is completed
because we were expecting to get a LITE_RESTORE and we didn't, so we know a
second completion will be received shortly.
Without this explicit checking, somehow, the batch buffer execution order
gets messed with. This can be verified with the IGT test I sent together with
the series. I don't know the exact mechanism by which the preemption messes
with the execution order but, since other people are working on the Scheduler
+ Preemption on Execlists, I didn't try to fix it. In this series, only Lite
Restores are supported (other kinds of preemption WARN).
v2: elsp_submitted belongs in the new intel_ctx_submit_request. Several
rebase changes.
v3: Clarify how the race is avoided, as requested by Daniel.
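A hedged illustration of the bookkeeping: the struct name comes from the
commit text (intel_ctx_submit_request); the field layout and helper below are
purely illustrative. Each ELSP write bumps elsp_submitted, and a completion
event only retires the head of the queue once the count is back to zero.

struct ctx_submit_request_sketch {
	int elsp_submitted;	/* outstanding ELSP submissions of this request */
};

static bool request_really_completed(struct ctx_submit_request_sketch *req,
				     bool lite_restore_event)
{
	if (lite_restore_event) {
		/* The tail was merely resampled: another completion is due. */
		req->elsp_submitted--;
		return false;
	}

	return --req->elsp_submitted == 0;
}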
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Align function parameters ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Handle all context status events in the context status buffer on every
context switch interrupt. We only remove work from the execlist queue
after a context status buffer reports that it has completed and we only
attempt to schedule new contexts on interrupt when a previously submitted
context completes (unless no contexts are queued, which means the GPU is
free).
We cannot call intel_runtime_pm_get() in an interrupt (or with a spinlock
grabbed, FWIW), because it might sleep, which is not a nice thing to do.
Instead, do the runtime_pm get/put together with the create/destroy request,
and handle the forcewake get/put directly.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
v2: Unreferencing the context when we are freeing the request might free
the backing bo, which requires the struct_mutex to be grabbed, so defer
unreferencing and freeing to a bottom half.
v3:
- Ack the interrupt immediately, before trying to handle it (fix for
missing interrupts by Bob Beckett <robert.beckett@intel.com>).
- Update the Context Status Buffer Read Pointer, just in case (spotted
by Damien Lespiau).
v4: New namespace and multiple rebase changes.
v5: Squash with "drm/i915/bdw: Do not call intel_runtime_pm_get() in an
interrupt", as suggested by Daniel.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Context switch (and execlist submission) should happen only when
other contexts are not active, otherwise pre-emption occurs.
To assure this, we place context switch requests in a queue and those
requests are later consumed when the right context switch interrupt is
received (still TODO).
v2: Use a spinlock, do not remove the requests on unqueue (wait for
context switch completion).
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
v3: Several rebases and code changes. Use unique ID.
v4:
- Move the queue/lock init to the late ring initialization.
- Damien's kmalloc review comments: check return, use sizeof(*req),
do not cast.
v5:
- Do not reuse drm_i915_gem_request. Instead, create our own.
- New namespace.
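A hedged sketch of the handoff described above: requests are appended under a
spinlock from process context and consumed from the context-switch interrupt;
nothing is removed at queue time. All names here are illustrative.

#include <linux/list.h>
#include <linux/spinlock.h>

struct execlist_queue_sketch {
	spinlock_t lock;
	struct list_head queue;
};

struct execlist_request_sketch {
	struct list_head link;
};

static void execlists_queue_sketch(struct execlist_queue_sketch *eq,
				   struct execlist_request_sketch *req,
				   void (*kick_hw)(struct execlist_request_sketch *))
{
	unsigned long flags;
	bool was_idle;

	spin_lock_irqsave(&eq->lock, flags);

	was_idle = list_empty(&eq->queue);
	list_add_tail(&req->link, &eq->queue);
	if (was_idle)
		kick_hw(req);	/* nothing in flight: submit to the ELSP now */

	spin_unlock_irqrestore(&eq->lock, flags);
}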
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2-v5)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch + wash-up s/BUG_ON/WARN_ON/.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Each logical ring context has the tail pointer in the context object,
so update it before submission.
v2: New namespace.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A context switch occurs by submitting a context descriptor to the
ExecList Submission Port. Given that we can now initialize a context,
it's possible to begin implementing the context switch by creating the
descriptor and submitting it to ELSP (actually two, since the ELSP
has two ports).
The context object must be mapped in the GGTT, which means it must exist
in the 0-4GB graphics VA range.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
v2: This code has changed quite a lot in various rebases. Of particular
importance is that now we use the globally unique Submission ID to send
to the hardware. Also, context pages are now pinned unconditionally to
GGTT, so there is no need to bind them.
v3: Use LRCA[31:12] as hwCtxId[19:0]. This guarantees that the HW context
ID we submit to the ELSP is globally unique and != 0 (Bspec requirements
of the software use-only bits of the Context ID in the Context Descriptor
Format) without the hassle of the previous submission Id construction.
Also, re-add the ELSP posting read (it was dropped somewhere during the
rebases).
v4:
- Squash with "drm/i915/bdw: Add forcewake lock around ELSP writes" (BSPEC
says: "SW must set Force Wakeup bit to prevent GT from entering C6 while
ELSP writes are in progress") as noted by Thomas Daniel
(thomas.daniel@intel.com).
- Rename functions and use an execlists/intel_execlists_ namespace.
- The BUG_ON only checked that the LRCA was <32 bits, but it didn't make
sure that it was properly aligned. Spotted by Alistair Mcaulay
<alistair.mcaulay@intel.com>.
v5:
- Improved source code comments as suggested by Chris Wilson.
- No need to abstract submit_ctx away, as pointed out by Brad Volkin.
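A hedged sketch of the ELSP write sequence: two 64-bit descriptors are
written as four dwords, upper halves first, with forcewake held so the GT
cannot drop into C6 mid-write; the low dword of descriptor 0 triggers the
submission. RING_ELSP, RING_EXECLIST_STATUS and the forcewake helpers follow
the i915 conventions of this era, but treat the exact names as illustrative.

static void elsp_write_sketch(struct intel_engine_cs *ring, u64 desc0, u64 desc1)
{
	struct drm_i915_private *dev_priv = ring->dev->dev_private;

	gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);

	I915_WRITE(RING_ELSP(ring), upper_32_bits(desc1));
	I915_WRITE(RING_ELSP(ring), lower_32_bits(desc1));
	I915_WRITE(RING_ELSP(ring), upper_32_bits(desc0));
	I915_WRITE(RING_ELSP(ring), lower_32_bits(desc0));	/* kicks the HW */

	/* ELSP is write-only, so post on a nearby register instead. */
	POSTING_READ(RING_EXECLIST_STATUS(ring));

	gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);
}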
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch. Sigh.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On a previous iteration of this patch, I created an Execlists
version of __i915_add_request and abstracted it away as a
vfunc. Daniel Vetter wondered then why that was needed:
"with the clean split in command submission I expect every
function to know whether it'll submit to an lrc (everything in
intel_lrc.c) or whether it'll submit to a legacy ring (existing
code), so I don't see a need for an add_request vfunc."
The honest, hairy truth is that this patch is the glue keeping
the whole logical ring puzzle together:
- i915_add_request is used by intel_ring_idle, which in turn is
used by i915_gpu_idle, which in turn is used in several places
inside the eviction and gtt codes.
- Also, it is used by i915_gem_check_olr, which is littered all
over i915_gem.c
- ...
If I were to duplicate all the code that directly or indirectly
uses __i915_add_request, I'd end up creating a separate driver.
To show the differences between the existing legacy version and
the new Execlists one, this time I have special-cased
__i915_add_request instead of adding an add_request vfunc. I
hope this helps to untangle this Gordian knot.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Adjust to ringbuf->FIXME_lrc_ctx per the discussion with
Thomas Daniel.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The execlist patches have a bit of a convoluted and long history and due
to that still have the actual submission misplaced, deeply buried in
the low-level ringbuffer handling code. This design goes back to the
legacy ringbuffer code with its tricky lazy request and simple work
submission using ring tail writes. For that reason they need a
ring->ctx backpointer.
The goal is to unbury that code and move it up into a level where the
full execlist context is available so that we can ditch this
backpointer. Until that's done make it really obvious that there's
work still to be done.
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Thomas Daniel <thomas.daniel@intel.com>
Acked-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The current error state harks back to the era of just a single VM. For
full-ppgtt, we capture every bo on every VM. It behoves us to then print
every bo for every VM, which we currently fail to do and so miss vital
information in the error state.
v2: Use the vma address rather than -1!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On VLV, post S0i3, the following issue is observed in i915_drm_thaw during ring
initialization.
[ 335.604039] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 336.607340] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 336.607345] [drm:init_ring_common] ERROR failed to set render ring head to zero ctl 00000000 head 00000000 tail 00000000 start 00000000
[ 337.610645] [drm:stop_ring] ERROR bsd ring :timed out trying to stop ring
[ 338.613952] [drm:stop_ring] ERROR bsd ring :timed out trying to stop ring
[ 338.613956] [drm:init_ring_common] ERROR failed to set bsd ring head to zero ctl 00000000 head 00000000 tail 00000000 start 00000000
[ 339.617256] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 339.617258] -----------[ cut here ]-----------
[ 339.617267] WARNING: CPU: 0 PID: 6 at drivers/gpu/drm/i915/intel_ringbuffer.c:1666 intel_cleanup_ring+0xe6/0xf0()
[ 339.617396] --[ end trace 5ef5ed1a3c92e2a6 ]--
[ 339.617428] [drm:__i915_drm_thaw] ERROR failed to re-initialize GPU, declaring wedged!
This is happening since wake is not enabled and the Gunit registers are not
restored. For this, the system suspend/resume paths need to follow the same
save/restore and additional platform-specific setup in suspend_complete and
resume_prepare. suspend_complete is shared unconditionally for VLV, HSW and
BDW. resume_prepare for HSW and BDW has the pc8 disabling which is needed
during thaw_early, so it is shared unconditionally. For VLV and SNB a
runtime-resume-specific sequence exists.
Cc: Imre Deak <imre.deak@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Goel, Akash <akash.goel@intel.com>
Signed-off-by: Sagar Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With this change, the intel_runtime_suspend and intel_runtime_resume functions
become completely platform agnostic. Platform specific suspend/resume
changes are moved to intel_suspend_complete and intel_resume_prepare.
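A hedged sketch of the shape of the split: the shared entry points stay
platform agnostic and hand off to one helper each. The helper name
intel_suspend_complete comes from the commit text; the per-platform callees
are illustrative.

static int intel_suspend_complete(struct drm_i915_private *dev_priv)
{
	struct drm_device *dev = dev_priv->dev;

	if (IS_HASWELL(dev) || IS_BROADWELL(dev))
		return hsw_suspend_complete(dev_priv);	/* illustrative name */
	else if (IS_VALLEYVIEW(dev))
		return vlv_suspend_complete(dev_priv);	/* illustrative name */

	return 0;
}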
Cc: Imre Deak <imre.deak@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Goel, Akash <akash.goel@intel.com>
Signed-off-by: Sagar Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rather than take and release the console_lock() around a non-existent
DRM_I915_FBDEV, move the lock acquisition into the callee where it will
be compiled out by the config option entirely. This includes moving the
deferred fb_set_suspend() dance and encapsulating it entirely within
intel_fbdev.c.
v2: Use an integral work item so that we can explicitly flush the work
upon suspend/unload.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Add the flush_work in fbdev_fini per the mailing list
discussion. And s/BUG_ON/WARN_ON/ because.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ville pointed out the GCCism __builtin_types_compatible_p() that we
could use to replace our heavily casted presumption __I915__ macro that
was based on comparing struct sizes.
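A hedged sketch of the technique (the in-tree macro may differ in detail):
pick the conversion at compile time based on the pointer's actual type
instead of casting and comparing struct sizes.

#define __I915_SKETCH__(p) ({ \
	struct drm_i915_private *__p; \
	if (__builtin_types_compatible_p(typeof(*(p)), struct drm_i915_private)) \
		__p = (struct drm_i915_private *)(p); \
	else if (__builtin_types_compatible_p(typeof(*(p)), struct drm_device)) \
		__p = to_i915((struct drm_device *)(p)); \
	else \
		BUILD_BUG(); \
	__p; \
})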
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
845/865 support different cursor sizes as well, albeit a bit differently
than later platforms. Add the necessary code to make them work.
Untested due to lack of hardware.
v2: Warn but accept invalid stride (Chris)
Rewrite the cursor size checks for other platforms (Chris)
v3: More polish and magic to the cursor size checks (Chris)
v4: Moar polish and a comment (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ever since
commit 5efb3e2838
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Wed Apr 9 13:28:53 2014 +0300
drm/i915/chv: Add cursor pipe offsets
the only difference between i9xx_update_cursor() and ivb_update_cursor()
was the hsw+ pipe csc handling. Let's unify them and we can rid
ourselves of some duplicated code.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CURSIZE register exists on 845/865 only, so move it to
i845_update_cursor(). Changes to cursor size must be done only when the
cursor is disabled, so do the write just before enabling the cursor.
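A hedged sketch of the ordering constraint on 845/865 (register names as in
i915_reg.h; the packing of CURSIZE and the helper itself are illustrative):
the size may only change while the cursor is off, so write it right before
enabling.

static void i845_cursor_commit_sketch(struct drm_i915_private *dev_priv,
				      u32 cntl, u32 width, u32 height)
{
	if (cntl & CURSOR_ENABLE)
		I915_WRITE(CURSIZE, (height << 12) | width);

	I915_WRITE(_CURACNTR, cntl);
	POSTING_READ(_CURACNTR);
}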
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make sure the cursor gets fully clipped when enabling it on a disabled
crtc via setplane. This will prevent the lower level code from
attempting to enable the cursor in hardware.
Cc: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Also remove related WARN_ONs which seem to have been getting hit for a rather
long time. But apparently no one noticed, since our module reload is
already WARNING-infested :(
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We want to move the aliasing ppgtt cleanup back into the global
gtt cleanup code for symmetry, but first we need to create such
a place.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Address space cleanup isn't really a job for the low-level cleanup
callbacks. Without this change we can't reuse the low-level cleanup
callback for the aliasing ppgtt cleanup.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that all the flow is streamlined the rule is simple: We create
a new ppgtt for a new context when we have full ppgtt enabled.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There's a bit of confusion since we track the global gtt,
the aliasing and real ppgtt in the ctx->vm pointer. And not
all callers really bother to check for the different cases and just
presume that it points to a real ppgtt.
Now looking closely we don't actually need ->vm to always point at an
address space - the only place that cares actually has fixup code
already to decide whether to look at the per-process or the global
address space.
So switch to just tracking the ppgtt directly and ditch all the
extraneous code.
v2: Fixup the ppgtt debugfs file to not oops on a NULL ctx->ppgtt.
Also drop the early exit - without aliasing ppgtt we want to dump all
the ppgtts of the contexts if we have full ppgtt.
v3: Actually git add the compile fix.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Cc: "Thierry, Michel" <michel.thierry@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
OTC-Jira: VIZ-3724
[danvet: Resolve conflicts with execlist patches while applying.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>