Add a mechanism by which we can evade the leading edge of vblank. This
guarantees that no two sprite register writes will straddle on either
side of the vblank start, and that means all the writes will be latched
together in one atomic operation.
We do the vblank evade by checking the scanline counter, and if it's too
close to the start of vblank (too close has been hardcoded to 100usec
for now), we wait for the vblank start to pass. In order to
eliminate random delays from the rest of the system, we operate with
interrupts disabled, except, obviously, while waiting for the vblank.
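A minimal sketch of that critical section, assuming hypothetical helpers
usecs_to_scanlines(), wait_for_vblank() and write_sprite_regs() (the real
code uses prepare_to_wait() directly, see the v4 note below):

/* Illustrative sketch only: evade the leading edge of vblank so that
 * all sprite register writes land on the same side of vblank start
 * and get latched together. usecs_to_scanlines(), wait_for_vblank()
 * and write_sprite_regs() are placeholders, not the real helpers. */
static void sprite_update_atomic(struct intel_crtc *crtc)
{
	int vblank_start = crtc->config.adjusted_mode.crtc_vblank_start;
	int min = vblank_start - usecs_to_scanlines(crtc, 100);
	int scanline;

	local_irq_disable();

	scanline = intel_get_crtc_scanline(crtc);
	if (scanline >= min && scanline < vblank_start) {
		/* Too close to vblank start: re-enable interrupts and
		 * sleep until the vblank start has passed. */
		local_irq_enable();
		wait_for_vblank(crtc);
		local_irq_disable();
	}

	write_sprite_regs(crtc);	/* latched in one atomic update */

	local_irq_enable();
}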
Note that we now go digging through pipe_to_crtc_mapping[] in the
vblank interrupt handler, which is a bit dangerous since we set up
interrupts before the crtcs. However in this case since it's the vblank
interrupt, we don't actually unmask it until some piece of code
requests it.
v2: preempt_check_resched() calls after local_irq_enable() (Jesse)
Hook up the vblank irq stuff on BDW as well
v3: Pass intel_crtc instead of drm_crtc (Daniel)
Warn if crtc.mutex isn't locked (Daniel)
Add an explicit compiler barrier and document the barriers (Daniel)
Note the irq vs. modeset setup madness in the commit message (Daniel)
v4: Use prepare_to_wait() & co. directly and eliminate vbl_received
v5: Refactor intel_pipe_handle_vblank() vs. drm_handle_vblank() (Chris)
Check for min/max scanline <= 0 (Chris)
Don't call intel_pipe_update_end() if start failed totally (Chris)
Check that the vblank counters match on both sides of the critical
section (Chris)
v6: Fix atomic update for interlaced modes
v7: Reorder code for better readability (Chris)
v8: Drop preempt_check_resched(). It's not available to modules
anymore and isn't even needed unless we ourselves cause
a wakeup needing reschedule while interrupts are off
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All the rest of the code to enable this is in my branch. Without my
branch, hitting > 32b offsets is impossible. The code has always
"supported" 64b, but it's never actually been run of tested. This change
doesn't actually fix anything. [1] I am not sure why X won't work yet. I
do not get hangs or obvious errors.
There are 3 fixes grouped together here. First is to remove the
hardcoded 0 for the upper dword of the relocation. The next fix is to
use a 64b value for target_offset. The final fix is to not directly
apply target_offset to reloc->delta. reloc->delta is part of ABI, and so
we cannot change it. As it stands, 32b is enough to represent everything
we're interested in representing anyway. The main problem is that we
cannot directly add values larger than 32b to it.
[1] Almost all of intel-gpu-tools is not yet ready to test 64b
relocations. There are a few places that expect 32b values for offsets
and these all won't work.
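Roughly, the third fix amounts to the following shape (a sketch with
paraphrased names, not the exact relocation code):

/* Sketch with paraphrased names: keep reloc->delta a 32b ABI field,
 * widen only the value that actually gets written into the batch. */
static void relocate_entry_cpu_sketch(void *vaddr,
				      struct drm_i915_gem_relocation_entry *reloc,
				      uint64_t target_offset, int gen)
{
	uint64_t value = (uint64_t)reloc->delta + target_offset;

	*(uint32_t *)(vaddr + reloc->offset) = lower_32_bits(value);
	if (gen >= 8)
		*(uint32_t *)(vaddr + reloc->offset + 4) = upper_32_bits(value);
}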
Cc: Rafael Barbalho <rafael.barbalho@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Previously, our code only had a 32b offset value for where the
batchbuffer starts. With full PPGTT, and 64b canonical GPU address
space, that is an insufficient value. The code to expand it is pretty
straightforward, and only one platform needs to do anything with the
extra bits.
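The one platform that consumes the extra bits is gen8, where the dispatch
emits both halves of the 64b address, roughly like this (sketch; the
length/address-space flag bits of the batch start command are elided):

static int gen8_dispatch_execbuffer_sketch(struct intel_ring_buffer *ring,
					   u64 offset, u32 len, unsigned flags)
{
	int ret = intel_ring_begin(ring, 4);
	if (ret)
		return ret;

	/* gen8 batch start; PPGTT/length flag bits elided for brevity */
	intel_ring_emit(ring, MI_BATCH_BUFFER_START);
	intel_ring_emit(ring, lower_32_bits(offset));
	intel_ring_emit(ring, upper_32_bits(offset));
	intel_ring_emit(ring, MI_NOOP);
	intel_ring_advance(ring);

	return 0;
}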
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rafael Barbalho <rafael.barbalho@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
SDVO is used by crtcs driven by both the i9xx_ and the ironlake_
functions. In both cases there is nothing between the
encoder->mode_set and the encoder->pre_enable calls that touches the
hardware.
The vlv_ functions are different since they enable the pll before the
->pre_enable hook. But SDVO isn't supported on vlv platforms, so this
doesn't matter.
We've also already cleaned up all the sdvo state computation logic; all
relevant parts are already in the ->compute_config hook. So we can
just get rid of the ->mode_set hook by converting it to a ->pre_enable
hook.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We only set a few bits in the ADPA register, which we then read back
in the enable/disable hooks. So we can just move that bit of state
computation code to the place where we need it since setting these
bits without enabling the CRT encoder has no effect.
The only exceptions are the hotplug bits since they affect the hotplug
detection logic, but we already set those in the ->reset function and
then never touch them.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently for the i9xx crtc hooks there's nothing between the call to
encoder->mode_set and encoder->pre_enable which touches the hardware.
Therefore, since tv is only used on gen3/4, we can just move the hook.
Yay for easy cases!
The only other important thing to check is that the new
->pre_enable hook is idempotent wrt the sw state since now it can
be called multiple times (due to DPMS). After a bit of refactoring
this is now easy to check: It only reads crtc->config and computes
derived state but otherwise leaves it as-is, so we're good.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The pipe and plane _are_ disabled when we call this. So replace it
all with the corresponding assert (as self-documenting code) and
rip out all the lore.
Checking for a disabled plane would require us to export those macros
from intel_display.c, but if the pipe is off the plane isn't working
either. So this single check is good enough.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We only support TV-out on gen3/4 mobile platforms, and i915gm is the
only one that matches.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently for the i9xx crtc hooks there's nothing between the call to
encoder->mode_set and encoder->pre_enable which touches the hardware.
Therefore, since dvo is only used on gen2, we can just move the hook.
Yay for easy cases!
The only other important thing to check is that the new
->pre_enable hook is idempotent wrt the sw state since now it can be
called multiple times (due to DPMS). It only reads crtc->config but
otherwise leaves it as-is, so we're good.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For a bunch of reasons we want to move away from the ->mode_set
callbacks: All hw state setup needs to move into ->enable hooks (so
that DPMS can do runtime pm) and all the configuration setup needs to
move into the compute_config functions.
To start with, make the encoder->mode_set callback optional.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The BIOS can enable a pipe but leave the primary plane disabled. This
conflicts with our current idea of primary_enabled. Read the actual
hardware plane state and set primary_enabled appropriately.
We currently assume that primary_enabled is always true when we're about
to disable a crtc. That needs to change now as the plane may not be
enabled. So replace the relevant WARNs with early returns in
intel_{enable,disable}_primary_hw_plane().
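The resulting shape of e.g. intel_disable_primary_hw_plane() is roughly
the following (sketch, not the exact diff):

/* Sketch: bail out quietly instead of WARNing when the BIOS left the
 * primary plane disabled on an otherwise enabled pipe. */
static void intel_disable_primary_hw_plane(struct drm_i915_private *dev_priv,
					   enum plane plane, enum pipe pipe)
{
	struct intel_crtc *intel_crtc =
		to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
	u32 reg = DSPCNTR(plane);

	if (!intel_crtc->primary_enabled)
		return;		/* was: WARN_ON(!intel_crtc->primary_enabled) */

	intel_crtc->primary_enabled = false;

	I915_WRITE(reg, I915_READ(reg) & ~DISPLAY_PLANE_ENABLE);
	intel_flush_primary_plane(dev_priv, plane);
}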
Fixes the following warning
[ 3.831602] WARNING: CPU: 0 PID: 1112 at linux/drivers/gpu/drm/i915/intel_display.c:1918 intel_disable_primary_hw_plane+0xe4/0xf0 [i915]()
which got introduced here by me:
commit e9e39655c0c30cddc3f8c09a757678a24dd36737
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Mon Apr 28 15:53:25 2014 +0300
drm/i915: Remove useless checks from primary enable/disable
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add_request has always contained both the semaphore mailbox updates as
well as the breadcrumb writes. Since the semaphore signal is the one
which actually knows about the number of dwords it needs to emit to the
ring, we move the ring_begin to that function. This allows us to remove
the hideously shared #define.
On a related note, gen8 will use a different number of dwords for
semaphores, but not for add request.
v2: Make number of dwords an explicit part of signalling (via function
argument). (Chris)
v3: very slight comment change
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This abstraction again is in preparation for gen8. Gen8 will bring new
semantics for doing this operation.
While here, make the writes of MI_NOOPs explicit for non-existent rings.
This should have been implicit before.
NOTE: This is going to be removed in a few patches.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This will be helpful in abstracting some of the code in preparation for
gen8 semaphores.
v2: Move mbox stuff to a separate struct
v3: Rebased over VCS2 work
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
During the initial power well enabling on the driver init/resume path
we can avoid initializing part of the HW/SW state that will be
initialized anyway by the subsequent init/resume code. For some steps
like HPD initialization this redundancy is not only an overhead but an
actual problem, since they can't be run this early in the overall init
sequence.
Add a flag marking the init phase and skip reinitializing state that is
not strictly necessary based on that.
This is also needed by the upcoming HPD init restructuring by Thierry
and Daniel.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In commit 691e6415c8
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Apr 9 09:07:36 2014 +0100
drm/i915: Always use kref tracking for all contexts.
we populated fake contexts on all platforms. These were identical to the
full hardware context tracking structs, except for the ctx->obj used to
store the hardware state. However, there remained one place where we
assumed that if a context existed, it would have an object associated
with it.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77717
Testcase: igt/drv_suspend/debugfs-reader
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add a new function intel_get_crtc_scanline() that returns the current
scanline counter for the crtc.
v2: Rebase after vblank timestamp changes.
Use intel_ prefix instead of i915_ as is more customary for
display related functions.
Include DRM_SCANOUTPOS_INVBL in the return value even w/o
adjustments, for a bit of extra consistency.
v3: Change the implementation to be based on DSL on all gens,
since that's enough for the needs of atomic updates, and
it will avoid complicating the scanout position calculations
for the vblank timestamps
v4: Don't break scanline wraparound for interlaced modes
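A simplified sketch of the DSL-based approach from v3 (the real function
also handles the interlaced wraparound mentioned in v4; the mask value
here is illustrative):

/* Sketch: the pipe display scanline (DSL) register reports the line
 * currently being scanned out. */
int intel_get_crtc_scanline(struct intel_crtc *crtc)
{
	struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;

	return I915_READ(PIPEDSL(crtc->pipe)) & 0x1fff;
}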
Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Seems I've been a bit dense with regards to the start of vblank
vs. the scanline counter / pixel counter.
After staring at the pixel counter on gen4 I came to the conclusion
that the start of vblank interrupt and scanline counter increment
happen at the same time. The scanline counter increment is documented
to occur at start of hsync, which means that the start of vblank
interrupt must also trigger there. Looking at the pixel counter value
when the scanline wraps from vtotal-1 to 0 confirms that, as the pixel
counter at that point reads hsync_start. This also clarifies why we
need the +1 adjustment to the scanline counter. The counter actually
starts counting from vtotal-1 on the first active line.
I also confirmed that the frame start interrupt happens ~1 line after
the start of vblank, but the frame start occurs at hblank_start instead.
We only use the frame start interrupt on gen2 where the start of vblank
interrupt isn't available. The only important thing to note here is that
frame start occurs after vblank start, so we don't have to play any
additional tricks to fix up the scanline counter.
The other thing to note is the fact that the pixel counter on gen3-4
starts counting from the start of horizontal active on the first active
line. That means that when we get the start of vblank interrupt, the
pixel counter reads (htotal*(vblank_start-1)+hsync_start). Since we
consider vblank to start at (htotal*vblank_start) we need to add a
constant (htotal-hsync_start) offset to the pixel counter, or else we
risk misdetecting whether we're in vblank or not.
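Expressed as code, that adjustment works out to something like the
following (sketch; helper name and parameters are illustrative):

/* Sketch: normalize the gen3/4 pixel counter so that being in vblank
 * corresponds to position >= htotal * vblank_start. */
static bool pipe_in_vblank_sketch(struct drm_i915_private *dev_priv,
				  enum pipe pipe, int htotal,
				  int hsync_start, int vblank_start)
{
	u32 position = I915_READ(PIPEFRAMEPIXEL(pipe)) & PIPE_PIXEL_MASK;

	/* At the start-of-vblank interrupt the raw counter reads
	 * htotal * (vblank_start - 1) + hsync_start, so adding
	 * (htotal - hsync_start) shifts that to htotal * vblank_start. */
	position += htotal - hsync_start;

	return position >= (u32)(htotal * vblank_start);
}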
I talked a bit with Art Runyan about these topics, and he confirmed my
findings, and that the same rules should hold for platforms which don't
have the pixel counter. That's good since without the pixel counter it's
rather difficult to verify the timings to this accuracy.
So the conclusion is that we can throw away all the ISR tricks I added,
and just increment the scanline counter by one always.
Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It seems we need this at least for the current platforms we have, but
probably not later. In any event, it shouldn't cause too much harm as we do
the same thing on several other platforms.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The same register exists for querying and programming eDRAM AKA eLLC. So
we can simply use it. For now, use all the same defaults as we had
for Haswell, since like Haswell, I have no further details.
I do not actually have a part with eDRAM, so I cannot test this.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I don't have any insight on what parts can do what. The docs do seem to
suggest WT caching works in at least the same manner as it does on
Haswell.
The addr = 0 is to shut up GCC:
drivers/gpu/drm/i915/i915_gem_gtt.c:80:7: warning: 'addr' may be used
uninitialized in this function [-Wmaybe-uninitialized]
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On BDW we don't enable RC6 at the moment, but this isn't reflected in
the (sanitized) i915.enable_rc6 option. So make enable_rc6 report
correctly that RC6 is disabled, which will also effectively disable RPM
on BDW (since RPM depends on RC6).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77565
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
assert_plane_enabled() is now triggering during FDI link train because
we no longer enable planes that early.
This problem got introduced in:
commit a5c4d7bc18
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Fri Mar 7 18:32:13 2014 +0200
drm/i915: Disable/enable planes as the first/last thing during modeset on ILK+
Just drop the assert since we shouldn't need planes for link training.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Squash in fixup for now unused plane local variable, reported
by 0-day tester.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This reverts commit 60f2b4af12.
The same warning has been fixed in e5081a538a and
these two commits got merged in 74e99a84de2d0980320612db8015ba606af42114 which
caused another warning. Simply, the reverted commit cast the pointer
difference to unsigned long and the other commit changed the output type from
long to ptrdiff_t.
The other commit fixes the original warning the better way so I'm reverting
this commit now.
Signed-off-by: Jan Moskyto Matejka <mq@suse.cz>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In recent dmesg logs reported for unrelated issues I noticed some power
domain WARNs caused by the following.
The workaround
commit ce35255032
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Fri Sep 20 10:14:23 2013 +0300
drm/i915: Fix unclaimed register access due to delayed VGA memory disable
and following fixup of it
commit a148532065
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Mon Sep 16 17:38:34 2013 +0300
drm/i915: Move power well init earlier during driver load
was partially reverted by
commit 7f16e5c141
Merge: 9d1cb915e01dc7
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Mon Nov 4 16:28:47 2013 +0100
Merge tag 'v3.12' into drm-intel-next
but kept the power domain put calls on the error path.
I think for now we can keep things as-is (not reintroduce the w/a) and just fix
the error path, since
- nobody has complained about seeing this issue
- according to Ville someone is reworking the VGA arbitration scheme at the
moment and when that's ready we have to rethink this part anyway
So fix this by just removing the put calls from the error path as well.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ville noticed that we have this nice kerneldoc but it's not integrated
anywhere. Fix this asap!
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Brad Volkin <bradley.d.volkin@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A common issue we have is that retiring requests causes recursion
through GTT manipulation or page table manipulation which we can only
handle at very specific points. However, to maintain internal
consistency (enforced through our sanity checks on write_domain at
various points in the GEM object lifecycle) we do need to retire the
object prior to marking it with a new write_domain, and also clear the
write_domain for the implicit flush following a batch.
Note that this then allows the unbound objects to still be on the active
lists, and so care must be taken when removing objects from unbound lists
(similar to the caveats we face processing the bound lists).
v2: Fix i915_gem_shrink_all() to handle updated object lifetime rules,
by refactoring it to call into __i915_gem_shrink().
v3: Missed an object-retire prior to changing cache domains in
i915_gem_object_set_cache_level()
v4: Rebase
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
lib/interval_tree.c provides a simple interface for an interval-tree
(an augmented red-black tree) but is only built when testing the generic
macros for building interval-trees. For drivers with modest needs,
export the simple interval-tree library as is.
v2: Lots of help from Michel Lespinasse to only compile the code
as required:
- make INTERVAL_TREE a config option
- make INTERVAL_TREE_TEST select the library functions
and sanitize the filenames & Makefile
- prepare interval_tree for being built as a module if required
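From a driver's point of view the exported interface is used roughly like
this (sketch; the node is allocated or embedded by the caller):

#include <linux/interval_tree.h>
#include <linux/slab.h>

static struct rb_root my_tree = RB_ROOT;

static void interval_tree_example(void)
{
	struct interval_tree_node *node, *it;

	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (!node)
		return;

	node->start = 0x1000;
	node->last = 0x1fff;		/* interval end is inclusive */
	interval_tree_insert(node, &my_tree);

	/* iterate over all nodes overlapping [0x1800, 0x2800] */
	for (it = interval_tree_iter_first(&my_tree, 0x1800, 0x2800);
	     it;
	     it = interval_tree_iter_next(it, 0x1800, 0x2800))
		pr_info("overlap: [%lx, %lx]\n", it->start, it->last);

	interval_tree_remove(node, &my_tree);
	kfree(node);
}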
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Michel Lespinasse <walken@google.com>
[Acked for inclusion via drm/i915 by Andrew Morton.]
[danvet: switch to _GPL as per the mailing list discussion.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I've seen latencies up to 15msec, so increase the timeout to 20msec.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This will be needed by the VLV runtime PM helpers too, so factor it out.
Also add a safety check for the case where the previous force-off is
still pending, since I'm not sure if Punit can handle a new setting
while the previous one hasn't settled yet.
v2:
- unchanged
v3:
- add a note to the commit message about the safety check (Ville)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When enabling runtime PM on VLV, GT power save enabling becomes relatively
frequent, so optimize it a bit.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
During runtime suspend there can be a last pending rps.work, so make
sure it's canceled. Note that in the runtime suspend callback we can't
get any RPS interrupts since it's called only after the GPU goes idle
and we set the minimum RPS frequency. The next possibility for an RPS
interrupt is only after getting an RPM ref (for example because of a new
GPU command) and calling the RPM resume callback.
v2:
- patch introduced in v2 of the patchset
v3:
- Change the order of canceling the rps.work and disabling interrupts to
avoid the race between interrupt disabling and the rps.work. Race
spotted by Ville.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We need to re-init swizzling on all platforms, so move it to the
platform independent runtime resume callback. The ring frequency reinit
is also needed everywhere except on VLV, but gen6_update_ring_freq()
will be a noop on VLV, so we can move this function too to platform
independent code.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is needed by the next patch moving the call out from platform
specific RPM callbacks to platform independent code.
No functional change.
v2:
- patch introduced in v2 of the patchset
v3:
- simplify platform check condition (Ville)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We need to disable the interrupts for all platforms, so make the helpers
for this platform independent and call them from the platform
independent runtime suspend/resume callbacks.
On HSW/BDW this will move interrupt disabling/re-enabling to the
beginning/end of runtime suspend/resume respectively, but I don't see
any reason why this would cause a problem there. In any case this seems
to be the correct thing to do even on those platforms.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On VLV we depend on RC6 to save the GT render and media HW context
before going to the D3 state via RPM, so as a preparation for the
VLV RPM support (added in an upcoming patch) disable RPM if RC6 is
disabled.
There is probably a similar dependency on other platforms too, so for
safety require RC6 for those too. For these platforms (SNB, HSW, BDW)
this is then a possible fix.
v2:
- require RC6 for all RPM platforms, not just for VLV (Paulo, Daniel)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm, an invalid enable_rc6 module option will be silently ignored, so
emit an info message about it. By doing an early sanitization we can also
reuse intel_enable_rc6() in a follow-up patch to see if RC6 is actually
enabled. Currently the caller would have to filter a non-zero return
value based on the platform we are running on. For example on VLV with
i915.enable_rc6 set to 2, RC6 won't be enabled but atm
intel_enable_rc6() would still return 2 in this case.
v2:
- simplify the platform check condition (Ville)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm, we call intel_enable_gt_powersave() for GEN6 and GEN7 but disable
it for everything starting from GEN6. This is a problem in case of BDW.
Since I don't have a BDW to test if RC6 works properly, just keep it
disabled for now and fix only the disable function.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Some platforms need additional power domains to be on in addition to the
device D0 state to access the panel registers.
Suggested by Daniel.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76987
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
While checking the error capture path I noticed that we lacked the
power domain-on check for PIPESTAT, so fix this by moving that read to
where the rest of the pipe registers are captured.
The move also revealed that we actually don't include this register in
the error report, so fix that too.
v2:
- patch introduced in v2 of the patchset
v3:
- add back !HAS_PCH_SPLIT check (Ville)
[ Ignore my previous comment about the gen<=5 || vlv check, I realized
that it's the same as !HAS_PCH_SPLIT. ]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
While checking the error capture path I noticed that this register is
read twice for GEN2, so fix this and also move the read where it's done
for other platforms.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm we can end up in the GPU reset deferred work in D3 state if the last
runtime PM reference is dropped between detecting a hang/scheduling the
work and executing the work. At least one such case I could trigger is
the simulated reset via the i915_wedged debugfs entry. Fix this by
getting an RPM reference around accessing the HW in the reset work.
v2:
- Instead of getting/putting the RPM reference in the reset work itself,
get it already before scheduling the work. By this we also prevent
going to D3 before the work gets to run, in addition to making sure
that we run the work itself in D0. (Ville, Daniel)
v3:
- fix inverted logic fail when putting the RPM ref on behalf of a
cancelled GPU reset work (Ville)
v4:
- Taking the RPM ref in the interrupt handler isn't really needed b/c
it's already guaranteed that we hold an RPM ref until the end of the
reset work in all cases we care about. So take the ref in the reset
work (for cases like i915_wedged_set). (Daniel)
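With v4 the pattern in the reset work boils down to the following
(sketch; field and function names approximate):

static void i915_error_work_func(struct work_struct *work)
{
	struct i915_gpu_error *error =
		container_of(work, struct i915_gpu_error, work);
	struct drm_i915_private *dev_priv =
		container_of(error, struct drm_i915_private, gpu_error);

	/* Hold an RPM reference for the whole reset so the register
	 * accesses happen in D0, even for a simulated reset coming
	 * from i915_wedged_set. */
	intel_runtime_pm_get(dev_priv);

	/* ... perform the actual GPU reset here ... */

	intel_runtime_pm_put(dev_priv);
}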
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Because we read and chase pointers from the VBT, it is prudent to make sure
that those accesses are wholly contained within the MMIO region, or else
we may cause a kernel panic during boot.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Shobhit Kumar <shobhit.kumar@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make sure that the whole BDB section is within the MMIO region prior to
accessing its contents. Making sure we don't read outside of the section
is left up to the individual section parsers.
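A minimal sketch of the bounds check intended here (names illustrative):

/* Sketch: reject a BDB whose declared size would run past the end of
 * the mapped VBT region before any section parser touches it. */
static bool bdb_within_bounds(const void *vbt, size_t vbt_size,
			      const struct bdb_header *bdb)
{
	const u8 *base = vbt;
	const u8 *p = (const u8 *)bdb;

	if (p < base || p >= base + vbt_size)
		return false;

	return bdb->bdb_size <= vbt_size - (p - base);
}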
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Shobhit Kumar <shobhit.kumar@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
At least on VLV but probably on other platforms too we depend on RC6
being enabled for RPM, so disable RPM until the delayed RC6 enabling
completes.
v2:
- explain the reason for the _noresume version of RPM get (Daniel)
- use the simpler 'if (schedule_work()) rpm_get();' instead of
'if (!cancel_work_sync()) rpm_get(); schedule_work();'
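The resulting pattern from v2 looks roughly like this (a sketch based on
the existing delayed_resume_work; treat names as approximate):

/* Sketch: take a no-resume RPM reference only if we actually queued
 * the delayed RC6-enable work, and drop it once RC6 is enabled. */
void intel_enable_gt_powersave(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;

	if (schedule_delayed_work(&dev_priv->rps.delayed_resume_work,
				  round_jiffies_up_relative(HZ)))
		intel_runtime_pm_get_noresume(dev_priv);
}

static void intel_gen6_powersave_work(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, struct drm_i915_private,
			     rps.delayed_resume_work.work);

	/* ... enable RC6/RPS here ... */

	intel_runtime_pm_put(dev_priv);
}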
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Getting struct_mutex around the whole intel_enable_gt_powersave()
function is not necessary, since it's only needed for the ILK path
therein.
This will make intel_enable_gt_powersave() usable on the RPM resume
path for >=GEN6 (added in an upcoming patch to reset the RPS state
during RPM resume), where we can't (and need not) get this mutex.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
These debugfs entries access registers that need the D0 power state so
get an RPM ref for them.
v2:
- for all these entries we only need D0 state, so get only an RPM ref,
not a power domain ref (Daniel, Paulo)
- the dpio entry is not an issue any more as it got removed (Ville)
- restore commit message from v1 (Paulo)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>