That way the control flow is clearer, and it sets the stage
for extracting these ums functions and hiding them somewhere.
There's still tons of display stuff outside of these, but that
requires more work.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Implements WaVSRefCountFullforceMissDisable as documented in the BSpec
3D workarounds chapter.
Cc: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Our suspend code touches a lot of registers all over the place, so we
need to enable the power well before suspending.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
[danvet: Fixup compilation by stealing the header decl from the
dynamic power wells patch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Instead of setting it at the beginning of haswell_crtc_mode_set, let's
set it at the beginning of intel_crtc_mode_set. When
intel_crtc_mode_set calls drm_vblank_pre_modeset we already need to
have the transcoder_edp correctly set, because eventually
drm_vblank_pre_modeset calls functions that call i915_pipe_enabled
from i915_irq.c, which will read PIPECONF(cpu_transcoder).
This is a bug that has affected us since we added support for
TRANSCODER_EDP, but I was only able to see the problem after
suspending a machine with the power well disabled (got an "unclaimed
register" error).
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
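[Note: a minimal stand-alone sketch of the ordering requirement described
above; the names are made up for illustration and this is not the driver
code.]
  #include <stdint.h>

  enum transcoder_ex { TRANSCODER_A_EX, TRANSCODER_EDP_EX, NUM_TRANSCODERS_EX };

  struct crtc_ex {
          enum transcoder_ex cpu_transcoder;
          uint32_t pipeconf[NUM_TRANSCODERS_EX]; /* stands in for PIPECONF(t) */
  };

  /* models i915_pipe_enabled() reading PIPECONF(cpu_transcoder) */
  static int pipe_enabled_ex(const struct crtc_ex *crtc)
  {
          return crtc->pipeconf[crtc->cpu_transcoder] >> 31;
  }

  static void mode_set_ex(struct crtc_ex *crtc, int is_edp)
  {
          /* pick the transcoder before anything that can reach
           * pipe_enabled_ex(), e.g. via drm_vblank_pre_modeset() */
          crtc->cpu_transcoder = is_edp ? TRANSCODER_EDP_EX : TRANSCODER_A_EX;

          (void)pipe_enabled_ex(crtc); /* safe only after the assignment above */
  }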
We should avoid touching registers that are on the power well when we
don't need to, because if we touch these registers while the power
well is disabled we'll get tons of "unclaimed register" messages. This
commit fixes some of these messages.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The current code was wrong in many different ways, so this is a full
rewrite. We don't have "different power wells for different parts of
the GPU", we have a single power well, but we have multiple registers
that can be used to request enabling/disabling the power well. So
let's be a good citizen and only use the register we're supposed to
use, except when we're loading the driver, where we clear the request
made by the BIOS.
If any of the registers is requesting the power well to be enabled, it
will be enabled. If none of the registers is requesting the power well
to be enabled, it will be disabled.
For now we're just forcing the power well to be enabled, but in the
next commits we'll change this.
V2:
- Remove debug messages that could be misleading due to possible
race conditions with KVMr, Debug and BIOS.
- Don't wait on disabling: after a conversation with a hardware
engineer we discovered that the "restriction" on bit 31 is just
for the "enable" case, and we don't even need to wait on the
"disable" case.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
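[Note: a small user-space model of the rule described above, not the driver
code; the BIOS/driver/KVMr/Debug request registers get made-up names here and
an illustrative request bit.]
  #include <stdbool.h>
  #include <stdint.h>

  #define PWR_WELL_ENABLE_REQUEST_EX (1u << 31) /* illustrative request bit */

  struct pwr_well_requests_ex {
          uint32_t bios;   /* request register owned by the BIOS   */
          uint32_t driver; /* request register owned by the driver */
          uint32_t kvmr;   /* request register owned by KVMr       */
          uint32_t debug;  /* request register owned by Debug      */
  };

  /* the single power well is enabled as soon as any requester asks for it */
  static bool power_well_on_ex(const struct pwr_well_requests_ex *r)
  {
          return (r->bios | r->driver | r->kvmr | r->debug) &
                 PWR_WELL_ENABLE_REQUEST_EX;
  }

  /* the driver only ever toggles its own request register, except at driver
   * load where it also clears the request left behind by the BIOS */
  static void driver_request_ex(struct pwr_well_requests_ex *r, bool enable)
  {
          if (enable)
                  r->driver |= PWR_WELL_ENABLE_REQUEST_EX;
          else
                  r->driver &= ~PWR_WELL_ENABLE_REQUEST_EX;
  }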
SR01 needs to be touched to disable VGA on non-UMS setups too.
So the sequencer registers need to include the appropriate offset
on VLV.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When passing the DP/HDMI/SDVO registers to the encoder init functions,
include the VLV specific offset in the value.
v2: Resolved conflicts w/ VLV SDVO elimination
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Don't call intel_sdvo_init() for VLV.
Preserve the same behaviour as when intel_sdvo_init() would
have returned false.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Instead of using ADPA/VLV_ADPA/PCH_ADPA in various parts of
intel_crt code, just use adpa_reg which always contains the
correct value for the platform.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Dropped the clock gating registers
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This will cause display registers to include the correct
offset on VLV.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
GPIO/GMBUS registers must be offset on VLV, so simply
adjust gpio_mmio_base to include the correct offset.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Instead of 0x18xxxx use (VLV_DISPLAY_BASE + xxxx).
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Instead of 0x18xxxx use (VLV_DISPLAY_BASE + xxxx).
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CURSIZE is not present on VLV, so it was left out, as were the IVB
specific cursor B registers.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add an optional offset to intel_device_info, which will be added
to most display register offsets.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
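[Note: a stand-alone model of the idea, with example names rather than the
driver's: register definitions keep the classic offsets and the per-device
display offset (0x180000 on VLV) is added when the final address is formed.]
  #include <stdint.h>

  #define VLV_DISPLAY_BASE_EX 0x180000

  struct device_info_ex {
          uint32_t display_mmio_offset; /* 0 everywhere except VLV */
  };

  static const struct device_info_ex vlv_info_ex = {
          .display_mmio_offset = VLV_DISPLAY_BASE_EX,
  };

  /* a display register address is the legacy offset plus the device offset */
  static uint32_t display_reg_ex(const struct device_info_ex *info,
                                 uint32_t legacy_offset)
  {
          return info->display_mmio_offset + legacy_offset;
  }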
Use intel_dig_port->port rather than intel_dp->output_reg.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Use intel_dig_port->port rather than intel_hdmi->sdvox_reg.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Because the register does not exist in gen5+.
This patch solves "unclaimed register" messages on Haswell after
suspend/resume.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
commit 09153000b8
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Wed Dec 12 14:06:44 2012 +0100
drm/i915: rework locking for intel_dpio|sbi_read|write
reworked the locking around sbi_read/write functions for 3.8-fixes.
But
commit dde86e2db5
Author: Paulo Zanoni <paulo.r.zanoni@intel.com>
Date: Sat Dec 1 12:04:25 2012 -0200
drm/i915: add lpt_init_pch_refclk
Added new use-cases in the -next tree which have not been updated in
the merge. Fix it up.
Reported-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Tested-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
ELD info should be updated dynamically according to hot plug events.
For the Haswell chip, clear/set the ELD valid bit and output enable bit
from the intel_disable/enable_ddi() callbacks.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Wang Xingchao <xingchao.wang@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
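[Note: a toy model of the bit flipping described above; the register layout
and bit names are placeholders, not the actual HSW audio registers.]
  #include <stdint.h>

  #define AUDIO_OUTPUT_ENABLE_EX (1u << 31)
  #define AUDIO_ELD_VALID_EX     (1u << 0)

  /* on unplug/disable: the ELD is no longer valid, stop audio output */
  static uint32_t ddi_disable_audio_ex(uint32_t audio_ctl)
  {
          return audio_ctl & ~(AUDIO_OUTPUT_ENABLE_EX | AUDIO_ELD_VALID_EX);
  }

  /* on plug/enable: the freshly written ELD is valid, enable audio output */
  static uint32_t ddi_enable_audio_ex(uint32_t audio_ctl)
  {
          return audio_ctl | AUDIO_OUTPUT_ENABLE_EX | AUDIO_ELD_VALID_EX;
  }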
When the machine was rebooted or the module was reloaded,
gem_hw_init() set last_seqno to be identical to next_seqno.
This led to a situation where waits for the first ever request
always passed immediately, regardless of whether it was actually
executed.
Use gem_set_seqno() to be consistent with how the hw is
initialized on init, wrap and on resume.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
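[Note: a toy model of the bug and the fix, with made-up names: if "last
completed" starts equal to "next to be issued", a wait for the first request
passes immediately; one helper that always sets last = next - 1 keeps init,
wrap and resume consistent.]
  #include <stdbool.h>
  #include <stdint.h>

  struct seqno_state_ex {
          uint32_t next_seqno; /* seqno the next request will get   */
          uint32_t last_seqno; /* seqno the hw reports as completed */
  };

  static void set_seqno_ex(struct seqno_state_ex *s, uint32_t seqno)
  {
          s->next_seqno = seqno;
          s->last_seqno = seqno - 1; /* nothing at or after 'seqno' is done */
  }

  /* wrap-safe "has the hw completed this seqno yet?" check */
  static bool seqno_passed_ex(const struct seqno_state_ex *s, uint32_t seqno)
  {
          return (int32_t)(s->last_seqno - seqno) >= 0;
  }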
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=44156
Reported-by: Alan Zimmerman <alan.zimm@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=31522#c35
[Note: There is more than one broken setup in the bug. This fixes one.]
Reported-by: Martins <andrissr@inbox.lv>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=59628
Reported-by: Roland Gruber <post@rolandgruber.de>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Damien Lespiau wondered how racy the gpu reset/hang detection code is
against concurrent gpu resets/hang detections or combinations thereof.
Luckily the single work item is guaranteed to never run concurrently,
so reset handling is already single-threaded.
Hence we only have to worry about concurrent hang detections, or a
hang detection firing off while we're still processing an older gpu
reset request. Due to the new mechanism of setting the reset in
progress flag and the ordering guaranteed by the schedule_work
function there's nothing to do but add a comment explaining why we're
safe.
The only thing I've noticed is that we still try to reset the gpu now,
even when it is declared terminally wedged. Add a check for that to
avoid continuous warnings about failed resets, in case the hangcheck
timer ever gets stuck.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
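[Note: a minimal sketch of the extra check, with placeholder names: once the
gpu is declared terminally wedged the reset work should bail out instead of
warning about failed resets forever.]
  #include <stdbool.h>

  enum gpu_error_state_ex { GPU_OK_EX, GPU_RESET_IN_PROGRESS_EX, GPU_WEDGED_EX };

  static bool should_attempt_reset_ex(enum gpu_error_state_ex state)
  {
          if (state == GPU_WEDGED_EX)
                  return false; /* terminally wedged: a reset cannot help */
          return state == GPU_RESET_IN_PROGRESS_EX;
  }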
With the previous patch the state transition handling of the reset
code itself is now (hopefully) race free and solid. But that still
leaves out everyone else - with the various lock-free wait paths
we have there's the possibility that the reset happens between the
point where we read the seqno we should wait on and the actual wait.
And if __wait_seqno then never sees the RESET_IN_PROGRESS state, we'll
happily wait for a seqno which will in all likelihood never signal.
In practice this is not a big problem since the X server gets
constantly interrupted, and can then submit more work (hopefully) to
unblock everyone else: As soon as a new seqno write lands, all waiters
will unblock. But running the i-g-t reset testcase ZZ_hangman can
expose this race, especially on slower hw with fewer cpu cores.
Now looking forward to ARB_robustness and friends that's not the best
possible behaviour, hence this patch adds a reset_counter to be able
to detect any reset, even if a given thread never observed the
in-progress state.
The important part is to correctly order things:
- The write side needs to increment the counter after any seqno gets
reset. Hence we need to do that at the end of the reset work, and
again wake everyone up. We also need to place a barrier in between
any possible seqno changes and the counter increment, since any
unlock operations only guarantee that nothing leaks out, but not
that a later load operation gets moved ahead.
- On the read side we need to ensure that no reset can sneak in and
invalidate the seqno. In all cases we can use the one-sided barrier
that unlock operations guarantee (of the lock protecting the
respective seqno/ring pair) to ensure correct ordering. Hence it is
sufficient to place the atomic read before the mutex/spin_unlock and
no additional barriers are required.
The end-result of all this is that we need to wake up everyone twice
in a reset operation:
- First, before the reset starts, to get any lockholders off the locks,
so that the reset can proceed.
- Second, after the reset is completed, to allow waiters to properly
and reliably detect the reset condition and bail out.
I admit that this entire reset_counter thing smells a bit like
overkill, but I think it's justified since it makes it really explicit
what the bail-out condition is. And we need a reset counter anyway to
implement ARB_robustness, and imo with finer-grained locking on the
horizon this is the most resilient scheme I could think of.
v2: Drop spurious change in the wait_for_error EXIT_COND - we only
need to wait until we leave the reset-in-progress wedged state.
v3: Don't play tricks with barriers in the throttle ioctl, the
spin_unlock is barrier enough.
I've also considered using a little helper to grab the current
reset_counter, but then decided that hiding the atomic_read isn't a
great idea, since having it explicitly show up in the code is a nice
reminder for reviewers to check the memory barriers.
v4: Add a comment to explain why we need to fall through in
__wait_seqno in the end variable assignments.
v5: Review from Damien:
- s/smb/smp/ in a comment
- don't increment the reset counter after we've set it to WEDGED. Now
we (again) properly wedge the gpu when the reset fails.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
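[Note: a user-space model of the ordering rules spelled out above, not the
driver code: readers sample the counter before dropping the lock protecting
the seqno they will wait on, and the reset side bumps the counter only after
all seqnos have been re-initialised, then wakes everyone again.]
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdbool.h>

  static pthread_mutex_t dev_lock_ex = PTHREAD_MUTEX_INITIALIZER;
  static atomic_uint reset_counter_ex;

  /* read side: returns the counter value the subsequent wait compares against */
  static unsigned begin_wait_ex(void)
  {
          unsigned count;

          pthread_mutex_lock(&dev_lock_ex);
          /* ... read the seqno to wait on under the lock ... */
          count = atomic_load(&reset_counter_ex);
          pthread_mutex_unlock(&dev_lock_ex); /* the unlock's release ordering
                                               * keeps the load above before it */
          return count;
  }

  static bool reset_happened_ex(unsigned count_at_start)
  {
          return atomic_load(&reset_counter_ex) != count_at_start;
  }

  /* write side: run at the end of the reset work */
  static void finish_reset_ex(void)
  {
          /* ... re-init rings and seqnos ... */
          atomic_thread_fence(memory_order_seq_cst); /* seqno writes first */
          atomic_fetch_add(&reset_counter_ex, 1);
          /* ... wake all waiters a second time ... */
  }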
Now that we seem to have brought order to the GTT barriers, the last one
to review is the terminal barrier before we unbind the buffer from the
GTT. This needs to only be performed if the buffer still resides in the
GTT domain, and so we can skip some needless barriers otherwise.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
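[Note: a minimal sketch of the conditional barrier, with made-up names: the
barrier before unbinding is only needed while the object is still in the GTT
write domain.]
  #define DOMAIN_GTT_EX 0x40 /* placeholder for the GTT write domain bit */

  struct bo_ex {
          unsigned write_domain;
  };

  static void finish_gtt_access_ex(struct bo_ex *obj)
  {
          if (obj->write_domain & DOMAIN_GTT_EX) {
                  __sync_synchronize(); /* stands in for the kernel's wmb() */
                  obj->write_domain = 0;
          }
          /* otherwise: no pending GTT writes, so no barrier is needed */
  }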
With a fence, we only need to insert a memory barrier around the actual
fence alteration for CPU accesses through the GTT. Performing the
barrier in flush-fence was inserting unnecessary and expensive barriers
for never fenced objects.
Note removing the barriers from flush-fence, which was effectively a
barrier before every direct access through the GTT, revealed that we
were missing a barrier before the first access through the GTT. Lack of
that barrier was sufficient to cause GPU hangs.
v2: Add a couple more comments to explain the new barriers
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
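[Note: a sketch of bracketing just the fence update with barriers, using
placeholder names: pending CPU writes through the GTT must be visible before
the fence changes, and the new fence must be in place before later GTT
accesses.]
  #include <stdint.h>

  static volatile uint64_t fence_reg_ex; /* stands in for a 965-style fence reg */

  static void write_fence_ex(uint64_t val)
  {
          __sync_synchronize(); /* flush pending GTT writes before the change */
          fence_reg_ex = val;
          __sync_synchronize(); /* new fence visible before later GTT access  */
  }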
We have two important transitions of the wedged state in the current
code:
- 0 -> 1: This means a hang has been detected, and signals to everyone
that they please get off any locks, so that the reset work item can
do its job.
- 1 -> 0: The reset handler has completed.
Now the last transition mixes up two states: "Reset completed and
successful" and "Reset failed". To distinguish these two we do some
tricks with the reset completion, but I simply could not convince
myself that this doesn't race under odd circumstances.
Hence split this up, and add a new terminal state indicating that the
hw is gone for good.
Also add explicit #defines for both states, update comments.
v2: Split out the reset handling bugfix for the throttle ioctl.
v3: s/tmp/wedged/ suggested by Chris Wilson. Also fix up a rebase
error which prevented this patch from actually compiling.
v4: To unify the wedged state with the reset counter, keep the
reset-in-progress state just as a flag. The terminally-wedged state is
now denoted with a big number.
v5: Add a comment to the reset_counter special values explaining that
WEDGED & RESET_IN_PROGRESS needs to be true for the code to be
correct.
v6: Fixup logic errors introduced with the wedged+reset_counter
unification. Since WEDGED implies reset-in-progress (in a way we're
terminally stuck in the dead-but-reset-not-completed state), we need
to ensure that we check for this everywhere. The specific bug was in
wait_for_error, which would simply have timed out.
v7: Extract an inline i915_reset_in_progress helper to make the code
more readable. Also annotate the reset-in-progress case with an
unlikely, to help the compiler optimize the fastpath. Do the same for
the terminally wedged case with i915_terminally_wedged.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
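[Note: a stand-alone model of the encoding described above, with example
names and values: the low bit of the counter flags reset-in-progress and an
all-ones value marks the terminal state; since all-ones also has the flag bit
set, terminally wedged still reads as reset-in-progress, as required.]
  #include <stdbool.h>

  #define RESET_IN_PROGRESS_FLAG_EX 1u
  #define WEDGED_EX                 0xffffffffu

  static inline bool reset_in_progress_ex(unsigned int reset_counter)
  {
          return reset_counter & RESET_IN_PROGRESS_FLAG_EX;
  }

  static inline bool terminally_wedged_ex(unsigned int reset_counter)
  {
          return reset_counter == WEDGED_EX;
  }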
While auditing the code I've noticed one place (the throttle ioctl)
which does not yet wait for the reset handler to complete and doesn't
properly decode the wedge state into -EAGAIN/-EIO. Fix this up by
calling the right helpers. This might explain the oddball "my
compositor just died in a successful gpu reset" reports. Or maybe not, since
current mesa doesn't use this ioctl to throttle command submission.
The throttle ioctl doesn't take the struct_mutex, so to avoid busy-looping
with -EAGAIN while a reset is in progress, check for errors first and wait
for the handler to complete if a reset is pending by calling
i915_gem_wait_for_error.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
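[Note: a toy model of the check order, with made-up names: first wait out a
pending reset handler, decoding a terminal wedge as -EIO, and only then do
the normal throttling, so the ioctl neither busy-loops on -EAGAIN nor
reports a successful reset as an error.]
  #include <errno.h>
  #include <stdbool.h>

  static bool wedged_ex; /* models the terminally wedged state */

  /* the real helper sleeps until a pending reset handler completes */
  static int wait_for_error_ex(void)
  {
          if (wedged_ex)
                  return -EIO; /* hw is gone for good */
          return 0;
  }

  static int throttle_ioctl_ex(void)
  {
          int ret = wait_for_error_ex();
          if (ret)
                  return ret;
          /* a completed, successful reset just falls through here and the
           * caller retries; normal throttling follows */
          return 0;
  }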
And to make Ben Widawsky happier, use the gpu_error instead of
the entire device as the argument in some functions.
Drop the outdated comment on ->wedged for now, a follow-up patch will
change the semantics and add a proper comment again.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This has been sprinkled all over the place in dev_priv. I think
it'd be good to also move all the code into a separate file like
i915_gem_error.c, but that's for another patch.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>