As we'll see in the next patch, being able to evict for just 1 VM is
handy.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
lifted from Daniel:
pread/pwrite isn't about the object's domain at all, but purely about
synchronizing for outstanding rendering. Replacing the call to
set_to_gtt_domain with a wait_rendering would imo improve code
readability. Furthermore we could pimp pread to only block for
outstanding writes and not for reads.
Since you're not the first one to trip over this: Can I volunteer you
for a follow-up patch to fix this?
v2: Switch the pwrite patch to use !read_only. This was a typo in the
original code. (Chris, Daniel)
Recommended-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Fix up the logic fumble - wait_rendering has a bool readonly
parameter, set_to_gtt_domain otoh has bool write. Breakage reported by
Jani Nikula, I've double-checked that igt/gem_concurrent_blt/prw-*
would have caught this.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Fixed with
commit 10603caacf
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Mon Aug 26 19:51:06 2013 -0300
drm/i915: Apply the force-detect VGA w/a to Valleyview
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
intel_ddi_enable_transcoder_func() picked the sync flags from crtc->mode
instead of the pipe config adjusted_mode. Fix the problem and hopefully
rid my HSW machine of the remaining pipe config warnings.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The purpose of the function is to find out whether the object is still
bound in any address space. This can be easily checked by looking at the
vma currently associated with the object, rather than asking if any of
the global address spaces have an active vma on the object.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ignoring the legacy DRI1 code, and a couple of special cases (to be
discussed later), all access to the ring is mediated through requests.
The first write to a ring will grab a seqno and mark the ring as having
an outstanding_lazy_request. Either through explicitly adding a request
after an execbuffer or through an implicit wait (either by the CPU or by
a semaphore), that sequence of writes will be terminated with a request.
So we can elide all the intervening writes to the tail register and
send the entire command stream to the GPU at once. This will reduce the
number of *serialising* writes to the tail register by a factor of 3-5
times (depending upon architecture and number of workarounds, context
switches, etc involved). This becomes even more noticeable when the
register write is overloaded with a number of debugging tools. The
astute reader will wonder if it is then possible to overflow the ring
with a single command. It is not. When we start a command sequence to
the ring, we check for available space and issue a wait if there is not
enough. The ring wait will in this case be forced to flush the outstanding
register write and then poll the ACTHD for sufficient space to continue.
The exception to the rule where everything is inside a request are a few
initialisation cases where we may want to write GPU commands via the CS
before userspace wakes up and page flips.
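For illustration only, a rough sketch of the idea; the structure and
helper names below are made up and do not match the driver:

  /* illustrative sketch, not the driver's actual code */
  #include <linux/types.h>

  struct sketch_ring {
          u32 tail;          /* software tail, advanced on every emit */
          u32 hw_tail;       /* last value written to the tail register */
          bool has_lazy_request;
  };

  static void sketch_ring_emit(struct sketch_ring *ring, u32 dword)
  {
          /* the first write of a sequence grabs the lazy request */
          if (!ring->has_lazy_request)
                  ring->has_lazy_request = true;
          /* ... place dword into the ring buffer ... */
          ring->tail += 4;
          /* note: no serialising MMIO write of the tail register here */
  }

  static void sketch_ring_flush_tail(struct sketch_ring *ring)
  {
          /* one register write for the whole coalesced command stream,
           * done when the request is emitted or we must wait for space */
          if (ring->tail != ring->hw_tail)
                  ring->hw_tail = ring->tail; /* stands in for the MMIO write */
  }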
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Pull the expected max WM level determinations out to a separate
function. Will have another user soon.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Unify the code a bit to use ilk_compute_wm_level for all watermark
levels.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
hsw_pipe_wm_parameters and hsw_wm_maximums typically are read only. Make
them const.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make the call to intel_update_watermarks() just once or twice during
modeset. Ideally it should happen independently when each plane gets
enabled/disabled, but for now it seems better to keep it in central
place. We can improve things when we get all the planes sorted out
in a better way.
When enabling set up the watermarks just before the pipe is enabled.
And when disabling we need to wait until we've marked the crtc as
inactive, as otherwise intel_crtc_active() would still think the pipe
is enabled and the computed watermarks would reflect that.
v2: Pimp up the commit message a bit
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Passing the appropriate crtc to intel_update_watermarks() should help
in avoiding needless work in the future.
v2: Avoid clash with internal 'crtc' variable in some wm functions
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Score and action reveal what all the rings were doing
and why a hang was declared. Add an idle state so that
we can distinguish between a waiting and an idle ring.
v2: - add idle as a hangcheck action
- condensed hangcheck status to single line (Chris)
- mark active explicitly when we are making progress (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that we have a mechanism in place to track which context
was guilty of hanging the gpu, it is possible to punish it
for bad behaviour.
If a context has recently submitted a faulty batchbuffer guilty of a
gpu hang and submits another batch which hangs the gpu in quick
succession, ban it permanently. If a ctx is banned, no more
batchbuffers will be queued for execution.
There is no need for global wedge machinery anymore and
it would be unwise to wedge the whole gpu if we have multiple
hanging batches queued for execution. Instead just ban
the guilty ones and carry on.
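A rough sketch of the ban decision being described; the period define
and field names are assumptions for illustration, not the driver's:

  /* illustrative sketch, not the driver's actual structures */
  #include <linux/jiffies.h>
  #include <linux/types.h>

  #define SKETCH_CTX_BAN_PERIOD (2 * HZ)  /* assumed ban window */

  struct sketch_hang_stats {
          unsigned long guilty_jiffies;   /* last time this ctx hung the gpu */
          bool banned;
  };

  static bool sketch_ctx_banned(struct sketch_hang_stats *hs, unsigned long now)
  {
          if (hs->banned)
                  return true;
          /* a second guilty hang in quick succession => permanent ban */
          if (hs->guilty_jiffies &&
              time_before(now, hs->guilty_jiffies + SKETCH_CTX_BAN_PERIOD))
                  hs->banned = true;
          return hs->banned;
  }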
v2: Store guilty ban status bool in gpu_error instead of pointers
that might become dangling before the hang is declared.
v3: Use return value for banned status instead of stashing state
into gpu_error (Chris Wilson)
v4: - rebase on top of fixed hang stats api
- add define for ban period
- rename commit and improve commit msg
v5: - rely on context banning instead of wedging the gpu
- beautification and fix for ban calculation (Chris)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
VLV has per-pipe PP registers. Set up power sequencing on mode set. The
connector init time setup is problematic, since we don't have a pipe at
that time. Cook up something.
v2:
- use vlv_power_sequencer_pipe() also in _pp_{ctrl,stat}_reg()
- use PANEL_PORT_SELECT_DPC_VLV (Ville)
v3: make checkpatch happier
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Make checkpatch a bit more happier still ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Remove duplicates, add VLV specific macros for port B and C.
v2: also add PANEL_PORT_SELECT_DPC_VLV for clarity (Ville)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Follow-up to
commit 5004945f1d
Author: Jani Nikula <jani.nikula@intel.com>
Date: Tue Jul 30 12:20:32 2013 +0300
drm/i915: move encoder->enable callback later in VLV crtc enable
v2: Rebase on the renamed enable hooks, adding clarity (Ville)
Reference: http://mid.gmane.org/CAKMK7uFs9EMvMW8BnS24e5UNm1D7JrfVg3SD5SDFtVEamGfOOg@mail.gmail.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In line with the rest of the code base. No functional changes.
v2: also s/intel_pre_enable_dp/g4x_pre_enable_dp/ for consistency (Ville)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It's totally unused, so remove the last mode_fixup appearance in i915.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The patch doesn't contain a functional change, but prepares for a
future platform which has a different DPIO phy. The additional pipe
parameter will be used to select which phy to target.
v2: Update the commit message and add static for the new function.
(Jani/Ville)
Signed-off-by: Chon Ming Lee <chon.ming.lee@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It is possible for us to be forced to perform an allocation for the lazy
request whilst running the shrinker. This allocation may fail, leaving
us unable to reclaim any memory leading to premature OOM. A neat
solution to the problem is to preallocate the request at the same time
as acquiring the seqno for the ring transaction. This means that we can
report ENOMEM prior to touching the rings.
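A rough sketch of the reordering; the names here are illustrative, not
the actual ring API:

  /* illustrative sketch only */
  #include <linux/slab.h>

  struct sketch_request { u32 seqno; };

  struct sketch_ring_state {
          struct sketch_request *preallocated_request;
  };

  static int sketch_ring_begin(struct sketch_ring_state *ring)
  {
          /* allocate the lazy request before touching any ring state, so
           * an allocation failure surfaces as -ENOMEM to the caller and
           * never inside the shrinker where we cannot recover */
          if (!ring->preallocated_request) {
                  ring->preallocated_request =
                          kzalloc(sizeof(*ring->preallocated_request), GFP_KERNEL);
                  if (!ring->preallocated_request)
                          return -ENOMEM;
          }
          /* ... only now check for space and start emitting commands ... */
          return 0;
  }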
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Prior to preallocating a request for lazy emission, rename the existing
field to make way (and differentiate the seqno from the request struct).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
ironlake_fdi_compute_config() already checks that we have enough
FDI bandwidth. And it doesn't just use a hardcoded value but takes
into account factors such as the actual FDI frequency, shared FDI
B/C lanes, etc.
Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For DP pll settings, there are only two golden configs. Instead of
running through the algorithm to determine them, hardcode the values and
have intel_dp_set_clock determine which one to use.
v2: Rework on the intel_limit compiler warning. (Jani)
Signed-off-by: Chon Ming Lee <chon.ming.lee@intel.com>
[danvet: Fix up checkpatch issues.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
eDP 1.4 supports 4-5 extra link rates in addition to the current 2 link
rates. Create a structure to store the DPLL divisor data to improve
readability.
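A rough sketch of what such a table might look like; the struct, field
names and divisor values are illustrative assumptions, not the driver's:

  /* illustrative sketch only */
  struct sketch_dp_dpll {
          int link_clock;                 /* kHz */
          int p1, p2, n, m1, m2;          /* DPLL divisors */
  };

  static const struct sketch_dp_dpll sketch_gen4_dpll[] = {
          { .link_clock = 162000, .p1 = 2, .p2 = 10, .n = 2, .m1 = 23, .m2 = 8 },
          { .link_clock = 270000, .p1 = 1, .p2 = 10, .n = 1, .m1 = 14, .m2 = 2 },
  };

intel_dp_set_clock() can then simply walk the table and pick the entry
matching the requested link rate.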
v2: Fix the gen4_dpll/pch_dpll initialization to C99
designated initializers, and use a single loop for all platforms. (Jani and Daniel)
Signed-off-by: Chon Ming Lee <chon.ming.lee@intel.com>
[danvet: Fix up checkpatch warnings.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The spec says to notify prior to power down and after power up. It is
unclear whether it makes a difference.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Notifying the bios lets it enter power saving states.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The bios interface seems messy, and it's hard to tell what the bios
really wants. At first, only add support for DDI based machines (hsw+),
and see how it turns out.
The spec says to notify prior to power down and after power up. It is
unclear whether it makes a difference.
v2:
- squash notification function and callers patches together (Daniel)
- move callers to haswell_crtc_{enable,disable} (Daniel)
- rename notification function (Chris)
v3:
- separate notification function and callers again, as it's not clear
whether the display power state notification is the right thing to do
after all
v4: per Paulo's review:
- drop LVDS
- WARN on unsupported encoder types
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In preparation for followup work.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
SWSCI is a driver-to-bios call interface.
This checks for SWSCI availability and bios requested callbacks, and
filters out any calls that shouldn't happen. This way the callers don't
need to do the checks all over the place.
v2: silence some checkpatch nagging
v3: set PCI_SWSCI bit 0 to trigger interrupt (Mengdong Lin)
v4: remove an extra #define (Jesse)
v5: spec says s/w is responsible for clearing PCI_SWSCI bit 0 too
v6: per Paulo's review and more:
- fix sub-function mask
- add exit parameter
- add define for set panel details call
- return more errors from swsci
- clean up the supported/requested callbacks bit masks mess
- use DSLP for timeout
- fix build for CONFIG_ACPI=n
v7: tiny adjustment of requested vs. supported SBCB callbacks handling (Paulo)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The comments were a little out-of-sequence with the code, forcing the
reader to jump around whilst reading. Whilst moving the comments around,
add one to explain the context reference.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We use the request to ensure we hold a reference to the context for the
duration that it remains in use by the ring. Each request only holds a
reference to the current context, hence we emit a request after
switching contexts with the final reference to the old context. However,
the extra interrupt caused by that request is not useful (no timing
critical function will wait for the context object), instead the overhead
of servicing the IRQ shows up in some (lightweight) benchmarks. In order
to keep the useful property of using the request to manage the context
lifetime, we want to add a dummy request that is associated with the
interrupt from the subsequent real request following the batch.
The extra interrupt was added as a side-effect of using
i915_add_request() in
commit 112522f678
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu May 2 16:48:07 2013 +0300
drm/i915: put context upon switching
v2: Daniel convinced me that the request here was solely for context
lifetime tracking and that we have the active ref to keep the object
alive whilst the MI_SET_CONTEXT. So the only concern then is which
context should get the blame for MI_SET_CONTEXT failing. The old scheme
added a request for the old context so that any hang up to and including
the switch away would mark the old context as guilty. Now any hang here
implicates the new context. However since we have already gone through a
complete flush with the last context in its last request, and all that
lies in no-man's-land is an invalidate flush and the MI_SET_CONTEXT, we
should be safe in not unduly placing blame on the new context.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The saga around the breadcrumb vmas used by execbuf continues ...
This time around we've managed to unconditionally move the object to
the unbound list on the last vma unbind even though it might never
have been on either the bound or unbound list. Hilarity ensued.
Chris Wilson tracked this one down but compared to his patches I've
simply opted to completely separate the unbound case for not-yet bound
vmas. Otherwise we imo end up with semantically hard to parse checks
around the list_move_tail(global_list, ...).
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68462
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We still maintain code internally that cares about preliminary support.
Leaving the check here doesn't hurt anyone, and should keep things more
in line.
This time around, stick the info in the intel_info structure, and also
change the error from DRM_ERROR->DRM_INFO.
This is a partial revert of:
commit 590e4df8c8
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Wed May 8 10:45:15 2013 -0700
drm/i915: VLV support is no longer preliminary
Daniel, I'll provide the fix ups for internal too if/when you merge
this (if you want).
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Batchbuffers constructed by userspace can conditionalise their URB
allocations through the use of the MI_SET_PREDICATE command. This
command can read the MI_PREDICATE_RESULT_2 register to see how many
slices are enabled on GT3, and by virtue of the result, scale their
memory allocations to fit enabled memory.
Of course, this only works if the kernel sets the appropriate bit in the
register first.
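A rough sketch of the kernel side; the register offset and bit value
below are assumptions written down for illustration only and should be
checked against i915_reg.h:

  /* illustrative sketch; offset/bit are assumptions */
  #define SKETCH_MI_PREDICATE_RESULT_2    0x2214
  #define SKETCH_LOWER_SLICE_ENABLED      (1 << 0)

  static void sketch_report_slices(struct drm_i915_private *dev_priv,
                                   bool has_second_slice)
  {
          /* let MI_SET_PREDICATE batches see how many slices are enabled */
          I915_WRITE(SKETCH_MI_PREDICATE_RESULT_2,
                     has_second_slice ? SKETCH_LOWER_SLICE_ENABLED : 0);
  }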
v2: Better commit subject and message by Chris Wilson.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Credits-to: Yejun Guo <yejun.guo@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This fixes a printf warn from gcc:
drivers/gpu/drm/i915/intel_dsi_cmd.c: In function ‘dsi_vc_send_long’:
drivers/gpu/drm/i915/intel_dsi_cmd.c:181:2: warning: format ‘%x’ expects argument of type ‘unsigned int’, but argument 7 has type ‘size_t’ [-Wformat=]
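For reference, the usual ways to silence this class of warning are %zu
or an explicit cast; a minimal standalone illustration (not the actual
dsi_vc_send_long fix):

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
          size_t len = 16;

          /* printf("len %x\n", len);  -- warns: %x expects unsigned int */
          printf("len %zu\n", len);               /* portable size_t format */
          printf("len %x\n", (unsigned int)len);  /* or an explicit cast */
          return 0;
  }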
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Initial parsing of the VBT MIPI block. For now, just store the panel id
if found.
Note: Again there seems to be no documentation for this piece of lore.
The doc situation for byt+ is just a bad joke :(
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Note: No one seems to have docs for this, so this patch here is just
unreviewed black magic :(
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Signed-off-by: ymohanma <yogesh.mohan.marimuthu@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
[danvet: Add note about the doc situation.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
DPLL is not needed for DSI
v2: Rebase due to added DSI PLL assertion patch.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For DSI, we need to be asserting DSI PLL, not DPLL.
This is a somewhat stopgap implementation. It's slightly ugly to have to
pass the dsi parameter to intel_enable_pipe().
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2:
- Grab dpio_lock mutex in vlv_enable_dsi_pll().
- Add and call vlv_disable_dsi_pll().
v3: Mostly based on Ville's review comments.
- Only pipe A has DSI PLL lock bit.
- Add more of CCK REG bit definitions for DSI PLL.
- Make tables static.
- Move clock gating out of the clock calculation functions.
- DSI PLL LDO power gating.
- Put alternative MNP from table calc behind #ifdef.
v4: s/CKK/CLK/ in the CCK REG bit definitions (Ville).
Signed-off-by: ymohanma <yogesh.mohan.marimuthu@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This does not include any panel specific sub-encoders yet.
v2: Fix fixed mode handling (Daniel)
v3: Mostly based on Ville's review comments.
- Fix MIPI_HS_TX_TIMEOUT.
- DPI_ENABLE only for video mode.
- Drop ULPS usage for now, use DEVICE_READY only.
- Set MIPI_INIT_COUNT based on txclkesc.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Rebase due to register bit definition change.
v3: Mostly based on Ville's review comments.
- Use size_t for length all around.
- Reuse dsi_vc_send_short in dsi_vc_send_long.
- Remove stale/incorrect comments.
- Reverse special packet sent interrupt check.
- Use DSI controller regs for reading, not adapter.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The sub-encoder model is copied from DVO.
v2: Add attached_connector to struct intel_dsi.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add definitions for VLV MIPI DSI registers.
v2: Small fixes per Ville's review comments.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Add comment this is pipe A only (Ville)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For GPIO NC, CCK, CCU, and GPS CORE.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A follow-on to the update of the LLC coherency logic is that we can rely
on the LLC being coherent with the CS for rewriting batchbuffers
irrespective of their cache domain. (This should have no effect
currently as all the batch buffers are expected to be I915_CACHE_LLC and
so use the cpu relocation path anyway.)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The important bugfix here is that we must not unlink the vma when
we keep it around as a placeholder for the execbuf code. Because then we
won't find it again when execbuf gets interrupted and restarted, and will
create a 2nd vma. And since the code as-is isn't fit yet to deal with
more than one vma, hilarity ensues.
Specifically the dma map/unmap of the sg table isn't adjusted for
multiple vmas yet and will blow up like this:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
IP: [<ffffffffa008fb37>] i915_gem_gtt_finish_object+0x73/0xc8 [i915]
PGD 56bb5067 PUD ad3dd067 PMD 0
Oops: 0000 [#1] SMP
Modules linked in: tcp_lp ppdev parport_pc lp parport ipv6 dm_mod dcdbas snd_hda_codec_hdmi pcspkr snd_hda_codec_realtek serio_raw i2c_i801 iTCO_wdt iTCO_vendor_support snd_hda_intel snd_hda_codec lpc_ich snd_hwdep mfd_core snd_pcm snd_page_alloc snd_timer snd soundcore acpi_cpufreq i915 video button drm_kms_helper drm mperf freq_table
CPU: 1 PID: 16650 Comm: fbo-maxsize Not tainted 3.11.0-rc4_nightlytop_d93f59_debug_20130814_+ #6957
Hardware name: Dell Inc. OptiPlex 9010/03JR84, BIOS A01 05/04/2012
task: ffff8800563b3f00 ti: ffff88004bdf4000 task.ti: ffff88004bdf4000
RIP: 0010:[<ffffffffa008fb37>] [<ffffffffa008fb37>] i915_gem_gtt_finish_object+0x73/0xc8 [i915]
RSP: 0018:ffff88004bdf5958 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8801135e0000 RCX: ffff8800ad3bf8e0
RDX: ffff8800ad3bf8e0 RSI: 0000000000000000 RDI: ffff8801007ee780
RBP: ffff88004bdf5978 R08: ffff8800ad3bf8e0 R09: 0000000000000000
R10: ffffffff86ca1810 R11: ffff880036a17101 R12: ffff8801007ee780
R13: 0000000000018001 R14: ffff880118c4e000 R15: ffff8801007ee780
FS: 00007f401a0ce740(0000) GS:ffff88011e280000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000008 CR3: 000000005635c000 CR4: 00000000001407e0
Stack:
ffff8801007ee780 ffff88005c253180 0000000000018000 ffff8801135e0000
ffff88004bdf59a8 ffffffffa0088e55 0000000000000011 ffff8801007eec00
0000000000018000 ffff880036a17101 ffff88004bdf5a08 ffffffffa0089026
Call Trace:
[<ffffffffa0088e55>] i915_vma_unbind+0xdf/0x1ab [i915]
[<ffffffffa0089026>] __i915_gem_shrink+0x105/0x177 [i915]
[<ffffffffa0089452>] i915_gem_object_get_pages_gtt+0x108/0x309 [i915]
[<ffffffffa0085ba9>] i915_gem_object_get_pages+0x61/0x90 [i915]
[<ffffffffa008f22b>] ? gen6_ppgtt_insert_entries+0x103/0x125 [i915]
[<ffffffffa008a113>] i915_gem_object_pin+0x1fa/0x5df [i915]
[<ffffffffa008cdfe>] i915_gem_execbuffer_reserve_object.isra.6+0x8d/0x1bc [i915]
[<ffffffffa008d156>] i915_gem_execbuffer_reserve+0x229/0x367 [i915]
[<ffffffffa008dbf6>] i915_gem_do_execbuffer.isra.12+0x4dc/0xf3a [i915]
[<ffffffff810fc823>] ? might_fault+0x40/0x90
[<ffffffffa008eb89>] i915_gem_execbuffer2+0x187/0x222 [i915]
[<ffffffffa000971c>] drm_ioctl+0x308/0x442 [drm]
[<ffffffffa008ea02>] ? i915_gem_execbuffer+0x3ae/0x3ae [i915]
[<ffffffff817db156>] ? __do_page_fault+0x3dd/0x481
[<ffffffff8112fdba>] vfs_ioctl+0x26/0x39
[<ffffffff811306a2>] do_vfs_ioctl+0x40e/0x451
[<ffffffff817deda7>] ? sysret_check+0x1b/0x56
[<ffffffff8113073c>] SyS_ioctl+0x57/0x87
[<ffffffff8135bbfe>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff817ded82>] system_call_fastpath+0x16/0x1b
Code: 48 c7 c6 84 30 0e a0 31 c0 e8 d0 e9 f7 ff bf c6 a7 00 00 e8 07 af 2c e1 41 f6 84 24 03 01 00 00 10 75 44 49 8b 84 24 08 01 00 00 <8b> 50 08 48 8b 30 49 8b 86 b0 04 00 00 48 89 c7 48 81 c7 98 00
RIP [<ffffffffa008fb37>] i915_gem_gtt_finish_object+0x73/0xc8 [i915]
RSP <ffff88004bdf5958>
CR2: 0000000000000008
As a consequence we need to change the "only one vma for now" check in
vma_unbind - since vma_destroy isn't always called the obj->vma_list
might not be empty. Instead check that the vma list is singular at the
beginning of vma_unbind. This is also more symmetric with bind_to_vm.
This fixes the igt/gem_evict_everything|alignment testcases.
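A rough sketch of the check; list_is_singular() is the stock list.h
helper, the surrounding structure names are illustrative:

  /* illustrative sketch of the check at the top of vma_unbind */
  #include <linux/list.h>

  struct sketch_obj {
          struct list_head vma_list;
          struct list_head global_list;
  };

  static void sketch_vma_unbind(struct sketch_obj *obj,
                                struct list_head *unbound_list)
  {
          /* only move the object to the unbound list when this really is
           * the last (and only) vma; vma_destroy isn't always called, so
           * obj->vma_list may otherwise still hold other entries */
          if (list_is_singular(&obj->vma_list))
                  list_move_tail(&obj->global_list, unbound_list);
  }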
v2:
- Add a paranoid WARN to mark_free in the eviction code to make sure
we never try to evict a vma used by the execbuf code right now.
- Move the check for a temporary execbuf vma into vma_destroy -
otherwise the failure path cleanup in bind_to_vm will blow up.
Our first attempt at fixing this was
commit 1be81a2f2cfd8789a627401d470423358fba2d76
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Tue Aug 20 12:56:40 2013 +0100
drm/i915: Don't destroy the vma placeholder during execbuffer reservation
Squash with this when merging!
v3: Improvements suggested in Chris' review:
- Move the WARN_ON in vma_destroy that checks for vmas with an drm_mm
allocation before the early return.
- Bail out if we hit the WARN in mark_free to hopefully make the
kernel survive for long enough to capture it.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68298
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68171
Tested-by: lu hua <huax.lu@intel.com> (v2)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The execbuffer handle and exec_link were moved from the object into the
vma. As the vma may be unbound and destroyed whilst attempting to
reserve the execbuffer objects (either through a forced unbind to fix up
a misalignment or through an evict-everything call) we need to prevent
the free of the i915_vma itself. Otherwise not only is the list of
objects to reserve corrupt, but we continue to reference stale vma
entries.
Fixes kernel crash with i-g-t/gem_evict_everything
This regression has been introduced in
commit 04038a515d6eda6dd0857c0ade0b3950d372f4c0
Author: Ben Widawsky <ben@bwidawsk.net>
AuthorDate: Wed Aug 14 11:38:36 2013 +0200
drm/i915: Convert execbuf code to use vmas
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
References: http://www.spinics.net/lists/intel-gfx/msg32038.html
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68298
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the execbuf code we don't clean up any vmas which ended up not
getting bound for code simplicity. To make sure that we don't end up
creating multiple vma for the same vm kill the somewhat dangerous
vma_create function and inline it into lookup_or_create.
This is just a safety measure to prevent surprises in the future.
Also update the somewhat confused comment in the execbuf code and
clarify what kind of magic is going on with a new one.
v2: Keep the function separate as requested by Chris. But give it a __
prefix for paranoia and move it tighter together with the other vma
stuff.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and do a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The dpll actually runs at the port clock so we don't need
to multiply it again with the pixel multiplier to get the
adjusted_mode.clock. This is in contrast to the ironlake
pixel clock readout code which uses the fdi dotclock: That
one does _not_ run with multiplied pixels.
This issue goes back to the original clock readout code added
in
commit f1f644dc66
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Thu Jun 27 00:39:25 2013 +0300
drm/i915: get mode clock when reading the pipe config v9
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The sdvo input timing needs to be the actual mode, the sdvo
encoder automatically adjusts for the need of pixel doubling or
quadrupling. This was lost in pipe config conversion of the
pixel multiplier in
commit 6cc5f341b5
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Wed Mar 27 00:44:53 2013 +0100
drm/i915: add pipe_config->pixel_multiplier
While at it ditch the intel_ prefix from the crtc in
intel_sdvo_mode_set.
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Historically we've run our own driver hotplug handling in our own
work-queue, which then launched the drm core hotplug handling in the
system workqueue. This is important since we flush our own driver
workqueue in the pageflip code while holding modeset locks, and only
the drm hotplug code grabbed these locks. But with
commit 69787f7da6
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Tue Oct 23 18:23:34 2012 +0000
drm: run the hpd irq event code directly
this was changed and now we could deadlock in our flip handler if
there's a hotplug work blocking the progress of the crucial unpin
works. So this broke the careful deadlock avoidance implemented in
commit b4a98e57fc
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Nov 1 09:26:26 2012 +0000
drm/i915: Flush outstanding unpin tasks before pageflipping
Since the rule thus far has been that work items on our own workqueue
may never grab modeset locks simply restore that rule again.
v2: Add a comment to the declaration of dev_priv->wq to warn readers
about the tricky implications of using it. Suggested by Chris Wilson.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Stuart Abercrombie <sabercrombie@chromium.org>
Reported-by: Stuart Abercrombie <sabercrombie@chromium.org>
References: http://permalink.gmane.org/gmane.comp.freedesktop.xorg.drivers.intel/26239
Cc: stable@vger.kernel.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Squash in a comment at the place where we schedule the work.
Requested after-the-fact by Chris on irc since the hpd work isn't the
only place we botch this.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Somehow we've lost the error handling in the patch split-up between
the internal and external patch. This regression has been introduced
in
commit 5032d871f7
Author: Rafael Barbalho <rafael.barbalho@intel.com>
Date: Wed Aug 21 17:10:51 2013 +0100
drm/i915: Cleaning up the relocate entry function
This bug is exercised by igt/gem_reloc_vs_gpu/interruptible.
Cc: Rafael Barbalho <rafael.barbalho@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
intel_fixed_panel_mode() overwrote the adjusted_mode with the fixed mode
only partially. Notably it forgot to copy over the sync flags. The LVDS
code however programmed the hardware with the sync flags from the fixed
mode, and then later the pipe config comparison obviously failed as we
filled out the adjusted_mode in get_config from the real registers.
Just call drm_mode_copy() in intel_fixed_panel_mode() to copy over the
whole thing, and then just use adjusted_mode in the LVDS code to figure
out which sync settings the hardware needs.
Also constify the fixed_mode argument to intel_fixed_panel_mode().
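A rough sketch of the change; drm_mode_copy() is the stock DRM helper,
the wrapper name is illustrative:

  /* illustrative sketch */
  #include <drm/drm_crtc.h>

  static void sketch_fixed_panel_mode(const struct drm_display_mode *fixed_mode,
                                      struct drm_display_mode *adjusted_mode)
  {
          /* copy the whole mode, sync flags included, instead of only a
           * handful of timing fields */
          drm_mode_copy(adjusted_mode, fixed_mode);
  }

The LVDS code can then read the sync polarity straight from
adjusted_mode->flags.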
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
One needs to call __sg_free_table() if __sg_alloc_table() fails, but
sg_alloc_table() does that for us already.
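In other words, the error path only needs to free the table structure
itself; a rough sketch:

  #include <linux/scatterlist.h>
  #include <linux/slab.h>

  static struct sg_table *sketch_alloc_st(unsigned int nents)
  {
          struct sg_table *st;

          st = kmalloc(sizeof(*st), GFP_KERNEL);
          if (!st)
                  return NULL;

          if (sg_alloc_table(st, nents, GFP_KERNEL)) {
                  /* sg_alloc_table() already unwound its partial
                   * allocation; no __sg_free_table() needed here */
                  kfree(st);
                  return NULL;
          }
          return st;
  }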
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is intended to add VGA arbiter support for Intel HD graphics on
Core processors. The old GMCH registers no longer exist, so even
though it appears that i915 participates in VGA arbitration, it doesn't
work. On Intel HD graphics we already attempt to disable VGA regions
of the device. This makes registering as a VGA client unnecessary since
we don't intend to operate differently depending on how many VGA devices
are present. We can disable VGA memory regions by clearing the memory
enable bit in the VGA MSR. That only leaves VGA IO, so we update
the VGA arbiter to indicate that we don't participate in VGA memory
arbitration. We also add a hook on unload to re-enable memory and
reinstate VGA memory arbitration.
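A rough sketch of the two pieces; the wrapper and the SKETCH_* defines
are assumptions for illustration (the real defines live in the driver
headers), while vga_set_legacy_decoding() and VGA_RSRC_LEGACY_IO come
from <linux/vgaarb.h>:

  /* illustrative sketch */
  #include <linux/io.h>
  #include <linux/pci.h>
  #include <linux/vgaarb.h>

  #define SKETCH_VGA_MSR_READ     0x3cc   /* assumed legacy VGA MSR ports */
  #define SKETCH_VGA_MSR_WRITE    0x3c2
  #define SKETCH_VGA_MSR_MEM_EN   (1 << 1)

  static void sketch_disable_vga_mem(struct pci_dev *pdev)
  {
          u8 msr = inb(SKETCH_VGA_MSR_READ);

          /* clear the memory enable bit so the device stops decoding the
           * legacy VGA memory range */
          outb(msr & ~SKETCH_VGA_MSR_MEM_EN, SKETCH_VGA_MSR_WRITE);

          /* tell the arbiter we now only decode legacy VGA IO */
          vga_set_legacy_decoding(pdev, VGA_RSRC_LEGACY_IO);
  }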
v3: Use explicit LEGACY_IO | LEGACY_MEM when restoring rather than
LEGACY_MASK, per Ville's comments.
v2: I915_READ/WRITE accessors don't work in i915_disable_vga, use inb/outb
directly. Also, on the driver unbind VGA enable path, acquire legacy
IO to re-enable VGA memory. Correct comment.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Add patch changelog. Also squash in a fixup to have a dummy
static inline for vga_set_legacy_decoding for CONFIG_VGA_ARB=n as
reported by the 0-day kernel build bot.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we attempt to kmalloc after calling get_pages, there is a possibility
that the shrinker may reap the pages we just acquired. To prevent this
we need to increment the pages_pin_count early, so rearrange the code
and error paths to make it so.
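A rough sketch of the reordering; all names are illustrative:

  /* illustrative sketch only */
  #include <linux/slab.h>

  struct sketch_priv { int dummy; };

  struct sketch_gem_obj {
          int pages_pin_count;
          struct sketch_priv *priv;
  };

  static int sketch_get_pages(struct sketch_gem_obj *obj) { return 0; } /* stub */

  static int sketch_pin_then_alloc(struct sketch_gem_obj *obj)
  {
          int ret = sketch_get_pages(obj);
          if (ret)
                  return ret;

          /* pin first: the kmalloc below may enter the shrinker, which
           * must not be allowed to reap the pages we just acquired */
          obj->pages_pin_count++;

          obj->priv = kmalloc(sizeof(*obj->priv), GFP_KERNEL);
          if (!obj->priv) {
                  obj->pages_pin_count--;
                  return -ENOMEM;
          }
          return 0;
  }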
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We shouldn't disable the trickle feed bits on Haswell. Our
documentation explicitly says the trickle feed bits of PRI_CTL and
CUR_CTL should not be programmed to 1, and the hardware engineer also
asked us to not program the SPR_CTL field to 1. Leaving the bits as 1
could cause underflows.
Reported-by: Arthur Runyan <arthur.j.runyan@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Systems with Intel graphics controllers set aside memory exclusively for
gfx driver use. This memory is not always marked in the E820 as
reserved or as RAM, and so is subject to overlap from E820 manipulation
later in the boot process. On some systems, MMIO space is allocated on
top, despite the efforts of the "RAM buffer" approach, which simply
rounds memory boundaries up to 64M to try to catch space that may decode
as RAM and so is not suitable for MMIO.
v2: use read_pci_config for 32 bit reads instead of adding a new one
(Chris)
add gen6 stolen size function (Chris)
v3: use a function pointer (Chris)
drop gen2 bits (Daniel)
v4: call e820_sanitize_map after adding the region
v5: fixup comments (Peter)
simplify loop (Chris)
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66726
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66844
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For use by userspace (at some point in the future) and other kernel code.
v2: move PCI IDs to uabi (Chris)
move PCI IDs to drm/ (Dave)
v3: fixup Quanta detection - needs to come first (Daniel)
v4: fix up PCI match structure init for easier use by userspace (Chris)
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The helper exists, might as well use it instead of __GFP_ZERO.
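Presumably the helper meant is kzalloc() (or kcalloc() for arrays); a
one-function illustration of the substitution:

  #include <linux/slab.h>

  struct sketch_thing { int x; };

  static struct sketch_thing *sketch_alloc(void)
  {
          /* instead of kmalloc(sizeof(struct sketch_thing),
           *                    GFP_KERNEL | __GFP_ZERO) */
          return kzalloc(sizeof(struct sketch_thing), GFP_KERNEL);
  }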
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
RCS flips do work on Ivybridge+ so long as we can unmask the messages
through DERRMR. However, there are quite a few workarounds mentioned
regarding unmasking more than one event or triggering more than one
message through DERRMR. Those workarounds in principle prevent us from
performing pipelined flips (and asynchronous flips across multiple
planes) and equally apply to the "known good" BCS ring. Given that it
already appears to work, and also appears to work with unmasking all 3
planes at once (and queuing flips across multiple planes), be brave.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67600
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Lightly-tested-by: Stephane Marchesin <marchesin@icps.u-strasbg.fr>
Cc: Stephane Marchesin <marchesin@icps.u-strasbg.fr>
Cc: Ben Widawsky <ben@bwidawsk.net>
Tested-by: Stéphane Marchesin <marcheu@chromium.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We now have more devices using ring->private than not, and they all want
the same structure. Worse, I would like to use a scratch page from
outside of intel_ringbuffer.c and so for convenience would like to reuse
ring->private. Embed the object into the struct intel_ringbuffer so that
we can keep the code clean.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we need to enable the panel fitter, the crtc timings have to be
programmed according to the panel's native (fixed) mode. This isn't the
case atm, since after the encoder changes adjusted_mode to fixed
mode the crtc_* timing fields of adjusted_mode will stay at their original
non-native values that the user passed in. This results in a corrupted
output.
One exception is when we have a second pass of computing encoder configs
due to bandwidth limitation, since then we'll set adjusted_mode.crtc_*
fields to the fixed mode values set in the first pass; so in this case
things will work out.
Fix this by updating the adjusted_mode.crtc_* fields when we set the
fixed panel mode.
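A rough sketch of the fix being described; drm_mode_set_crtcinfo() is
the stock helper that refills the crtc_* fields, the wrapper name is
illustrative:

  /* illustrative sketch */
  #include <drm/drm_crtc.h>

  static void sketch_apply_fixed_mode(const struct drm_display_mode *fixed_mode,
                                      struct drm_display_mode *adjusted_mode)
  {
          drm_mode_copy(adjusted_mode, fixed_mode);
          /* refresh the crtc_* timing fields so the hardware gets
           * programmed with the panel's native timings */
          drm_mode_set_crtcinfo(adjusted_mode, 0);
  }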
This regression has been introduced in
commit 135c81b8c3
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Sun Jul 21 21:37:09 2013 +0200
drm/i915: clean up crtc timings computation
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We already have a big splashing *ERROR* for all the relevant cases of
hangs, so this one here is redundant. And it results in an unclean
dmesg when running with simulated hangs. Regression has been
introduced in
commit 05407ff889
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Thu May 30 09:04:29 2013 +0300
drm/i915: detect hang using per ring hangcheck_score
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68641
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It can be useful to compare at times the current vs requested frequency
of the GPU, so provide the contents of RPNSWREQ alongside CAGF.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It appears that Valleyview shares its VGA encoder with more recent
siblings and requires the same forced detection cycle after a hardware
reset before we can rely on hotplugging.
Reported-and-tested-by: kobeqin <kobe.qin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67733
Tested-by: kobeqin <kobe.qin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Check for gen >= 5 insted, acked by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Valleyview has its own render power state implementation with different
capability knobs - it has no RP0,RP1,RPn but rather RPe.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67734
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: kobe.qin@intel.com
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In reset we try to restore the forcewake state to the
pre-reset state, using forcewake_count. The reset
doesn't seem to clear the forcewake bits so we
get a warn on the forcewake ack register not clearing.
Use the same mechanism as intel_uncore_sanitize() does
when loading the driver to reset the forcewake bits, right
after the chip has been reset.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Submitting a batchbuffer which simulates a gpu
hang by doing MI_BATCH_BUFFER_START into itself,
to test hangcheck, started to hard hang the whole box
(IVB). Bisecting led to this commit:
commit 664b422c2966cd39b8f67e8d53a566ea8c877cd6
Author: Vinit Azad <vinit.azad@intel.com>
Date: Wed Aug 14 13:34:33 2013 -0700
drm/i915: Only unmask required PM interrupts
Experimenting with the mask register showed that
unmasking EI UP will prevent the hard hang in IVB and SNB.
HSW doesn't hang with EI UP masked.
Considering we are just disabling interrupts that aren't even
delivered to the driver, this change is more likely to paper over some
weirdness in the gpu's internal state machine. But until a better
explanation can be found, let's trade a little bit of power
for stability on these architectures.
v2: - Unmask EI_EXPIRED directly in I915_WRITE (Vinit)
v3: - Only unmask on SNB and IVB
Cc: Vinit Azad <vinit.azad@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Acked-by: Vinit Azad <vinit.azad@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Enable support for drm render nodes for i915 by flagging the ioctls that
are safe and just needed for rendering.
v2: mark reg_read, set_caching and get_caching (ickle, danvet)
Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Alex writes:
This is the radeon drm-next request. Big changes include:
- support for dpm on CIK parts
- support for ASPM on CIK parts
- support for berlin GPUs
- major ring handling cleanup
- remove the old 3D blit code for bo moves in favor of CP DMA or sDMA
- lots of bug fixes
[airlied: fix up a bunch of conflicts from drm_order removal]
* 'drm-next-3.12' of git://people.freedesktop.org/~agd5f/linux: (898 commits)
drm/radeon/dpm: make sure dc performance level limits are valid (CI)
drm/radeon/dpm: make sure dc performance level limits are valid (BTC-SI) (v2)
drm/radeon: gcc fixes for extended dpm tables
drm/radeon: gcc fixes for kb/kv dpm
drm/radeon: gcc fixes for ci dpm
drm/radeon: gcc fixes for si dpm
drm/radeon: gcc fixes for ni dpm
drm/radeon: gcc fixes for trinity dpm
drm/radeon: gcc fixes for sumo dpm
drm/radeonn: gcc fixes for rv7xx/eg/btc dpm
drm/radeon: gcc fixes for rv6xx dpm
drm/radeon: gcc fixes for radeon_atombios.c
drm/radeon: enable UVD interrupts on CIK
drm/radeon: fix init ordering for r600+
drm/radeon/dpm: only need to reprogram uvd if uvd pg is enabled
drm/radeon: check the return value of uvd_v1_0_start in uvd_v1_0_init
drm/radeon: split out radeon_uvd_resume from uvd_v4_2_resume
radeon kms: fix uninitialised hotplug work usage in r100_irq_process()
drm/radeon/audio: set up the sads on DCE3.2 asics
drm/radeon: fix handling of variable sized arrays for router objects
...
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_gem_dmabuf.c
drivers/gpu/drm/i915/intel_pm.c
drivers/gpu/drm/radeon/cik.c
drivers/gpu/drm/radeon/ni.c
drivers/gpu/drm/radeon/r600.c
Need to get my stuff out the door ;-) Highlights:
- pc8+ support from Paulo
- more vma patches from Ben.
- Kconfig option to enable preliminary support by default (Josh
Triplett)
- Optimized cpu cache flush handling and support for write-through caching
of display planes on Iris (Chris)
- rc6 tuning from Stéphane Marchesin for more stability
- VECS seqno wrap/semaphores fix (Ben)
- a pile of smaller cleanups and improvements all over
Note that I've ditched Ben's execbuf vma conversion for 3.12 since not yet
ready. But there's still other vma conversion stuff in here.
* tag 'drm-intel-next-2013-08-23' of git://people.freedesktop.org/~danvet/drm-intel: (62 commits)
drm/i915: Print seqnos as unsigned in debugfs
drm/i915: Fix context size calculation on SNB/IVB/VLV
drm/i915: Use POSTING_READ in lcpll code
drm/i915: enable Package C8+ by default
drm/i915: add i915.pc8_timeout function
drm/i915: add i915_pc8_status debugfs file
drm/i915: allow package C8+ states on Haswell (disabled)
drm/i915: fix SDEIMR assertion when disabling LCPLL
drm/i915: grab force_wake when restoring LCPLL
drm/i915: drop WaMbcDriverBootEnable workaround
drm/i915: Cleaning up the relocate entry function
drm/i915: merge HSW and SNB PM irq handlers
drm/i915: fix how we mask PMIMR when adding work to the queue
drm/i915: don't queue PM events we won't process
drm/i915: don't disable/reenable IVB error interrupts when not needed
drm/i915: add dev_priv->pm_irq_mask
drm/i915: don't update GEN6_PMIMR when it's not needed
drm/i915: wrap GEN6_PMIMR changes
drm/i915: wrap GTIMR changes
drm/i915: add the FCLK case to intel_ddi_get_cdclk_freq
...
This lets drivers see the flags requested by the application
[airlied: fixup for rcar/imx/msm]
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
With all the common infoframe bits now in place, we can finally write
the vendor specific infoframes in our driver.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Fix the typo introduced in
commit 1a2eb4604b
Author: Keith Packard <keithp@keithp.com>
Date: Wed Nov 16 16:26:07 2011 -0800
drm/i915: Hook up Ivybridge eDP
This fixes eDP link-training failures and cases where all voltage swing
/pre-emphasis levels were tried and failed during clock recovery and -
as a fallback - we go on to do channel equalization with the last voltage
swing/pre-emphasis level which will succeed. Both issues can lead to a
blank screen.
v2:
- improve commit message
CC: stable@vger.kernel.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=64880
Tested-by: Jeremy Moles <cubicool@gmail.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For optimus and powerxpress muxless we really want the GPU
driver deciding when to power up/down the GPU, not userspace.
This adds the ability for a driver to dynamically power up/down
the GPU and remove the switcheroo from controlling it, the
switcheroo reports the dynamic state to userspace also.
It also adds 2 power domains, one for machines where the power
switch is controlled outside the GPU D3 state, so the powerdown
ordering is done correctly, and the second for the hdmi audio
device to make sure it can resume for PCI config space accesses.
v1.1: fix build with switcheroo off
v2: add power domain support for radeon and v1 nvidia dsms
v2.1: fix typo in off case
v3: add audio power domain for hdmi audio + misc audio fixes
v4: use PCI_SLOT macro, drop power reference on hdmi audio resume
failure also.
Signed-off-by: Dave Airlie <airlied@redhat.com>
I don't like seeing signed seqnos. Make them unsigned.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All the different context sizes reported in the CXT_SIZE register
aren't meant to be simply added together.
While BSpec is somewhat unclear on the topic of the actual context
size, empirical tests have now revealed the truth. So let's add a
big fat comment to remind people how it all works.
As a result of correctly interpreting CXT_SIZE, the IVB context
size is reduced from three pages to two, while SNB context size
remains at two pages.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we don't use the return value of a mmio read our coding style is to
use the POSTING_READ macro. This avoids cluttering the mmio traces.
While at it add the missing posting read in the lcpll enable function
that Paulo spotted.
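For illustration, the pattern looks roughly like this (LCPLL_CTL used
as an example register, the wrapper is illustrative):

  static void sketch_lcpll_write(struct drm_i915_private *dev_priv, u32 val)
  {
          I915_WRITE(LCPLL_CTL, val);
          /* flush the write; the read value is deliberately unused and
           * POSTING_READ keeps it out of the mmio traces */
          POSTING_READ(LCPLL_CTL);
  }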
v2: Drop the _NOTRACE changes, tracing such wait_for loops in the modeset
code might actually be rather useful!
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This should be working, so enable it by default. Also easy to revert.
v2: Rebase, s/allow/enable/.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We currently only enter PC8+ after all its required conditions are
met, there's no rendering, and we stay like that for at least 5
seconds.
I chose "5 seconds" because this value is conservative and won't make
us enter/leave PC8+ thousands of times after the screen is off: some
desktop environments have applications that wake up and do rendering
every 1-3 seconds, even when the screen is off and the machine is
completely idle.
But when I was testing my PC8+ patches I set the default value to
100ms so I could use the bad-behaving desktop environments to
stress-test my patches. I also thought it would be a good idea to ask
our power management team to test different values, but I'm pretty
sure they would ask me for an easy way to change the timeout. So to
help these 2 cases I decided to create an option that would make it
easier to change the default value. I also expect people making
specific products that use our driver could try to find the perfect
timeout for them.
Anyway, fixing the bad-behaving applications will always lead to
better power savings than just changing the timeout value: you need to
stop waking the Kernel, not quickly put it back to sleep again after
you wake it for nothing. Bad sleep leads to bad mood!
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make it print the value of the variables on the PC8 struct.
v2: Update to recent renames and add the new fields.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch allows PC8+ states on Haswell. These states can only be
reached when all the display outputs are disabled, and they allow some
more power savings.
The fact that the graphics device is allowing PC8+ doesn't mean that
the machine will actually enter PC8+: all the other devices also need
to allow PC8+.
For now this option is disabled by default. You need i915.allow_pc8=1
if you want it.
This patch adds a big comment inside i915_drv.h explaining how it
works and how it tracks things. Read it.
v2: (this is not really v2, many previous versions were already sent,
but they had different names)
- Use the new functions to enable/disable GTIMR and GEN6_PMIMR
- Rename almost all variables and functions to names suggested by
Chris
- More WARNs on the IRQ handling code
- Also disable PC8 when there's GPU work to do (thanks to Ben for
the help on this), so apps can run faster
- Enable PC8 on a delayed work function that is delayed for 5
seconds. This makes sure we only enable PC8+ if we're really
idle
- Make sure we're not in PC8+ when suspending
v3: - WARN if IRQs are disabled on __wait_seqno
- Replace some DRM_ERRORs with WARNs
- Fix calls to restore GT and PM interrupts
- Use intel_mark_busy instead of intel_ring_advance to disable PC8
v4: - Use the force_wake, Luke!
v5: - Remove the "IIR is not zero" WARNs
- Move the force_wake chunk to its own patch
- Only restore what's missing from RC6, not everything
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This was causing WARNs in one machine, so instead of trying to guess
exactly which hotplug bits should exist, just do the test on the
non-HPD bits. We don't care about the state of the hotplug bits, we
just care about the others, that need to be 1.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If LCPLL is disabled, there's a chance we might be in package C8 state
or deeper, and we'll get a hard hang when restoring LCPLL (also, a red
led lights up on my motherboard). So grab the force_wake, which will
get us out of RC6 and, as a consequence, out of PC8+ (since we need
RC6 to get into PC8+).
Note: Discussions with hw designers are still ongoing what exactly
goes boom here. But I think we can go ahead and just merge this little
hack for now until it's clear what we actually need.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Add small note about the current state of the discussion
around this hack.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Turns out the BIOS will do this for us as needed, and if we try to do it
again we risk hangs or other bad behavior.
Note that this seems to break libva on ChromeOS after resumes (but
strangely _not_ after booting up).
This essentially reverts
commit b4ae3f22d2
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Thu Jun 14 11:04:48 2012 -0700
drm/i915: load boot context at driver init time
and
commit b3bf076697
Author: Paulo Zanoni <paulo.r.zanoni@intel.com>
Date: Tue Nov 20 13:27:44 2012 -0200
drm/i915: implement WaMbcDriverBootEnable on Haswell
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reported-and-Tested-by: Stéphane Marchesin <marcheu@chromium.org>
[danvet: Add note about impact and regression citation.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As the relocate entry function was getting a bit too big I've moved
the code that used to use either the cpu or the gtt for the
relocation into two separate functions.
Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Because hsw_pm_irq_handler does exactly what gen6_rps_irq_handler does
and also processes the 2 additional VEBOX bits. So merge those
functions and wrap the VEBOX bits on a HAS_VEBOX check. This
check isn't really necessary since the bits are reserved on
SNB/IVB/VLV, but it's a good documentation on who uses them.
v2: - Change IS_HASWELL check to HAS_VEBOX
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It seems we've been doing this ever since we started processing the
RPS events on a work queue, on commit "drm/i915: move gen6 rps
handling to workqueue", 4912d04193.
The problem is: when we add work to the queue, instead of just masking
the bits we queued and leaving all the others on their current state,
we mask the bits we queued and unmask all the others. This basically
means we'll be unmasking a bunch of interrupts we're not going to
process. And if you look at gen6_pm_rps_work, we unmask back only
GEN6_PM_RPS_EVENTS, which means the bits we unmasked when adding work
to the queue will remain unmasked after we process the queue.
Notice that even though we unmask those unrelated interrupts, we never
enable them on IER, so they don't fire our interrupt handler, they
just stay there on IIR waiting to be cleared when something else
triggers the interrupt handler.
So this patch does what seems to make more sense: mask only the bits
we add to the queue, without unmasking anything else, and so we'll
unmask them after we process the queue.
As a side effect we also have to remove that WARN, because it is not
only making sure we don't mask useful interrupts, it is also making
sure we do unmask useless interrupts! That piece of code should not be
responsible for knowing which bits should be unmasked, so just don't
assert anything, and trust that snb_disable_pm_irq should be doing the
right thing.
With i915.enable_pc8=1 I was getting occasional "GEN6_PMIIR is not 0"
error messages due to the fact that we unmask those unrelated
interrupts but don't enable them.
Note: if bugs start bisecting to this patch, then it probably means
someone was relying on the fact that we unmask everything by accident,
then we should fix gen5_gt_irq_postinstall or whoever needs the
accidentally unmasked interrupts. Or maybe I was just wrong and we
need to revert this patch :)
Note: This started to be a more real issue with the addition of the
VEBOX support since now we do enable more than just the minimal set of
RPS interrupts in the IER register. Which means after the first rps
interrupt has happened we will never mask the VEBOX user interrupts
again and so will blow through cpu time needlessly when running video
workloads.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Add note that this started to matter with VEBOX much more.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On SNB/IVB/VLV we only call gen6_rps_irq_handler if one of the IIR
bits set is part of GEN6_PM_RPS_EVENTS, but at gen6_rps_irq_handler we
add all the enabled IIR bits to the work queue, not only the ones that
are part of GEN6_PM_RPS_EVENTS. But then gen6_pm_rps_work only
processes GEN6_PM_RPS_EVENTS, so it's useless to add anything that's
not GEN6_PM_RPS_EVENTS to the work queue.
As a bonus, gen6_rps_irq_handler looks more similar to
hsw_pm_irq_handler, so we may be able to merge them in the future.
v2: - Add a WARN in case we queued something we're not going to
process.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If the error interrupts are already disabled, don't disable and
reenable them. This is going to be needed when we're in PC8+, where
all the interrupts are disabled so we won't risk re-enabling
DE_ERR_INT_IVB.
v2: Use dev_priv->irq_mask (Chris)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just like irq_mask and gt_irq_mask, use it to track the status of
GEN6_PMIMR so we don't need to read it again every time we call
snb_update_pm_irq.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I did some brief tests and the "new_val = pmimr" condition usually
happens a few times after exiting games.
Note: This is also prep work to track the GEN6_PMIMR register state in
dev_priv->pm_imr. This happens in the next patch.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
[danvet: Add note to explain why we want this, as per the discussion
between Chris and Paulo.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just like we're doing with the other IMR changes.
One of the functional changes is that not every caller was doing the
POSTING_READ.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>