When PPGTT support was originally enabled, it was only designed to
support 1 PPGTT. It therefore made sense to simply hide the GGTT space
required to enable this from the drm_mm allocator.
Since we intend to support full PPGTT, which means more than one, and
since PPGTTs can be created and destroyed ad hoc, we need to use the
proper allocation techniques we already have.
The first step here is to make the existing single PPGTT use the
allocator.
The astute observer will notice that we are reserving space in the GGTT
for the PDEs for the lifetime of the address space, and would be right
to question whether or not this is a good idea. It does not make a
difference with the current patch, since there is only the aliasing
PPGTT (indeed the PDEs should still be hidden from the shrinker). For
the future, we are allocating from top to bottom to avoid using the
precious "GTT space". The GGTT space at that point should only be used
for scanout, HW contexts, ringbuffers, HWSP, PDEs, and a couple of
other small buffers (potentially) used by the kernel. Everything else
should be mapped into a PPGTT. To put the consumption in more tangible
terms, it takes approximately 4 sets of PDEs to equal one 19x10
(1920x1080) framebuffer (with no fancy stride or alignment
constraints). 3/4 of the total [average] GGTT can be used for PDEs, and
hopefully never touch the 1/4 that the framebuffer needs.
The astute and persistent observer might ask about the page tables,
which are also pinned for the address space. This waste is unfortunate:
we use 2MB of memory per address space. We leave wrapping the PDEs as a
real GEM object as a TODO.
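To make the arithmetic above concrete, here is a minimal, hedged sketch
of reserving the PDE space through the drm_mm allocator with a top-down
preference. The helper name is made up and the drm_mm interface shown
is today's, not the 3.13-era drm_mm_insert_node_in_range_generic()
this series actually used, so treat it as illustrative only. Assuming
the PDE reservation works out to roughly 2MB of GGTT address space per
PPGTT, four of them add up to about 8MB, which is what makes them
comparable to one 32bpp 1920x1080 framebuffer.
    #include <drm/drm_mm.h>

    /*
     * Illustrative only: reserve GGTT address space for the PDEs via
     * drm_mm instead of hiding it from the allocator.  Preferring the
     * top of the GGTT keeps the bottom free for scanout, HW contexts,
     * ringbuffers, HWSP and friends.
     */
    static int reserve_ppgtt_pd_space(struct drm_mm *ggtt_mm, u64 ggtt_size,
                                      struct drm_mm_node *node,
                                      u64 pd_size, u64 pd_align)
    {
            return drm_mm_insert_node_in_range(ggtt_mm, node,
                                               pd_size, pd_align, 0,
                                               0, ggtt_size,
                                               DRM_MM_INSERT_HIGH);
    }
Tearing the address space down then reduces to drm_mm_remove_node() on
the same node, plus freeing it if it was allocated dynamically (per the
v2 note below).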
v2: Align PDEs to 64b in GTT
Allocate the node dynamically so we can use drm_mm_put_block
Now tested on IGT
Allocate node at the top to avoid fragmentation (Chris)
v3: Use Chris' top down allocator
v4: Embed drm_mm_node into ppgtt struct (Jesse)
Remove hunks which didn't belong (Jesse)
v5: Don't subtract guard page since we now killed the guard page prior
to this patch. (Ben)
v6: Rebased and removed guard page stuff.
Added a chunk to the commit message
Allow adding a context to mappable region
v7: Undo v3, so we can make the drm patch last in the series
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v4)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
squash: drm/i915: allow PPGTT to use mappable
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The plan is to make every file descriptor have a default context. To
accommodate this, generalize our default context setup function so it
can be used at file open time.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We **need** to do this for exactly one reason: we want to embed a
PPGTT into the context, but we don't want to special case the default
context.
To achieve that, we must be able to initialize contexts after the GTT
is set up (so we can allocate and pin the default context's BO), but
before the PPGTT and rings are initialized. This is because, currently,
context initialization requires ring usage, and we don't have rings
until after the GTT is set up. If we split out the enabling part of
context initialization, the part requiring the ringbuffer, we can
untangle this, and then later embed the PPGTT.
Incidentally this allows us to also adhere to the original design of
context init/fini in future patches: they were only ever meant to be
called at driver load and unload.
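For orientation, here is a hedged sketch of the resulting driver-load
ordering. The function names match the ones used around this series,
but the signatures and error handling are simplified and should not be
read as the exact code.
    /* Simplified, illustrative load ordering after the init/enable split. */
    int i915_gem_init(struct drm_device *dev)
    {
            int ret;

            ret = i915_gem_setup_global_gtt(dev);   /* GGTT is up... */
            if (ret)
                    return ret;

            ret = i915_gem_context_init(dev);       /* ...so the default
                                                     * context BO can be
                                                     * allocated and pinned;
                                                     * no rings needed yet. */
            if (ret)
                    return ret;

            ret = i915_gem_init_hw(dev);            /* PPGTT + rings */
            if (ret)
                    return ret;

            /* The only part that needs a live ringbuffer: switch every
             * ring to its default context. */
            return i915_gem_context_enable(dev);
    }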
v2: Move hw_contexts_disabled test in i915_gem_context_enable() (Chris)
v3: BUG_ON after checking for disabled contexts. Or else it blows up pre
gen6 (Ben)
v4: Forward port
Modified enable for each ring, since that patch is earlier in the series
Dropped ring arg from create_default_context so it can be used by others
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch adds two changes for contexts on reset:
Sets last context to default - this will prevent the context switch
happening after a reset. That switch is not possible because the
rings are hung during reset and a context switch requires a working
ring. This behavior will need to be reworked in the future, but this
is what we want for now.
In the future, we'll also want to reset the guilty context to
uninitialized. We should wait for ARB_Robustness related code to land
for that.
This is somewhat for paranoia. Because we really don't know what the
GPU was doing when it hung, or the state it was in (mid context write,
for example), later restoring the context is a bad idea. By setting the
flag to not initialized, the next load of that context will not restore
the state, and thus the subsequent switch away from the context will
overwrite the old data.
NOTE: This code needs a fixup when we actually have multiple VMs. The
issue that can occur is that inactive objects in a VM will need to be
destroyed before the last context unref. This can now happen via the
fake switch introduced in this patch (and in other ways in the future).
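As a rough illustration of the fake switch (ring and context field
names are as used around this series, but take this as a sketch rather
than the exact diff):
    /*
     * Sketch: after a reset, point every ring's last_context at the
     * default context so no restore of stale state is attempted on the
     * hung rings.
     */
    static void i915_gem_context_reset(struct drm_device *dev)
    {
            struct drm_i915_private *dev_priv = dev->dev_private;
            struct intel_ring_buffer *ring;
            int i;

            for_each_ring(ring, dev_priv, i) {
                    struct i915_hw_context *dctx = ring->default_context;

                    if (ring->last_context == dctx)
                            continue;

                    if (ring->last_context)
                            i915_gem_context_unreference(ring->last_context);

                    i915_gem_context_reference(dctx);
                    ring->last_context = dctx;
            }
    }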
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Previously we dropped the association of a context to a ring. It is
however very important to know which ring a context ran on (we could
have reused the other member, but I was nitpicky).
This is very important when we switch address spaces, which, unlike
context objects, do change per ring.
As an example, if we have:
    RCS   BCS
    ctx A
    ctx A
    ctx B
          ctx B
Without tracking the last ring B ran on, we wouldn't know to switch the
address space on BCS in the last row.
As a result, we no longer need to track which ring a context "belongs"
to, as it never really made much sense anyway.
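In code terms the check in the context switch path becomes something
along these lines (illustrative; field names as used in this series):
    /*
     * Sketch: in do_switch(), a ring may only skip the switch if the
     * incoming context both is the current one *and* last ran on this
     * very ring -- otherwise its address space still has to be loaded.
     */
    if (from == to && to->last_ring == ring)
            return 0;

    /* ... emit the PD load / MI_SET_CONTEXT as usual ... */

    to->last_ring = ring;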
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we want to use contexts in more abstract terms (specifically with
PPGTT in mind), we need to allow them to be specified for any ring.
Since the upcoming patches will bring about the use of multiple address
spaces, and each ring needs to have an address space programmed (which
we intend to do at context switch time), we can no longer only use RCS.
With multiple rings having a last context, we must now unreference these
contexts.
NOTE: This commit requires an update to intel-gpu-tools to make it not
fail.
v2: Rebased with some logical conflicts.
Squashed in the context fini refcount patch
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the introduction of contexts per fd in the future, one can easily
envision more contexts being used. While we do not have an easy remedy
to reduce the space requirements of the contexts, we can make things
slightly better by using less stringent alignments on later hardware.
Ville: Since I can almost predict you'll point this out. I can no longer
find the docs which specify the 64k requirement on certain gen6 SKUs. If
you'd like to change that too, be my guest.
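A hedged sketch of the per-generation alignment helper this boils down
to (the 64k value is the one discussed above; the helper name is
illustrative):
    static u32 get_context_alignment(struct drm_device *dev)
    {
            /* Certain gen6 SKUs reportedly need 64k alignment; later
             * hardware is happy with plain page alignment. */
            if (IS_GEN6(dev))
                    return 64 * 1024;

            return 4096;
    }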
CC: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We'll be doing a bit more stuff with each file, so having our own open
function should make things cleaner.
This also allows us to easily add conditionals for stuff we don't want
to do when we don't have HW contexts.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The only place we were using it was for GEN6, which won't have PPGTT
support anyway (i.e. the VM is always the same). Removing it clears
things up; it only added confusion for me, since it doesn't allow us to
assert that vma->vm is what we always want when just looking at the
code.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To sum up what goes on here: we abstract the VMA binding, similarly to
the previous object binding. This helps distinguish legacy binding from
modern binding. To keep the code churn as minimal as possible, I am
leaving in insert_entries(); it basically serves as the per-platform
PTE writer. bind_vma and insert_entries do share a
lot of similarities, and I did have designs to combine the two, but as
mentioned already... too much churn in an already massive patchset.
What follows are the 3 commits which existed discretely in the original
submissions. Upon rebasing on Broadwell support, it became clear that
separation was not good, and only made for more error prone code. Below
are the 3 commit messages with all their history.
drm/i915: Add bind/unbind object functions to VMA
drm/i915: Use the new vm [un]bind functions
drm/i915: reduce vm->insert_entries() usage
drm/i915: Add bind/unbind object functions to VMA
As we plumb the code with more VM information, it has become more
obvious that the easiest way to deal with bind and unbind is to simply
put the function pointers in the vm, and let those choose the correct
way to handle the page table updates. This change allows many places in
the code to simply be vm->bind, and not have to worry about
distinguishing PPGTT vs GGTT.
Notice that this patch has no impact on functionality. I've decided to
save the actual change until the next patch because I think it's easier
to review that way. I'm happy to squash the two, or let Daniel do it on
merge.
v2:
Make ggtt handle the quirky aliasing ppgtt
Add flags to bind object to support above
Don't ever call bind/unbind directly for PPGTT until we have real, full
PPGTT (use NULLs to assert this)
Make sure we rebind the ggtt if there already is a ggtt binding. This
happens on set cache levels.
Use VMA for bind/unbind (Daniel, Ben)
v3: Reorganize ggtt_vma_bind to be more concise and easier to read
(Ville). Change logic in unbind to only unbind ggtt when there is a
global mapping, and to remove a redundant check if the aliasing ppgtt
exists.
v4: Make the bind function a bit smarter about the cache levels to avoid
unnecessary multiple remaps. "I accept it is a wart, I think unifying
the pin_vma / bind_vma could be unified later" (Chris)
Removed the git notes, and put version info here. (Daniel)
v5: Update the comment to not suck (Chris)
v6:
Move bind/unbind to the VMA. It makes more sense in the VMA structure
(always has, but I was previously lazy). With this change, it will allow
us to keep a distinct insert_entries.
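Put together, the v6 shape looks roughly like the following. The
GLOBAL_BIND flag and the vfunc placement follow the series; the exact
signatures are illustrative.
    struct i915_vma {
            /* ... drm_mm_node, obj, vm, list heads, ... */

            /*
             * Per-VM page-table update hooks, installed when the VMA is
             * created, so callers never distinguish GGTT vs. PPGTT.
             */
            void (*bind_vma)(struct i915_vma *vma,
                             enum i915_cache_level cache_level,
                             u32 flags);
    /* Also (or only) write the entries into the global GTT. */
    #define GLOBAL_BIND (1 << 0)
            void (*unbind_vma)(struct i915_vma *vma);
    };

    /* Call sites then collapse to: */
    vma->bind_vma(vma, obj->cache_level, flags);
    /* ... */
    vma->unbind_vma(vma);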
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
drm/i915: Use the new vm [un]bind functions
Building on the last patch which created the new function pointers in
the VM for bind/unbind, here we actually put those new function pointers
to use.
Split out as a separate patch to aid in review. I'm fine with squashing
into the previous patch if people request it.
v2: Updated to address the smart ggtt which can do aliasing as needed
Make sure we bind to the global GTT when mappable and fenceable. I
thought we could get away without this initially, but we cannot.
v3: Make the global GTT binding explicitly use the ggtt VM for
bind_vma(). While at it, use the new ggtt_vma helper (Chris)
At this point the original mailing list thread diverges. ie.
v4^:
use target_obj instead of obj for gen6 relocate_entry
vma->bind_vma() can be called safely during pin. So simply do that
instead of the complicated conditionals.
Don't restore PPGTT bound objects on resume path
Bug fix in resume path for globally bound BOs
Properly handle secure dispatch
Rebased on vma bind/unbind conversion
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
drm/i915: reduce vm->insert_entries() usage
FKA: drm/i915: eliminate vm->insert_entries()
With bind/unbind function pointers in place, we no longer need
insert_entries. We could, and want to, remove clear_range; however,
it's not totally easy at this point, since it's still used in a couple
of places that don't only deal in objects: setup, ppgtt init, and
restore gtt mappings.
v2: Don't actually remove insert_entries, just limit its usage. It will
be useful when we introduce gen8. It will always be called from the vma
bind/unbind.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Using the current state of the page directory registers, we can
determine which of our address spaces was active when the hang occurred.
This allows us to scan through all the address spaces to identify the
"active" one during error capture.
v2: Rebased for BDW error detection. BDW error detection is similar
except instead of PP_DIR_BASE, we can use the PDP registers.
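The idea, sketched loosely (structure and register names are
illustrative; gen8 compares the four PDP registers instead of
PP_DIR_BASE):
    /*
     * Sketch: an address space is considered "active" at hang time if
     * its page-directory base matches what the hung ring's PP_DIR_BASE
     * register pointed at.
     */
    static bool is_active_vm(struct i915_address_space *vm,
                             u32 hung_pp_dir_base)
    {
            struct i915_hw_ppgtt *ppgtt =
                    container_of(vm, struct i915_hw_ppgtt, base);

            return ppgtt->pd_offset == hung_pp_dir_base;
    }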
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Add FIXME about global gtt misuse.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The existing check was insufficient to determine whether we can use the
GTT mapping to read out the object during error capture.
The previous condition was: if the object has a GGTT mapping, and the
reloc is in the GTT range... This can happen with objects mapped into
multiple VMs (one of which being the GGTT).
There are two solutions to this problem:
1. This patch, which avoids reading the io mapping
2. Use the GGTT offset with the io mapping.
Since error capture is about recording the most accurate possible error
state, and the error was caused by an object not in the GGTT, I opted
for the former.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This came from a patch called "drm/i915: Move active to vma".
When moving an object to the inactive list, we do it for all VMs for
which the object is bound.
The primary difference from that patch is that this time around we do
not track 'active' per VMA, but rather per object. Therefore, we only
need one unref.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This was found by code inspection. If the GTT setup fails then we are
left without properly tearing down the drm_mm.
Hopefully this never happens.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To be able to effectively use the GGTT object lookup function, we don't
want to warn when there is no GGTT mapping. Let the caller deal with it
instead.
Originally, I had intended to have this behavior, and had not
introduced the WARN. It was introduced during review with the addition
of the following commit:
commit 5c2abbeab7
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date: Tue Sep 24 09:57:57 2013 -0700
drm/i915: Provide a cheap ggtt vma lookup
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since the beginning, the functions which try to properly reference the
aliasing PPGTT have dereferenced a potentially NULL aliasing_ppgtt
member.
Since the accessors are meant to be global, this will not do.
Introduced originally in:
commit a70a3148b0
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Wed Jul 31 16:59:56 2013 -0700
drm/i915: Make proper functions for VMs
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The initial implementation of this function used MMIO to write the PDPs.
Upon review it was determined (correctly) that the docs say to use LRI.
The issue is that there are times when we want to do a synchronous
write (GPU reset).
I've tested this, and it works. I've verified with as many people as
possible that it should work.
This should fix the failing reset problems.
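For reference, a sketch of the two paths (close to the shape the
function ended up with, but register and helper names should be treated
as illustrative):
    static int gen8_write_pdp(struct intel_ring_buffer *ring, unsigned entry,
                              u64 val, bool synchronous)
    {
            struct drm_i915_private *dev_priv = ring->dev->dev_private;
            int ret;

            if (synchronous) {
                    /* e.g. during GPU reset, when the ring cannot run LRI */
                    I915_WRITE(GEN8_RING_PDP_UDW(ring, entry), val >> 32);
                    I915_WRITE(GEN8_RING_PDP_LDW(ring, entry), (u32)val);
                    return 0;
            }

            ret = intel_ring_begin(ring, 6);
            if (ret)
                    return ret;

            intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
            intel_ring_emit(ring, GEN8_RING_PDP_UDW(ring, entry));
            intel_ring_emit(ring, (u32)(val >> 32));
            intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
            intel_ring_emit(ring, GEN8_RING_PDP_LDW(ring, entry));
            intel_ring_emit(ring, (u32)val);
            intel_ring_advance(ring);

            return 0;
    }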
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Merge tag 'v3.13-rc3' into drm-intel-next-queued
Linux 3.13-rc3
I need a backmerge for two reasons:
- For merging the ppgtt patches from Ben I need to pull in the bdw
support.
- We now have duplicated calls to intel_uncore_forcewake_reset in the
setup code due to 2 different patches merged into -next and 3.13.
The conflict is silent, so I need the merge to be able to apply
Deepak's fixup patch.
Conflicts:
drivers/gpu/drm/i915/intel_display.c
Trivial conflict, it doesn't even show up in the merge diff.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently, PC8 is enabled at modeset_global_resources, which is called
after intel_modeset_update_state. Due to this, there's a small race
condition in the case where we start enabling PC8, then do a modeset
while PC8 is still being enabled. The race triggers a WARN
because intel_modeset_update_state will mark the CRTC as enabled, then
the thread that's still enabling PC8 might look at the data structure
and think that PC8 is being enabled while a pipe is enabled. Despite
the WARN, this is not really a bug since we'll wait for the
PC8-enabling thread to finish when we call modeset_global_resources.
The spec says the CRTC cannot be enabled when we disable LCPLL, so we
had a check for crtc->base.enabled. If we change to crtc->active we
will still prevent disabling LCPLL while the CRTC is enabled, and we
will also prevent the WARN above.
This is a replacement for the previous patch named
"drm/i915: get/put PC8 when we get/put a CRTC"
Testcase: igt/pm_pc8/modeset-lpsp-stress-no-wait
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we force the hw to idle as our first step during unload, we can abort
the unload upon failure. Later we can probe whether the hardware
remains active even after we try to shut it down.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Doing it early prevents moving and relocating objects in vain
for contexts that won't get any GPU time.
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It is useful to assert that if the object is bound, then it must have
its pages pinned to prevent the shrinker from reaping its backing store.
This is even more useful with the introduction of real-ppgtt whereupon
we may have the object bound into several vma, with each instance
pinning the backing store. This assertion breaks down during unbind,
where we unpin the backing store before decoupling the vma binding.
This can be fixed with a trivial reordering of the unbind sequence,
which reinforces the
pin pages
bind to vma
...
unbind from vma
unpin pages
concept.
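In code the reordering is tiny; a sketch of the tail of
i915_vma_unbind(), with the surrounding bookkeeping elided:
    /* Decouple the VMA from the address space first ... */
    vma->unbind_vma(vma);
    drm_mm_remove_node(&vma->node);

    /* ... and only then drop the pages pin taken when we bound it, so
     * a bound object always holds its backing store pinned. */
    i915_gem_object_unpin_pages(obj);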
v2: Bonus comment
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Tested-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On VLV, the FIFO will be shared by both SW and HW. So we read the
free entries through the register, update the dev_priv variable
and wait for only 20 entries to be free.
From Deepak's follow-up mail explaining why vlv is special:
"On SB, Out of 64 FIFO Entries, 20 Entries will be used by HW and
remaining 44 will be used by the SW. I think due to this reason, we
have a threshold of 20 Entries."
"On VLV, HW and SW can access all 64 fifo entries, I don't think
having a threshold of 20 Entries is mandatory on VLV. Also, since both
SW and HW can access all 64 Entries. I think on VLV, we need to update
the fifo_count before waiting for the FIFO."
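A hedged sketch of the resulting wait logic (register and mask names
follow the series; the loop bound is illustrative):
    static void __gen6_gt_wait_for_fifo(struct drm_i915_private *dev_priv)
    {
            /* On VLV the HW also consumes entries, so refresh our cached
             * count from the register before deciding whether to wait. */
            if (IS_VALLEYVIEW(dev_priv->dev))
                    dev_priv->uncore.fifo_count =
                            __raw_i915_read32(dev_priv, GTFIFOCTL) &
                            GT_FIFO_FREE_ENTRIES_MASK;

            if (dev_priv->uncore.fifo_count < GT_FIFO_NUM_RESERVED_ENTRIES) {
                    u32 fifo = __raw_i915_read32(dev_priv, GTFIFOCTL) &
                               GT_FIFO_FREE_ENTRIES_MASK;
                    int loop = 500;

                    while (fifo <= GT_FIFO_NUM_RESERVED_ENTRIES && loop--) {
                            udelay(10);
                            fifo = __raw_i915_read32(dev_priv, GTFIFOCTL) &
                                   GT_FIFO_FREE_ENTRIES_MASK;
                    }
                    dev_priv->uncore.fifo_count = fifo;
            }
            dev_priv->uncore.fifo_count--;
    }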
v2: Apply mask when we read the number of free FIFO entries (Ville).
v3: Mask applied after reading the register (Deepak).
Signed-off-by: Deepak S <deepak.s@intel.com>
[danvet: Add further explanation from Deepak to commit message.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Only plane A is FBC capable on gen2 (like gen3), but the panel fitter
is hooked up to pipe B, so we want to prefer pipe B + plane A.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add the code comment Chris requested in his review.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Initialize the FBC vfuncs on gen2 and gen3 chipsets. Also make
a clean split for gen7+ vs. gen5+ vfunc initialization.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On gen2 and gen3 chipsets FBC is supported only on plane A. Fix (and
simplify) the plane checks in intel_update_fbc() accordingly.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add a REG_WRITE_FOOTER macro as a counterpart to the REG_WRITE_HEADER.
The current code has the spin_lock() in the HEADER, but the
spin_unlock() is open coded, which looks rather confusing at first
glance. A bit of additional symmetry might help.
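Roughly what the pair looks like (trace hooks and the surrounding write
functions elided; treat this as a sketch of the symmetry, not the exact
macros):
    #define REG_WRITE_HEADER \
            unsigned long irqflags; \
            spin_lock_irqsave(&dev_priv->uncore.lock, irqflags)

    #define REG_WRITE_FOOTER \
            spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags)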
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When inspecting reports that boot/suspend/resume times are unusual it
would be useful to clearly identify the time we must spend waiting for
the hardware to complete its task. In this case we have a notification
before we start waiting for the panel to change state, but none
afterwards - which would be useful.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Checkpatch tells me
WARNING: __packed is preferred over __attribute__((packed))
so switch over to __packed across the driver before adding new packed
structs.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Check that the N and P dividers don't cause a divide by zero.
This shouldn't happen under normal circumstances, but can
happen eg. under simulation.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It's a pain for a few reasons:
- The VGA plane redisabling requires actual legacy VGA I/O to pull
off. The hw engineers really botched this one here :(
- There seem to be some BIOSes out there which send out lid events when
unplugging. Together with our broken DP code, which disables the
port when the cable is lost, this results in an immediate modeset
call, which can hang on the wait for outstanding flips.
- Also, we don't want to force a modeset on machines where it's not
really needed; see the referenced bug.
We might want to extend this in general to also all machines that
support opregion, since there the BIOS supposedly should manage the
gfx hardware more cooperatively.
v2: Pimp commit message a bit.
Cc: Roland Dreier <roland@kernel.org>
References: https://bugs.freedesktop.org/show_bug.cgi?id=65486
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We're currently misprinting the port name when vlv_wait_port_ready()
times out. Fix it by using port_name().
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We had some mode_valid() vfuncs returning an int, others the enum. Let's
use the latter everywhere.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm we call intel_display_power_enabled() from
i915_capture_error_state() in IRQ context and then take a mutex. To fix
this add a new intel_display_power_enabled_sw() which returns the domain
state based on software tracking as opposed to reading the actual HW
state.
Since we use domain_use_count for this without locking on the reader
side, make sure we increase the counter only after enabling all
required power wells and decrease it before disabling any of them.
Regression introduced in
commit 1b02383464b4a915627ef3b8fd0ad7f07168c54c
Author: Imre Deak <imre.deak@intel.com>
Date: Tue Sep 24 16:17:09 2013 +0300
drm/i915: support for multiple power wells
Note that atm we depend on the value returned by
intel_display_power_enabled_sw() in i915_capture_error_state() to avoid
unclaimed register access reports. This was never guaranteed though,
since another thread can disable the power concurrently. If this is a
problem we need another explicit way to disable the reporting during
error captures.
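A sketch of the software-only check (field names follow the series; no
locking on purpose, since the caller cannot rely on the result anyway,
as noted above):
    bool intel_display_power_enabled_sw(struct drm_device *dev,
                                        enum intel_display_power_domain domain)
    {
            struct drm_i915_private *dev_priv = dev->dev_private;
            struct i915_power_domains *power_domains =
                                            &dev_priv->power_domains;

            /* Pure software state: no register read, no mutex, safe to
             * call from IRQ context during error capture. */
            return power_domains->domain_use_count[domain] != 0;
    }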
v2:
- remove barriers as the caller can't depend on the value
returned from i915_capture_error_state_sw() anyway (Ville)
- dump the state of pipe/transcoder power domain state (Daniel)
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This should just be a debug. Add another debug msg to the inherit path
while we're at it.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72098
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Setting this bit restores all ring contexts in parallel rather than
serially. Matches current BWG recommendations.
Tested-by: "Meng, Mengmeng" <mengmeng.meng@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Deepak S <deepak.s@inel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We use timeout mode, and we need to lower the timeout to get good RC6
residency when loads are running. This gets me from 0% residency during
glxgears to 77%, which is a pretty good improvement. This value also
matches the current BWG recommendations.
Tested-by: "Meng, Mengmeng" <mengmeng.meng@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Deepak S <deepak.s@inel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It leads to a big mess when stuff interleaves, especially with the new
patch I've submitted for the drm core to no longer artificially split
up debug messages.
v2: The size parameter to snprintf includes the terminating 0, but the
return value does not. Adjust the logic accordingly. Spotted by Mika.
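A generic illustration of the v2 accounting (the helper and its
arguments are made-up names): the size argument passed to
snprintf()-style helpers includes the terminating NUL, while the return
value does not, so the running length must only ever count written
characters.
    static void dump_values(const u32 *values, int count)
    {
            char buf[256];
            size_t len = 0;
            int i;

            /* scnprintf() returns the number of characters actually
             * written (excluding the NUL) and never overruns the buffer,
             * so "len" can be fed straight back in as the offset. */
            for (i = 0; i < count && len < sizeof(buf) - 1; i++)
                    len += scnprintf(buf + len, sizeof(buf) - len,
                                     " 0x%08x", values[i]);

            DRM_DEBUG_DRIVER("values:%s\n", buf);
    }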
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It doesn't like that we assign 0 to a pointer; it wants the real NULL.
On closer look, that initialization is actually bogus, and the compiler
can easily see that we never use it uninitialized. So let's just drop
this.
Cc: Deepak S <deepak.s@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This was fumbled in the conversion to per-engine forcewake.
Cc: Deepak S <deepak.s@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On VLV the GTFIFOCTL register has other bits besides the number of free
entries in the GT wake FIFO. Apply a mask when we read the register to
make sure we don't misinterpret the number of free FIFO entries.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[danvet: There's some unclarity about hsw, but brushed off as today's
Bspec just acting up a bit.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On VLV GTFIFODBG has more bits. Just report them all.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Forcewake counts for Valleyview are not exposed through debugfs.
Expose them with this change.
Signed-off-by: Deepak S <deepak.s@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>