This is the other additional step in the DSI sequence for EHL.
v2:
- Using REG_BIT() (Matt)
- Fixed commit message typo (Vandita)
BSpec: 20597
Cc: Uma Shankar <uma.shankar@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190619233134.20009-2-jose.souza@intel.com
EHL has 2 additional steps in the DSI sequence; this is one of them:
the lane latency optimization for DW1.
BSpec: 20597
Cc: Uma Shankar <uma.shankar@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190619233134.20009-1-jose.souza@intel.com
Since commit 79ffac8599 ("drm/i915: Invert the GEM wakeref
hierarchy"), the request creation itself took responsibility for
managing the engine/GT wakerefs and so we can remove the redundant grabs
in our selftests.
References: 79ffac8599 ("drm/i915: Invert the GEM wakeref hierarchy")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620102432.31580-1-chris@chris-wilson.co.uk
Our intel_rings are always flushed as they are continually used to submit
commands to the GPU, and so do not need to be flushed on unpinning. This
avoids pulling the flush_ggtt_writes locking into our context
unpin, which we want to allow from atomic context (for simplicity).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190619203504.4220-1-chris@chris-wilson.co.uk
If we have multiple contexts of equal priority pending execution,
activate a timer to demote the currently executing context in favour of
the next in the queue when that timeslice expires. This enforces
fairness between contexts (so long as they allow preemption -- forced
preemption, in the future, will kick those who do not obey) and allows
us to avoid userspace blocking forward progress with e.g. unbounded
MI_SEMAPHORE_WAIT.
As a starting point, we use one jiffy as our timeslice so that we
remain reasonably efficient with respect to frequent CPU wakeups.
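As a rough illustration of the scheme (a minimal userspace model, not the
driver code; all names here are made up):

    /* Toy model: arm a one-jiffy timer when an equal-priority context is
     * queued; on expiry, demote the running context in favour of the next. */
    #include <stdbool.h>
    #include <stdio.h>

    #define TIMESLICE_NS (1000000000 / 250)    /* ~one jiffy at HZ=250 */

    struct ctx { int prio; const char *name; };

    struct engine {
        struct ctx *active;
        struct ctx *next;
        bool timer_armed;
        long long timer_expires_ns;
    };

    static void arm_timeslice(struct engine *e, long long now_ns)
    {
        /* Only worth arming if someone of equal priority is waiting. */
        if (e->next && e->active && e->next->prio >= e->active->prio) {
            e->timer_armed = true;
            e->timer_expires_ns = now_ns + TIMESLICE_NS;
        }
    }

    static void timeslice_tick(struct engine *e, long long now_ns)
    {
        if (e->timer_armed && now_ns >= e->timer_expires_ns) {
            struct ctx *tmp = e->active;    /* demote the hog */

            e->active = e->next;
            e->next = tmp;
            e->timer_armed = false;
            printf("timeslice expired: now running %s\n", e->active->name);
        }
    }

    int main(void)
    {
        struct ctx a = { 0, "A" }, b = { 0, "B" };
        struct engine e = { .active = &a, .next = &b };

        arm_timeslice(&e, 0);
        timeslice_tick(&e, TIMESLICE_NS);    /* expires -> B now runs */
        return 0;
    }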
Testcase: igt/gem_exec_scheduler/semaphore-resolve
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620142052.19311-2-chris@chris-wilson.co.uk
When using a global seqno, we required a precise stop-the-world event to
handle preemption and unwind the global seqno counter. To accomplish
this, we would preempt to a special out-of-band context and wait for the
machine to report that it was idle. Given an idle machine, we could very
precisely see which requests had completed and which we needed to feed
back into the run queue.
However, now that we have scrapped the global seqno, we no longer need
to precisely unwind the global counter and only track requests by their
per-context seqno. This allows us to loosely unwind inflight requests
while scheduling a preemption, with the enormous caveat that the
requests we put back on the run queue are still _inflight_ (until the
preemption request is complete). This makes request tracking much
messier, as at any point we can then see a completed request that we
believe is not currently scheduled for execution. We also have to be
careful not to rewind RING_TAIL past RING_HEAD on preempting to the
running context, and for this we use a semaphore to prevent completion
of the request before continuing.
To accomplish this feat, we change how we track requests scheduled to
the HW. Instead of appending our requests onto a single list as we
submit, we track each submission to ELSP as its own block. Then upon
receiving the CS preemption event, we promote the pending block to the
inflight block (discarding what was previously being tracked). As normal
CS completion events arrive, we then remove stale entries from the
inflight tracker.
v2: Be a tinge paranoid and ensure we flush the write into the HWS page
for the GPU semaphore to pick up in a timely fashion.
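For reference, a compact userspace model of the pending/inflight tracking
described above (illustrative only; the real structures differ):

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    #define EXECLIST_MAX 2

    struct request { unsigned int seqno; };

    struct execlists {
        struct request *inflight[EXECLIST_MAX + 1];  /* NULL terminated */
        struct request *pending[EXECLIST_MAX + 1];
    };

    /* Each write to ELSP is tracked as its own block. */
    static void submit_pending(struct execlists *el,
                               struct request **rq, size_t count)
    {
        assert(count <= EXECLIST_MAX);
        memset(el->pending, 0, sizeof(el->pending));
        memcpy(el->pending, rq, count * sizeof(*rq));
        /* ... write ELSP ... */
    }

    /* CS promotion (preemption) event: pending becomes the inflight set. */
    static void promote_pending(struct execlists *el)
    {
        memcpy(el->inflight, el->pending, sizeof(el->inflight));
        memset(el->pending, 0, sizeof(el->pending));
    }

    /* Normal CS completion event: retire the stale head of inflight. */
    static void complete_head(struct execlists *el)
    {
        memmove(&el->inflight[0], &el->inflight[1],
                EXECLIST_MAX * sizeof(el->inflight[0]));
        el->inflight[EXECLIST_MAX] = NULL;
    }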
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620142052.19311-1-chris@chris-wilson.co.uk
With multiple uncores to initialize (GT vs Display), it makes little
sense to have the vgpu_check inside uncore_init(). We also have
a catch-22 scenario where the uncore is required to read the vgpu
capabilities while the vgpu capabilities are required to decide if
we need to initialize forcewake support. To remove this circular
dependency, we can perform the required MMIO access by mmapping just
the vgtif shared page in mmio space and use raw accessors.
v2: rename check_vgpu to detect_vgpu (Chris)
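A hedged sketch of the access pattern (the offset, magic value and names
below are placeholders, not the real definitions): map only the shared
page within the MMIO BAR and read it with raw accessors, so no uncore or
forcewake state is needed yet.

    #include <linux/io.h>
    #include <linux/pci.h>

    #define VGPU_PVINFO_OFFSET  0x78000       /* placeholder offset */
    #define VGPU_PVINFO_SIZE    0x1000
    #define VGPU_MAGIC          0x4776544dU   /* placeholder magic */

    static bool detect_vgpu(struct pci_dev *pdev)
    {
        void __iomem *shared_page;
        bool is_vgpu;

        shared_page = ioremap(pci_resource_start(pdev, 0) +
                              VGPU_PVINFO_OFFSET, VGPU_PVINFO_SIZE);
        if (!shared_page)
            return false;

        /* Raw read: forcewake bookkeeping has not been set up yet. */
        is_vgpu = readl(shared_page) == VGPU_MAGIC;

        iounmap(shared_page);
        return is_vgpu;
    }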
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-7-daniele.ceraolospurio@intel.com
We'd like to introduce a display uncore with no forcewake domains, so
let's avoid wasting memory and be ready to allocate only what we need.
Even without multiple uncores, we still don't need all the domains on all
gens.
v2: avoid hidden control flow, improve checks (Tvrtko), fix IVB special
case, add failure injection point
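A minimal sketch of the allocation idea in plain C (a model, not the
actual uncore code; names are illustrative):

    #include <stdlib.h>

    enum fw_domain_id { FW_RENDER, FW_MEDIA, FW_BLITTER, FW_MAX };

    struct fw_domain { enum fw_domain_id id; };

    struct uncore {
        unsigned int fw_domains_mask;  /* one bit per fw_domain_id */
        struct fw_domain *fw_domain;   /* only the domains we have */
        unsigned int fw_count;
    };

    static int fw_domains_init(struct uncore *uncore, unsigned int mask)
    {
        unsigned int i, n = 0;

        uncore->fw_domains_mask = mask;
        uncore->fw_count = __builtin_popcount(mask);
        if (!uncore->fw_count)
            return 0;    /* e.g. a display uncore: nothing to allocate */

        uncore->fw_domain = calloc(uncore->fw_count,
                                   sizeof(*uncore->fw_domain));
        if (!uncore->fw_domain)
            return -1;   /* convenient failure injection point */

        for (i = 0; i < FW_MAX; i++)
            if (mask & (1u << i))
                uncore->fw_domain[n++].id = i;
        return 0;
    }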
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-6-daniele.ceraolospurio@intel.com
We always call some of the setup/cleanup functions for forcewake, even
if the feature is not actually available. Skipping them when
forcewake is not available saves us some work on older gens and
prepares us for having a forcewake-less display uncore.
v2: do not make suspend/resume functions forcewake-specific (Chris,
Tvrtko), use GEM_BUG_ON in internal forcewake-only functions (Tvrtko)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-5-daniele.ceraolospurio@intel.com
Let's get rid of it before it proliferates, since with split GT/Display
uncores the container_of won't work anymore.
I've kept the rpm pointer as well to minimize the pointer chasing in the
MMIO accessors.
v2: swap parameter order for intel_uncore_init_early (Tvrtko)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-4-daniele.ceraolospurio@intel.com
uncore_sanitize performs no action on the uncore structure and just
calls intel_sanitize_gt_powersave, so we can just call the latter
directly.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-3-daniele.ceraolospurio@intel.com
Instead of going through the if-else chain every time, let's save the
function in the uncore structure. Note that the new functions are
purposely not used from the reg read/write functions to keep the
inlining there.
While at it, use the new macro to call the old ones to clean the code a
bit.
v2: Rename macros for no-forcewake function assignment (Tvrtko)
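A short sketch of the pattern (names are made up; only the shape of the
change is shown):

    struct intel_uncore_sketch {
        void (*fw_get)(struct intel_uncore_sketch *u, unsigned int domains);
        void (*fw_put)(struct intel_uncore_sketch *u, unsigned int domains);
    };

    static void fw_get_normal(struct intel_uncore_sketch *u, unsigned int d) { }
    static void fw_get_with_fallback(struct intel_uncore_sketch *u, unsigned int d) { }
    static void fw_put_normal(struct intel_uncore_sketch *u, unsigned int d) { }

    #define ASSIGN_FW_DOMAINS_FUNCS(u, get, put) do {  \
            (u)->fw_get = (get);                       \
            (u)->fw_put = (put);                       \
    } while (0)

    static void init_fw_funcs(struct intel_uncore_sketch *u, int needs_fallback)
    {
        /* Decide once at init time, rather than in every accessor. */
        if (needs_fallback)
            ASSIGN_FW_DOMAINS_FUNCS(u, fw_get_with_fallback, fw_put_normal);
        else
            ASSIGN_FW_DOMAINS_FUNCS(u, fw_get_normal, fw_put_normal);
    }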
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190620010021.20637-2-daniele.ceraolospurio@intel.com
Remember to keep the rings pinned as well as the context image until the
GPU is no longer active.
v2: Introduce a ring->pin_count primarily to hide the
mock_ring that doesn't fit into the normal GGTT vma picture.
v3: Order is important in teardown: ringbuffer submission needs to drop
the pin count on the engine->kernel_context before it can gleefully free
its ring.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110946
Fixes: ce476c80b8 ("drm/i915: Keep contexts pinned until after the next kernel context switch")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190619170135.15281-1-chris@chris-wilson.co.uk
EHL has a mux on combo PHY A that allows it to be driven either by an
internal display (DDI-A or DSI DPHY) or by an external display (DDI-D).
This is a motherboard design decision that cannot be changed on the
fly. Unfortunately there are no strap registers that allow us to detect
the board configuration directly, so let's use the VBT to try to figure
it out and program the mux accordingly.
For now if we run across a broken VBT that tries to claim that PHY A
is attached to both internal and external displays at the same time,
we'll resolve the conflict in favor of the internal display. To help
debug these kinds of bad VBTs, let's also add a quick DRM_DEBUG message
during child device parsing so that it's easier to understand these
cases if they show up in bug reports.
v2:
- Confirmed that VBT's dvo port refers to the DDI and not the PHY.
Thus we can check more explicitly for (ddi_d && !(ddi_a || dsi)). If
a bad VBT contradicts itself, let internal display win. (Ville)
v3:
- Switch condition from !IS_ICELAKE to IS_ELKHARTLAKE. Although the
convention is usually to assume that future platforms will inherit
all current platform behavior, this feels more like a one-platform
quirk. (Ville)
- Update commit message to describe what we do if/when we encounter
broken VBTs, and note that the new debug print during child device
parsing is intentional.
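The gist of the VBT-based decision, as a standalone sketch (enum values
and struct layout are simplified stand-ins for the real VBT definitions):

    #include <stdbool.h>

    enum dvo_port { DVO_PORT_DDIA, DVO_PORT_DDID, DVO_PORT_MIPIA, DVO_PORT_OTHER };

    struct child_device { enum dvo_port dvo_port; };

    /* True if combo PHY A should be muxed to the external display (DDI-D). */
    static bool ehl_phy_a_external(const struct child_device *child, int count)
    {
        bool ddi_a = false, ddi_d = false, dsi = false;
        int i;

        for (i = 0; i < count; i++) {
            switch (child[i].dvo_port) {
            case DVO_PORT_DDIA:  ddi_a = true; break;
            case DVO_PORT_DDID:  ddi_d = true; break;
            case DVO_PORT_MIPIA: dsi = true;   break;
            default: break;
            }
        }

        /* A contradictory VBT resolves in favour of the internal display. */
        return ddi_d && !(ddi_a || dsi);
    }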
Cc: Clint Taylor <Clinton.A.Taylor@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618175131.9139-1-matthew.d.roper@intel.com
In the unlikely case the request completes while we regard it as not even
executing on the GPU (see the next patch!), we have to flush any pending
execution callbacks at retirement and ensure that we do not add any
more.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-4-chris@chris-wilson.co.uk
With the upcoming change to automanaged i915_active, the intent is that
whenever we wait on the set of active fences, they are signaled and
collected. The requirement is that all successful returns from
i915_request_wait() signal the fence, so fix up the one remaining path
where we may return before the interrupt has been run.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190619112341.9082-3-chris@chris-wilson.co.uk
Since commit eb8d0f5af4 ("drm/i915: Remove GPU reset dependence on
struct_mutex"), the I915_WAIT_LOCKED flags passed to i915_request_wait()
has been defunct. Now go ahead and remove it from all callers.
References: eb8d0f5af4 ("drm/i915: Remove GPU reset dependence on struct_mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-3-chris@chris-wilson.co.uk
The process_csb routine from execlists_submission is incompatible with
the GuC backend. Add a warning to detect if we accidentally end up in
the wrong spot.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618110736.31155-1-chris@chris-wilson.co.uk
The idea behind keeping the saturation mask local to a context backfired
spectacularly. The premise with the local mask was that we would be more
proactive in attempting to use semaphores after each time the context
idled, and that all new contexts would attempt to use semaphores
ignoring the current state of the system. This turns out to be horribly
optimistic. If the system state is still oversaturated and the existing
workloads have all stopped using semaphores, the new workloads would
attempt to use semaphores and be deprioritised behind real work. The
new contexts would not switch off using semaphores until their initial
batch of low priority work had completed. Given a sufficient backlog of
work at equal user priority, this would completely starve the new work of any
GPU time.
To compensate, remove the local tracking in favour of keeping it as
global state on the engine -- once the system is saturated and
semaphores are disabled, everyone stops attempting to use semaphores
until the system is idle again. One of the reasons for preferring local
context tracking was that it worked with virtual engines, so for
switching to global state we could either do a complete check of all the
virtual siblings or simply disable semaphores for those requests. This
takes the simpler approach of disabling semaphores on virtual engines.
The downside is that the decision that the engine is saturated is a
local measure -- we are only checking whether or not this context was
scheduled in a timely fashion; it may be legitimately delayed due to user
priorities. We still have the same dilemma, though: we do not want
to employ the semaphore poll unless it will be used.
v2: Explain why we need to assume the worst wrt virtual engines.
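The behaviour change, as a small illustrative model (types and names are
not the driver's):

    #include <stdbool.h>

    struct sketch_engine {
        unsigned int id;
        unsigned int saturated;  /* bitmask of engines we stop busywaiting on */
        bool is_virtual;
    };

    static bool can_use_semaphore(const struct sketch_engine *waiter,
                                  const struct sketch_engine *signaler)
    {
        /* Assume the worst for virtual engines: never busywait. */
        if (waiter->is_virtual)
            return false;

        return !(waiter->saturated & (1u << signaler->id));
    }

    /* Called when a semaphore wait turned out to hurt rather than help. */
    static void mark_saturated(struct sketch_engine *waiter,
                               const struct sketch_engine *signaler)
    {
        waiter->saturated |= 1u << signaler->id;
    }

    /* Global state is only reset once the engine idles again. */
    static void engine_parked(struct sketch_engine *engine)
    {
        engine->saturated = 0;
    }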
Fixes: ca6e56f654 ("drm/i915: Disable semaphore busywaits on saturated systems")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Cc: Dmitry Ermilov <dmitry.ermilov@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-8-chris@chris-wilson.co.uk
To do frontbuffer tracking we depend on Display WA #0884 to
exit PSR when there is a frontbuffer modification, but according to
user reports a write to CURSURFLIVE does not cause PSR to exit on older
gens, so let's force a PSR exit.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110799
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Tested-by: Thomas Rohwer <trohwer85@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190617195154.30292-1-jose.souza@intel.com
This has caught me out on countless occasions: when we retrieve a pointer
from the submission/execlists backend, it does not carry a reference to
the context or ring. Those are only pinned while the request is active,
so if we see the request is already completed, it may be in the process
of being retired and those pointers defunct.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110938
Fixes: 3a068721a9 ("drm/i915: Show ring->start for the ELSP context/request queue")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618161951.28820-2-chris@chris-wilson.co.uk
Be sure to cleanup after live_evict by flushing any residual state off
the GPU using igt_flush_test.
Tvrtko mentioned that it is probably wise to stop repeating this ad hoc
cleanup around the tests and implement a live test runner.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618161951.28820-1-chris@chris-wilson.co.uk
Previously, we wanted to shrink the pages of freed objects before they
were finally RCU collected. However, by removing the struct_mutex
serialisation around the active reference, we need to acquire an extra
reference around the wait. Unfortunately this means that we have to skip
objects that are awaiting RCU collection.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110937
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-2-chris@chris-wilson.co.uk
Updates the live_workarounds selftest to handle whitelisted
registers that are flagged as read only.
Signed-off-by: Robert M. Fosha <robert.m.fosha@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618010108.27499-5-John.C.Harrison@Intel.com
Updated whitelist table for ICL.
v2: Reduce changes to just those required for media driver until
the selftest can be updated to support the new features of the
other entries.
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Robert M. Fosha <robert.m.fosha@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618010108.27499-4-John.C.Harrison@Intel.com
Newer hardware requires setting up whitelists on engines other than
render. So, extend the whitelist code to support all engines.
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Robert M. Fosha <robert.m.fosha@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618010108.27499-3-John.C.Harrison@Intel.com
Newer hardware adds flags to the whitelist work-around register. These
allow per-access-direction privileges and ranges.
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Robert M. Fosha <robert.m.fosha@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618010108.27499-2-John.C.Harrison@Intel.com
Rename pipe_config_err() to pipe_config_mismatch(), and also print
whether we're doing the fastset check or the sw vs. hw state readout
check. Should make the logs a bit less confusing when they're not
filled with what looks like a real error.
Also rename the 'adjust' variable to 'fastset' to make it clear what
it means.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190612130801.2085-3-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
Now that intel_pipe_config_compare() no longer clobbers the passed
in state we can make both crtc states const. And while at it we simplify
the calling convention, and clean up intel_compare_link_m_n() a bit.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190612130801.2085-2-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
We're now calling intel_pipe_config_compare(..., true) unconditionally
which means we're always going to clobber the calculated M/N values with
the old values if the fuzzy M/N check passes. That causes problems
because the fuzzy check allows for a huge difference in the values.
I'm actually tempted to just make the M/N checks exact, but that might
prevent fastboot from kicking in when people want it. So for now let's
overwrite the computed values with the old values only if we decide to
skip the modeset.
v2: Copy has_drrs along with M/N M2/N2 values
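The intended flow, in a simplified sketch (structs trimmed down; not the
actual state layout):

    #include <stdbool.h>

    struct link_m_n { int m, n; };

    struct sketch_crtc_state {
        bool has_drrs;
        struct link_m_n dp_m_n, dp_m2_n2;
    };

    static void maybe_keep_old_m_n(struct sketch_crtc_state *new_state,
                                   const struct sketch_crtc_state *old_state,
                                   bool needs_modeset)
    {
        if (needs_modeset)
            return; /* full modeset: keep the freshly computed values */

        /* Skipping the modeset: carry over the old M/N (and has_drrs). */
        new_state->has_drrs = old_state->has_drrs;
        new_state->dp_m_n = old_state->dp_m_n;
        new_state->dp_m2_n2 = old_state->dp_m2_n2;
    }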
Cc: stable@vger.kernel.org
Cc: Blubberbub@protonmail.com
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Tested-by: Blubberbub@protonmail.com
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110782
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110675
Fixes: d19f958db2 ("drm/i915: Enable fastset for non-boot modesets.")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190612172423.25231-1-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
Since commit 1ba627148e ("drm: Add reservation_object to
drm_gem_object"), struct drm_gem_object grew its own builtin
reservation_object, rendering our own private one redundant. Remove our
redundant reservation_object and point into obj->base.resv instead.
References: 1ba627148e ("drm: Add reservation_object to drm_gem_object")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618125858.7295-1-chris@chris-wilson.co.uk
Though we pin the context before taking the pm wakeref, during
retire we need to unpin before dropping the pm wakeref (breaking the
"natural" onion). During the unpin, we may need to attach a cleanup
operation on to the engine wakeref, ergo we want to keep the engine
awake until after the unpin.
v2: Push the engine wakeref into the barrier so we keep the onion unwind
ordering in the request itself
Fixes: ce476c80b8 ("drm/i915: Keep contexts pinned until after the next kernel context switch")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190618074153.16055-1-chris@chris-wilson.co.uk
If the user is clearing the log buffer too slowly, we overflow. As this
is an expected condition, and the driver tries to handle it, reduce the
error message down to a notice.
Michal mentioned that another cause would be incorrect reset handling,
so we don't want to lose the notification entirely.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110817
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190617100917.13110-1-chris@chris-wilson.co.uk
Fix the plethora of Sphinx build errors after moving the display files
under a subdirectory.
Fixes: 379bc10023 ("drm/i915: move modesetting output/encoder code under display/")
Fixes: df0566a641 ("drm/i915: move modesetting core code under display/")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190617102944.25129-1-jani.nikula@intel.com
Although EHL introduces a new PCH, the South Display part of the PCH
that we care about is nearly identical to ICP, just with some pins
remapped. Most notably, Port C is mapped to the pins that ICP uses for
TC Port 1.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190615004210.16656-1-matthew.d.roper@intel.com
Currently, we perform a locked update of the shadow entry when
allocating a page directory entry such that if two clients are
concurrently allocating neighbouring ranges we only insert one new entry
for the pair of them. However, we also need to serialise both clients
wrt the actual entry in the HW table, or else we may allow one client
or even a third client to proceed ahead of the HW write. My handwave
before was that under the _pathological_ condition we would see the
scratch entry instead of the expected entry, causing a temporary
glitch. That starvation condition will eventually show up in practice, so
fix it.
The reason for the previous cheat was to avoid having to free the extra
allocation while under the spinlock. Now, we keep the extra entry
allocated until the end instead.
v2: Fix error paths for gen6
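The allocation pattern, modelled in userspace C with a mutex standing in
for the spinlock (illustrative only):

    #include <pthread.h>
    #include <stdlib.h>

    struct pd { int entries; };

    struct pd_slot {
        pthread_mutex_t lock;
        struct pd *entry;
    };

    static struct pd *get_or_alloc_pd(struct pd_slot *slot)
    {
        struct pd *alloc = malloc(sizeof(*alloc));  /* may go unused */
        struct pd *pd;

        pthread_mutex_lock(&slot->lock);
        if (!slot->entry && alloc) {
            slot->entry = alloc;    /* we won the race */
            alloc = NULL;
        }
        pd = slot->entry;
        pthread_mutex_unlock(&slot->lock);

        /* The loser keeps its spare until the lock has been dropped. */
        free(alloc);
        return pd;
    }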
Fixes: 1d1b5490b9 ("drm/i915/gtt: Replace struct_mutex serialisation for allocation")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190617140426.7203-1-chris@chris-wilson.co.uk
In intel_package_header version 2 there's a new field in the
fw_info table that must be 0; otherwise it's not the correct DMC
firmware. Add a check for version 2 or later.
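A sketch of the check (field names are illustrative, not the real header
layout):

    #include <stdbool.h>
    #include <stdint.h>

    struct fw_info {
        uint16_t reserved;    /* must be 0 from package header v2 on */
        uint16_t substepping;
        uint32_t offset;
    };

    static bool fw_info_matches_format(const struct fw_info *info,
                                       unsigned int package_ver)
    {
        if (package_ver >= 2 && info->reserved != 0)
            return false;    /* not a DMC firmware we understand */

        return true;
    }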
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190607091230.1489-10-lucas.demarchi@intel.com
Complete the extraction of functions to parse specific parts of the
firmware. The return value of parse_csr_fw() is now redundant
since it already sets the dmc_payload field. Changing it is left for
later to avoid noise in the commit.
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190607091230.1489-7-lucas.demarchi@intel.com
Like parse_csr_fw_css() this parses the package_header from firmware and
saves the relevant fields in the csr struct. In this function we also
look up the fw_info we are interested in.
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190607091230.1489-6-lucas.demarchi@intel.com
Let's start splitting the parse function, making all of them return the
number of bytes parsed - different versions of the firmware header may
require different sizes for the structures.
v2: rework remaining bytes calculation on new protection for amount of
bytes read
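The convention, sketched with a made-up header (not the real CSS layout):
each step returns the number of bytes it consumed, or a negative value on
error, so the caller can advance through version-dependent header sizes.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    struct sketch_css_header { uint32_t header_len_dwords; };

    static ssize_t parse_css_header(const uint8_t *fw, size_t remaining)
    {
        const struct sketch_css_header *css = (const void *)fw;
        size_t len;

        if (remaining < sizeof(*css))
            return -1;

        len = (size_t)css->header_len_dwords * 4;
        if (len < sizeof(*css) || len > remaining)
            return -1;

        /* ... validate and copy the fields we care about ... */
        return len;    /* caller advances: fw += len, remaining -= len */
    }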
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190607091230.1489-5-lucas.demarchi@intel.com
Move fw_info out of struct intel_package_header to allow it to grow more
easily in future. To make a cleaner move, let's also extract a function to
search the header for the dmc_offset.
While reviewing this code I wondered why we continued the search even
after finding a suitable firmware. Add a comment to explain we will
continue to try to find a more specific firmware version, even if this
is not required by the spec.
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190607091230.1489-3-lucas.demarchi@intel.com
Change all fields in intel_package_header and intel_dmc_header that
are 1-byte numbers to use u8.
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190607091230.1489-2-lucas.demarchi@intel.com
Allocate all page directory variants with alloc_pd. As
the lvl3 and lvl4 variants differ in manipulation, we
need to check for the existence of a backing phys page before
accessing it.
v2: use err in returns
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190614164350.30415-5-mika.kuoppala@linux.intel.com