Maintain the base page handling functions in the order: alloc, free,
init. No functional changes.
v2: s/Introduce/Maintain (Michel)
v3: Rebase
Cc: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
An earlier patch was added to reserve space in the ring buffer for the
commands issued during 'add_request()'. The initial version was
pessimistic in the way it handled buffer wrapping and would cause
premature wraps and thus waste ring space.
This patch updates the code to better handle the wrap case. It no
longer enforces that the space being asked for and the reserved space
are a single contiguous block. Instead, it allows the reserve to be on
the far end of a wrap operation. It still guarantees that the space is
available so when the wrap occurs, no wait will happen. Thus the wrap
cannot fail which is the whole point of the exercise.
Also fixed a merge failure with some comments from the original patch.
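A sketch of the idea, with simplified types and a hypothetical
wait_for_space() helper (this mirrors the commit text, not the exact
driver code):

    struct ring {
            int effective_size;     /* usable bytes, excluding wrap padding */
            int tail;               /* next write offset */
            int space;              /* bytes currently free */
            int reserved_size;      /* space kept back for add_request() */
    };

    static int wait_for_space(struct ring *rb, int bytes); /* may stall */

    static int ring_prepare(struct ring *rb, int bytes)
    {
            int need = bytes;

            /* If the request doesn't fit before the end of the buffer,
             * we must wrap and waste the remainder, so count it too. */
            if (bytes > rb->effective_size - rb->tail)
                    need += rb->effective_size - rb->tail;

            /* A single combined wait covers both the request and the
             * reserve, even if the reserve ends up on the far side of
             * the wrap; the later add_request() can then never stall. */
            if (rb->space < need + rb->reserved_size)
                    return wait_for_space(rb, need + rb->reserved_size);

            return 0;
    }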
v2: Incorporated suggestion by David Gordon to move the wrap code
inside the prepare function and thus allow a single combined
wait_for_space() call rather than doing one before the wrap and
another after. This also makes the prepare code much simpler and
easier to follow.
v3: Fix for 'effective_size' vs 'size' during ring buffer remainder
calculations (spotted by Tomas Elf).
For: VIZ-5115
CC: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For the purpose of state checking we only care about the DPLL HW flags
that we actually program, so mask off the ones that we don't.
This fixes one set of DPLL state check failures.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add support for reading out the HW state for DDI ports. Since the actual
programming is very similar to the CHV/VLV DPIO PLL programming we can
reuse much of the logic from there.
This fixes the state checker failures I saw on my BXT with HDMI output.
v2:
- rebased on v2 of patch 4/5
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Depending on the platform, the port clock fed to the pipe can be the PLL's
post-divided fast clock rate or a /5 divided version of it. To make this
more obvious across the platforms, calculate this port clock along with
the rest of the PLL parameters.
This is also needed by the next patch where we can reuse the CHV helper
for the BXT PLL HW readout code; so export the corresponding helper.
While at it also add a more descriptive name to the helpers and a
comment explaining what's being calculated.
No functional change.
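The calculation being centralised is essentially this (sketch; divider
names and the div-by-5 selection are illustrative):

    /* The pipe is fed either the PLL's post-divided fast clock or a
     * /5 divided version of it, depending on the platform. */
    static int calc_port_clock(int dco_freq, int p1, int p2, bool div_by_5)
    {
            int fast_clock = dco_freq / (p1 * p2);

            return div_by_5 ? fast_clock / 5 : fast_clock;
    }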
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Move the helper next to the PLL helpers of the other platforms for
clarity.
No functional change.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Sonika Jindal <sonika.jindal@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Although we have a fixed setting for the PLL9 and EBB4 registers, it
still makes sense to check them together with the rest of PLL registers.
While at it also remove a redundant comment about 10 bit clock enabling.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Sonika Jindal <sonika.jindal@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch adds support for 0.85V VccIO on Skylake Y,
separate buffer translation tables for Skylake U,
and support for I_boost for the entries that need it.
Changes in v2:
* Refactored the code a bit to move all DDI signal level setup to
intel_ddi.c
Issue: VIZ-5677
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com>
[danvet: Apply style polish checkpatch suggested.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The hardware supposedly ignores the WM1 watermarks while the PND
deadline mode is enabled, but clear out the register just in case.
This is what the other OS does, and it does make register dumps look
more consistent when we don't have partial WM1 values lingering in
the registers (some WM1 watermarks already get zeroed when the actually
used DSPFW registers get written).
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Allow tweaking the VLV/CHV memory latencies through sysfs, like we do
for ILK+.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Enabling PM5/DDR DVFS with multiple active pipes isn't a validated
configuration. It does seem to work most of the time at least, but
there is clearly an additional risk of underruns, so let's not play
with fire.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CxSR (or maxfifo on VLV/CHV) blocks some changes to the plane control
register (enable bit at least, not quite sure about the rest). So in
order to have the plane enable/disable when we want we need to first
kick the hardware out of cxsr.
Unfortunately this requires some extra vblank waits. For the CxSR
enable after the plane update we should eventually use an async
vblank worker, but since we don't have that just do sync vblank
waits. For the disable case we have no choice but to do it
synchronously.
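The resulting ordering looks roughly like this (the cxsr/vblank helpers
exist in the driver; the surrounding flow is illustrative):

    intel_set_memory_cxsr(dev_priv, false); /* kick HW out of CxSR */
    intel_wait_for_vblank(dev, pipe);       /* let the exit latch */

    /* ... enable/disable planes; the control register now sticks ... */

    intel_wait_for_vblank(dev, pipe);       /* plane update latched */
    intel_set_memory_cxsr(dev_priv, true);  /* sync re-enable, for now */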
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In order to get decent memory self refresh residency on VLV, flip it
over to the new CHV way of doing things. VLV doesn't do PM5 or DDR DVFS
so it's a bit simpler.
I'm not sure the current memory latency used for CHV is really
appropriate for VLV. Some further testing will probably be needed to
figure that out.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Consider which planes are active and compute the FIFO split based on the
relative data rates. Since we only consider the pipe src width rather
than the plane width when computing watermarks it seems best to do the
same when computing the FIFO split as well. This means the only thing we
actually have to consider for the FIFO split is the bpp, and we can
ignore the rest.
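A minimal sketch of the split for a two-plane pipe (names and shape
illustrative):

    static void compute_fifo_split(int fifo_size, int pri_bpp, int spr_bpp,
                                   int *pri_fifo, int *spr_fifo)
    {
            int total = pri_bpp + spr_bpp;

            if (total == 0) {
                    *pri_fifo = fifo_size;
                    *spr_fifo = 0;
                    return;
            }

            /* relative data rate reduces to relative bpp, since all
             * planes share the pipe src width */
            *pri_fifo = fifo_size * pri_bpp / total;
            *spr_fifo = fifo_size - *pri_fifo;
    }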
I've just stuffed the logic into the watermark code for now. Eventually
it'll need to move into the atomic update for the crtc.
There's also one extra complication I've not yet considered; Some of the
DSPARB registers contain bits related to multiple pipes. The registers
are double buffered but apparently they update on the vblank of any
active pipe. So doing the FIFO reconfiguration properly when multiple
pipes are active is not going to be fun. But let's ignore that mess for
now.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Turns out the VLV/CHV system agent doesn't understand memory
latencies, so trying to rely on the PND deadline mechanism is not
going to fly especially when DDR DVFS is enabled. Currently we try to
avoid the problems by lying to the system agent about the deadlines
and setting the FIFO watermarks to 8 cachelines. This however leads to
bad memory self refresh residency.
So in order to satisfy everyone we'll just give up on the deadline
scheme and program the watermarks old school based on the worst case
memory latency.
I've modelled this a bit on the ILK+ approach where we compute multiple
sets of watermarks for each pipe (PM2, PM5, DDR DVFS) and then merge the
appropriate one later with the watermarks from other pipes. There isn't
too much to merge actually since each pipe has a totally independent
FIFO (well apart from the mess with the partially shared DSPARB
registers), but still decoupling the pipes from each other seems like a
good idea.
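One plausible shape for the merge step (illustrative only; the real
structures differ):

    enum vlv_wm_level { VLV_WM_PM2, VLV_WM_PM5, VLV_WM_DDR_DVFS, VLV_NUM_WM };

    struct vlv_pipe_wm {
            bool valid[VLV_NUM_WM]; /* is this level achievable here? */
    };

    /* Pick the deepest power-saving level every active pipe supports. */
    static int vlv_merge_wm_level(const struct vlv_pipe_wm *wm, int pipes)
    {
            int level, pipe;

            for (level = VLV_NUM_WM - 1; level > 0; level--) {
                    for (pipe = 0; pipe < pipes; pipe++)
                            if (!wm[pipe].valid[level])
                                    break;
                    if (pipe == pipes)
                            return level;
            }

            return VLV_WM_PM2; /* always-available baseline */
    }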
Eventually we'll want to perform the watermark update in two phases
around the plane update to avoid underruns due to the single buffered
watermark registers. But that's still in limbo for ILK+ too, so I've not
gone that far yet for VLV/CHV either.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Read out the current watermark settings from the hardware at driver init
time. This will allow us to compare the newly calculated values against
the current ones and potentially avoid needless WM updates.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Try to update the watermarks on the right side of the plane update. This
is just a temporary hack until we get the proper two part update into
place. However in the meantime this might have some chance of at least
working.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We want cxsr exit to happen ASAP, so toss in some POSTING_READ()s to
make sure things are really kicked off.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We can't elide the fb tracking invalidate if the buffer is already in
the right domain since that would lead to missed screen updates. I'm
pretty sure I've written this before, but it must have gotten lost
somewhere, unfortunately :(
v2: Chris observed that all internal set_domain users already
correctly do the fb invalidate on their own, hence we can move this
just into the set_domain ioctl instead.
v3: I screwed up setting the invalidate ORIGIN_* correctly (Chris).
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reported-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We cannot leave IPS enabled when there is no plane on the pipe:
BSpec: "IPS cannot be enabled until after at least one plane has
been enabled for at least one vertical blank." and "IPS must be
disabled while there is still at least one plane enabled on the
same pipe as IPS." This restriction applies to HSW and BDW.
However, a shortcut path in the update primary plane function,
which makes the primary plane invisible by setting DSPCTRL to 0,
was leaving IPS enabled while there was no other plane enabled on
the pipe, causing flickering that we had believed was caused by the
other restriction, where IPS cannot be used when the pixel rate is
greater than 95% of cdclk.
v2: Don't mess with Atomic path as pointed out by Ville.
v3: Rebase after a long time and atomic path changes.
Accept Ville's suggestion of not checking !fb
v4: Refactor on dinq
Reference: https://bugs.freedesktop.org/show_bug.cgi?id=85583
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
[danvet: Make it compile]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We can't improve on a 0 deviation, so when we find such a divider, skip
the remaining ones; they won't be better.
This short-circuits the search for 34 of the 373 test frequencies in the
corresponding i-g-t test (tools/skl_compute_wrpll)
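The loop ends up shaped like this (fragment; helper names illustrative):

    for (i = 0; i < n_dividers; i++) {
            unsigned int deviation = compute_deviation(dco_freq, dividers[i]);

            if (deviation < best_deviation) {
                    best_deviation = deviation;
                    best_divider = dividers[i];
            }

            /* no divider can beat an exact match */
            if (deviation == 0)
                    break;
    }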
v2: Place the short-circuiting code in skl_compute_wrpll() (Paulo)
(I'm sure nobody will notice the spurious removal of a blank line)
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Suggested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Broxton is using a different register and different bit ordering
for rps status capabilities.
Also the GT perf frequency register is different for Broxton, so update
that.
Signed-off-by: Bob Paauwe <bob.j.paauwe@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently, if an odd divider improves the deviation (minimizes it), we
take that divider. The recommendation is to prefer even dividers.
v2: Move the check at the right place after having inverted the two for
loops in the previous patch.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The HW validation team came back from further testing with a slightly
changed constraint on the deviation between the DCO frequency and the
central frequency. Instead of +-4%, it's now +1%/-6%.
Unfortunately, the previous algorithm didn't quite cope with these new
constraints, the reason being that it wasn't thorough enough looking at
the possible divider candidates.
The new algorithm looks at all dividers, which is definitely a hammer
approach (we could reduce further the set of dividers to good ones as a
follow up, at the cost of a bit more complicated code). But, at least,
we can now satisfy the +1%/-6% rule for all the "Well known" HDMI
frequencies of my test set (373 entries).
On that subject, the new code is quite extensively tested in
intel-gpu-tools (tools/skl_compute_wrpll).
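The asymmetric constraint boils down to a check like this (sketch;
helper name illustrative, percentages from the text above):

    static bool dco_deviation_ok(u64 dco_freq, u64 central_freq)
    {
            /* positive deviation: at most +1% above central */
            if (dco_freq > central_freq)
                    return (dco_freq - central_freq) * 100 <= central_freq;

            /* negative deviation: at most -6% below central */
            return (central_freq - dco_freq) * 100 <= central_freq * 6;
    }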
v2: Fix cycling between central frequencies and dividers (Paulo)
Properly choose the minimal deviation between positive and negative
candidates (Paulo).
On the 373 test frequencies, v2 computes better dividers than v1 (ie
more even dividers and lower deviation on average):
v1: average deviation: 206.52
v2: average deviation: 194.47
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
After Mika's ppgtt cleanup series, all the other free functions have
drm_device as the first parameter, except this one.
No functional changes.
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A safer way to update the PDPx registers is to send LRI commands, added
to the ring before the batchbuffer start. Otherwise, the ctx must be idle
before trying to change anything (but the ring-tail) in the ctx image. An
example where the ctx won't be idle is lite-restore.
This patch depends on 5b7e4c9ce ("drm/i915/gtt: Mark TLBS dirty for gen8+").
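The emitted sequence looks roughly like this (sketch based on the commit
text; GEN8_LEGACY_PDPES is 4, so one LRI header followed by eight
register/value pairs):

    intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN8_LEGACY_PDPES * 2));
    for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
            const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);

            intel_logical_ring_emit(ringbuf, GEN8_RING_PDP_UDW(ring, i));
            intel_logical_ring_emit(ringbuf, upper_32_bits(pd_daddr));
            intel_logical_ring_emit(ringbuf, GEN8_RING_PDP_LDW(ring, i));
            intel_logical_ring_emit(ringbuf, lower_32_bits(pd_daddr));
    }
    intel_logical_ring_emit(ringbuf, MI_NOOP);
    intel_logical_ring_advance(ringbuf);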
v2: Combine lri writes (and save 8 commands). (Mika)
v3: Rebase after ring/req changes, and removed references to deprecated patches.
Cc: Dave Gordon <david.s.gordon@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There is no need for atomicity here. Convert all bitmap
operations to nonatomic variants.
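The conversion is mechanical; the double-underscore variants skip the
atomic LOCK-prefixed operation, which is safe for bitmaps never touched
concurrently. For example (bitmap name illustrative):

    - set_bit(pde, new_page_dirs);
    + __set_bit(pde, new_page_dirs);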
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Scratch page is part of struct i915_address_space. Move other
scratch entities into the same struct. This is a preparatory patch
for having only one instance of each scratch_pt/pd.
v2: make commit msg more readable
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v1)
[danvet: Bikeshed summary to avoid confusion with vmas.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Write page directory entry without using superfluous
indirect function. Also remove unused device parameter
from the encode function.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Dynamic page table allocation might wake the shrinker
when memory is requested for page table structures.
As this happens when we try to allocate the virtual address
during binding, our vma might be among the targets for eviction.
Shield our vma from the shrinker by incrementing its pin count
before the virtual address is allocated.
The proper place to fix this would be in gem, inside
i915_vma_pin(), as Chris suggests. But we don't have that yet,
so take the shortcut as an interim solution.
Testcase: igt/gem_ctx_thrash
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Lay out the scratch page structure in a similar manner to the other
paging structures. This allows us to use the same tools for
setup and teardown.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add type-agnostic *_px macros for the paging structures to access
the page dma struct, the backing page and the dma address.
This makes the code less cluttered with the internals of
i915_page_dma.
v2: Superfluous const -> nonconst removed
v3: Rebased
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As there is flushing involved when we have done a cpu
write, add functions for mapping paging structures into cpu
space, and macros to map any type of paging structure.
v2: Make it clear that flushing kunmap is only for ppgtt (Ville)
v3: Flushing fixed (Ville, Michel). Removed superfluous semicolon
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When we set up page directories and tables, we point the entries
at the next level's scratch structure. Make this generic by
introducing fill_page_dma, which maps and flushes. We also need a
32 bit variant for legacy gens.
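The helper is expected to look close to this (sketch; the flush
condition follows the v2/v3 notes below):

    static void fill_page_dma(struct drm_device *dev,
                              struct i915_page_dma *p, const uint64_t val)
    {
            int i;
            uint64_t * const vaddr = kmap_atomic(p->page);

            for (i = 0; i < 512; i++)
                    vaddr[i] = val; /* e.g. point every entry at scratch */

            /* non-LLC platforms need an explicit flush */
            if (!HAS_LLC(dev))
                    drm_clflush_virt_range(vaddr, PAGE_SIZE);

            kunmap_atomic(vaddr);
    }

    /* 32 bit variant for legacy gens: replicate the value */
    static void fill_page_dma_32(struct drm_device *dev,
                                 struct i915_page_dma *p, const uint32_t val32)
    {
            uint64_t v = val32;

            v = v << 32 | val32;
            fill_page_dma(dev, p, v);
    }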
v2: Fix flushes and handle valleyview (Ville)
v3: Now really fix flushes (Michel, Ville)
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This has slipped in somewhere but it was harmless
as we check the page pointer before teardown.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All the paging structures are now similar and mapped for
dma. The unmapping is taken care of by common accessors, so
don't overload the reader with such details.
v2: Be consistent with goto labels (Michel)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All our paging structures have struct page and dma address
for that page.
Add struct for page/dma address pairs and use it to make
the setup and teardown for different paging structures
identical.
Include the page directory offset in the struct for legacy gens as
well. Rename it to clearly point out that it is an offset into the
ggtt.
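The pair ends up looking like this (sketch matching the description
above):

    struct i915_page_dma {
            struct page *page;
            union {
                    dma_addr_t daddr;
                    /* For gen6/7 only: offset into the ggtt where the
                     * page directory entries for the ppgtt begin. */
                    uint32_t ggtt_offset;
            };
    };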
v2: Add comment about ggtt_offset (Michel)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The legacy mode mm switch and the execlist context assignment
need the dma address of the page directories.
Introduce a function that encapsulates the scratch_pd dma
fallback if no pd is found.
v2: Rebase, s/ring/req
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We can have an exactly 4GB sized ppgtt on a 32-bit system;
size_t is inadequate for this.
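For example (illustrative field):

    - size_t size;  /* 32 bits on a 32-bit kernel: 4GB truncates to 0 */
    + u64 size;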
v2: Convert a lot more places (Daniel)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Check the allocation area against the known end of the
address space instead of against a fixed value.
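A sketch of the check, per the v2 note below:

    if (WARN_ON(start + length > vm->total))
            return -ENODEV; /* internal bug, not a user error */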
v2: Return ENODEV on internal bugs (Chris)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When we touch gen8+ page maps, mark them dirty like we
do with previous gens.
v2: Update comment (Joonas)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently we don't have any real indication when a pipe gets
enabled/disabled. Add some.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Avoid some 'switch (plane->type)' by storing the frontbuffer_bits in
intel_plane.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: use singular frontbuffer_bits in intel_plane since a plane can
only ever have one bit. Discussed with Ville on irc.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The registers and process differ from other platforms. If the hardware
was programmed incorrectly, this will return invalid cdclk values, which
should then cause reprogramming of the hardware.
v2(Matt): Return 19.2 MHz when DE PLL is disabled (Ville)
v3: Make less assumptions about the hardware state (Ville)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Bob Paauwe <bob.j.paauwe@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently the object size is returned for the rotated VMA size, which
can be bigger than the rotated view itself. Since the binding code pads
all excess size with scratch pages, the only minor issue with this is
wasting some GGTT space, but it still feels nicer to fix it and report
the real size.
v2: Rebase for tracking size in bytes instead of pages.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This way data is available as soon as the view is passed into the call chain.
v2: Store size in bytes instead of pages under the appropriate name. (Chris Wilson)
v3: Use uint64_t instead of size_t. (Daniel Vetter)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It is only used in logging and it doesn't need to exist on its own.
Also it was misleading to log view size as object size.
v2: Improve commit message. (Joonas Lahtinen)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
[danvet: s/%lu/%zu/ where needed, reported by 0-day.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the new DRRS code it kinda sticks out, and we never managed to
get this to work well enough without causing issues. Time to wave
goodbye.
I've decided to keep the logic for programming the reduced clocks
intact, but everything else is gone. If anyone ever wants to resurrect
this we need to redo it all anyway on top of the frontbuffer tracking.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This typo led to the crtc scaler getting enabled incorrectly and an
eventual state checker mismatch about the scaler_id.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add the WaClearSlmSpaceAtContextSwitch workaround to the indirect
context w/a batch buffer.
This WA performs writes to the scratch page, so the page must be valid;
this check is performed before initializing the batch with this WA.
v2: s/PIPE_CONTROL_FLUSH_RO_CACHES/PIPE_CONTROL_FLUSH_L3 (Ville)
v3: GTT bit in scratch address should be mbz (Chris)
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The frontbuffer code gives us accurate information about activity,
let's use it. Again this should avoid unnecessary updates when multiple
screens are on.
Also realign function parameters; I couldn't resist that bit of OCD.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Durgadoss R <durgadoss.r@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The current code tracks busyness across all pipes, but we're only
really interested in the one pipe DRRS is enabled on. Fairly tiny
optimization, but something I noticed while reading the code. But it
might matter a bit when e.g. showing a video or something only on the
external screen, while the panel is kept static.
Also regroup the code slightly: First compute new bitmasks, then take
appropriate actions.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Durgadoss R <durgadoss.r@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The current code tracks busyness across all pipes, but we're only
really interested in the one pipe DRRS is enabled on. Fairly tiny
optimization, but something I noticed while reading the code. But it
might matter a bit when e.g. showing a video or something only on the
external screen, while the panel is kept static.
Also regroup the code slightly: First compute new bitmasks, then take
appropriate actions.
Cc: Ramalingam C <ramalingam.c@intel.com>
Cc: Sivakumar Thulasimani <sivakumar.thulasimani@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
I was momentarily confused until I've double-checked that these
functions really only compute state and don't update the hardware
state. They once did that, but since Ander's rework of the dpll
computation flow that's no longer the case.
Rename them to avoid further confusion.
Note that the ilk code already follows the compute_dpll naming scheme
for computing the actual register value. DDI code goes with _calc_,
but that is close enough.
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Useful to figure out whether stuck bits are due to the frontbuffer
tracking code as opposed to individual consumers (who have their own
bitmask tracking).
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Paulo noticed that the fbc frontbuffer tracking flush callback
occasionally gets a call without any bit set. This can happen when we
have to filter flush calls due to e.g. gpu rendering. Filter these
out.
Reported-by: Paulo Zanoni <przanoni@gmail.com>
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The current/old frontbuffer might still have gpu frontbuffer rendering
pending. But once flipped it won't have the corresponding frontbuffer
bits any more and hence the request retire function won't ever clear
the corresponding busy bits. The async flip tracking (with the
flip_prepare and flip_complete functions) already does this, but
somehow I've forgotten to do this for synchronous flips.
Note that we don't track outstanding rendering of the new framebuffer
with busy_bits since all our plane update code waits for previous
rendering to complete before displaying a new buffer. Hence a new
buffer will never be busy.
v2: Drop the spurious inline Ville spotted.
v3: Don't touch flip_bits in the synchronous frontbuffer_flip
function, noticed by Paulo.
v4: Remove one more inline that slipped through (Paulo).
Reported-by: Paulo Zanoni <przanoni@gmail.com>
Cc: Paulo Zanoni <przanoni@gmail.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Testcase: igt/kms_frontbuffer_tracking/fbc-modesetfrombusy
Tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
To initialize the WA batch, at the moment we first allocate the batch and
then check whether we have any WA to be initialized for the given Gen. If
we don't have any WA, we WARN the user, destroy the batch and return; but
this causes another WARN in the cleanup code, complaining about sleeping
in atomic context.
Till we understand this better, and to keep things simpler, bail out early
if we don't have any WA.
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Kernel 0-day framework reported warnings with WA batch patches, this patch
fixes those warnings and an additional warning reported in intel_lrc.c file.
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As there is no OLR to check, the check_olr() function is now a no-op and can be
removed.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A bunch of the low level LRC functions were passing around ringbuf and ctx
pairs. In a few cases, they took the r/c pair and a request as well. This is all
quite messy and unnecessary. The context_queue() call is especially bad since the
fake request code got removed - it takes a request and three extra things that
must be extracted from the request and then it checks them against what it finds
in the request. Removing all the derivable data makes the code much simpler all
round.
This patch updates those functions to just take the request structure.
Note that logical_ring_wait_for_space now takes a request structure but already
had a local request pointer that it uses to scan for something to wait on. To
avoid confusion the local variable has been renamed 'target' (it is searching
for a target request to do something with) and the parameter has been called req
(to guarantee anything accidentally missed gets a compiler error).
v2: Updated commit message re wait_for_space (Tomas Elf review comment).
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The LRC submission code requires a request for tracking purposes. It does not
actually require that request to 'complete'; it simply uses it for keeping hold
of reference counts on contexts and such like.
Previously, the fall back path of polling for space in the ring would start by
submitting any outstanding work that was sitting in the buffer. This submission
was not done as part of the request that owned that work, because that would
lead to complications with the request being submitted twice. Instead, a null
request structure was passed in to the submit call and a fake one was created.
That fall back path has long since been obsoleted and has now been removed. Thus
there is never any need to fake up a request structure. This patch removes that
code. A couple of sanity check warnings are added as well, just in case.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In _i915_add_request(), the request is associated with a userland client.
Specifically it is linked to the 'file' structure and the current user process
is recorded. One problem here is that the current user process is not
necessarily the same as when the request was submitted to the driver. This is
especially true when the GPU scheduler arrives and decouples driver submission
from hardware submission. Note also that it is only in the case where the add
request comes from an execbuff call that there is a client to associate. Any
other add request call is kernel only so does not need to do it.
This patch moves the client association into a separate function. This is then
called from the execbuffer code path itself at a sensible time. It also removes
the now redundant 'file' pointer from the add request parameter list.
An extra cleanup of the client association is also added to the request clean up
code for the eventuality where the request is killed after association but
before being submitted (e.g. due to out of memory error somewhere). Once the
submission has happened, the request is on the request list and the regular
request list removal will clear the association. Note that this still needs to
happen at this point in time because the request might be kept floating around
much longer (due to someone holding a reference count) and the client should not
be worrying about this request after it has been retired.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The outstanding_lazy_request is no longer used anywhere in the driver.
Everything that was looking at it now has a request explicitly passed in from on
high. Everything that was relying upon it behind the scenes is now explicitly
creating/passing/submitting its own private request. Thus the OLR can be
removed.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Much of the driver has now been converted to passing requests around instead of
rings/ringbufs/contexts. Thus the function for retrieving the request from a
ring (i.e. the OLR) is no longer used and can be removed.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that the *_ring_begin() functions no longer call the request allocation
code, it is finally safe for the request allocation code to call *_ring_begin().
This is important to guarantee that the space reserved for the subsequent
i915_add_request() call does actually get reserved.
v2: Renamed functions according to review feedback (Tomas Elf).
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that everything above has been converted to use requests,
intel_logical_ring_begin() can be updated to take a request instead of a
ringbuf/context pair. This also means that it no longer needs to lazily allocate
a request if no-one happens to have done it earlier.
Note that this change makes the execlist signature the same as the legacy
version. Thus the two functions could be merged into a ring->begin() wrapper if
required.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that everything above has been converted to use requests, intel_ring_begin()
can be updated to take a request instead of a ring. This also means that it no
longer needs to lazily allocate a request if no-one happens to have done it
earlier.
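In essence the interface change is:

    - int intel_ring_begin(struct intel_engine_cs *ring, int num_dwords);
    + int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords);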
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated intel_ring_cacheline_align() to take a request instead of a ring.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the various ring->signal() implementations to take a request instead of
a ring. This removes their reliance on the OLR to obtain the seqno value that
should be used for the signal.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the ring->sync_to() implementations to take a request instead of a ring.
Also updated the tracer to include the request id.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
[danvet: Rebase since I didn't merge the patch which added ->uniq.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the ring->emit_bb_start() implementation to take a request instead of a
ringbuf/context pair.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the various ring->dispatch_execbuffer() implementations to take a
request instead of a ring.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the ring->emit_request() implementation to take a request instead of a
ringbuf/request pair. Also removed its use of the OLR for obtaining the
request's seqno.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the various ring->add_request() implementations to take a request
instead of a ring. This removes their reliance on the OLR to obtain the seqno
value that the request should be tagged with.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the various ring->emit_flush() implementations to take a request instead
of a ringbuf/context pair.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated intel_emit_post_sync_nonzero_flush(), gen7_render_ring_cs_stall_wa() and
gen8_emit_pipe_control() to take requests instead of rings.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the various ring->flush() functions to take a request instead of a ring.
Also updated the tracer to include the request id.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
[danvet: Rebase since I didn't merge the addition of req->uniq.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the switch_mm() code paths to take a request instead of a ring. This
includes the myriad *_mm_switch functions themselves and a bunch of PDP related
helper functions.
v2: Rebased to newer tree.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the *_ring_flush_all_caches() functions to take requests instead of
rings or ringbuf/context pairs.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the *_ring_workarounds_emit() functions to take requests instead of
ring/context pairs.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated *_ring_invalidate_all_caches(), i915_reset_gen7_sol_offsets() and
i915_emit_box() to take request structures instead of ring or ringbuf/context
pairs.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated mi_set_context() to take a request structure instead of a ring and
context pair.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Converted i915_gem_l3_remap() to take a request structure instead of a ring.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that everything above has been converted to use request structures, it is
possible to update the lower level move_to_active() functions to be request
based as well.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that all callers of i915_add_request() have a request pointer to hand, it is
possible to update the add request function to take a request pointer rather
than pulling it out of the OLR.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the display page flip code to do explicit request creation and
submission rather than relying on the OLR and just hoping that the request
actually gets submitted at some random point.
The sequence is now to create a request, queue the work to the ring, assign the
known request to the flip queue work item then actually submit the work and post
the request.
Note that every single flip function used to finish with
'__intel_ring_advance(ring);'. However, immediately after they return there is
now an add request call which will do the advance anyway. Thus the many
duplicate advance calls have been removed.
v2: Updated commit message with comment about advance removal.
v3: The request can now be allocated by the _sync() code earlier on. Thus the
page flip path does not necessarily need to allocate a new request, it may be
able to re-use one.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the overlay update code path to do explicit request creation and
submission rather than relying on the OLR to do the right thing.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The plan is to pass requests around as the basic submission tracking structure
rather than rings and contexts. This patch updates the i915_gem_object_sync()
code path.
v2: Much more complex patch to share a single request between the sync and the
page flip. The _sync() function now supports lazy allocation of the request
structure. That is, if one is passed in then that will be used. If one is not,
then a request will be allocated and passed back out. Note that the _sync() code
does not necessarily require a request. Thus one will only be created in
certain situations. The reason the lazy allocation must be done within the
_sync() code itself is because the decision to need one or not is not really
something that code above can second guess (except in the case where one is
definitely not required because no ring is passed in).
The call chains above _sync() now support passing a request through, with most
callers passing in NULL and assuming that no request will be required (because
they also pass in NULL for the ring and therefore can't be generating any ring
code).
The exception is intel_crtc_page_flip(), which now supports having a request
returned from _sync(). If one is, then that request is shared by the page flip
(if the page flip is of a type to need a request). If _sync() does not generate
a request but the page flip does need one, then the page flip path will create
its own request.
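A sketch of the lazy-allocation contract described above (simplified;
names and the allocation call are illustrative):

    static int object_sync(struct drm_i915_gem_object *obj,
                           struct intel_engine_cs *to,
                           struct drm_i915_gem_request **to_req)
    {
            if (to == NULL)
                    return 0; /* no ring: no request can be needed */

            if (*to_req == NULL) {
                    /* allocate one and pass it back to the caller so
                     * e.g. the page flip can share it */
                    int ret = i915_gem_request_alloc(to, to->default_context,
                                                     to_req);
                    if (ret)
                            return ret;
            }

            /* ... emit the wait/semaphore using *to_req ... */
            return 0;
    }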
v3: Updated comment description to be clearer about 'to_req' parameter (Tomas
Elf review request). Rebased onto newer tree that significantly changed the
synchronisation code.
v4: Updated comments from review feedback (Tomas Elf)
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the two render_state_init() functions to take a request pointer instead
of a ring. This removes their reliance on the OLR.
v2: Rebased to newer tree.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that everything above has been converted to use requests, it is possible to
update init_context() to take a request pointer instead of a ring/context pair.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In execlist mode, context initialisation is deferred until first use of the
given context. This is because execlist mode has per-ring context state and
thus many more context storage objects than legacy mode, many of which are
never actually used. Previously, the initialisation commands were written to
the ring and tagged with some random request structure via the OLR. This
seemed to be causing a null pointer dereference bug under certain
circumstances (BZ:88865).
This patch adds explicit request creation and submission to the deferred
initialisation code path. Thus removing any reliance on or randomness caused by
the OLR.
Note that it should be possible to move the deferred context creation until even
later - when the context is actually switched to rather than when it is merely
validated. This would allow the initialisation to be done within the request of
the work that is wanting to use the context. Hence, the extra request that is
created, used and retired just for the context init could be removed completely.
However, this is left for a follow up patch.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated do_switch() to take a request pointer instead of a ring/context pair.
v2: Removed some overzealous req-> dereferencing.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that the request is guaranteed to specify the context, it is possible to
update the context switch code to use requests rather than ring and context
pairs. This patch updates i915_switch_context() accordingly.
Also removed the warning that the request's context must match the last context
switch's context. As the context switch now gets the context object from the
request structure, there is no longer any scope for the two to become out of
step.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The final step in removing the OLR from i915_gem_init_hw() is to pass the newly
allocated request structure in to each step rather than passing a ring
structure. This patch updates both i915_ppgtt_init_ring() and
i915_gem_context_enable() to take request pointers.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that a single per-ring loop is being done for all the different
initialisation steps in i915_gem_init_hw(), it is possible to add proper request
management as well. The last remaining issue is that the context enable call
eventually ends up within *_render_state_init() and this does its own private
_i915_add_request() call.
This patch adds explicit request creation and submission to the top level loop
and removes the add_request() from deep within the sub-functions.
v2: Updated for removal of batch_obj from add_request call in previous patch.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The render state initialisation code does an explicit i915_add_request() call to
commit the init commands. It was passing in the initialisation batch buffer to
add_request() as the batch object parameter. However, the batch object entry in
the request structure (which is all that parameter is used for) is meant for
keeping track of user generated batch buffers for blame tagging during GPU
hangs.
This patch clears the batch object parameter so that kernel generated batch
buffers are not tagged as being user generated.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The start of day context initialisation code in i915_gem_context_enable() loops
over each ring and calls the legacy switch context or the execlist init context
code as appropriate.
This patch moves the ring looping out of that function in to the top level
caller i915_gem_init_hw(). This means a single pass can be made over all
rings doing the PPGTT, L3 remap and context initialisation of each ring
altogether.
For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>