Fix up the error unwind for a failing logical_ring_init() by moving the
cleanup into the callers that own the various bits of state during
initialisation, so we don't forget to free the state allocated by the
caller.
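For illustration, a minimal sketch of the ownership rule (hypothetical
function names, not the actual i915 code): logical_ring_init() unwinds only
its own partial state, and each caller frees only what it allocated itself:

  static int caller_engine_init(struct intel_engine_cs *engine)
  {
          int err;

          err = allocate_caller_state(engine);    /* hypothetical */
          if (err)
                  return err;

          err = logical_ring_init(engine);
          if (err)
                  goto err_caller_state;  /* caller cleans up only its own state */

          return 0;

  err_caller_state:
          free_caller_state(engine);      /* hypothetical */
          return err;
  }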
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180920195948.16448-1-chris@chris-wilson.co.uk
Once we have flushed the first request through the system, to both load a
context and record the default state, tell the GPU to park and idle
itself, putting it immediately (hopefully, at least) into a powersaving
state and allowing us to start from a known state after setting up all our
bookkeeping.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180920161343.1117-1-chris@chris-wilson.co.uk
It really wants dev_priv anyway, and it now matches i915_gem_init_stolen.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180920142707.19659-2-matthew.auld@intel.com
If we copy all the contents of the sg across and not just the page link,
we can then also put it to work in fake_get_huge_pages and beyond.
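For illustration, a hedged sketch of the difference (not the exact selftest
code): copy the whole entry's payload across, letting sg_set_page() preserve
the destination's chain/end markers rather than overwriting page_link
wholesale:

  #include <linux/scatterlist.h>

  static void copy_sg_entry(struct scatterlist *dst, struct scatterlist *src)
  {
          /* page + offset + length travel together */
          sg_set_page(dst, sg_page(src), src->length, src->offset);
          sg_dma_address(dst) = sg_dma_address(src);
          sg_dma_len(dst) = sg_dma_len(src);
  }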
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180920142707.19659-1-matthew.auld@intel.com
Stolen memory is lost across S4 (hibernate) or S3-RST as it is a portion
of ordinary volatile RAM. As we allocate our rings from stolen, this may
include the rings used for our preempt context and their breadcrumb
instructions. In order to allow preemption following hibernation and
loss of stolen memory, we therefore need to repopulate the instructions
inside the lost ring upon resume. To handle both module load and resume,
we simply defer constructing the ring to first use.
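For illustration, a hedged sketch of the defer-to-first-use idea
(hypothetical helper and flag names, not the actual i915 code):

  static int engine_pin_ring(struct intel_engine_cs *engine)
  {
          /* Stolen-backed ring contents do not survive S3-RST/S4, so
           * (re)populate on first use after load or resume instead of
           * once at module load. */
          if (!engine->ring_populated) {                   /* hypothetical */
                  int err = populate_preempt_ring(engine); /* hypothetical */
                  if (err)
                          return err;
                  engine->ring_populated = true;
          }
          return 0;
  }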
Testcase: igt/drv_selftest/live_gem
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180919205432.18394-1-chris@chris-wilson.co.uk
We need to exercise the HW and submission paths for switching contexts
rapidly to check that features such as execlists' wa_tail are adequate.
Plus it's an interesting baseline latency metric.
v2: Check the initial request for allocation errors
v3: Use finite waits for more robust handling of broken code
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180920105809.1872-1-chris@chris-wilson.co.uk
PSR requires the AUX IO power well to be enabled. This was already in place
for CNL; extend it to ICL too. Not enabling the power well results in
AUX error interrupts when the hardware exits PSR.
Reported-by: Casey G Bowman <casey.g.bowman@intel.com>
Reported-by: Jyoti R Yadav <jyoti.r.yadav@intel.com>
Cc: Matt Atwood <matthew.s.atwood@intel.com>
Cc: Jyoti R Yadav <jyoti.r.yadav@intel.com>
Cc: Casey G Bowman <casey.g.bowman@intel.com>
Tested-by: Casey G Bowman <casey.g.bowman@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914001822.2503-1-dhinakaran.pandiyan@intel.com
SDVO encoders can have multiple different types of outputs hanging off
them. Currently the code tries to muck around with various is_foo
flags in the encoder to figure out which type it's driving. That doesn't
work with atomic and other stuff, so let's nuke those flags and just
look at which type of connector we're actually dealing with.
We'll still need is_hdmi as that's not discoverable via the output flags,
but we'll move it under the connector.
We'll also move the SDVO fixed mode handling out from the .get_modes()
hook into the SDVO LVDS init function so that we can bail out properly
if there is no fixed mode to be found.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180917151504.8754-1-ville.syrjala@linux.intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Smatch reports:
../drivers/gpu/drm/i915/intel_sprite.c:1192 skl_plane_check_fb() warn: was || intended here instead of &&?
Obviously smatch is correct here since we're trying to check if we're
using either of the ccs modifiers. Since we now have is_ccs_modifier()
let's use it to fix this.
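For illustration, the shape of the bug and fix (sketch, not the verbatim
diff): a single value can never equal two different modifiers at once, so
the && test was always false; is_ccs_modifier() expresses the intended
"either CCS modifier" check:

  /* before: always false, which is what smatch spotted */
  bool ccs = fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS &&
             fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS;

  /* after: what was actually meant */
  bool ccs = is_ccs_modifier(fb->modifier);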
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: e21c2d3310 ("drm/i915: Move skl plane fb related checks into a better place")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180918131059.793-1-ville.syrjala@linux.intel.com
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Clean up some cases where we're dealing with GTT pages instead of
system pages to use I915_GTT_PAGE_SIZE instead of PAGE_SHIFT. So
just replace the shifts with mul/div as appropriate. These
are the easy ones; the rest probably need some actual thought.
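For illustration, the kind of replacement meant here (sketch):

  /* before: conflates GTT pages with CPU pages */
  offset = idx << PAGE_SHIFT;
  count  = size >> PAGE_SHIFT;

  /* after: spell out that these are GTT pages */
  offset = idx * I915_GTT_PAGE_SIZE;
  count  = size / I915_GTT_PAGE_SIZE;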
No real changes in the generated asm. Only gen8_ppgtt_insert_4lvl()
was affected as gcc decided to do the following change:
- be9: 89 d9 mov %ebx,%ecx
- beb: c1 e1 0c shl $0xc,%ecx
- bee: 48 63 c9 movslq %ecx,%rcx
+ be9: 48 63 cb movslq %ebx,%rcx
+ bec: 48 c1 e1 0c shl $0xc,%rcx
and that then shifted a bunch of the offsets by one byte. I presume
the sign extensions in the asm are due to integer promotions from
u16 etc. Hopefully someone has confirmed that those don't end up
doing the wrong thing for us.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180917171414.19220-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
The prior assumption was that we did not need to reset the CSB on
wedging when cancelling the outstanding requests as it would be cleaned
up in the subsequent reset prior to restarting the GPU. However, what
was not accounted for was that in preparing for the reset, we would try
to process the outstanding CSB entries. If the GPU happened to complete
a CS event just as we were performing the cancellation of requests, that
event would be kept in the CSB until the reset -- but our bookkeeping
was cleared, causing confusion when trying to complete the CS event.
v2: Use a sanitize on unwedge to avoid interfering with eio suspend
(where we intentionally disable GPU reset).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107925
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-3-chris@chris-wilson.co.uk
If an asynchronous wait on a foreign fence times out, we print a warning
indicating which fence was not signaled. As i915_sw_fences become more
common, include the debug hint (the symbol name of the target) to help
identify the waiter. E.g.
[ 31.968144] Asynchronous wait on fence sw_sync:gem_eio:1 timed out (hint:submit_notify [i915])
We also want to downgrade from a warning to a notice (normal but
significant condition) as the timeout is imposed and controlled by the
caller (i.e. it is deliberate) and can be provoked by userspace.
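For illustration, a hedged sketch of the resulting message (not the
verbatim i915_sw_fence code); %ps prints the symbolic name of the notify
callback, which becomes the hint:

  pr_notice("Asynchronous wait on fence %s:%s:%u timed out (hint:%ps)\n",
            fence->ops->get_driver_name(fence),
            fence->ops->get_timeline_name(fence),
            fence->seqno,
            wait->notify);  /* hypothetical field: the i915_sw_fence notify hook */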
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914124007.18790-1-chris@chris-wilson.co.uk
That we use a WB mapping for updating the RING_TAIL register inside the
context image even on !llc machines has been a source of consternation
for every reader. It appears to work on bsw+, but it may just have been
that we have been incredibly bad at detecting the errors.
v2: With extra enthusiasm.
v3: Drop force of map type for pinned default_state as by the time we
pin it, the map type is always WB and doesn't conflict with the earlier
use by ce->state.
v4: Transfer engine->default_state from MAP_WC to MAP_WB on creation so
we do not need the MAP_FORCE littered around the backends
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-3-chris@chris-wilson.co.uk
Check we can indeed acquire a WB mapping of the context image on module
load. Later this will give us the opportunity to validate that we can
switch from WC to WB as required.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-2-chris@chris-wilson.co.uk
Now that we reload both RING_HEAD and RING_TAIL when rebinding the
context, we do not need to scrub those registers immediately on resume.
v2: Handle the perma-pinned contexts.
v3: Set RING_TAIL on context-pin so that we always have known state in
the context image for the ring registers and all parties have similar
code (ripe for refactoring).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-1-chris@chris-wilson.co.uk
In order to reduce latency when checking for idle, we kick the tasklet
directly. Sometimes this is not enough, as it is queued on another cpu,
so to improve the accuracy of this idle-check (and to reduce latency
overall by avoiding another pass, or worse, declaring a timeout!) we
wait for the tasklet to complete.
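For illustration, a hedged sketch of the idea (not the exact i915 code):
after kicking the submission tasklet, also wait out any run already in
flight on another CPU before sampling the idle state:

  #include <linux/interrupt.h>

  static void flush_submission_tasklet(struct tasklet_struct *t)
  {
          tasklet_hi_schedule(t);   /* kick */
          tasklet_unlock_wait(t);   /* spin until a concurrent run completes */
  }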
References: https://bugs.freedesktop.org/show_bug.cgi?id=107916
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-2-chris@chris-wilson.co.uk
If we try and fail to allocate a i915_request, we apply some
backpressure on the clients to throttle the memory allocations coming
from i915.ko. Currently, we wait until completely idle, but this is far
too heavy and leads to some situations where the only escape is to
declare a client hung and reset the GPU. The intent is to only ratelimit
the allocation requests and to allow ourselves to recycle requests and
memory from any long queues built up by a client hog.
Although system memory is inherently a global resource, we don't
want an unlucky client to be overly penalized by paying the price of
reaping a hog. To reduce the influence of one client on another, we can,
instead of waiting for the entire GPU to idle, impose a barrier on the
local client.
(One end goal for request allocation is for scalability to many
concurrent allocators; simultaneous execbufs.)
To prevent ourselves from getting caught out by long-running requests
(requests that may never finish without intervention from the userspace
we are blocking), we need to impose a finite timeout, ideally shorter than
hangcheck. A long time ago Paul McKenney suggested that RCU users should
ratelimit themselves using judicious use of cond_synchronize_rcu(). This
gives us the opportunity to reduce our indefinite wait for the GPU to
idle to a wait for the RCU grace period of the previous allocation along
this timeline to expire, satisfying both the local and finite properties
we desire for our ratelimiting.
There are still a few global steps (reclaim not least amongst those!)
when we exhaust the immediate slab pool, but at least now the wait is
itself decoupled from struct_mutex for our glorious highly parallel future!
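For illustration, a hedged sketch of the ratelimit (the RCU calls are the
real API, the surrounding names are illustrative): remember an RCU
grace-period cookie when a request is allocated on a timeline, and on
allocation failure wait only for that grace period to expire instead of
for the whole GPU to idle:

  #include <linux/rcupdate.h>

  /* on successful allocation, record the current grace period */
  timeline->rcustate = get_state_synchronize_rcu();       /* illustrative field */

  /* on kmem_cache_alloc() failure: local, finite backpressure */
  cond_synchronize_rcu(timeline->rcustate);  /* no-op if already expired */
  rq = kmem_cache_alloc(request_slab, GFP_KERNEL | __GFP_RETRY_MAYFAIL);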
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106680
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-1-chris@chris-wilson.co.uk
IPC may cause underflows if not used with a dual-channel symmetric
memory configuration. Disable IPC for non-symmetric configurations on
the affected platforms.
Display WA #1141
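For illustration, a hedged sketch of the check (platform test and field
names illustrative):

  /* Display WA #1141: IPC needs dual-channel symmetric memory */
  if (affected_platform(dev_priv) &&            /* illustrative helper */
      !dev_priv->dram_info.symmetric_memory)
          dev_priv->ipc_enabled = false;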
Changes Since V1:
- Re-arrange the code.
- update wrapper to return if memory is symmetric (Rodrigo)
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-6-mahesh1.kumar@intel.com
Memory with 16GB DIMMs requires an increase of 1us in the level-0 latency.
This patch implements that.
Bspec: 4381
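For illustration, a hedged sketch of the adjustment (field and array names
illustrative):

  /* 16GB DIMMs need an extra 1us of level-0 latency */
  if (dev_priv->dram_info.is_16gb_dimm)
          wm_latency[0] += 1;   /* latencies are in microseconds */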
changes since V1:
- s/memdev_info/dram_info
- make skl_is_16gb_dimm pure function
Changes since V2:
- make is_16gb_dimm more generic
- rebase
Changes since V3:
- Simplify condition (Maarten)
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180831110942.9234-1-mahesh1.kumar@intel.com
This patch adds support to decode system memory bandwidth and other
parameters for Skylake and Gen9+ platforms, which will be used for the
arbitrated display memory bandwidth calculation on Gen9-based
platforms and the WM latency level-0 workaround calculation on Gen9+.
Changes Since V1:
- s/memdev_info/dram_info
- create a struct to hold channel info
Changes Since V2:
- rewrite code to adhere i915 coding style
- not valid for GLK
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-3-mahesh1.kumar@intel.com
This patch adds support to decode system memory bandwidth and other
parameters for the Broxton platform, which will be used for the arbitrated
display memory bandwidth calculation on Gen9-based platforms and the
WM latency level-0 workaround calculation on Gen9+ platforms.
Changes since V1:
- s/memdev_info/dram_info
Changes since V2:
- Adhere to i915 coding style (Rodrigo)
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-2-mahesh1.kumar@intel.com
Add support to load the DMC on Icelake.
While at it, also add support to load the firmware
during system resume.
v2: load firmware during system resume.(Imre)
v3: enable has_csr for icelake.(Jyoti)
v4: Only load the firmware in this patch
Cc: Jyoti Yadav <jyoti.r.yadav@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180828003844.4682-2-anusha.srivatsa@intel.com
If we have framebuffers that are >= 4GiB in size we will overflow
the fb size check in intel_fill_fb_info().
Currently that is only possible with NV12 and CCS as offsets[1]
may be anything between 0 and 0xffffffff. offsets[0] is currently
required to be 0 so we can't hit the overflow with any single
plane format (thanks to max fb size of 8kx8k and max stride of
32 KiB).
In the future we may allow almost any framebuffer to exceed 4GiB
in size so we really should fix the overflow. Not that the overflow
is particularly dangerous. It's mostly just a sanity check against
insane userspace. The display engine can't write to memory anyway
so I suppose in the worst case we might anger the hw by attempting
scanout past the end of the ggtt, or we might scan out some data
that we're not supposed to see from other parts of the ggtt.
Note that triggering this overflow depends on the driver
aligning the fb height to the next tile boundary to push the
calculated size above 4GiB. With linear buffers the effective
tile height is one so that never happens, and the core already
has a check for 32bit overflow of offsets[]+pitches[]*height.
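For illustration, a hedged sketch of an overflow-safe form of the check
(not the exact fix):

  #include <linux/overflow.h>

  u32 plane_size, end;

  /* reject fbs whose offset + pitch * aligned height would wrap 32 bits */
  if (check_mul_overflow(fb->pitches[i], (u32)aligned_height, &plane_size) ||
      check_add_overflow(fb->offsets[i], plane_size, &end))
          return -ERANGE;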
v2: Drop the unnecessary cast (Chris)
Testcase: igt/kms_big_fb/x-tiled-addfb-size-offset-overflow
Testcase: igt/kms_big_fb/y-tiled-addfb-size-offset-overflow
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912180443.28649-1-ville.syrjala@linux.intel.com
Use I915_GTT_PAGE_SIZE when talking about GTT pages rather than
physical pages.
There are some PAGE_SHIFTs left though. Not sure if we want to
introduce I915_GTT_PAGE_SHIFT or what?
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> # at least some of it :)
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180913150405.706-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
We can remove the update-via-batch-buffer code path, which is basically an
effective duplicate of the update-via-context-image path, if we notice that
after we have idled the GPU, we can update the context image of even the
kernel context directly. (The update-via-batch-buffer path existed only to
solve the problem of how to update the kernel context image.)
The only additional thing needed is to activate the edited configuration by
sending one empty request down the pipe. This accomplishes a context restore
of the updated kernel context and so the OA configuration gets written out
to its control registers.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912152930.28237-1-tvrtko.ursulin@linux.intel.com
Move the display w/a #1175 to a better place. That place
being the new skl+ specific plane->check() hook. This leaves
the skl_check_plane_surface() stuff to deal with the gtt offset
and src coordinate stuff as originally envisioned.
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-12-ville.syrjala@linux.intel.com
Move the skl+ specific framebuffer related checks from
intel_plane_atomic_check_with_state() into a new function
(skl_plane_check_fb()) which we'll simply call from the skl
plane->check() hook.
v2: Split out the Y/Yf+CCS vs. interlaced change (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-11-ville.syrjala@linux.intel.com
Split up intel_check_primary_plane() and intel_check_sprite_plane()
into per-platform variants. This way we can get unified behaviour
between the SKL universal planes, and we stop checking non-SKL
specific scaling limits for the "sprite" planes. And we now get
a natural place to add more platform-specific checks.
v2: Split the .check_plane() calling convention change out (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-10-ville.syrjala@linux.intel.com
We can easily calculate the plane can_scale/min_downscale on demand.
And later on we'll probably want to start calculating these dynamically
based on the cdclk just as skl already does.
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-9-ville.syrjala@linux.intel.com
Let's store the final plane stride in the plane state. This avoids
having to pick between the normal vs. rotated stride during hardware
programming. And once we get GTT remapping the plane stride will
no longer match the fb stride so we'll need a place to store it
anyway.
v2: Keep checking fb->pitches[0] for cursor as later on we won't
populate plane_state->color_plane[0].stride for invisible planes
and we have been checking the cursor fb stride even for invisible
planes
v3: s/betwen/between in commit msg (José)
v4: Check color_plane[0].stride instead of fb->pitches[0] in
the skl_check_main_surface() X-tiling kludge
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180911150139.23922-1-ville.syrjala@linux.intel.com
Each plane may have different stride limitations. Let's add a new
plane function to return the maximum stride for each plane. There's
going to be some use for this outside the .atomic_check() stuff, hence
the separate hook.
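For illustration, a hedged sketch of such a hook (signature and limits
illustrative):

  struct intel_plane {
          /* ... */
          unsigned int (*max_stride)(struct intel_plane *plane,
                                     u32 pixel_format, u64 modifier,
                                     unsigned int rotation);
  };

  static unsigned int skl_plane_max_stride(struct intel_plane *plane,
                                           u32 pixel_format, u64 modifier,
                                           unsigned int rotation)
  {
          int cpp = drm_format_plane_cpp(pixel_format, 0);

          /* illustrative: 8K pixels or 32K bytes, whichever is smaller */
          return min(8192 * cpp, 32 * 1024);
  }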
v2: Fix ilk+ x-tiled max stride to be 32k (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-3-ville.syrjala@linux.intel.com
Rename some of the tile_offset() functions to aligned_offset() since
they operate on both linear and tiled framebuffers. And we'll include
_plane_ in the name of all the variants that take a plane state.
That should make it clearer which function to use where.
v2: Pimp the patch subject a bit (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-2-ville.syrjala@linux.intel.com
If the caller supplies more than 4G of objects and then one that has to
be in the low 4G, it is possible for the low 4G to be full before we
attempt to find room for the last object that must be there. As we don't
reorder the two types, every pass hits the same problem and we fail with
ENOSPC. However, if we impose a little bit of ordering between the two
classes of objects, on the second pass we will be able to fit the
special object as we do it first. For setups that only use !48b objects,
we now reverse the order between passes, hopefully making the subsequent
passes more likely to succeed given that we are trying a different
order (rather than repeating the previous pass!)
v2: Quick one line explanation for the relative priorities given to
reservations.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912101133.31377-1-chris@chris-wilson.co.uk
Userspace should be free to race against itself and shoot itself in
the foot if it so desires to adjust a parameter at the same time as
submitting a batch to that context. As such, the struct_mutex in context
setparam is only being used to serialise userspace against itself and
not for any protection of internal structs and so is superfluous.
v2: Separate user_flags from internal flags to reduce chance of
interference; and use locked bit ops for user updates.
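For illustration, a hedged sketch of the v2 approach (flag and field names
illustrative): user-visible flags live in their own word and are updated
with atomic bitops, so no struct_mutex is needed:

  /* setparam path, no struct_mutex held */
  if (args->value)
          set_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags);
  else
          clear_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags);

  /* readers elsewhere test the bit without any locking */
  if (test_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags))
          return;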
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180911132206.23032-1-chris@chris-wilson.co.uk
The user parameters to put_image are not copied back to userspace
(DRM_IOW), and so we can modify the ioctl parameters (having already been
copied to a temporary kernel struct) directly and use those in place,
avoiding another temporary malloc and lots of manual copying.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180906190144.1272-2-chris@chris-wilson.co.uk
Given that we are now reasonably confident in our ability to detect and
reserve the stolen memory (physical memory reserved for graphics by the
BIOS) for ourselves on most machines, we can put it to use. In this
case, we need a page to hold the overlay registers.
On an i915g running MythTv, H Buus noticed that
commit 6a2c4232ec
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Tue Nov 4 04:51:40 2014 -0800
drm/i915: Make the physical object coherent with GTT
introduced stuttering into his video playback. After discarding the
likely suspect of it being the physical cursor updates, we were left
with the use of the phys object for the overlay. And lo, if we
completely avoid using the phys object (allocated just once on module
load!) by switching to stolen memory, the stuttering goes away.
For lack of a better explanation, claim victory and kill two birds with
one stone.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107600
Fixes: 6a2c4232ec ("drm/i915: Make the physical object coherent with GTT")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180906190144.1272-1-chris@chris-wilson.co.uk