... so we handle that for i915_gem_fault() in the same manner as
ERESTARTSYS, or we send a SIGBUS to the faulting application.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
A lot of minor tweaks to fix the tracepoints, improve the outputting for
ftrace, and to generally make the tracepoints useful again. It is a start
and enough to begin identifying performance issues and gaps in our
coverage.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By returning EAGAIN upon a wedged GPU before attempting to wait, we
would hit an infinite loop, repeating the operation without ever
progressing. Instead this needs to be EIO so that userspace knows that
the GPU is truly wedged and not in the process of error recovery.
Similarly, we need to handle the error recovery during i915_gem_fault.
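A sketch of the distinction (assuming the wedged flag is an atomic_t as
in the driver of this era; illustrative only, not the actual patch):

	if (atomic_read(&dev_priv->mm.wedged)) {
		/* Error recovery has given up: the GPU will not come
		 * back, so report -EIO rather than asking userspace
		 * to retry with -EAGAIN. */
		return -EIO;
	}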
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This is just an idea that might or might not be a good one:
it basically adds two ioctls to create a dumb buffer and to map it
suitable for scanout. The handle can be passed to the KMS ioctls to create
a framebuffer.
It looks to me like it would be useful in the following cases:
a) in development drivers - we can always provide a shadowfb fallback.
b) libkms users - we can clean up libkms a lot and avoid linking
to libdrm_*.
c) plymouth via libkms is a lot easier.
Userspace bits would be just calls + mmaps. We could probably
mark these handles somehow as not being suitable for acceleration
so as to stop people who are dumber than dumb.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Instead of reporting EIO upfront in the entrance of an ioctl that may or
may not attempt to use the GPU, defer the actual detection of an invalid
ioctl to when we issue a GPU instruction. This allows us to continue to
use bos in video memory (via pread/pwrite and mmap) after the GPU has hung.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We can only utilize the stolen portion of the GTT if we are in sole
charge of the hardware. This is only true if using GEM and KMS,
otherwise VESA continues to access stolen memory.
Reported-by: Arnd Bergmann <arnd@arndb.de>
Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
There are I915_NUM_RINGS-1 inter-ring synchronisation counters, but we
were clearing I915_NUM_RINGS of them. Oops.
Reported-by: Jiri Slaby <jirislaby@gmail.com>
Tested-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Rather than evicting an object at random, which is unlikely to alleviate
the memory pressure sufficiently to allow us to continue, zap the entire
aperture. That should give the system long enough to recover and reap
some pages from the evicted objects, forestalling the allocation error
for the new object.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order to retire active buffers whilst no client is active, we need to
insert our own flush requests onto the ring.
This is useful for servers that queue up some rendering and then go to
sleep, as it allows us to complete the processing of those requests,
potentially making that memory available again much earlier.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We need to ensure that writes through the GTT land before any
modification to the MMIO registers and so must impose a mandatory write
barrier when flushing the GTT domain. This was revealed by relaxing the
write ordering by experimentally mapping the registers and the GATT as
write-combining.
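The requirement boils down to (a minimal sketch; variable names are
illustrative):

	/* GTT writes must be globally visible before the MMIO write
	 * that consumes them */
	memcpy_toio(gtt_vaddr, data, len);
	wmb();		/* mandatory write barrier */
	I915_WRITE(reg, trigger);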
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The relative-to-general state default is useless as it means having to
rewrite the streaming kernels for each batch. Relative-to-surface is
more useful, as that stream usually needs to be rewritten for each
batch. And absolute addressing mode, vital if you start streaming
state, is also only available by adjusting the register...
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order to enforce the correct memory barriers for irq get/put, we need
to perform the actual counting using atomic operations.
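For instance (a sketch; the function-pointer and refcount field names
are assumptions):

	static void ring_irq_get(struct intel_ring_buffer *ring)
	{
		/* atomic_inc_return() implies a full memory barrier */
		if (atomic_inc_return(&ring->irq_refcount) == 1)
			ring->irq_enable(ring);
	}

	static void ring_irq_put(struct intel_ring_buffer *ring)
	{
		if (atomic_dec_and_test(&ring->irq_refcount))
			ring->irq_disable(ring);
	}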
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order for bos to retire eventually, a request must be sent down the
ring. This is expected, for example, by occlusion queries, upon which
mesa will wait (whilst running glean) before issuing more batches and so
the normal activity upon the ring is suspended and we need to emit a
request to clear the idle ring.
Reported-by: Jinjin, Wang <jinjin.wang@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30380
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
I'm still seeing tiling corruption of PutImage and CopyArea (I think)
under mutter on pnv, so obviously the pipelining logic is deeply flawed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The bulk of the change is to convert the growing list of rings into an
array so that the relationship between the rings and the semaphore sync
registers can be easily computed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
As the tracepoint is now decoupled from when the actual register is
assigned and was never complemented by detailing when the object lost
its fence, it has outlived its limited usefulness. Profiling the actual
stalls is a far more profitable venture anyway.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
As the userspace mappings are torn down on every GPU write, we prefer to
track when the buffer is activated (via a fresh i915_gem_fault). This
makes the LRU conceptually simpler. With coherent mappings, the
remaining use-case for set_domain_ioctl is GPU synchronisation.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With this change, every batchbuffer can use all available fences (save
pinned and scanout, of course) without ever stalling the gpu!
In theory. Currently the actual pipelined update of the register is
disabled due to some stability issues. However, just the deferred update
is a significant win.
Based on a series of patches by Daniel Vetter.
The premise is that before every access to a buffer through the GTT we
have to declare whether we need a register or not. If the access is by
the GPU, a pipelined update to the register is made via the ringbuffer,
and we track the last seqno of the batches that access it. If by the
CPU, we wait for the last GPU access and update the register (either
to clear or to set it for the current buffer).
One advantage of being able to pipeline changes is that we can defer the
actual updating of the fence register until we first need to access the
object through the GTT, i.e. we can eliminate the stall on set_tiling.
This is important as the userspace bo cache does not track the tiling
status of active buffers which generate frequent stalls on gen3 when
enabling tiling for an already bound buffer.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
... so that upon first use after resume we will reacquire the fence reg.
Reported-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We don't track gpu flush requests in any special way. So even with
obj->write_domain == 0, a gpu flush might be outstanding but not
yet executed. Even worse, the latest request might use the object
only for reading. So an unconditional call to object_wait_rendering
is needed for !pipelined.
Hence revert that patch fully and untangle the flushing from the
synchronization again.
Reported-by: Keith Packard <keithp@keithp.com>
Tested-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Besides the minimal improvement in reducing the execbuffer overhead, the
real benefit is clarifying a few routines.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
A number of dragons have been seen lurking within the execbuffer code.
The first step is then to isolate them from the rest and begin to
scrutinise them in depth. Suggested by Daniel Vetter.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Simply remove our accounting of objects inside the aperture, keeping
track only of what is in the aperture and its current usage. This
removes the over-complication of BUGs that were attempting to keep the
accounting correct and also removes the overhead of the accounting on
the hot-paths.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With KMS, we can simply relinquish the fence when we idle the GPU and
reassign it upon first use.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Avoid evicting buffers that will be used later in the batch in order to
make room for the initial buffers by pinning all bound buffers in a
single pass before binding (and evicting for) fresh buffers.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This used to check the precondition that all fences were to be located
in a mappable area, redundant now as those two parameters are combined
into one.
After pinning, we assert that the buffer is bound into the desired
region.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The pipe control object is allocated by the device for the sole use of the
render ringbuffer. Move this detail from the general code to the render
ring buffer initialisation.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Combining map_and_fenceable revealed a bug in
i915_gem_object_gtt_size() in that it always computed the appropriate
fence size for the object regardless of tiling state, which caused us to
over-allocate linear buffers when binding to the GTT.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This is required to restore gtt mappings on resume when agp is gone.
The right way to do this would be to make struct drm_mm_node embeddable
and use the allocation list maintained by the drm memory manager. But
that's a bigger project. Getting rid of the per bo agp_mem will save
more memory than this wastes, anyway.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Currently if we hit a pagefault when applying a user relocation for the
execbuffer, we bail and return EFAULT to the application. Instead, we
need to unwind, drop the dev->struct_mutex, copy all the relocation
entries to a vmalloc array (to avoid any potential circular deadlocks
when resolving the pagefault), retake the mutex and then apply the
relocations. Afterwards, we need to again drop the lock and copy the
vmalloc array back to userspace.
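In outline (a simplified sketch, most error handling elided):

	/* slow path: drop the lock so the fault can be serviced safely */
	mutex_unlock(&dev->struct_mutex);
	reloc = vmalloc(total * sizeof(*reloc));
	if (reloc == NULL)
		return -ENOMEM;
	if (copy_from_user(reloc, user_relocs, total * sizeof(*reloc))) {
		vfree(reloc);
		return -EFAULT;
	}
	ret = mutex_lock_interruptible(&dev->struct_mutex);
	/* ... apply the relocations from the vmalloc'ed copy, then drop
	 * the lock once more to copy the updated entries back ... */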
v2: Incorporate feedback from Daniel Vetter.
Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Commit 2549d6c2 removed the vmalloc used for temporary storage of the
relocation lists used during execbuffer. However, our use of vmalloc was
being protected by an integer overflow check which we do want to
preserve!
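The guard in question is of the familiar form (sketch):

	if (exec[i].relocation_count >
	    ULONG_MAX / sizeof(struct drm_i915_gem_relocation_entry))
		return -EINVAL;	/* size calculation would overflow */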
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Linus Torvalds found that it was rather trivial to trigger a system
freeze:
In fact, with lockdep, I don't even need to do the sysrq-d thing: it
shows the bug as it happens. It's the X server taking the same lock
recursively.
Here's the problem:
=============================================
[ INFO: possible recursive locking detected ]
2.6.37-rc2-00012-gbdbd01a #7
---------------------------------------------
Xorg/2816 is trying to acquire lock:
(&dev->struct_mutex){+.+.+.}, at: [<ffffffff812c626c>] i915_gem_fault+0x50/0x17e
but task is already holding lock:
(&dev->struct_mutex){+.+.+.}, at: [<ffffffff812c403b>] i915_mutex_lock_interruptible+0x28/0x4a
other info that might help us debug this:
2 locks held by Xorg/2816:
#0: (&dev->struct_mutex){+.+.+.}, at: [<ffffffff812c403b>] i915_mutex_lock_interruptible+0x28/0x4a
#1: (&mm->mmap_sem){++++++}, at: [<ffffffff81022d4f>] page_fault+0x156/0x37b
This recursion was introduced by rearranging the locking to avoid the
double locking on the fast path (4f27b5d and fbd5a26d) and the
introduction of the prefault to encourage the fast paths (b5e4f2b). In
order to undo the problem, we rearrange the code to perform the access
validation upfront, attempt to prefault, and then fight for control of the
mutex. In the best-case scenario, where the mutex is uncontended, the
prefaulting is not wasted.
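For the pwrite path the reordered sequence looks roughly like (a sketch
using the user-access API of this era):

	if (!access_ok(VERIFY_READ, user_data, remain))
		return -EFAULT;			/* validate upfront */
	if (fault_in_pages_readable(user_data, remain))
		return -EFAULT;			/* prefault... */
	ret = i915_mutex_lock_interruptible(dev); /* ...then take the mutex */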
Reported-and-tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
g33/pineview doesn't have any alignment constraints for unfenced tiled
buffers, but older chips do. Fix this.
Problem introduced in a00b10c360.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
An old and oft-reported bug is that of the GPU hanging on a
MI_WAIT_FOR_EVENT following a mode switch. The cause is that the GPU is
waiting on a scanline counter on an inactive pipe, and so waits for a
very long time until eventually the user reboots his machine.
We can prevent this either by moving the WAIT into the kernel and
thereby incurring considerable cost on every swapbuffers, or by waiting
for the GPU to retire the last batch that accesses the framebuffer
before installing a new one. As mode switches are much rarer than swap
buffers, this looks like an easy choice.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=28964
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=29252
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
... and so prevent a potential circular reference:
[ INFO: possible circular locking dependency detected ]
2.6.37-rc1-uwe1+ #4
-------------------------------------------------------
Xorg/1401 is trying to acquire lock:
(&mm->mmap_sem){++++++}, at: [<c01e4ddb>] might_fault+0x4b/0xa0
but task is already holding lock:
(&dev->struct_mutex){+.+.+.}, at: [<f869c3ac>]
i915_mutex_lock_interruptible+0x3c/0x60 [i915]
which lock already depends on the new lock.
When the locking around the pwrite ioctl was simplified, I did not spot
that the phys path never took any locks and so we introduced this
potential circular reference.
Reported-by: Uwe Helm <uwe.helm@googlemail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Instead of killing the process, just return no page found and reschedule
the process, giving the GPU some time to (hopefully) recover.
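In i915_gem_fault() this amounts to something like (a sketch;
set_need_resched() was the idiom of the day):

	case -EAGAIN:
		/* GPU is resetting: retry the fault rather than SIGBUS,
		 * yielding so the recovery work gets a chance to run */
		set_need_resched();
		return VM_FAULT_NOPAGE;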
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
a00b10c360 "Only enforce fence limits inside the GTT" also
added a fenceable/mappable distinction when binding/pinning buffers.
This only complicates the code with no practical gain:
- In execbuffer this matters only for g33/pineview, as this is the only
chip that needs fences and has an unmappable gtt area. But fences
are only possible in the mappable part of the gtt, so need_fence
implies need_mappable. And need_mappable is only set independently
with relocations which implies (for sane userspace) that the buffer
is untiled.
- The overlay code is only really used on i8xx, which doesn't have
unmappable gtt. And it doesn't support tiled buffers, currently.
- For all other buffers it's a bug to pass in a tiled bo.
In short, this distinction doesn't have any practical gain.
I've also reverted mapping the overlay and context pages as possibly
unmappable. It's not worth being overly clever here, all the big
gains from unmappable are for execbuf bos.
Also add a comment for a clever optimization that confused me
while reading the original patch by Chris Wilson.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When merging Daniel's full-gtt patches I had a set of tweaks which I
thought I had undone. I was half right...
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=31286
Reported-by: jinjin.wang@intel.com
Reported-by: Alexey Fisher <bug-track@fisher-privat.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Take two passes to evict everything whilst searching for sufficient free
space to bind the batchbuffer. After searching for sufficient free space
using LRU eviction, evict everything that is purgeable and try again.
Only then, if there is insufficient free space (or the GTT is too badly
fragmented), evict everything from the aperture and try one last time.
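Shape of the retry ladder (helper names partly hypothetical; the real
calls take more parameters):

	ret = i915_gem_evict_something(dev, size, align);	/* LRU pass */
	if (ret == -ENOSPC) {
		i915_gem_evict_purgeable(dev);		/* hypothetical */
		ret = i915_gem_evict_something(dev, size, align);
	}
	if (ret == -ENOSPC) {
		i915_gem_evict_everything(dev);		/* last resort */
		ret = i915_gem_evict_something(dev, size, align);
	}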
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Accessing the uninitialised obj->pages instead of the local page led to
an oops.
Reported-by: Xavier Chantry <chantry.xavier@gmail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
So long as we adhere to the fence registers rules for alignment and no
overlaps (including with unfenced accesses to linear memory) and account
for the tiled access in our size allocation, we do not have to allocate
the full fenced region for the object. This allows us to fight the bloat
tiling imposed on pre-i965 chipsets and frees up RAM for real use. [Inside
the GTT we still suffer the additional alignment constraints, so it doesn't
magically allow us to render larger scenes without stalls -- we need the
expanded GTT and fence pipelining to overcome those...]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Also spotted by Dan Carpenter.
obj->pin_count is unsigned so the BUG_ON(obj->pin_count<0) will never
trigger.
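The fix is to test before decrementing (sketch):

	BUG_ON(obj->pin_count == 0);	/* unsigned: "< 0" can never fire */
	obj->pin_count--;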
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The error code is only expected during the actual pruning and not during
the first measurement (nr_to_scan == 0) pass.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
It is possible for the active list to only contain a read-only buffer so
that the ring->gpu_write_list remains empty. This leads to an
inconsistency between i915_gpu_is_active() and i915_gpu_idle() causing
an infinite spin during the shrinker and an assertion failure that
i915_gpu_idle() does indeed flush all buffers from the active lists.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In order to force a page-fault on a GTT mapping after we start using it
from the GPU and so enforce correct CPU/GPU synchronisation, we need to
invalidate the mapping.
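Concretely, something along these lines (field names assumed from the
driver of this era):

	/* zap the CPU's view so the next access refaults through
	 * i915_gem_fault() */
	unmap_mapping_range(dev->dev_mapping,
			    (loff_t)obj->map_list.hash.key << PAGE_SHIFT,
			    obj->size, 1);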
Pointed out by Owain G. Ainsworth.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By using read_cache_page() for individual pages during pwrite/pread we
can eliminate an unnecessary large allocation (and immediate free) of
obj->pages. Also this eliminates any potential nesting of get/put pages,
simplifying the code and preparing the path for greater things.
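Per page, the lookup becomes (a sketch of the idiom):

	page = read_cache_page(mapping, page_index,
			       (filler_t *)mapping->a_ops->readpage, NULL);
	if (IS_ERR(page))
		return PTR_ERR(page);
	/* ... copy to/from this single page ... */
	page_cache_release(page);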
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Since we rarely use the mmap_offset and it is easily computable from the
obj->map_list.hash, remove it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Eliminate the racy device unload by embedding a shrinker into each
device. Smaller, simpler code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
More precisely: For those that _need_ to be mappable. Also add two
BUG_ONs in fault and pin to check the consistency of the mappable
flag.
Changes in v2:
- Add tracking of gtt mappable space (to notice mappable/unmappable
balancing issues).
- Improve the mappable working set tracking by tracking fault and pin
separately.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This way we can make some more educated guesses as to why exactly
we can't use 2G apertures to their full potential ;)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
At least the part that's currently enabled by the BIOS.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In i915_gem_object_pin, obviously unbind only if mappable is true.
This is the last part to enable gtt_mappable_end != gtt_size, which
the next patch will do.
v2: Fences on g33/pineview only work in the mappable part of the
gtt.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Like before add a parameter mappable (also to gem_object_pin) and
set it depending upon the context. Only bos that are brought into
the gtt due to an execbuffer call can be put into the unmappable
part of the gtt, everything else (especially pinned objects) need
to be put into the mappable part of the gtt.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Add a mappable parameter to i915_gem_evict_something to distinguish
the two cases (non-restricted vs. mappable gtt allocations). No
functional changes because the mappable limit is set to the end of
the gtt currently.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Preparing the ringbuffer for adding new commands can fail (a timeout
whilst waiting for the GPU to catch up and free some space). So check
for any potential error before overwriting HEAD with new commands, and
propagate that error back to the user where possible.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The ringbuffer keeps a pointer to the parent device, so we can use that
instead of passing around the pointer on the stack.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
* 'drm-core-next' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (476 commits)
vmwgfx: Implement a proper GMR eviction mechanism
drm/radeon/kms: fix r6xx/7xx 1D tiling CS checker v2
drm/radeon/kms: properly compute group_size on 6xx/7xx
drm/radeon/kms: fix 2D tile height alignment in the r600 CS checker
drm/radeon/kms/evergreen: set the clear state to the blit state
drm/radeon/kms: don't poll dac load detect.
gpu: Add Intel GMA500(Poulsbo) Stub Driver
drm/radeon/kms: MC vram map needs to be >= pci aperture size
drm/radeon/kms: implement display watermark support for evergreen
drm/radeon/kms/evergreen: add some additional safe regs v2
drm/radeon/r600: fix tiling issues in CS checker.
drm/i915: Move gpu_write_list to per-ring
drm/i915: Invalidate the to-ring, flush the old-ring when updating domains
drm/i915/ringbuffer: Write the value passed in to the tail register
agp/intel: Restore valid PTE bit for Sandybridge after bdd3072
drm/i915: Fix flushing regression from 9af90d19f
drm/i915/sdvo: Remove unused encoding member
i915: enable AVI infoframe for intel_hdmi.c [v4]
drm/i915: Fix current fb blocking for page flip
drm/i915: IS_IRONLAKE is synonymous with gen == 5
...
Fix up conflicts in
- drivers/gpu/drm/i915/{i915_gem.c, i915/intel_overlay.c}: due to the
new simplified stack-based kmap_atomic() interface
- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c: added .llseek entry due to BKL
removal cleanups.
Keep the current interface but ignore the KM_type and use a stack-based
approach.
The advantage is that we get rid of crappy code like:
#define __KM_PTE \
(in_nmi() ? KM_NMI_PTE : \
in_irq() ? KM_IRQ_PTE : \
KM_PTE0)
and in general can stop worrying about what context we're in and what kmap
slots might be appropriate for that.
The downside is that FRV kmap_atomic() gets more expensive.
For now we use a CPP trick suggested by Andrew:
#define kmap_atomic(page, args...) __kmap_atomic(page)
to avoid having to touch all kmap_atomic() users in a single patch.
[ not compiled on:
- mn10300: the arch doesn't actually build with highmem to begin with ]
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
... to prevent flush processing of an idle (or even absent) ring.
This fixes a regression during suspend from 87acb0a5.
Reported-and-tested-by: Alexey Fisher <bug-track@fisher-privat.net>
Tested-by: Peter Clifton <pcjc2@cam.ac.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When the object has been written to by the gpu it remains on the ring
until its flush has been retired. However, when the object is moving to
the ring and the associated cache needs to be invalidated, we need to
perform the flush on the target ring, not the one it came from (which is
NULL in the reported case and so the flush was entirely absent).
Reported-by: Peter Clifton <pcjc2@cam.ac.uk>
Reported-and-tested-by: Alexey Fisher <bug-track@fisher-privat.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Whilst moving the code around in 9af90d19f, I dropped the or'ing in of
new write domains which would zero out the write domain for a render
target if reused as a source later in the batch. This meant that
we might drop a required flush before reading from the render target.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=31043
Reported-by: xunx.fang@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Based on an original patch by Zhenyu Wang, this initializes the BLT ring for
SandyBridge and enables support for user execbuffers.
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
If the userspace driver is using a constant relocation array with a
static buffer, they will pass the same relocation array back to the
kernel. So we *do* need to update the presumed offset value in those
relocations to reflect the current object so that they remain correct
with future batchbuffers and we avoid the necessity of having to suspend
execution and perform redundant relocations.
Fixes the regression introduced by 12f889c for applications using
absolute addressing on trees of buffers (i.e. the current consumers of
libdrm_intel.so).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30996
Reported-by: Wang, Jinjin <jinjin.wang@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
To handle retirements, we need per-ring tracking of active objects.
To handle evictions, we need global tracking of active objects.
As we enable more rings, rebuilding the global list from the individual
per-ring lists quickly grows tiresome and overly complicated. Tracking the
active objects in two lists is the lesser of two evils.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
... by always initialising the empty ringbuffer, it is then safe
to check whether it is active.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The most frequent relocation within a batchbuffer is a contiguous sequence
of vertex buffer relocations, for which we can virtually eliminate the
drm_gem_object_lookup() overhead by caching the last handle to object
translation.
In doing so we refactor the pin and relocate retry loop out of
do_execbuffer into its own helper function and so improve the error
paths.
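The cache itself is tiny (sketch; variable names illustrative):

	/* most relocations hit the same target as their predecessor */
	if (obj == NULL || last_handle != reloc->target_handle) {
		obj = drm_gem_object_lookup(dev, file_priv,
					    reloc->target_handle);
		last_handle = reloc->target_handle;
	}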
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
One of the primary consumers of the i915 driver is X, a large signal-
driven application. Frequently when writing into the buffers, there is a
pending signal which causes us not to take the interruptible lock but
then we need to take that same lock around the object unreference. By
rearranging the code to do the interruptible lock as the first check, we
can avoid the frequent additional locking around the unreference.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
... to avoid reacquiring it to drop the object reference count on
exit. Note we have to make sure we now drop (and reacquire) the lock
around acquiring the mm semaphore on the slow paths.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
After allocating a handle for the fresh object, we know that we can
safely drop the refcnt without triggering a free so we do not need the
mutex. Strangely, this mutex acquisition is the one that appears on
driver profiles.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Avoid an early eviction of the batch buffer into the uncached GTT
domain, and so do the relocation fixup in cacheable memory.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
... perform an access validation check up front instead and copy them in
on-demand, during i915_gem_object_pin_and_relocate(). As around 20% of
the CPU overhead may be spent inside vmalloc for the relocation entries
when submitting an execbuffer [for x11perf -aa10text], the savings are
considerable and result in around a 10% throughput increase [for glyphs].
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Currently, if a batch buffer refers to an object with a pending flip,
then we sleep until that pending flip is completed (unpinned and
signalled). This is so that a flip can be queued and the user can
continue rendering to the backbuffer oblivious to whether the buffer is
still pinned as the scan out. (The kernel arbitrating at the last moment
to stall the batch and wait until the buffer is unpinned and replaced as
the front buffer.)
As we only have a queue depth of 1, we can simply wait for the current
pending flip to complete and continue rendering. We can achieve this
with a single WAIT_FOR_EVENT command inserted into the ring buffer prior
to executing the batch, *without* stalling the client.
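The idea reduces to a couple of ring words (a sketch; the event bit is
illustrative):

	BEGIN_LP_RING(2);
	OUT_RING(MI_WAIT_FOR_EVENT | MI_WAIT_FOR_PLANE_A_FLIP);
	OUT_RING(MI_NOOP);
	ADVANCE_LP_RING();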
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
* 'drm-intel-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/ickle/drm-intel:
drm/i915: Rephrase pwrite bounds checking to avoid any potential overflow
drm/i915: Sanity check pread/pwrite
drm/i915: Use pipe state to tell when pipe is off
drm/i915: vblank status not valid while training display port
drivers/gpu/drm/i915/i915_gem.c: Add missing error handling code
drm/i915: Fix refleak during eviction.
drm/i915: fix GMCH power reporting
Move the access control up from the fast paths, which are no longer
universally taken first, into the caller. This then duplicates some
sanity checking along the slow paths, but is much simpler.
Tracked as CVE-2010-2962.
Reported-by: Kees Cook <kees@ubuntu.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
Extend the error handling code with operations found in other nearby error
handling code.
A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@r exists@
statement S1,S2,S3;
constant C1,C2,C3;
@@
*if (...)
{... S1 return -C1;}
...
*if (...)
{... when != S1
return -C2;}
...
*if (...)
{... S1 return -C3;}
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
The return from move_to_gtt_domain() may indicate a pending signal which
needs to be handled as opposed to an actual error, for instance, so report
the original return value rather than forcing an EINVAL.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
* 'drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6:
vmwgfx: Fix fb VRAM pinning failure due to fragmentation
vmwgfx: Remove initialisation of dev::devname
vmwgfx: Enable use of the vblank system
vmwgfx: vt-switch (master drop) fixes
drm/vmwgfx: Fix breakage introduced by commit "drm: block userspace under allocating buffer and having drivers overwrite it (v2)"
drm: Hold the mutex when dropping the last GEM reference (v2)
drm/gem: handlecount isn't really a kref so don't make it one.
drm: i810/i830: fix locked ioctl variant
drm/radeon/kms: add quirk for MSI K9A2GM motherboard
drm/radeon/kms: fix potential segfault in r600_ioctl_wait_idle
drm: Prune GEM vma entries
drm/radeon/kms: fix up encoder info messages for DFP6
drm/radeon: fix PCI ID 5657 to be an RV410
When the GPU is reset, the fence registers are invalidated, so release
the objects and clear them out.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Only drm/i915 does the bookkeeping that makes the information useful,
and the information maintained is driver specific, so move it out of the
core and into its single user.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Airlie <airlied@redhat.com>
There were lots of places being inconsistent since handle count
looked like a kref but it really wasn't.
Fix this by just making handle count an atomic on the object,
and have it increase the normal object kref.
Now i915/radeon/nouveau drivers can drop the normal reference on
userspace object creation, and have the handle hold it.
This patch fixes a memory leak or corruption on unload, because
the driver had no way of knowing if a handle had been actually
added for this object, and the fbcon object needed to know this
to clean itself up properly.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
At that point as the object is no longer in any GPU write domain it must
not be on the list, so the list_del() is redundant.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Just reschedule the retire requests again if the device is currently
busy. The request list will be pruned along other paths so will never
grow unbounded and so we can afford to miss the occasional pruning.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With multiple rings generating requests independently, the outstanding
requests must also be tracked independently.
Reported-by: Wang Jinjin <jinjin.wang@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30380
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Introduced by 48b956c5, I had thought I had already fixed this. Oh well.
Reported-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Daniel Vetter pointed out that in this case it would be clearer and
cleaner to use a spinlock instead of a mutex to protect the per-file
request list manipulation. Make it so.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Owain Ainsworth reported an issue between the interaction of the
hangcheck and userspace immediately (and permanently) falling back to
s/w rasterisation. In order to break the mutex and begin resetting the
GPU, we must abort the current operation (usually within the wait) and
climb sufficiently far back up the call chain to drop the mutex. In his
implementation, Owain has a loop within the ioctl handler to detect the
hang and then sleep until the error handler has run. I've chosen to
return to userspace and report an EAGAIN which should trigger the
userspace ioctl handler to repeat the call (simply because it felt less
invasive...). Before hitting a wedged GPU, we then wait upon completion
of the error handler.
Reported-by: Owain G. Ainsworth <zerooa@googlemail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Avoid causing latencies in other clients by not taking the global struct
mutex, moving the per-client request manipulation to a local per-client
mutex. For example, this allows a compositor to schedule a page-flip
(through X) whilst an OpenGL application is monopolising the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We need to drain the pending flips prior to disabling the pipe during
modeset, and these need to be done in an uninterruptible fashion.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This is already performed with the pipelined flush, so by the time we
schedule the flush in the page-flip, the ring is NULL and we oops
instead.
Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
A minor typo caused a single fence register to be incorrectly
programmed, resulting in occasional tiling corruption.
Reported-and-tested-by: Hans de Bruin <bruinjm@xs4all.nl>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=18962
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
Keep a list of pinned objects and display it via debugfs. Now all
objects that exist in the GTT are always tracked on one of the
active, flushing, inactive or pinned lists.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
If we have queued a page flip on the current fb and then request a mode
change, wait until the page flip completes before performing the new
request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Track if the gpu requires the fence for the execution of a batch buffer
and so only wait upon the retirement of the object's last rendering
seqno if the fence is in use by the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Introduce intel_init_render_ring_buffer() and intel_init_bsd_ring_buffer()
for ring initialization.
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Clear the GPU read domain for the inactive objects on a reset so that
they are correctly invalidated on reuse.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Owain Ainsworth noticed that the reset code failed to clear the flushing
list leaving the driver in an inconsistent state following a hung GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When flushing the GPU domains, we emit a flush on *both* rings, even
though they share a unified cache. Only emit the flush on the currently
active ring.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Change the semantics to retire any buffer older than the current seqno
rather than repeatedly calling the function to retire the
buffer at the head of the list matching the request seqno.
Whilst this should have no semantic impact on the implementation, Daniel
was wondering if there was a bug where we might miss a retirement and so
end up with a continually growing active list.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With 5 places to update when adding handling for fence registers, it is
easy to overlook one or two. Correct that oversight, but fence
management should be improved before a new set of registers is added.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30199
Original patch by: Yuanhan Liu <yuanhan.liu@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
As we currently may need to acquire a fence register during a modeset,
we need to be able to do so in an uninterruptible manner. So expose that
parameter to the callers of the fence management code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This ensures that we do wait upon the flushes to complete if necessary
and avoid the visual tears, whilst enabling pipelined page-flips.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
I pulled the wrong version of the patch from Daniel Vetter which was
missing the read barriers -- and the one that was causing all the trouble
was from i915_gem_object_put_fence_reg(), leading to GPU hangs on gen3.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By reducing the hangcheck frequency we check less often, conserving
resources, and still detect a lock up quickly. On a fast machine with a
slow GPU (like a Core2 paired with a 945G) it is easy for the hangcheck to
misfire as we check too fast.
Also once hung and if we fail to completely reset the chip, we have a
nasty habit of proclaiming a hang many times a second and generating a
strobe-like display.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This spinlock only served debugging purposes in a time when we could not
be sure of the mutex ever being released upon a GPU hang. As we now
should be able to rely on hangcheck to do the job for us (and that error
reporting should not itself require the struct mutex) we can kill the
incomplete attempt at protection.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By allocating the request prior to writing to the ringbuffer, we can
abort the operation without leaving the GPU in an inconsistent state.
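Roughly (a sketch; the request struct is kzalloc'ed up front):

	request = kzalloc(sizeof(*request), GFP_KERNEL);
	if (request == NULL)
		return -ENOMEM;		/* nothing written to the ring yet */
	/* ... emit commands into the ringbuffer, then on success ... */
	i915_add_request(dev, file_priv, request, ring);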
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
... take advantage of the new implicit request issuing of
i915_wait_request.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
One caller (for the pageflip support) wants a purely pipelined flush.
Distinguish this case by a new parameter. This will also be useful
later on for pipelined fencing.
v2: Simplify the code by depending upon the implicit request emitting
of i915_wait_request.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[ickle: And drop the non-interruptible support in the process.]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By moving one i915_add_request we can solely depend on the new
auto-seqno-numbering behaviour.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
i915_gem_object_move_to_active can handle zero seqno for us now.
And not emitting a request is not fatal here - we'll try to emit
a new one if we have to wait for some rendering to complete.
In case this assumption ever gets accidentally broken, there's already
a BUG_ON to catch it in i915_do_wait_request.
So just silently ignore ENOMEM here instead of screwing up the whole
drm.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
... instead of threading flush_domains through the execbuf code to
i915_add_request.
With this change 2 small cleanups are possible (likewise the majority
of the patch):
- The flush_domains parameter of i915_add_request is always 0. Drop it
and the corresponding logic.
- Ditto for the seqno param of i915_gem_process_flushing_list.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Previously I thought that one interrupt per batchbuffer should be
enough. Now tedious benchmarking showed this to be wrong.
Therefore track whether any commands have been issued with a future
seqno (like pipelined fencing changes or flushes). If this is the case
emit a request before issuing the batchbuffer.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Now that we can move objects to the active list without already having
emitted a request, move the flushing list handling into i915_gem_flush.
This makes more sense and allows us to drop a few i915_add_request calls
that are not strictly necessary.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Sometimes (like when flushing in preparation of batchbuffer execution)
we know that we'll emit a request but haven't yet done so. Allow this
case by simply taking the next seqno by default. Ensure that a request
is eventually emitted before waiting for a request by issuing it
in i915_wait_request iff this is not yet done.
Also replace one open-coded version of i915_gem_object_wait_rendering,
to prevent future code-diversion.
Chris Wilson asked me to explain and clarify what this patch does and why.
Here it goes:
Old way of moving objects onto the active list and associating them with a
request:
1. i915_add_request + store the returned seqno somewhere
2. i915_gem_object_move_to_active (with the stored seqno as parameter)
For the current users, this is all fine. But I'd like to associate objects
(and fence regs) with the batchbuffer request deep down in the execbuf
call-chain. I thought about three ways of implementing this.
a) Don't care, just emit request when we need a new seqno. When heavily
pipelining fence reg changes, this would have caused tons of superfluous
requests (and corresponding irqs).
b) Thread all changed fences, objects, whatever through the execbuf-maze,
so that when we emit a request, we can store the new seqno at all the right
places.
c) Kill that seqno-threading-around business by simply storing the next
seqno, i.e. allow 2. to be done before 1. in the above sequence.
I've decided to implement c) (in this patch). The following patches are
just fall-out that resulted from this small conceptual change.
* We can handle the flushing list processing where we actually emit a flush
(i915_gem_flush and i915_retire_commands) instead of in i915_add_request.
The code makes IMHO more sense this way (and i915_add_request loses the
flush_domains parameter, obviously).
* We can avoid emitting unnecessary requests. IMHO there's no point in
emitting more than one request per batchbuffer (with or without a
corresponding irq).
* By enforcing 2. before 1. ordering in the above sequence the seqno
argument of i915_gem_object_move_to_active is redundant and can be
dropped.
v2: Now i915_wait_request issues request if it is not yet emitted.
Also introduce i915_gem_next_request_seqno(dev) just in case we ever
need to do some prep work before using a new seqno.
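The helper itself is deliberately trivial (sketch):

	static u32 i915_gem_next_request_seqno(struct drm_device *dev)
	{
		drm_i915_private_t *dev_priv = dev->dev_private;
		return dev_priv->next_seqno;	/* prep hook for later */
	}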
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[ickle: Keep i915_gem_object_set_to_display_plane() uninterruptible.]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
ums-gem code correctly cancels the retire work (at lastclose time),
kms does not do so. Fix this by canceling the work right after idling
the gpu.
While staring at the code I noticed that the work function is not
static. Fix this, too.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This is the first patch to clean up module unload races due to
outstanding timers/work. Preparatory step: Thou shalt not destroy
the workqueue when new work might still get enqueued.
Now error_work gets queued by the hangcheck timer and only (atomically)
reads the chip wedged status. So cancel it right after the hangcheck
timer is killed. But the hangcheck is armed by interrupts, so move
everything after irqs are disabled.
Also change a del_timer to a del_timer_sync in the ums gem code, the
hangcheck timer is self-rearming.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Sandybridge GTT has new cache control bits in PTE, which controls
graphics page cache in LLC or LLC/MLC, so we need to extend the mask
function to respect the new bits.
And set cache control to always LLC only by default on Gen6.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: stable@kernel.org
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
copy_to_user() returns the number of bytes remaining to be copied and
I'm pretty sure we want to return a negative error code here.
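i.e. (sketch):

	if (copy_to_user(user_data, src, len))
		ret = -EFAULT;	/* not the "bytes remaining" count */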
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
This reverts commit 86f100b136.
The kref API requires the handlecount to be initialised to one on object
creation (so that kref_get() doesn't complain upon first use), so the
dalliance in the drivers is required in order to sink the initial
floating reference.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel: (58 commits)
drm/i915,intel_agp: Add support for Sandybridge D0
drm/i915: fix render pipe control notify on sandybridge
agp/intel: set 40-bit dma mask on Sandybridge
drm/i915: Remove the conflicting BUG_ON()
drm/i915/suspend: s/IS_IRONLAKE/HAS_PCH_SPLIT/
drm/i915/suspend: Flush register writes before busy-waiting.
i915: disable DAC on Ironlake also when doing CRT load detection.
drm/i915: wait for actual vblank, not just 20ms
drm/i915: make sure eDP PLL is enabled at the right time
drm/i915: fix VGA plane disable for Ironlake+
drm/i915: eDP mode set sequence corrections
drm/i915: add panel reset workaround
drm/i915: Enable RC6 on Ironlake.
drm/i915/sdvo: Only set is_lvds if we have a valid fixed mode.
drm/i915: Set up a render context on Ironlake
drm/i915 invalidate indirect state pointers at end of ring exec
drm/i915: Wake-up wait_request() from elapsed hang-check (v2)
drm/i915: Apply i830 errata for cursor alignment
drm/i915: Only update i845/i865 CURBASE when disabled (v2)
drm/i915: FBC is updated within set_base() so remove second call in mode_set()
...
We now attempt to free "active" objects following a GPU hang as either
the GPU will be reset or the hang is permanent. In either case, the GPU
writes will not be flushed to main memory and it should be safe to
return that memory back to the system.
The BUG_ON(active) is thus overkill and can erroneously fire after an
EIO.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
This is consistent with trying to access a filename that does not exist
within a directory, which is a good analogy here. The main reason for the
change is that it is easy to confuse the error code of EBADF as
performing an ioctl on an invalid file descriptor (rather than an
unknown object).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
i830 requires 32bpp cursors to be aligned to 16KB, so we have to expose
the alignment parameter to i915_gem_attach_phys_object().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
shmfs doesn't actually implement i_ops->truncate() so we were not
immediately releasing the backing pages when shrinking the gfx cache
under OOM. Instead use a combination of truncate_inode_pages() and
i_ops->truncate_range() as is used by shmem_delete_inode().
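Following shmem_delete_inode(), the release looks roughly like (sketch):

	struct inode *inode = obj->filp->f_path.dentry->d_inode;

	truncate_inode_pages(inode->i_mapping, 0);
	if (inode->i_op->truncate_range)
		inode->i_op->truncate_range(inode, 0, (loff_t)-1);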
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
In order to reduce the penalty of fallbacks under memory pressure and to
avoid a potential immediate ping-pong of evicting a mmaped buffer, we
move the object to the tail of the inactive list when a page is freshly
faulted or the object is moved into the CPU domain.
We choose not to protect the CPU objects from casual eviction,
preferring to keep the GPU active for as long as possible.
v2: Daniel Vetter found a bug where I forgot that pinned objects are
kept off the inactive list.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
The eviction code is the gnarly underbelly of memory management, and is
clearer if kept separated from the normal domain management in GEM.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
This will be used by the eviction logic to maintain fairness between the
rings.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
This does two little changes:
- Add an alignment parameter for evict_something. It's not really great to
whack a carefully sized hole into the gtt with the wrong alignment.
Especially since the fallback path is a full evict.
- With the inactive scan stuff we need to evict more that one object, so
move the unbind call into the helper function that scans for the object
to be evicted, too. And adjust its name.
No functional changes in this patch, just preparation.
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
In order to properly track bound objects, they need to exist on one of
the inactive/active lists or be pinned. As this is a requirement, do the
work inside i915_gem_bind_to_gtt() rather than dotted around the
callsites.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
This debugging trace was useful for finding the fbcon regression on
i965, and it may prove useful again in future.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Incorporates a similar patch by Daniel Vetter, the alteration being to
report the current busy state after retiring.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
This avoids the excess flush and requests on idle rings (and spamming
the debug log ;-)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
This is required should we ever attempt to use an io-mapping where
KM_USER0 is verboten, such as inside an IRQ context.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
When creating an object, we create the handle by which it is known to
the process and which owns the reference to the object. That reference to
the new handle is what we want to transfer to the process, not the lost
reference to the object; so free the local object reference, *not* the
process's handle reference.
This brings i915_gem_object_create_ioctl() into line with
drm_gem_open_ioctl()
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
If we fail to flush outstanding GPU writes but return the memory to the
system, we risk corrupting memory should the GPU recover and complete
those writes. On the other hand, if we bail early and free the object
then we have a definite use-after-free and real memory corruption.
Choose the lesser of two evils: since in order to recover from the hung
GPU we need to completely reset it, those pending writes should
never happen.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
If during the freeing of an object the unbind is interrupted by a system
call, which is quite possible if we have outstanding GPU writes that
must be flushed, the unbind is silently aborted. This still leaves the
AGP region and backing pages allocated, and perhaps more importantly,
the object remains upon the various lists exposing us to memory
corruption.
I think this is the cause behind the use-after-free, such as
Bug 15664 - Graphics hang and kernel backtrace when starting Azureus
with Compiz enabled
https://bugzilla.kernel.org/show_bug.cgi?id=15664
v2: Daniel Vetter reminded me that kernel space programming is never easy.
We cannot simply spin to clear the pending signal and so must defer
the freeing of the object until later.
v3: Run from the top level retire requests.
v4: Tested with P(return -ERESTARTSYS)=.5 from i915_gem_do_wait_request()
v5: Rebase against Eric's for-linus tree.
v6: Refactor, split and add a comment about avoiding unbounded recursion.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
Combine the iteration over active render rings into a common function.
This is in preparation for reusing the idle function to also retire
deferred free requests.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Simple fix for error propagation along the old UMS path.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
* 'drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6:
drm/r600: fix possible NULL pointer derefernce
drm/radeon/kms: add quirk for ASUS HD 3600 board
include/linux/vgaarb.h: add missing part of include guard
drm/nouveau: Fix crashes during fbcon init on single head cards.
drm/nouveau: fix pcirom vbios shadow breakage from acpi rom patch
drm/radeon/kms: fix shared ddc harder
drm/i915: enable low power render writes on GEN3 hardware.
drm/i915: Define MI_ARB_STATE bits
vmwgfx: return -EFAULT if copy_to_user fails
fb: handle allocation failure in alloc_apertures()
drm: radeon: check kzalloc() result
drm/ttm: Fix build on architectures without AGP
drm/radeon/kms: fix gtt MC base alignment on rs4xx/rs690/rs740 asics
drm/radeon/kms: fix possible mis-detection of sideport on rs690/rs740
drm/radeon/kms: fix legacy tv-out pal mode
A lot of 945GMs have had stability issues for a long time; this manifested as X hangs, blitter engine hangs, and lots of crashes.
one such report is at:
https://bugs.freedesktop.org/show_bug.cgi?id=20560
along with numerous distro bugzillas.
This only took a week of digging and hair ripping to figure out.
Tracked down and tested on a 945GM Lenovo T60.
Previously, running
x11perf -copypixwin500
or
x11perf -copywinpix500
repeatedly would cause the GPU to wedge within 4 or 5 tries, with random busy bits set.
After this patch no hangs were observed.
cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
The current shrinker implementation requires the registered callback
to have global state to work from. This makes it difficult to shrink
caches that are not global (e.g. per-filesystem caches). Pass the shrinker
structure to the callback so that users can embed the shrinker structure
in the context the shrinker needs to operate on and get back to it in the
callback via container_of().
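A minimal sketch of the pattern this enables, assuming the new callback
signature of (struct shrinker *, nr_to_scan, gfp_mask); the cache
structure and names are illustrative:

    struct my_cache {
            struct shrinker shrinker;
            /* ... the non-global state the shrinker operates on ... */
    };

    static int my_cache_shrink(struct shrinker *shrink, int nr_to_scan,
                               gfp_t gfp_mask)
    {
            struct my_cache *cache =
                    container_of(shrink, struct my_cache, shrinker);

            /* ... scan/free up to nr_to_scan entries from 'cache' and
             * return the number of remaining objects ... */
            return 0;
    }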
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
The hibernate issues that got fixed in commit 985b823b91 ("drm/i915:
fix hibernation since i915 self-reclaim fixes") turn out to have been
incomplete. Vefa Bicakci tested lots of hibernate cycles, and without
the __GFP_RECLAIMABLE flag the system eventually fails to resume.
With the flag added, Vefa can apparently hibernate forever (or until he
gets bored running his automated scripts, whichever comes first).
The reclaimable flag was there originally, and was one of the flags that
were dropped (unintentionally) by commit 4bdadb9785 ("drm/i915:
Selectively enable self-reclaim") that introduced all these problems,
but I didn't want to just blindly add back all the flags in commit
985b823b91, and it looked like __GFP_RECLAIMABLE wasn't necessary. It
clearly was.
I still suspect that there is some subtle reason we're missing that
causes the problems, but __GFP_RECLAIMABLE is certainly not wrong to use
in this context, and is what the code historically used. And we have no
idea what causes the corruption without it.
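Purely for illustration, the kind of gfp-mask setup being restored looks
roughly like this (the exact flag combination in the real fix may differ):

    /* Assumed sketch: mark the shmem mapping's page allocations as
     * reclaimable again, as the code historically did. */
    mapping_set_gfp_mask(inode->i_mapping,
                         GFP_HIGHUSER | __GFP_RECLAIMABLE);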
Reported-and-tested-by: M. Vefa Bicakci <bicave@superonline.com>
Cc: Dave Airlie <airlied@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Only ever assigned, never used.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[glisse: I will re-add if needed for range-restricted allocations]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Since commit 4bdadb9785 ("drm/i915:
Selectively enable self-reclaim"), we've been passing GFP_MOVABLE to the
i915 page allocator where we weren't before, due to some over-eager
removal of the gfp_flags games the code used to play with the page mapping.
This caused hibernate on Intel hardware to result in a lot of memory
corruptions on resume. See for example
http://bugzilla.kernel.org/show_bug.cgi?id=13811
Reported-by: Evengi Golov (in bugzilla)
Signed-off-by: Dave Airlie <airlied@redhat.com>
Tested-by: M. Vefa Bicakci <bicave@superonline.com>
Cc: stable@kernel.org
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since we now get_user_pages() outside of the mutex prior to performing
the copy, we kmap() the page inside the copy routine and so need to
perform an ordinary memcpy() and not copy_from_user().
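A hedged sketch of the copy step after this change (variable names are
illustrative); the user page was already pinned via get_user_pages(), so
both pages are simply kmapped and copied with memcpy():

    char *dst, *src;

    dst = kmap(dst_page);
    src = kmap(user_page);      /* pinned earlier by get_user_pages() */
    memcpy(dst + dst_offset, src + src_offset, length);
    kunmap(user_page);
    kunmap(dst_page);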
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
As we do not have a requirement to be atomic and avoid sleeping whilst
performing the slow copy for shmem based pread and pwrite, we can use
kmap instead of kmap_atomic, thus simplifying the code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
We can avoid an early clflush when pwriting if we use the current CPU
write domain rather than moving the object to the GTT domain for the
purposes of the pwrite. This has the advantage of not flushing the
presumably hot data that we want to upload into the bo, and of ascribing
the clflush to the execution when profiling.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
The callers expect us to cleanup any partially initialised structures
before reporting the error.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
If the object is bigger than the entire aperture, reject it early
before evicting everything in a vain attempt to find space.
v2: Use E2BIG as suggested by Owain G. Ainsworth.
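In sketch form (field names illustrative), the early rejection is just:

    /* An object larger than the whole aperture can never be bound,
     * so don't bother evicting everything in a vain attempt. */
    if (obj->size > dev_priv->mm.gtt_total)
            return -E2BIG;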
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
Signed-off-by: Eric Anholt <eric@anholt.net>
This particular warning is harmless as we emit it during the normal
pinning process, when the batch buffer requires more fences than are
available without eviction. Only if we fail to evict enough fences does
this become a problem, so include the requested number of fences in the
ultimate *error* message.
v2: Remember to compile test even trial patches to remove warnings.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Nesting domain changes will cause confusion when trying to interpret the
tracepoints describing the sequence of changes for the object, as well
as obscuring the order of operations for the reader of the code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
This saves a whopping 7 dwords. Zero functional changes. Because
some of the refcounts are rather tightly calculated, I've put
BUG_ONs in the code to check for overflows.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
The BSD (bit stream decoder) ring is used for accessing the BSD engine
which decodes video bitstreams for H.264 and VC1 on G45+. It is
asynchronous with the render ring and has access to separate parts of
the GPU from it, though the render cache is coherent between the two.
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Signed-off-by: Xiang Hai hao <haihao.xiang@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
The active list and request list move into the ringbuffer structure,
so each can track its active objects in the order they are in that
ring. The flushing list does not, as it doesn't matter which ring
caused data to end up in the render cache. Objects gain a pointer to
the ring they are active on (if any).
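Schematically (member names illustrative), the bookkeeping becomes
per-ring, with each object pointing back at the ring it is active on:

    struct intel_ring_buffer {
            /* ... */
            struct list_head active_list;   /* objects in flight on this ring */
            struct list_head request_list;  /* outstanding requests, oldest first */
    };

    struct drm_i915_gem_object {
            /* ... */
            struct intel_ring_buffer *ring; /* ring we are active on, or NULL */
    };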
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Signed-off-by: Xiang Hai hao <haihao.xiang@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Introduces a more complete intel_ring_buffer structure with callbacks
for setup and management of a particular ringbuffer, and converts the
render ring buffer consumers to use it.
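The rough shape of that structure, as a sketch (the actual callbacks and
their signatures may differ):

    struct intel_ring_buffer {
            void            *virtual_start;
            u32             head, tail, size;

            int  (*init)(struct drm_device *dev,
                         struct intel_ring_buffer *ring);
            void (*flush)(struct drm_device *dev,
                          struct intel_ring_buffer *ring,
                          u32 invalidate_domains, u32 flush_domains);
            u32  (*add_request)(struct drm_device *dev,
                                struct intel_ring_buffer *ring,
                                u32 flush_domains);
            u32  (*get_seqno)(struct drm_device *dev,
                              struct intel_ring_buffer *ring);
    };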
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Signed-off-by: Xiang Hai hao <haihao.xiang@intel.com>
[anholt: Fixed up whitespace fail and rebased against prep patches]
Signed-off-by: Eric Anholt <eric@anholt.net>
This is preparation for supporting multiple ringbuffers on Ironlake.
The non-copy-and-paste changes are:
- de-staticing functions
- I915_GEM_GPU_DOMAINS moving to i915_drv.h to be used by both files.
- i915_gem_add_request had only half its implementation
copy-and-pasted out of the middle of it.
This lru tracks fences, not objects, so move it to where it belongs.
As a side effect, this nicely shrinks drm_i915_gem_object by two
pointers.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_drv.h
drivers/gpu/drm/radeon/r300.c
The BSD ringbuffer support that is landing in this branch
significantly conflicts with the Ironlake PIPE_CONTROL fix on master,
and requires it to be tested successfully anyway.
By idling the GPU and discarding everything we can when under extreme
memory pressure, the number of OOM-killer events is dramatically
reduced. For instance, this makes it possible to run
firefox-planet-gnome.trace again on my swapless 512MiB i915.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
My PIPE_CONTROL fix (just sent via Eric's tree) was buggy; I was
testing a whole set of patches together and missed a conversion to the
new HAS_PIPE_CONTROL macro, which will cause breakage on non-Ironlake
965 class chips. Fortunately, the fix is trivial and has been tested.
Be sure to use the HAS_PIPE_CONTROL macro in i915_get_gem_seqno, or
we'll end up reading the wrong graphics memory, likely causing hangs,
crashes, or worse.
Reported-by: Zdenek Kabelac <zdenek.kabelac@gmail.com>
Reported-by: Toralf Förster <toralf.foerster@gmx.de>
Tested-by: Toralf Förster <toralf.foerster@gmx.de>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since 965, the hardware has supported the PIPE_CONTROL command, which
provides fine grained GPU cache flushing control. On recent chipsets,
this instruction is required for reliable interrupt and sequence number
reporting in the driver.
So add support for this instruction, including workarounds, on Ironlake
and Sandy Bridge hardware.
https://bugs.freedesktop.org/show_bug.cgi?id=27108
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Luckily the change is quite a bit less invasive than I had
feared.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Thanks to the to_intel_bo helper, this change is rather trivial.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Just embed it and adjust the pointers; no other changes (those are
for later patches).
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Just preparation, no functional change.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
When drivers embed the core gem object into their own structures,
they'll have to do this. Temporarily this results in an ugly
kfree(gem_obj);
in every gem driver.
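Roughly what that looks like in a driver (names illustrative); because the
gem object is now embedded, the driver's free callback has to do the final
kfree itself for the time being:

    struct driver_bo {
            struct drm_gem_object base;     /* embedded, not separately allocated */
            /* ... driver-private state ... */
    };

    static void driver_gem_free_object(struct drm_gem_object *gem_obj)
    {
            struct driver_bo *bo = container_of(gem_obj, struct driver_bo, base);

            /* ... driver-specific teardown ... */
            kfree(bo);      /* the "ugly kfree(gem_obj)" mentioned above */
    }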
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Current code is definitely crap: Largest pitch allowed spills into
the TILING_Y bit of the fence registers ... :(
I've rewritten the limits check under the assumption that 3rd gen hw
has a 3d pitch limit of 8kb (like 2nd gen). This is supported by an
otherwise totally misleading XXX comment.
This bug mostly resulted in tiling-corrupted pixmaps because the kernel
allowed too-wide buffers to be tiled. The bug was brought to light by the
xf86-video-intel 2.11 release, which unconditionally enabled tiling for
pixmaps, relying on the kernel to check things. Tiling for the
framebuffer was not affected because the ddx does some additional checks
there to ensure the buffer is within hw limits.
v2: Instead of computing the value that would be written into the
hw fence registers and then checking the limits, simply check whether
the stride is above the 8kb limit. To better document the hw, add
some WARN_ONs in i915_write_fence_reg like I've done for the i830
case (using the right limits).
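Sketching the v2 approach (under the 8kb-pitch assumption stated above),
the check reduces to a direct stride comparison:

    /* gen2/gen3: assume tiled surfaces are limited to an 8kb pitch */
    if (tiling_mode != I915_TILING_NONE && stride > 8 * 1024)
            return false;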
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=27449
Tested-by: Alexander Lam <lambchop468@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Eric Anholt <eric@anholt.net>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel:
drm/i915: Ignore LVDS EDID when it is unavailable or invalid
drm/i915: Add no_lvds entry for the Clientron U800
drm/i915: Rename many remaining uses of "output" to encoder or connector.
drm/i915: Rename intel_output to intel_encoder.
agp/intel: intel_845_driver is an agp driver!
drm/i915: introduce to_intel_bo helper
drm/i915: Disable FBC on 915GM and 945GM.
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
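Concretely, the kind of edit the sweep produces is just making an implicit
header dependency explicit, e.g. adding to a .c file that uses these
facilities:

    #include <linux/gfp.h>   /* gfp flags such as GFP_KERNEL */
    #include <linux/slab.h>  /* kmalloc()/kfree() and friends */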
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, and for others it was more appropriate to
add it to an implementation .h or embedding .c file. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of the
specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
This is a purely cosmetic change to make changes in this area easier.
And hey, it's not only clearer and typechecked, but actually shorter,
too!
[anholt: To clarify, this is a change to let us later make
drm_i915_gem_object subclass drm_gem_object, instead of having
drm_gem_object have a pointer to i915's private data]
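For context, such a helper is just a typed conversion; a sketch of the two
possible shapes (the exact definition used by the driver may differ):

    /* today: drm_gem_object carries a pointer to the i915 private data */
    #define to_intel_bo(obj) \
            ((struct drm_i915_gem_object *)(obj)->driver_private)

    /* later, once drm_i915_gem_object embeds drm_gem_object, it can become:
     * #define to_intel_bo(obj) \
     *         container_of(obj, struct drm_i915_gem_object, base)
     */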
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
This could resolve HW deadlocks where a unit downstream of the VS is
waiting for more input, the VS has one vertex queued up but not
dispatched because it hopes to get one more vertex for 2x4 dispatch,
and software isn't handing more vertices down because it's waiting for
rendering to complete. The B-Spec says you should always have this
bit set.
Signed-off-by: Eric Anholt <eric@anholt.net>
The continue just after this call will loop around and wait for the
just-added request just fine. This leads to slightly more compact code.
Signed-Off-by: Owain G. Ainsworth <oga@openbsd.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
The assumption that an object only ever has one write domain is deeply
threaded into gem (it's even encoded in the singular of the variable
name). Don't let userspace screw us over.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
Now that we have an exact gpu write domain tracking, we don't need
to move objects to the active list ourselves. i915_add_request will
take care of that under all circumstances.
Idea stolen from a patch by Chris Wilson <chris@chris-wilson.co.uk>.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
We have it, so use it. This required moving the function to avoid
a forward declaration.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
The fence_list should be lru ordered for otherwise we might try
to steal a fence reg from an active object even though there are
fences from inactive objects available. lru ordering was obeyed
for gpu access everywhere save when moving dirty objects from
flushing_list to active_list.
Fixing this caused the code to indent way too much, so I've extracted
the flushing_list processing logic into its own function.
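A hedged sketch of preserving the LRU order on that path (list and field
names illustrative):

    /* keep the fence LRU in sync: most-recently-used goes to the tail */
    if (obj_priv->fence_reg != I915_FENCE_REG_NONE)
            list_move_tail(&obj_priv->fence_list,
                           &dev_priv->mm.fence_list);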
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
The spaghetti logic in there tripped up my brain's code parser for a
few secs. Prevent this from happening again by extracting the fence
stealing code into a separate function. IMHO this slightly clears up
the code flow.
v2: Beautified according to ickle's comments.
v3: ickle forgot to flush his comment queue ... Now there's also a
we-are-paranoid BUG_ON in there.
v4: I've forgotten to switch on my brain when doing v3. Now the BUG_ON
actually checks something useful.
v5: Clean up a stale comment as noted by Eric Anholt.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
All other accesses take this spinlock, so do this here, too.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
This has a few functional changes against the old code:
* a few more unnecessary loads and stores to the drm_i915_fence_reg
objects. Also an unnecessary store to the hw fence register.
* zaps any userspace mappings before doing other flushes. Only changes
anything when userspace does racy stuff against itself.
* also flush GTT domain. This is a noop, but still try to keep the
bookkeeping correct.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
* anholt/drm-intel-next:
drm/i915: Record batch buffer following GPU error
drm/i915: give up on 8xx lid status
drm/i915: reduce some of the duplication of tiling checking
drm/i915: blow away userspace mappings before fence change
drm/i915: move a gtt flush to the correct place
agp/intel: official names for Pineview and Ironlake
drm/i915: overlay: drop superflous gpu flushes
drm/i915: overlay: nuke readback to flush wc caches
drm/i915: provide self-refresh status in debugfs
drm/i915: provide FBC status in debugfs
drm/i915: fix drps disable so unload & re-load works
drm/i915: Fix OGLC performance regression on 945
drm/i915: Deobfuscate the render p-state obfuscation
drm/i915: add dynamic performance control support for Ironlake
drm/i915: enable memory self refresh on 9xx
drm/i915: Don't reserve compatibility fence regs in KMS mode.
drm/i915: Keep MCHBAR always enabled
drm/i915: Replace open-coded eviction in i915_gem_idle()
i915_gem_object_fenceable was mostly just a repeat of
i915_gem_object_fence_offset_ok, but also checking the size (which was
checked when we allowed that BO to be tiled in the first place). So
instead, export the latter function and use it in its place.
Signed-Off-By: Owain G. Ainsworth <oga@openbsd.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
This aligns it with the other user of i915_gem_clear_fence_reg,
which blows away the mapping before changing the fence reg.
Only affects userspace if it races against itself when changing
tiling parameters, i.e. behaviour is undefined, anyway.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>