This allows drivers to move a BO to the end of the LRU
without removing it and re-adding it.
v2: Make it more robust, handle pinned and swappable BOs as well.
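For illustration, a minimal sketch of how a driver might use such a helper. The function name ttm_bo_move_to_lru_tail() and the rule that the global LRU lock must be held are assumptions inferred from the description above, not quoted from the patch:

    /* Illustrative only: bump a buffer to the LRU tail so it is evicted last.
     * Assumes a helper named ttm_bo_move_to_lru_tail() that expects the caller
     * to hold the global LRU lock.
     */
    #include <drm/ttm/ttm_bo_api.h>
    #include <drm/ttm/ttm_bo_driver.h>

    static void example_touch_bo(struct ttm_buffer_object *bo)
    {
            struct ttm_bo_global *glob = bo->glob;

            spin_lock(&glob->lru_lock);
            /* no list_del() + ttm_bo_add_to_lru() round trip needed */
            ttm_bo_move_to_lru_tail(bo);
            spin_unlock(&glob->lru_lock);
    }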
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Previously, the comment was inconsistent. EDEADLK is what the ww_mutex
mechanism really returns.
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We need to store device offsets as 64-bit values as the device
address space may be larger than the CPU's.
Fixes GPU init failures on radeons with 4 GB or more of
VRAM on 32-bit kernels. We put VRAM at the start of the
GPU's address space, so the GART aperture starts at 4 GB,
causing all GPU addresses in the GART aperture to get
truncated.
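For illustration (not code from the patch), this is the truncation being fixed: a 32-bit offset type cannot represent a GART aperture that starts at 4 GB, so the address wraps to 0.

    /* Illustrative only: why a 32-bit offset type breaks a >= 4 GB GPU layout. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t gart_start = 4ULL << 30;            /* GART aperture begins at 4 GB */
            uint32_t truncated  = (uint32_t)gart_start;  /* what a 32-bit unsigned long keeps */

            printf("64-bit offset: %#llx\n", (unsigned long long)gart_start); /* prints 0x100000000 */
            printf("truncated:     %#x\n", truncated);                        /* prints 0 */
            return 0;
    }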
Bug: https://bugs.freedesktop.org/show_bug.cgi?id=89072
[airlied: fix warning on nouveau build]
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: thellstrom@vmware.com
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This patch adds an optional list_head parameter to ttm_eu_reserve_buffers.
If specified, duplicates in the execbuf list are no longer reported as errors,
but are moved to this list instead.
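A sketch of the intended usage, assuming the duplicates list is passed as a trailing parameter and that passing NULL keeps the old behaviour (both assumptions; the exact signature is not quoted here):

    /* Illustrative only: collect duplicate entries instead of failing on them. */
    #include <linux/list.h>
    #include <drm/ttm/ttm_execbuf_util.h>

    static int example_reserve(struct ww_acquire_ctx *ticket,
                               struct list_head *validated)
    {
            LIST_HEAD(duplicates);  /* duplicate BOs end up here instead of erroring out */

            /* Passing NULL instead of &duplicates would keep the old behaviour
             * of reporting duplicates as errors.
             */
            return ttm_eu_reserve_buffers(ticket, validated,
                                          true /* interruptible */, &duplicates);
    }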
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds a new flag to the ttm_validate_buffer list entries so that
the fence is added to the reservation object as a shared fence.
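A sketch of what a caller might do with it; the member name "shared" is an assumption about this patch:

    /* Illustrative only: mark an entry so its fence is added as a shared fence
     * rather than an exclusive one. The field name "shared" is assumed.
     */
    #include <linux/list.h>
    #include <drm/ttm/ttm_execbuf_util.h>

    static void example_add_entry(struct list_head *validated,
                                  struct ttm_validate_buffer *entry,
                                  struct ttm_buffer_object *bo,
                                  bool read_only)
    {
            entry->bo     = bo;
            entry->shared = read_only;  /* readers share the fence, writers stay exclusive */
            list_add_tail(&entry->head, validated);
    }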
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This reorders the list to keep track of which buffers are reserved,
so the previous members are always unreserved.
This gets rid of some bookkeeping that's no longer needed,
while simplifying the code a bit.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
No users are left; kill it off! :D
Conversion to the reservation API is next on the list; after
that, the functionality can be restored with RCU.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
This allows us to specify more precisely where to place the buffer object.
v2: rebased on drm-next, added the bochs changes as well
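A sketch of the kind of per-placement range this enables, assuming the new structure is struct ttm_place with fpfn/lpfn/flags members (the values are made up):

    /* Illustrative only: restrict one placement to the first 256 MiB of VRAM
     * by giving it its own page-frame range.
     */
    #include <drm/ttm/ttm_placement.h>

    static const struct ttm_place example_vram_low = {
            .fpfn  = 0,        /* first allowed page frame */
            .lpfn  = 0x10000,  /* upper bound: 256 MiB in 4 KiB pages */
            .flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_WC,
    };

    static const struct ttm_placement example_placement = {
            .num_placement = 1,
            .placement     = &example_vram_low,
    };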
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Pages allocated using the DMA API have a coherent memory mapping. Make
this mapping visible to drivers so they can decide to use it instead of
creating their own redundant one.
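For context, a generic sketch of the point being made (not the patch itself): the DMA API already returns a CPU mapping together with the bus address, so keeping that pointer around lets a driver skip a second, redundant mapping of the same pages.

    /* Illustrative only: dma_alloc_coherent() hands back both a kernel mapping
     * and a DMA address; the CPU pointer is the mapping this patch exposes the
     * equivalent of.
     */
    #include <linux/dma-mapping.h>

    static void *example_alloc(struct device *dev, size_t size, dma_addr_t *dma)
    {
            return dma_alloc_coherent(dev, size, dma, GFP_KERNEL); /* NULL on failure */
    }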
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Acked-by: David Airlie <airlied@linux.ie>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
bo->mem.placement is not initialized when ttm_bo_man_get_node is called,
so the flag has no effect at all.
v2: change nouveau and vmwgfx as well
Signed-off-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
ttm_tt_cache_flush's implementation was removed in 2009 by commit
c9c97b8c, but its declaration has been hiding in ttm_bo_driver.h since
then.
It has been surviving in the dark for too long now; give it the mercy
blow.
Reviewed-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
The kerneldoc header of ttm_bo_create() referred to another
(nonexistent) function and had a few obsolete or incorrect arguments.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Pull request of 2014-04-04
Currently only a single patch, fixing up mixed use of the ttm_bo_reserve and
ww_mutex APIs.
* tag 'ttm-next-2014-04-04' of git://people.freedesktop.org/~thomash/linux:
drm/ttm: Hide the implementation details of reservation
Clients like i915 need to segregate cache domains within the GTT, which
can lead to small amounts of fragmentation. By allocating the uncached
buffers from the bottom and the cacheable buffers from the top, we can
reduce the amount of wasted space and also optimize allocation of the
mappable portion of the GTT to only those buffers that require CPU
access through the GTT.
For other drivers, allocating small BOs from one end and large ones
from the other helps reduce fragmentation.
Based on drm_mm work by Chris Wilson.
v3: Changed to use a TTM placement flag
v2: Updated kerneldoc
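A sketch of how a driver might opt large buffers into top-down allocation, assuming the flag added here is TTM_PL_FLAG_TOPDOWN (the 512 KiB threshold and the helper are made up):

    /* Illustrative only: large BOs allocate from the top of the range so small
     * ones can pack the bottom. Threshold and helper are hypothetical.
     */
    #include <linux/types.h>
    #include <drm/ttm/ttm_placement.h>

    static u32 example_placement_flags(u32 flags, unsigned long bo_size)
    {
            if (bo_size >= 512 * 1024)
                    flags |= TTM_PL_FLAG_TOPDOWN;  /* allocate from the high end */
            return flags;
    }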
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Christian König <deathsimple@vodafone.de>
Signed-off-by: Lauri Kasanen <cand@gmx.com>
Signed-off-by: David Airlie <airlied@redhat.com>
Add a function to check whether a caller has put a ref object on
(i.e. opened) a struct ttm_base_object.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
With dev->anon_inode we have a global address_space ready for operation
right from the beginning. Therefore, there is no need to do a delayed
setup with TTM. Instead, set dev_mapping during initialization in
ttm_bo_device_init() and remove any "if (dev_mapping)" conditions.
Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Declare 'struct device' explicitly in ttm_page_alloc.h as this file
does not include any header that declares it. This removes the following
warning:
warning: 'struct device' declared inside parameter list
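The usual shape of the fix, for reference (a sketch, not the exact hunk; the prototype is hypothetical):

    /* Illustrative only: a forward declaration silences the warning without
     * pulling another header into ttm_page_alloc.h.
     */
    struct device;

    int example_page_pool_init(struct device *dev, unsigned int max_pages);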
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Anyway, nothing big here: three more code cleanup patches from Rashika
Kheria, and one TTM/vmwgfx patch from me that tightens security around TTM
objects enough for them to be opened using prime objects from render nodes.
Previously any client could access a shared buffer using the "name", even
without actually opening it. Now a reference is required, and for render nodes
such a reference is intended to be obtainable only using a prime fd.
vmwgfx-next 2014-01-13 pull request
* tag 'vmwgfx-next-2014-01-13' of git://people.freedesktop.org/~thomash/linux:
drivers: gpu: Mark functions as static in vmwgfx_fence.c
drivers: gpu: Mark functions as static in vmwgfx_buffer.c
drivers: gpu: Mark functions as static in vmwgfx_kms.c
drm/ttm: ttm object security fixes for render nodes
When a client looks up a TTM object, don't look it up through the device hash
table, but rather through the file hash table. That makes sure that the client
has indeed put a reference on the object or, in GEM terms, has opened
the object, either using prime or using the global "name".
To avoid a performance loss, make sure the file hash table entries can be
looked up under an RCU lock and, as a consequence, replace the rwlock
with a spinlock, since we never need to take it in read mode only anymore.
Finally, add a TTM object lookup function for the device hash table that is
intended to be used when we put a ref object on a base object or, in GEM terms,
when we open the object.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Needed for some vm operations; most notably unmap_mapping_range() with
even_cows = 0.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
The set_need_resched() removal fix and yet another fix in
ttm_bo_move_memcpy().
* 'ttm-fixes-3.13' of git://people.freedesktop.org/~thomash/linux:
drm/ttm: Remove set_need_resched from the ttm fault handler
drm/ttm: Don't move non-existing data
Addresses
"[BUG] completely bonkers use of set_need_resched + VM_FAULT_NOPAGE".
In the first occurrence it was used to try to be nice while releasing the
mmap_sem and retrying the fault to work around a locking inversion.
The second occurrence was never used.
There has been some discussion about whether we should change the locking order
to mmap_sem -> bo_reserve. This patch doesn't address that issue, and leaves
that locking order undefined. The solution of releasing the mmap_sem if
tryreserve fails and waiting for the buffer to become unreserved is something
we want in any case, and follows how the core VM system waits for pages
to become unlocked while releasing the mmap_sem.
The code also outlines what needs to be changed if we want to establish the
locking order as mmap_sem -> bo::reserve.
One slight issue that remains with this code is that the fault handler might
be prone to starvation if another thread continuously reserves the buffer.
IMO that usage pattern is highly unlikely.
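A sketch of the retry shape described above, simplified; the reserve arguments and the wait helper are written from memory and should be treated as assumptions, not the exact patch:

    /* Illustrative only: on a contended non-blocking reserve, drop the mmap_sem,
     * wait for the buffer, and let the VM retry the fault instead of spinning.
     */
    #include <linux/mm.h>
    #include <drm/ttm/ttm_bo_api.h>

    static int example_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
            struct ttm_buffer_object *bo = vma->vm_private_data;
            int ret;

            /* interruptible, no_wait: never block here while holding the mmap_sem */
            ret = ttm_bo_reserve(bo, true, true, false, NULL);
            if (unlikely(ret != 0)) {
                    if (ret != -EBUSY)
                            return VM_FAULT_NOPAGE;

                    if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) &&
                        !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
                            ttm_bo_reference(bo);            /* keep bo alive across the wait */
                            up_read(&vma->vm_mm->mmap_sem);  /* release the VM lock first */
                            (void)ttm_bo_wait_unreserved(bo);
                            ttm_bo_unref(&bo);
                            return VM_FAULT_RETRY;           /* the VM retries the whole fault */
                    }
                    return VM_FAULT_NOPAGE;                  /* still busy, retry immediately */
            }

            /* ... real fault handling with the buffer reserved ... */
            ttm_bo_unreserve(bo);
            return VM_FAULT_NOPAGE;
    }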
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
If no reservation ticket is given to the execbuf reservation utilities,
try reservation with non-blocking semantics.
This is intended for eviction paths that use the execbuf reservation
utilities for convenience rather than for deadlock avoidance.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Used by the vmwgfx driver
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Use the new vma-manager infrastructure. This doesn't change any
implementation details as the vma-offset-manager is nearly copied 1-to-1
from TTM.
The vm_lock is moved into the offset manager so we can drop it from TTM.
During lookup, we use the vma locking helpers to take a reference to the
found object.
In all other scenarios, locking stays the same as before. We always
guarantee that drm_vma_offset_remove() is called only during destruction.
Hence, helpers like drm_vma_node_offset_addr() are always safe as long as
the node has a valid offset.
This also drops the addr_space_offset member as it is a copy of vm_start
in vma_node objects. Use the accessor functions instead.
v4:
- remove vm_lock
- use drm_vma_offset_lock_lookup() to protect lookup (instead of vm_lock)
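For reference, the lookup shape with the vma-manager helpers looks roughly like this (a sketch; the reference-taking on the embedding object is left as a comment):

    /* Illustrative only: the offset manager replaces TTM's private RB tree and
     * vm_lock; lookups are bracketed by the manager's own lock helpers.
     */
    #include <drm/drm_vma_manager.h>

    static struct drm_vma_offset_node *
    example_lookup(struct drm_vma_offset_manager *mgr,
                   unsigned long start_page, unsigned long npages)
    {
            struct drm_vma_offset_node *node;

            drm_vma_offset_lock_lookup(mgr);
            node = drm_vma_offset_lookup_locked(mgr, start_page, npages);
            /* take a reference on the object embedding 'node' here,
             * before dropping the lookup lock
             */
            drm_vma_offset_unlock_lookup(mgr);

            return node;
    }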
Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Martin Peres <martin.peres@labri.fr>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Now that the code is compatible in semantics, flip the switch.
Use ww_mutex instead of the homegrown implementation.
ww_mutex uses -EDEADLK to signal that the caller has to back off,
and -EALREADY to indicate that this buffer is already held by the caller.
TTM used -EAGAIN and -EDEADLK for those, respectively, so some changes
were needed to handle this correctly.
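For reference, the ww_mutex contract described here, as a generic sketch (not TTM code; a full implementation keeps retrying for the remaining locks):

    /* Illustrative only: -EDEADLK means back off, take the contended lock with
     * the slow path, then re-acquire the rest. -EALREADY would mean this
     * context already holds the lock.
     */
    #include <linux/ww_mutex.h>

    static int example_lock_pair(struct ww_acquire_ctx *ctx,
                                 struct ww_mutex *a, struct ww_mutex *b)
    {
            int ret = ww_mutex_lock(a, ctx);
            if (ret)
                    return ret;

            ret = ww_mutex_lock(b, ctx);
            if (ret == -EDEADLK) {
                    ww_mutex_unlock(a);           /* back off: drop what we hold */
                    ww_mutex_lock_slow(b, ctx);   /* blocking acquire of the contended lock */
                    ret = ww_mutex_lock(a, ctx);  /* then retake the rest */
                    if (ret)
                            ww_mutex_unlock(b);
            }
            return ret;
    }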
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This commit converts the source of the val_seq counter to
the ww_mutex API. The reservation objects are converted later,
because there is still a lockdep splat in nouveau that has to be
resolved first.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
qxl wants to use io mapping like i915 GEM does; for now,
just export the symbols so the driver can implement atomic
page maps using io mapping.
Signed-off-by: Dave Airlie <airlied@redhat.com>
All legitimate users of this function outside ttm_bo.c are gone; now
it's only an implementation detail.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Instead of dropping everything, waiting for the bo to be unreserved,
and trying again, a better strategy is to do a blocking wait.
This maps much better to a mutex_lock-like call.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
There should no longer be assumptions that reserve will always succeed
with the LRU lock held, so we can safely break the whole atomic
reserve/lru thing. As a bonus, this fixes most lockdep annotations for
reservations.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
All items on the LRU list are always reservable, so this is a stupid
thing to keep. Not only that, it is used in a way that would
guarantee deadlocks if it were ever to be set to block on reserve.
This is a lot of churn, but mostly because of the removal of the
argument, which can be nested arbitrarily deeply in many places.
No code is changed in this patch except for the removal of the
no_wait_reserve argument; the previous patch removed its uses.
v2:
- Warn if -EBUSY is returned on reservation; all objects on the list
should be reservable. Adjusted the patch slightly due to conflicts.
v3:
- Focus on no_wait_reserve removal only.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is similar to other platforms that don't allow command submission
to buffers locked on the CPU.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The most commonly used lookup+get and put+potential_destroy paths of TTM
objects are converted to use RCU locking. This will substantially decrease
the number of locked bus cycles during normal operation.
Since we use kfree_rcu to free the objects, no RCU synchronization is needed
at module unload time.
v2: Don't touch include/linux/kref.h
v3: Adapt to kref_get_unless_zero return value change
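The general shape of such a lookup, for reference (a generic sketch, not the actual TTM object code):

    /* Illustrative only: the RCU fast path takes no lock; kref_get_unless_zero()
     * rejects objects already on their way out, and kfree_rcu() defers the free
     * past any concurrent readers, so no synchronize_rcu() is needed.
     */
    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct example_obj {
            struct kref     refcount;
            struct rcu_head rcu;
    };

    static struct example_obj *example_lookup(struct example_obj __rcu **slot)
    {
            struct example_obj *obj;

            rcu_read_lock();
            obj = rcu_dereference(*slot);
            if (obj && !kref_get_unless_zero(&obj->refcount))
                    obj = NULL;                 /* lost the race with the final put */
            rcu_read_unlock();

            return obj;
    }

    static void example_release(struct kref *kref)
    {
            struct example_obj *obj = container_of(kref, struct example_obj, refcount);

            kfree_rcu(obj, rcu);                /* deferred free, no synchronize_rcu() */
    }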
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
vmwgfx was its only user and always set it to the same value.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-By: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's unused.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's unused.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All drivers set it to 0 and nothing uses it.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Convert #include "..." to #include <path/...> in kernel system headers.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
Patch 649bf3ca77 completely removed the ttm_backend structure.
Remove the lingering declaration and the related (now stale) field
in the ttm_tt structure.
CC: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Ilija Hadzic <ihadzic@research.bell-labs.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>