This patch replaces direct access to the driver_features field
with calls to a helper function.
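For illustration, a minimal before/after sketch of the kind of conversion
involved (assuming the usual drm_core_check_feature() helper; exact call
sites differ per driver):

    /* before: open-coded test of the feature bitmask */
    if (dev->driver->driver_features & DRIVER_GEM)
            drm_gem_open(dev, file_priv);

    /* after: go through the helper */
    if (drm_core_check_feature(dev, DRIVER_GEM))
            drm_gem_open(dev, file_priv);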
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Don't forget git add, noticed by David.
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The only user I could dig out was i915 back when ums+gem was still a
thing. But we've just very much killed that, and even if someone
screams about it we should resurrect it with a special hack
(wrapping drm_gem_mmap) in i915, not in the core code.
So good riddance to another entry point of the legacy buffer mapping
code.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Here's the updated topic/core-stuff pull request with the two patches
already merged into drm-fixes dropped.
* tag 'topic/core-stuff-2014-09-15' of git://anongit.freedesktop.org/drm-intel:
drm: Drop modeset locking from crtc init function
drm/i915/hdmi: Enable pipe pixel replication for SD interlaced modes
drm/edid: Reduce horizontal timings for pixel replicated modes
drm: Include task->name and master status in debugfs clients info
drm/gem: Fix kerneldoc typo
drm: use c99 initializers in structures
drm: fix drm_modeset_lock.h kernel-doc notation
The drm_gem_private_object_init function is called drm_gem_object_init
in its kerneldoc. Fix it.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This way drivers can't grow crazy ideas any more, and it also
helps a bit in reviewing EXPORT_SYMBOLS.
v2: Even more stuff. Unfortunately we can't move drm_vm_open_locked
because exynos does some horrible stuff with it.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
drm_gem_get_pages() currently allows passing a 'gfp' parameter that is
passed to shmem combined with mapping_gfp_mask(). Given that the default
mapping_gfp_mask() is GFP_HIGHUSER, it is _very_ unlikely that anyone will
ever make use of that parameter. In fact, all drivers currently pass
redundant flags or 0.
This patch removes the 'gfp' parameter. The only reason to keep it is to
remove flags like __GFP_WAIT. But in its current form, it can only be used
to add flags. So to remove __GFP_WAIT, you'd have to drop it from the
mapping_gfp_mask, which again is stupid as this mask is used by shmem-core
for other allocations, too.
If any driver ever requires that parameter, we can introduce a new helper
that takes the raw 'gfp' parameter. The caller would be responsible for
combining it with mapping_gfp_mask() in a suitable way. The current
drm_gem_get_pages() helper would then simply use mapping_gfp_mask() and
call the new helper. This is what shmem_read_mapping_pages{_gfp,} does
right now.
Moreover, the gfp-zone flag-usage is not obvious: If you pass a modified
zone, shmem core will WARN() or even BUG(). In other words, the following
must be true for 'gfp' passed to shmem_read_mapping_pages_gfp():
gfp_zone(mapping_gfp_mask(mapping)) == gfp_zone(gfp)
Add a comment to drm_gem_get_pages() explaining that constraint.
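As a hedged illustration of that constraint (the exact placement of the
check inside shmem/GEM is approximate; obj->filp is the gem object's shmem
file):

    struct address_space *mapping = file_inode(obj->filp)->i_mapping;
    gfp_t gfp = mapping_gfp_mask(mapping) | __GFP_NORETRY | __GFP_NOWARN;
    struct page *page;
    pgoff_t i = 0;

    /* reclaim-behaviour flags are fine; the zone bits must stay unchanged */
    BUG_ON(gfp_zone(gfp) != gfp_zone(mapping_gfp_mask(mapping)));

    page = shmem_read_mapping_page_gfp(mapping, i, gfp);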
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
shmem has supported page relocation during swapin for quite some time. It
was implemented in:
commit bde05d1ccd
Author: Hugh Dickins <hughd@google.com>
Date: Tue May 29 15:06:38 2012 -0700
shmem: replace page if mapping excludes its zone
The gem-comment about wrongly placed DMA32 pages is no longer valid.
Replace it with a proper comment but keep the BUG_ON() to verify correct
shmem behavior.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Here's my drm documentation update and driver api polish pull request.
Alex reviewed the entire pile, I've applied a little bit of spelling
polish in a few places since then and otherwise the Usual Suspects (David,
Rob, ...) don't seem up for another look at it (I've poked them on
irc). So I think it's as good as it gets ;-)
Note that I've dropped the final imx breaker patch since that's blocked on
imx getting sane. Once that's landed I'll ping you to pick up that
straggler.
* 'drm-docs' of ssh://people.freedesktop.org/~danvet/drm: (34 commits)
drm/imx: remove drm_mode_connector_detach_encoder harder
drm: kerneldoc polish for drm_crtc.c
drm: kerneldoc polish for drm_crtc_helper.c
drm: drop error code for drm_helper_resume_force_mode
drm/crtc-helper: remove LOCKING from kerneldoc
drm: remove return value from drm_helper_mode_fill_fb_struct
drm/doc: Fix misplaced </para>
drm: remove drm_display_mode->private_size
drm: polish function kerneldoc for drm_modes.[hc]
drm/modes: drop maxPitch from drm_mode_validate_size
drm/modes: drop return value from drm_display_mode_from_videomode
drm/modes: remove drm_mode_height/width
drm: extract drm_modes.h for drm_crtc.h functions
drm: move drm_mode related functions into drm_modes.c
drm/doc: Replace LOCKING kerneldoc sections in drm_modes.c
drm/doc: Integrate drm_modes.c kerneldoc
drm/kms: rip out drm_mode_connector_detach_encoder
drm/doc: Add function reference documentation for drm_mm.c
drm/doc: Overview documentation for drm_mm.c
drm/mm: Remove MM_UNUSED_TARGET
...
There is no need to initialize this variable, so drop it. Otherwise, the
compiler won't warn if we use it uninitialized.
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
All drivers currently need to clean up the vma-node manually. There is no
fancy logic involved so lets just clean it up unconditionally. The
vma-manager correctly catches multiple calls so we are fine.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
At least drm/i915 expects that the obj->dev pointer is set even in
failure paths. Specifically when the shmem initialization fails we
call i915_gem_object_free which needs to deref obj->base.dev to get at
the slab pointer in the device private structure. And the shmem
allocation can easily fail when userspace is hitting open file limits.
Doing the structure init even when the shmem file allocation fails
prevents this Oops.
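A minimal sketch of the resulting ordering (not the verbatim diff; error
handling trimmed to the essentials):

    int drm_gem_object_init(struct drm_device *dev,
                            struct drm_gem_object *obj, size_t size)
    {
            struct file *filp;

            /* fill in obj->dev and friends first, so the driver's free
             * path can rely on them even if the shmem setup fails */
            drm_gem_private_object_init(dev, obj, size);

            filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
            if (IS_ERR(filp))
                    return PTR_ERR(filp);

            obj->filp = filp;
            return 0;
    }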
This is a regression from
commit 89c8233f82
Author: David Herrmann <dh.herrmann@gmail.com>
Date: Thu Jul 11 11:56:32 2013 +0200
drm/gem: simplify object initialization
v2: Add regression note which Chris supplied.
Testcase: igt/gem_fd_exhaustion
Reported-and-Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
References: http://lists.freedesktop.org/archives/intel-gfx/2014-January/038433.html
Cc: stable@vger.kernel.org
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This was hidden in a generic void *dev->mm_private, but was only ever
used for gem. And thanks to this fake generic pretension no one
noticed that Rob's drm drivers are now all broken.
So just give the offset manager a type pointer and fix up msm, omapdrm
and tilcdc.
v2: Fixup compile fail.
v3: Fixup rebase fail that David spotted.
Cc: David Herrmann <dh.herrmann@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
There's no reason to keep a reference to objects in the name idr. Each
handle to an object has a reference to the object and just before we
destroy the last handle we take the object out of the name idr. Thus,
if an object is in the name idr, there's at least one reference to the
object.
Or to put it another way, the name idr reference will never keep the
object alive. It just looks like it, which is confusing.
Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All drivers embed gem-objects into their own buffer objects. There is no
reason to keep drm_gem_object_alloc(), gem->driver_private and
->gem_init_object() anymore.
New drivers are highly encouraged to do the same. There is no benefit in
allocating gem-objects separately.
Cc: Dave Airlie <airlied@gmail.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Ben Skeggs <skeggsb@gmail.com>
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Drivers that don't support PRIME will not have initialized the PRIME
specific private component of struct drm_file. If called for such
drivers, the drm_gem_remove_prime_handles() function will crash. Fix
it by checking for PRIME support prior to removing the PRIME handles.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
We implement automatic vma mmap() access management for all drivers using
gem_mmap. We use the vma manager to add each open-file that creates a
gem-handle to the vma-node of the underlying gem object. Once the handle
is destroyed, we drop the open-file again.
This allows us to use drm_vma_node_is_allowed() on _any_ gem object to see
whether an open-file is granted access. In drm_gem_mmap() we use this to
verify that unprivileged users cannot guess gem offsets and map arbitrary
buffers.
Note that this manages access for _all_ gem users (also TTM+GEM), but the
actual access checks are only done for drm_gem_mmap(). TTM drivers use the
TTM mmap helpers, which need to do that separately.
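Roughly, the plumbing looks like this (a sketch; the filp-based signatures
are as of this series and error handling is omitted):

    /* when a gem handle is created for file_priv */
    ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);

    /* when that handle is destroyed again */
    drm_vma_node_revoke(&obj->vma_node, file_priv->filp);

    /* in drm_gem_mmap(), after looking up the offset node */
    if (!drm_vma_node_is_allowed(node, filp))
            return -EACCES;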
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The VMA offset manager uses a device-global address-space. Hence, any
user can currently map any offset-node they want. They only need to guess
the right offset. If we wanted per open-file offset spaces, we'd either
need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
neither really scales, we implement access management in the VMA
manager itself.
We use an rb-tree to store open-files for each VMA node. On each mmap
call, GEM, TTM or the drivers must check whether the current user is
allowed to map this file.
We add a separate lock for each node as there is no generic lock available
for the caller to protect the node easily.
As we currently don't know whether an object may be used for mmap(), we
have to do access management for all objects. If it turns out to slow down
handle creation/deletion significantly, we can optimize it in several
ways:
- Most times only a single filp is added per bo so we could use a static
"struct file *main_filp" which is checked/added/removed first before we
fall back to the rbtree+drm_vma_offset_file.
This could be even done lockless with rcu.
- Let user-space pass a hint whether mmap() should be supported on the
bo and avoid access-management if not.
- .. there are probably more ideas once we have benchmarks ..
v2: add drm_vma_node_verify_access() helper
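For orientation, the per-node bookkeeping added here looks roughly like
this (field names recalled from drm_vma_manager.h, so treat them as
approximate):

    /* one entry per open-file that is allowed to map the node */
    struct drm_vma_offset_file {
            struct rb_node vm_rb;
            struct file *vm_filp;
            unsigned long vm_count; /* allow/revoke calls are counted */
    };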
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
... not only when the dma-buf is freshly created. In contrived
examples someone else could have exported/imported the dma-buf already
and handed us the gem object with a flink name. If such an object gets
re-exported as a dma_buf we won't have it in the handle cache already,
which breaks the guarantee that for dma-buf imports we always hand
back an existing handle if there is one.
This is exercised by igt/prime_self_import/with_one_bo_two_files
Now if we extend the locked sections just a notch more we can also
plug the racy buf/handle cache setup in handle_to_fd:
If evil userspace races a concurrent gem close against a prime export
operation we can end up tearing down the gem handle before the dma buf
handle cache is set up. When handle_to_fd gets around to adding the
handle to the cache there will be no one left to clean it up,
effectively leaking the bo (and the dma-buf, since the handle cache
holds a ref on the dma-buf):
    Thread A                           Thread B

    handle_to_fd:
      lookup gem object from handle
      creates new dma_buf
                                       gem_close on the same handle
                                       obj->dma_buf is set, but file priv buf
                                       handle cache has no entry
                                       obj->handle_count drops to 0
    drm_prime_add_buf_handle sets up the handle cache
-> We have a dma-buf reference in the handle cache, but since the
handle_count of the gem object already dropped to 0 no one will clean
it up. When closing the drm device fd we'll hit the WARN_ON in
drm_prime_destroy_file_private.
The important change is to extend the critical section of the
filp->prime.lock to cover the gem handle lookup. This serializes with
a concurrent gem handle close.
This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
With the reworked semantics and locking of the obj->dma_buf pointer,
this pointer is always set as long as there's still a gem handle
around and a dma_buf associated with this gem object.
The per file-priv lookup-cache for dma-buf importing is also
unified between foreign and native objects.
Hence we don't need to special-case the cleanup any more and can simply
drop the clause which only runs for foreign objects, i.e. with
obj->import_attach set.
Note that with this change (actually with the previous one to always
set up obj->dma_buf even for foreign objects) it is no longer required
to set obj->import_attach when importing a foreign object. So update
comments accordingly, too.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The export dma-buf cache is semantically similar to an flink name. So
semantically it makes sense to treat it the same and remove the name
(i.e. the dma_buf pointer) and its references when the last gem handle
disappears.
Again we need to be careful, but doubly so: not only could someone
race an export with a gem close ioctl (so we need to recheck
obj->handle_count again when assigning the new name), but multiple
exports can also race against each other. This is prevented by
holding the dev->object_name_lock across the entire section which
touches obj->dma_buf.
With the new scheme we also need to reinstate the obj->dma_buf link at
import time (in case the only reference userspace has held in-between
was through the dma-buf fd and not through any native gem handle). For
simplicity we don't check whether it's a native object but
unconditionally set up that link - with the new scheme of removing the
obj->dma_buf reference when the last handle disappears we can do that.
To make it clear that this is not just for exported buffers anymore
also rename it from export_dma_buf to dma_buf.
To make sure that no one can race a fd_to_handle or handle_to_fd with
gem_close we use the same tricks as in flink of extending the
dev->object_name_lock critical section. With this change we finally
have a guaranteed 1:1 relationship (at least for native objects)
between gem objects and dma-bufs, even accounting for races (which can
happen since the dma-buf itself holds a reference while in-flight).
This prevents igt/prime_self_import/export-vs-gem_close-race from
Oopsing the kernel. There is still a leak though since the per-file
priv dma-buf/handle cache handling is racy. That will be fixed in a
later patch.
v2: Remove the bogus dma_buf_put from the export_and_register_object
failure path if we've raced with the handle count dropping to 0.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The gem flink name holds a reference to the object itself, and this
self-reference would prevent a flink'ed object from ever being
freed. To break that loop we remove the flink name when the last
userspace handle disappears, i.e. when obj->handle_count reaches 0.
Now in gem_open we drop the dev->object_name_lock between the flink
name lookup and actually adding the handle. This means a concurrent
gem_close of the last handle could result in the flink name getting
reaped right in between, i.e.
    Thread 1                       Thread 2

    gem_open                       gem_close
      flink -> obj lookup
                                     handle_count drops to 0
                                     remove flink name
      create_handle
      handle_count++
If someone now flinks this object again, we'll get a new flink name.
We can close this race by removing the lock dropping and making the
entire lookup+handle_create sequence atomic. Unfortunately to still be
able to share the handle_create logic this requires a
handle_create_tail function which drops the lock - we can't hold the
object_name_lock while calling into a driver's ->gem_open callback.
Note that for flink fixing this race isn't really important, since
racing gem_open against gem_close is clearly a userspace bug. And no
matter how the race ends, we won't leak any references.
But with dma-buf where the userspace dma-buf fd itself is refcounted
this is a valid sequence and hence we should fix it. Therefore this
patch here is just a warm-up exercise (and for consistency between
flink buffer sharing and dma-buf buffer sharing with self-imports).
Also note that this extension of the critical section in gem_open
protected by dev->object_name_lock only works because it's now a
mutex: A spinlock would conflict with the potential memory allocation
in idr_preload().
This is exercised by igt/gem_flink_race/flink_name.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
I want to wrap the creation of a dma-buf from a gem object in it,
so that the obj->export_dma_buf cache can be atomically filled in.
Instead of creating a new mutex just for that variable I've figured
I can reuse the existing dev->object_name_lock, especially since
the new semantics will exactly mirror the flink obj->name already
protected by that lock.
v2: idr_preload/idr_preload_end is now an atomic section, so need to
move the mutex locking outside.
[airlied: fix up conflict with patch to make debugfs use lock]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
No one outside of drm should use this, the official interfaces are
drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting
is purely an implementation detail of gem.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is the 2nd attempt; I've always been a bit dissatisfied with the
tricky nature of the first one:
http://lists.freedesktop.org/archives/dri-devel/2012-July/025451.html
The issue is that the flink ioctl can race with calling gem_close on
the last gem handle. In that case we'll end up with a zero handle
count, but a flink name (and its corresponding reference), which
results in a neat space leak.
In my first attempt I've solved this by rechecking the handle count.
But fundamentally the issue is that ->handle_count isn't your usual
refcount - it can be resurrected from 0 among other things.
For those special beasts atomic_t often suggests way more ordering than
it actually guarantees. To prevent being tricked by those hairy
semantics take the easy way out and simply protect the handle with the
existing dev->object_name_lock.
With that change implemented it's dead easy to fix the flink vs. gem
close race: When we try to create the name we simply have to check
whether there's still officially a gem handle around and if not refuse
to create the flink name. Since the handle count decrement and flink
name destruction is now also protected by that lock the race is gone
and we can't ever leak the flink reference again.
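A hedged sketch of the check this adds on the flink side (the
idr_preload() bookkeeping and error unwinding are elided; 'ret' is the
ioctl return value):

    mutex_lock(&dev->object_name_lock);
    /* refuse to create a name if the last handle is already gone */
    if (!obj->handle_count) {
            ret = -ENOENT;
    } else if (!obj->name) {
            ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_KERNEL);
            if (ret > 0) {
                    obj->name = ret;
                    drm_gem_object_reference(obj); /* the name keeps a ref */
                    ret = 0;
            }
    }
    mutex_unlock(&dev->object_name_lock);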
Outside of the drm core only the exynos driver looks at the handle
count, and tbh I have no idea why (it's just for debug dmesg output
luckily).
I've considered inlining the drm_gem_object_handle_free, but I plan to
add more name-like things (like the exported dma_buf) to this scheme,
so it's clearer to leave the handle freeing in its own function.
This is exercised by the new gem_flink_race i-g-t testcase, which on
my snb leaks gem objects at a rate of roughly 1k objects/s.
v2: Fix up the error path handling in handle_create and make it more
robust by simply calling object_handle_unreference.
v3: Fix up the handle_unreference logic bug - atomic_dec_and_test
retursn 1 for 0. Oops.
v4: Squash in inlining of drm_gem_object_handle_reference as suggested
by Dave Airlie and add a note that we now have a testcase.
Cc: Dave Airlie <airlied@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Trying to drop a reference we don't have is a pretty serious bug.
Trying to paper over it is an even worse offense.
So scream into dmesg with a big WARN in case that ever happens.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Calling this function with a NULL object is simply a bug, so papering
over a NULL object is not a good idea.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We have three callers of this function now and it's neither
performance critical nor really small. So an inline function feels
like overkill and unnecessarily separates the different parts of the
code.
Since all callers of drm_gem_object_handle_free are now in drm_gem.c
we can make that static (and remove the unused EXPORT_SYMBOL). To
avoid a forward declaration move it (and drm_gem_object_free_bug) up a
bit.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Basically just extracting some code duplicated in gma500, omapdrm, udl,
and upcoming msm driver.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Variant of drm_gem_create_mmap_offset() which doesn't make the
assumption that virtual size and physical size (obj->size) are the same.
This is needed in omapdrm to deal with tiled buffers. And lets us get
rid of a duplicated and slightly modified version of
drm_gem_create_mmap_offset() in omapdrm.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All the gem based kms drivers really want the same function to
destroy a dumb framebuffer backing storage object.
So give it to them and roll it out in all drivers.
This still leaves the option open for kms drivers which don't use GEM
for backing storage, but it does decently simplify matters for gem
drivers.
Acked-by: Inki Dae <inki.dae@samsung.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Cc: Ben Skeggs <skeggsb@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The VMA manager is page-size based so drm_vma_node_size() returns the size
in pages. However, drm_gem_mmap_obj() requires the size in bytes. Apply
PAGE_SHIFT so we no longer get EINVAL during mmaps due to too small
buffers.
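The fix boils down to shifting the node size into bytes before handing it
to drm_gem_mmap_obj(), roughly:

    obj = container_of(node, struct drm_gem_object, vma_node);
    ret = drm_gem_mmap_obj(obj,
                           drm_vma_node_size(node) << PAGE_SHIFT, vma);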
This bug was introduced in commit:
0de23977cf
"drm/gem: convert to new unified vma manager"
Fixes i915 gtt mmap failure reported by Sedat Dilek in:
Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ]
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Use the new vma manager instead of the old hashtable. Also convert all
drivers to use the new convenience helpers. This drops all the
(map_list.hash.key << PAGE_SHIFT) non-sense.
Locking and access-management is exactly the same as before with an
additional lock inside of the vma-manager, which strictly wouldn't be
needed for gem.
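The typical driver-side conversion looks roughly like this (a sketch, not
a verbatim hunk):

    /* before: hashtable-based map_list offset */
    offset = (u64)obj->map_list.hash.key << PAGE_SHIFT;

    /* after: unified vma manager helper (returns bytes) */
    offset = drm_vma_node_offset_addr(&obj->vma_node);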
v2:
- rebase on drm-next
- init nodes via drm_vma_node_reset() in drm_gem.c
v3:
- fix tegra
v4:
- remove duplicate if (drm_vma_node_has_offset()) checks
- inline now trivial drm_vma_node_offset_addr() calls
v5:
- skip node-reset on gem-init due to kzalloc()
- do not allow mapping gem-objects with offsets (backwards compat)
- remove unnecessary casts
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
drm_gem_object_init() and drm_gem_private_object_init() do exactly the
same thing (except for the shmem alloc), so make the former use the
latter to reduce code duplication.
Also drop the return code from drm_gem_private_object_init(). It seems
unlikely that we will extend it any time soon so no reason to keep it
around. This simplifies code paths in drivers, too.
Last but not least, fix gma500 to call drm_gem_object_release() before
freeing objects that were allocated via drm_gem_private_object_init().
That isn't actually necessary for now, but might be in the future.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Acked-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
There is no reason to return "int" as this function never fails.
Furthermore, several drivers (ast, sis) already depend on this.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
If idr_alloc() fails, obj->name can hold an error value. This also
cleans up the duplicated flink processing code.
This regression has been introduced in
commit 2e928815c1
Author: Tejun Heo <tj@kernel.org>
Date: Wed Feb 27 17:04:08 2013 -0800
drm: convert to idr_alloc()
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Cc: stable@vger.kernel.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
drm_gem_mmap_obj() has to be called with dev->struct_mutex held, but
some callers do not hold it. So add the mutex lock to those missing
callers and add an assertion to check whether drm_gem_mmap_obj() is
called with the mutex held.
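Sketched out (whether the real assertion is an open-coded WARN_ON() or a
lockdep annotation is an implementation detail; this is just the shape):

    /* inside drm_gem_mmap_obj(): callers must hold struct_mutex */
    WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));

    /* a previously unlocked caller, now wrapped */
    mutex_lock(&dev->struct_mutex);
    ret = drm_gem_mmap_obj(obj, obj->size, vma);
    mutex_unlock(&dev->struct_mutex);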
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The drm_gem_mmap() function first finds the GEM object to be mapped
based on the fake mmap offset and then maps the object. Split the object
mapping code into a standalone drm_gem_mmap_obj() function that can be
used to implement dma-buf mmap() operations.
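For example, a driver's dma-buf mmap hook can then be built on top of it;
a hedged sketch (the function name is made up for illustration):

    static int example_gem_dmabuf_mmap(struct dma_buf *dma_buf,
                                       struct vm_area_struct *vma)
    {
            struct drm_gem_object *obj = dma_buf->priv;
            struct drm_device *dev = obj->dev;
            int ret;

            mutex_lock(&dev->struct_mutex);
            ret = drm_gem_mmap_obj(obj, obj->size, vma);
            mutex_unlock(&dev->struct_mutex);

            return ret;
    }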
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Currently we have a problem with this:
1. i915: create gem object
2. i915: export gem object to prime
3. radeon: import gem object
4. close prime fd
5. radeon: unref object
6. i915: unref object
i915 has an imported object reference in its file priv, that isn't
cleaned up properly until fd close. The reference gets added at step 2,
but at step 6 we don't have enough info to clean it up.
The solution is to take a reference on the dma-buf when we export it,
and drop the reference when the gem handle goes away.
So when we export a dma_buf from a gem object, we keep track of it
with the handle, we take a reference to the dma_buf. When we close
the handle (i.e. userspace is finished with the buffer), we drop
the reference to the dma_buf, and it gets collected.
This patch isn't meant to fix any other problem or bikesheds, and it doesn't
fix any races with other scenarios.
v1.1: move export symbol line back up.
v2: okay I had to do a bit more, as the first patch showed a leak
on one of my tests that I found using the dma-buf debugfs support.
The problem case is exporting a buffer twice with the same handle;
we'd add another export handle for it unnecessarily. We now fail if
we try to export the same object with a different gem handle, however
I'm not sure if that is a case I want to support, and I've
gotten the code to WARN_ON if we hit something like that.
v2.1: rebase this patch, write better commit msg.
v3: cleanup error handling, track import vs export in linked list,
these two patches were separate previously, but seem to work better
like this.
v4: danvet is correct, this code is no longer useful, since the buffer
better exist, so remove it.
v5: always take a reference to the dma buf object, import or export.
(Imre Deak contributed this originally)
v6: square the circle, remove import vs export tracking now
that there is no difference
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
idr_destroy() can destroy idr by itself and idr_remove_all() is being
deprecated. Drop its usage.
* drm_ctxbitmap_cleanup() was calling idr_remove_all() but forgetting
idr_destroy() thus leaking all buffered free idr_layers. Replace it
with idr_destroy().
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Airlie <airlied@linux.ie>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A long time ago, in v2.4, VM_RESERVED kept the swapout process off the
VMA; it has since lost its original meaning but still has some effects:
     | effect                 | alternative flags
    -+------------------------+---------------------------------------------
    1| account as reserved_vm | VM_IO
    2| skip in core dump      | VM_IO, VM_DONTDUMP
    3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
    4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
This patch removes the reserved_vm counter from mm_struct. Seems like
nobody cares about it; it is not exported to userspace directly, it
only reduces the total_vm shown in proc.
Thus VM_RESERVED can be replaced with VM_IO or pair VM_DONTEXPAND | VM_DONTDUMP.
remap_pfn_range() and io_remap_pfn_range() set VM_IO|VM_DONTEXPAND|VM_DONTDUMP.
remap_vmalloc_range() set VM_DONTEXPAND | VM_DONTDUMP.
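For the drm_gem_mmap() case the conversion is roughly (illustrative, not
the exact hunk):

    /* before */
    vma->vm_flags |= VM_RESERVED | VM_IO | VM_PFNMAP | VM_DONTEXPAND;

    /* after */
    vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;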
[akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Paris <eparis@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Venkatesh Pallipadi <venki@google.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Convert #include "..." to #include <path/...> in drivers/gpu/.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
In order to support snoopable memory on non-LLC architectures (so that
we can bind vgem objects into the i915 GATT for example), we have to
avoid the prefetcher on the GPU from crossing memory domains and so
prevent allocation of a snoopable PTE immediately following an uncached
PTE. To do that, we need to extend the range allocator with support for
tracking and segregating different node colours.
This will be used by i915 to segregate memory domains within the GTT.
v2: Now with more drm_mm helpers and less driver interference.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
If userspace attempts to import a buffer it exported on the same device,
we need to return the same GEM handle for it, not a new handle pointing
at the same GEM object.
v2: move removals into a single fn, no need to set to NULL. (Chris Wilson)
Signed-off-by: Dave Airlie <airlied@redhat.com>
Previously these functions would assume that vma->vm_file was the
drm_file. But in some cases, if the drm driver needs to use
something else for the backing file (such as the tmpfs filp), then this
assumption is no longer true. However, vma->vm_private_data is still the
GEM object.
With this change, now the drm_device comes from the GEM object rather
than the drm_file so the driver is more free to play with vma->vm_file.
The scenario where this comes up is for mmap'ing of cached dmabufs
for non-coherent systems, where the driver needs to use fault handling
and PTE shootdown to simulate coherency. We can't use the vma->vm_file
of the dmabuf, which is using anon_inode's address_space. The most
straightforward thing to do is to use the GEM object's obj->filp for
vma->vm_file in all cases, for which we need this patch.
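The core of the change is simply where drm_gem_vm_open()/close() get the
device from (a before/after sketch):

    /* before: assume vma->vm_file is the drm file */
    struct drm_file *priv = vma->vm_file->private_data;
    struct drm_device *dev = priv->minor->dev;

    /* after: the GEM object in vm_private_data knows its device */
    struct drm_gem_object *obj = vma->vm_private_data;
    struct drm_device *dev = obj->dev;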
Signed-off-by: Rob Clark <rob@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The error handling code w.r.t. idr usage looks inconsistent.
In the case of drm_mode_object_get() and drm_ctxbitmap_next() the error
handling is also incomplete.
Unify the code to follow the same pattern always.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This adds the basic drm dma-buf interface layer, called PRIME. This
commit doesn't add any driver support; it is simply an agreed-upon starting
point so we can work towards merging driver support for the next merge window.
Current drivers with work done are nouveau, i915, udl, exynos and omap.
The main APIs exposed to userspace allow translating a 32-bit object handle
to a file descriptor, and a file descriptor to a 32-bit object handle.
The flags value is currently limited to O_CLOEXEC.
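From the userspace side the two ioctls look like this (a sketch; error
checking omitted):

    struct drm_prime_handle args = {
            .handle = handle,
            .flags  = DRM_CLOEXEC,  /* the only flag accepted so far */
    };

    /* gem handle -> dma-buf fd */
    ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args);
    /* args.fd now holds the dma-buf file descriptor */

    /* and back again: dma-buf fd -> gem handle */
    ioctl(drm_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &args);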
Acknowledgements:
Daniel Vetter: lots of review
Rob Clark: cleaned up lots of the internals and did lifetime review.
v2: rename some functions after Chris preferred a green shed
fix IS_ERR_OR_NULL -> IS_ERR
v3: Fix issue Ville pointed out with buffer + kmalloc
v4: add locking as per ickle review
v5: allow re-exporting the original dma-buf (Daniel)
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Rob Clark <rob.clark@linaro.org>
Reviewed-by: Sumit Semwal <sumit.semwal@linaro.org>
Reviewed-by: Inki Dae <inki.dae@samsung.com>
Acked-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
There are two parts to this: one is a simple unplug from sysfs for the
device node. The second adds an unplugged state: if we have device
opens, we just set the unplugged state and return; if we have no device
opens we drop the drm device.
If after a lastclose we discover we are unplugged we then
drop the drm device.
v2: use an atomic for unplugged and wrap it for users,
add checks on open + mmap + ioctl entry points.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Talking to Al Viro on irc, we can see no possible reason for doing
this; the upper mmap code does it. The code has been there since the
first import into the drm tree that I can find.
Al tracked this down as a requirement pre-2.3.51; it hasn't been needed since.
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
In particular, I found I was hitting the max-file limit in the VFS,
and the ENFILE was being magically transformed into ENOMEM. Confusion
reigns.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
These small changes should allow GEM to be used with non shmem objects as
well as shmem objects. In the GMA500 case it allows the base framebuffer to
appear as a GEM object and thus acquire a handle and work with KMS.
For i915 it ought to be trivial to get back the wasted memory by putting the
system fb back into stolen RAM and in general I can imagine it allowing the
use of GEM and thus KMS with all the older cards that have their framebuffer
firmly placed in video RAM.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Tested-by: Rob Clark <rob@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Soon tmpfs will stop supporting ->readpage and read_cache_page_gfp(): once
"tmpfs: add shmem_read_mapping_page_gfp" has been applied, this patch can
be applied to ease the transition.
Make i915_gem_object_get_pages_gtt() use shmem_read_mapping_page_gfp() in
the one place it's needed; elsewhere use shmem_read_mapping_page(), with
the mapping's gfp_mask properly initialized.
Forget about __GFP_COLD: since tmpfs initializes its pages with memset,
asking for a cold page is counter-productive.
Include linux/shmem_fs.h also in drm_gem.c: with shmem_file_setup() now
declared there too, we shall remove the prototype from linux/mm.h later.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Keith Packard <keithp@keithp.com>
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Nouveau is going to use these hooks to map/unmap objects from a client's
private GPU address space.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Using an order 19 drm_ht for the mmap offsets is a little obscene. That
means that with a fully populated GTT with every single object mmaped at
least once in its lifetime, there will be exactly one object in each
bucket.
Typically systems only have at most a few thousand objects, though you
may see a KDE desktop hit 50000. And most of those should never be
mapped... On my systems, just using an order 10 ht would still have an
average occupancy less than 1, so apply a small safety factor and
use an order 12 ht, like the other mmap offset ht.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is just an idea that might or might not be a good idea; it
basically adds two ioctls, one to create a dumb buffer and one to map a
dumb buffer suitable for scanout. The handle can be passed to the KMS
ioctls to create a framebuffer.
It looks to me like it would be useful in the following cases:
a) in development drivers - we can always provide a shadowfb fallback.
b) libkms users - we can clean up libkms a lot and avoid linking
to libdrm_*.
c) plymouth via libkms is a lot easier.
Userspace bits would be just calls + mmaps. We could probably
mark these handles somehow as not being suitable for acceleration
so as to stop people who are dumber than dumb.
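A hedged sketch of the intended userspace flow (uapi struct names as
merged; error checking omitted):

    struct drm_mode_create_dumb create = {
            .width = 1024, .height = 768, .bpp = 32,
    };
    struct drm_mode_map_dumb map = { 0 };
    void *ptr;

    ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

    map.handle = create.handle;
    ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);

    ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
               MAP_SHARED, fd, map.offset);
    /* create.handle can now be fed to the KMS AddFB ioctl as well */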
Signed-off-by: Dave Airlie <airlied@redhat.com>
Only drm/i915 does the bookkeeping that makes the information useful,
and the information maintained is driver specific, so move it out of the
core and into its single user.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Airlie <airlied@redhat.com>
In order to be fully threadsafe we need to check that the drm_gem_object
refcount is still 0 after acquiring the mutex in order to call the free
function. Otherwise, we may encounter scenarios like:
    Thread A:                        Thread B:
    drm_gem_close
      unreference_unlocked
        kref_put                     mutex_lock
        ...                            i915_gem_evict
        ...                              kref_get -> BUG
        ...                              i915_gem_unbind
        ...                              kref_put
        ...                                i915_gem_object_free
        ...                            mutex_unlock
    mutex_lock
    i915_gem_object_free -> BUG
      i915_gem_object_unbind
        kfree
    mutex_unlock
Note that no driver is currently using the free_unlocked vfunc and it is
scheduled for removal, hasten that process.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30454
Reported-and-Tested-by: Magnus Kessler <Magnus.Kessler@gmx.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
There were lots of places being inconsistent since handle count
looked like a kref but it really wasn't.
Fix this by just making handle count an atomic on the object,
and have it increase the normal object kref.
Now i915/radeon/nouveau drivers can drop the normal reference on
userspace object creation, and have the handle hold it.
This patch fixes a memory leak or corruption on unload, because
the driver had no way of knowing if a handle had been actually
added for this object, and the fbcon object needed to know this
to clean itself up properly.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Hook the GEM vm open/close ops into the generic drm vm open/close so
that the private vma entries are created and destroy appropriately.
Fixes the leak of the drm_vma_entries during the lifetime of the filp.
Reported-by: Matt Mackall <mpm@selenic.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is consistent with trying to access a filename that does not exist
within a directory, which is a good analogy here. The main reason for the
change is that it is easy to confuse the error code of EBADF as
performing an ioctl on an invalid file descriptor (rather than an
unknown object).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
/* A typical clean-up sequence for objects stored in an idr tree, will
* use idr_for_each() to free all objects, if necessary, then
* idr_remove_all() to remove all ids, and idr_destroy() to free
* up the cached idr_layers.
*/
We were missing the vital idr_remove_all() step and so were leaking
the used layers for every dri client:
unreferenced object 0xf32133c0 (size 148):
comm "plymouthd", pid 131, jiffies 4294678490 (age 2308.030s)
hex dump (first 32 bytes):
04 00 00 00 00 00 00 00 00 00 00 00 00 40 19 f3 .............@..
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<c04e5657>] create_object+0x124/0x1f1
[<c07cf100>] kmemleak_alloc+0x4c/0x90
[<c04db6a9>] kmem_cache_alloc+0xee/0x13c
[<c05c3d25>] idr_pre_get+0x24/0x61
[<f8315c9c>] drm_gem_handle_create+0x27/0x7f [drm]
[<f89925b2>] i915_gem_create_ioctl+0x4f/0x71 [i915]
[<f83148ac>] drm_ioctl+0x272/0x356 [drm]
[<c04f27c4>] vfs_ioctl+0x33/0x91
[<c04f31cf>] do_vfs_ioctl+0x46b/0x496
[<c04f3240>] sys_ioctl+0x46/0x66
[<c040325f>] sysenter_do_call+0x12/0x38
[<ffffffff>] 0xffffffff
Fixes https://bugzilla.kernel.org/show_bug.cgi?id=15803
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The pgoff option in mmap() is defined as an unsigned long
so the offset generated by DRM needs to fit into
BITS_PER_LONG for the CPU in question.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
When drivers embed the core gem object into their own structures,
they'll have to do this. Temporarily this results in an ugly
kfree(gem_obj);
in every gem driver.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This function can be used by drivers who allocate the drm gem object
on their own. No functional change in here, just preparation.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Mostly obvious simplifications.
The i915 pread/pwrite ioctls, intel_overlay_put_image and
nouveau_gem_new were incorrectly using the locked versions
without locking: this is also fixed in this patch.
Signed-off-by: Luca Barbieri <luca@luca-barbieri.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This patch introduces the drm_gem_object_unreference_unlocked
and drm_gem_object_handle_unreference_unlocked functions that
do not require holding struct_mutex.
drm_gem_object_unreference_unlocked calls the new
->gem_free_object_unlocked entry point if available, and otherwise
takes struct_mutex and calls ->gem_free_object.
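A hedged sketch of the fallback logic described above (details of the
actual kref callback wiring may differ slightly):

    void drm_gem_object_free_unlocked(struct kref *kref)
    {
            struct drm_gem_object *obj =
                    container_of(kref, struct drm_gem_object, refcount);
            struct drm_device *dev = obj->dev;

            if (dev->driver->gem_free_object_unlocked) {
                    dev->driver->gem_free_object_unlocked(obj);
            } else if (dev->driver->gem_free_object) {
                    /* fall back to the locked hook */
                    mutex_lock(&dev->struct_mutex);
                    dev->driver->gem_free_object(obj);
                    mutex_unlock(&dev->struct_mutex);
            }
    }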
Signed-off-by: Luca Barbieri <luca@luca-barbieri.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Having missed the ENOMEM return via i915_gem_fault(), there are probably
other paths that I also missed. By not enabling NORETRY by default these
paths can run the shrinker and take memory from the system (but not from
our own inactive lists because our shrinker can not run whilst we hold
the struct mutex) and this may allow the system to survive a little longer
whilst our drivers consume all available memory.
References:
OOM killer unexpectedly called with kernel 2.6.32
http://bugzilla.kernel.org/show_bug.cgi?id=14933
v2: Pass gfp into page mapping.
v3: Use new read_cache_page_gfp() instead of open-coding.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Eric Anholt <eric@anholt.net>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Some architectures compute ->vm_page_prot depending on ->vm_flags, so we
need to update the protections after adjusting the flags.
AFAIK this only affects running X under Xen; without this patch you get
lots of coloured blobs on the screen, or maybe a complete lockup. Or
anything really.
But that still depends on lots of out-of-tree stuff, so I don't think
there are any consequences for anyone else. But it is wrong in principle.
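The shape of the fix (hedged; the surrounding drm_gem_mmap() code is
elided):

    vma->vm_flags |= VM_RESERVED | VM_IO | VM_PFNMAP | VM_DONTEXPAND;
    /* some architectures derive the protection bits from vm_flags,
     * so recompute them now that the flags are final */
    vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);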
Reported-by: Jan Beulich <JBeulich@novell.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Due to the necessity of having to take the struct_mutex, the i915
shrinker can not free the inactive lists if we fail to allocate memory
whilst processing a batch buffer, triggering an OOM and an ENOMEM that
is reported back to userspace. In order to fare better under such
circumstances we need to manually retry a failed allocation after
evicting inactive buffers.
To do so involves 3 steps:
1. Marking the backing shm pages as NORETRY.
2. Updating the get_pages() callers to evict something on failure and then
retry.
3. Revamping the evict something logic to be smarter about the required
buffer size and prefer to use volatile or clean inactive pages.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Several functions in the GEM kernel API used int as the handle type, but
the user API has it as __u32, which is also the intended type.
Replace int with u32.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Check kzalloc retval against NULL in drm_gem_object_alloc and bail out
appropriately.
While at it merge the fail paths and jump to them by gotos at the end
of the function.
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
It hasn't been used in ages, and having the user tell you how much
memory is being freed at free time is a recipe for disaster even if it
was ever used.
Signed-off-by: Eric Anholt <eric@anholt.net>
Calls to kcalloc() for a single element can be simplified to calls to
kzalloc().
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Otherwise, the PAGE_CACHE_WC would end up getting us a UC-only mapping, and
the write performance of GTT maps dropped 10x.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[anholt: cleaned up unused var]
Signed-off-by: Eric Anholt <eric@anholt.net>
Once upon a time, the DRM made the distinction between the drm_map
data structure exchanged with user space and the drm_local_map used
in the kernel.
For some reason, while the BSD port still has that "feature", the
Linux part abused drm_map for kernel internal usage as the local
map only existed as a typedef of the struct drm_map.
This patch fixes it by declaring struct drm_local_map separately
(though its content is currently identical to the userspace variant),
and changing the kernel code to only use that, except when it's a
user<->kernel interface (ie. ioctl).
This allows subsequent changes to the in-kernel format.
I've also replaced the use of drm_local_map_t with struct drm_local_map
in a couple of places. Mostly by accident but they are the same (the
former is a typedef of the latter) and I have some remote plans and a
half finished patch to completely kill the drm_local_map_t typedef,
so I left those bits in.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
This fixes a potential oops at fault time if the object was unreferenced
while the mapping still existed. Now, while the mmap_offset only lives
for the lifetime of the object, the object also stays alive while a vma
exists that needs it.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The name table should only hold a single reference, so avoid leaking
additional references for secondary calls to flink().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Ensure that the object is unreferenced if we fail to allocate during
drm_gem_flink_ioctl().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Remove the member from the hash table before we free the structure!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
The mmap_region() code would temporarily set the VM_ACCOUNT flag for
anonymous shared mappings just to inform shmem_zero_setup() that it
should enable accounting for the resulting shm object. It would then
clear the flag after calling ->mmap (for the /dev/zero case) or doing
shmem_zero_setup() (for the MAP_ANON case).
This not only resulted in vma merge issues, but also made for
unnecessary confusion. Use the already-existing VM_NORESERVE flag for
this instead, and let shmem_{zero|file}_setup() just figure it out from
that.
This also happens to make it obvious that the new DRI2 GEM layer uses a
non-reserving backing store for its object allocation - which is quite
possibly not intentional. But since I didn't want to change semantics
in this patch, I left it alone, and just updated the caller to use the
new flag semantics.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add core support for mapping of GEM objects. Drivers should provide a
vm_operations_struct if they want to support page faulting of objects.
The code for handling GEM object offsets was taken from TTM, which was
written by Thomas Hellström.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
GEM allows the creation of persistent buffer objects accessible by the
graphics device through new ioctls for managing execution of commands on the
device. The userland API is almost entirely driver-specific to ensure that
any driver building on this model can easily map the interface to individual
driver requirements.
GEM is used by the 2d driver for managing its internal state allocations and
will be used for pixmap storage to reduce memory consumption and enable
zero-copy GLX_EXT_texture_from_pixmap, and in the 3d driver is used to enable
GL_EXT_framebuffer_object and GL_ARB_pixel_buffer_object.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>