There is no reason to return "int" as this function never fails.
Furthermore, several drivers (ast, sis) already depend on this.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
If idr_alloc() fails, obj->name can end up holding an error value. This
patch also cleans up the duplicated flink processing code.
This regression has been introduced in
commit 2e928815c1
Author: Tejun Heo <tj@kernel.org>
Date: Wed Feb 27 17:04:08 2013 -0800
drm: convert to idr_alloc()
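A minimal sketch of the shape of the fix (locking elided; names are illustrative): assign obj->name only after idr_alloc() succeeds, so an error return never gets stored as the name.

    ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_KERNEL);
    if (ret < 0)
        return ret;             /* don't leave an error value in obj->name */
    obj->name = ret;
    args->name = obj->name;
    return 0;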
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Cc: stable@vger.kernel.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
drm_gem_mmap_obj() has to be called with dev->struct_mutex held, but some
callers do not hold it. This patch adds the mutex lock to the missing
callers and adds an assertion to check that drm_gem_mmap_obj() is called
with the mutex held.
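Roughly the shape of the change; foo_gem_dmabuf_mmap() below is a hypothetical caller shown only to illustrate the locking, and lockdep_assert_held() is one way to express the assertion:

    int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
                         struct vm_area_struct *vma)
    {
        /* callers must hold dev->struct_mutex */
        lockdep_assert_held(&obj->dev->struct_mutex);
        /* ... set up vma flags, vm_ops and vm_private_data as before ... */
        return 0;
    }

    static int foo_gem_dmabuf_mmap(struct drm_gem_object *obj,
                                   struct vm_area_struct *vma)
    {
        int ret;

        mutex_lock(&obj->dev->struct_mutex);    /* previously missing */
        ret = drm_gem_mmap_obj(obj, obj->size, vma);
        mutex_unlock(&obj->dev->struct_mutex);

        return ret;
    }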
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The drm_gem_mmap() function first finds the GEM object to be mapped
based on the fake mmap offset and then maps the object. Split the object
mapping code into a standalone drm_gem_mmap_obj() function that can be
used to implement dma-buf mmap() operations.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Currently we have a problem with this:
1. i915: create gem object
2. i915: export gem object to prime
3. radeon: import gem object
4. close prime fd
5. radeon: unref object
6. i915: unref object
i915 has an imported object reference in its file priv, that isn't
cleaned up properly until fd close. The reference gets added at step 2,
but at step 6 we don't have enough info to clean it up.
The solution is to take a reference on the dma-buf when we export it,
and drop the reference when the gem handle goes away.
So when we export a dma_buf from a gem object, we keep track of it
with the handle and take a reference to the dma_buf. When we close
the handle (i.e. userspace is finished with the buffer), we drop
the reference to the dma_buf, and it gets collected.
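A sketch of the idea; the export_dma_buf field name and exactly where the put happens are illustrative:

    /* export (handle_to_fd path): the handle now pins the dma_buf */
    get_dma_buf(dmabuf);
    obj->export_dma_buf = dmabuf;

    /* handle close (gem close / per-file release path): drop that reference */
    if (obj->export_dma_buf) {
        dma_buf_put(obj->export_dma_buf);
        obj->export_dma_buf = NULL;
    }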
This patch isn't meant to fix any other problem or bikesheds, and it doesn't
fix any races with other scenarios.
v1.1: move export symbol line back up.
v2: okay I had to do a bit more, as the first patch showed a leak
on one of my tests, which I found using the dma-buf debugfs support.
The problem case is exporting a buffer twice with the same handle:
we'd add another export handle for it unnecessarily. We now fail if
we try to export the same object with a different gem handle; however,
I'm not sure if that is a case I want to support, so I've made the
code WARN_ON if we hit something like that.
v2.1: rebase this patch, write better commit msg.
v3: cleanup error handling, track import vs export in linked list,
these two patches were separate previously, but seem to work better
like this.
v4: danvet is correct, this code is no longer useful, since the buffer
better exist, so remove it.
v5: always take a reference to the dma buf object, import or export.
(Imre Deak contributed this originally)
v6: square the circle, remove import vs export tracking now
that there is no difference
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
idr_destroy() can destroy idr by itself and idr_remove_all() is being
deprecated. Drop its usage.
* drm_ctxbitmap_cleanup() was calling idr_remove_all() but forgetting
idr_destroy() thus leaking all buffered free idr_layers. Replace it
with idr_destroy().
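Roughly what the cleanup becomes:

    void drm_ctxbitmap_cleanup(struct drm_device *dev)
    {
        mutex_lock(&dev->struct_mutex);
        /* idr_destroy() frees both the remaining ids and the cached
         * idr_layers that the old idr_remove_all()-only variant leaked */
        idr_destroy(&dev->ctx_idr);
        mutex_unlock(&dev->struct_mutex);
    }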
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David Airlie <airlied@linux.ie>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A long time ago, in v2.4, VM_RESERVED kept the swapout process off the
VMA. It has since lost its original meaning but still has some effects:
 | effect                 | alternative flags
-+------------------------+---------------------------------------------
1| account as reserved_vm | VM_IO
2| skip in core dump      | VM_IO, VM_DONTDUMP
3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
This patch removes the reserved_vm counter from mm_struct. It seems nobody
cares about it; it is not exported to userspace directly, it only reduces
the total_vm shown in proc.
Thus VM_RESERVED can be replaced with VM_IO or the pair VM_DONTEXPAND | VM_DONTDUMP.
remap_pfn_range() and io_remap_pfn_range() set VM_IO | VM_DONTEXPAND | VM_DONTDUMP.
remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.
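A sketch of the mechanical replacement in a driver mmap handler (foo_mmap() is hypothetical; which alternative flags apply depends on the call site, per the table above):

    static int foo_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        /* was: vma->vm_flags |= VM_RESERVED; */
        vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
        /* ... rest of the mapping setup unchanged ... */
        return 0;
    }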
[akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Paris <eparis@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Venkatesh Pallipadi <venki@google.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Convert #include "..." to #include <path/...> in drivers/gpu/.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
In order to support snoopable memory on non-LLC architectures (so that
we can bind vgem objects into the i915 GATT for example), we have to
avoid the prefetcher on the GPU from crossing memory domains and so
prevent allocation of a snoopable PTE immediately following an uncached
PTE. To do that, we need to extend the range allocator with support for
tracking and segregating different node colours.
This will be used by i915 to segregate memory domains within the GTT.
v2: Now with more drm_mm helpers and less driver interference.
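A hedged sketch of what a driver-side colour hook might look like under this scheme (the guard-page policy shown is illustrative, not i915's actual logic):

    static void foo_mm_color_adjust(struct drm_mm_node *node, unsigned long color,
                                    unsigned long *start, unsigned long *end)
    {
        /* widen the hole so nodes of a different colour (cache domain) never
         * end up in the PTE immediately adjacent to this allocation */
        if (node->color != color)
            *start += 4096;
    }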
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
If userspace attempts to import a buffer it exported on the same device,
we need to return the same GEM handle for it, not a new handle pointing
at the same GEM object.
v2: move removals into a single fn, no need to set to NULL. (Chris Wilson)
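A sketch of the import-side fast path; the lookup helper name here is hypothetical:

    /* fd_to_handle: if this file_priv already created a handle for the
     * dma_buf, hand back the same handle instead of minting a new one */
    ret = foo_prime_lookup_handle(&file_priv->prime, dma_buf, &handle);
    if (ret == 0) {
        *out_handle = handle;
        return 0;
    }
    /* otherwise fall through and create a fresh handle for the import */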
Signed-off-by: Dave Airlie <airlied@redhat.com>
Previously these functions would assume that vma->vm_file was the
drm_file. But in some cases the drm driver needs to use something else
for the backing file (such as the tmpfs filp), and then this assumption
is no longer true. vma->vm_private_data, however, is still the GEM
object.
With this change, now the drm_device comes from the GEM object rather
than the drm_file so the driver is more free to play with vma->vm_file.
The scenario where this comes up is for mmap'ing of cached dmabuf's
for non-coherent systems, where the driver needs to use fault handling
and PTE shootdown to simulate coherency. We can't use the vma->vm_file
of the dmabuf, which is using anon_inode's address_space. The most
straightforward thing to do is to use the GEM object's obj->filp for
vma->vm_file in all cases, for which we need this patch.
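Roughly what the GEM vm open hook becomes once the device is derived from the object:

    void drm_gem_vm_open(struct vm_area_struct *vma)
    {
        struct drm_gem_object *obj = vma->vm_private_data;

        drm_gem_object_reference(obj);

        mutex_lock(&obj->dev->struct_mutex);    /* obj->dev, not vma->vm_file */
        drm_vm_open_locked(obj->dev, vma);
        mutex_unlock(&obj->dev->struct_mutex);
    }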
Signed-off-by: Rob Clark <rob@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The error handling code w.r.t. idr usage looks inconsistent.
In the case of drm_mode_object_get() and drm_ctxbitmap_next() the error
handling is also incomplete.
Unify the code to follow the same pattern always.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This adds the basic drm dma-buf interface layer, called PRIME. This
commit doesn't add any driver support; it is simply an agreed-upon starting
point so we can work towards merging driver support for the next merge window.
Current drivers with work done are nouveau, i915, udl, exynos and omap.
The main APIs exposed to userspace allow translating a 32-bit object handle
to a file descriptor, and a file descriptor to a 32-bit object handle.
The flags value is currently limited to O_CLOEXEC.
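A minimal userspace sketch of the handle-to-fd direction (the header path may differ between the kernel uapi and libdrm copies; error handling trimmed):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>

    int gem_handle_to_prime_fd(int drm_fd, uint32_t handle, int *prime_fd)
    {
        struct drm_prime_handle args = {
            .handle = handle,
            .flags  = DRM_CLOEXEC,      /* only O_CLOEXEC is accepted */
        };

        if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args))
            return -1;
        *prime_fd = args.fd;
        return 0;
    }
    /* the reverse direction uses DRM_IOCTL_PRIME_FD_TO_HANDLE */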
Acknowledgements:
Daniel Vetter: lots of review
Rob Clark: cleaned up lots of the internals and did lifetime review.
v2: rename some functions after Chris preferred a green shed
fix IS_ERR_OR_NULL -> IS_ERR
v3: Fix Ville pointed out using buffer + kmalloc
v4: add locking as per ickle review
v5: allow re-exporting the original dma-buf (Daniel)
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Rob Clark <rob.clark@linaro.org>
Reviewed-by: Sumit Semwal <sumit.semwal@linaro.org>
Reviewed-by: Inki Dae <inki.dae@samsung.com>
Acked-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Two parts to this: one is a simple unplug from sysfs for the device node.
The second adds an unplugged state: if we have device opens, we
just set the unplugged state and return; if we have no device
opens, we drop the drm device.
If after a lastclose we discover we are unplugged we then
drop the drm device.
v2: use an atomic for unplugged and wrap it for users,
add checks on open + mmap + ioctl entry points.
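A sketch of the wrapped atomic and the guard at the entry points:

    static inline int drm_device_is_unplugged(struct drm_device *dev)
    {
        return atomic_read(&dev->unplugged);
    }

    /* at the top of the open/mmap/ioctl entry points: */
    if (drm_device_is_unplugged(dev))
        return -ENODEV;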
Signed-off-by: Dave Airlie <airlied@redhat.com>
Talking to Al Viro on irc, we can see no possible reason for doing
this; the upper mmap code does it. The code has been there since the
first import into the drm tree that I can find.
Al tracked this down as a requirement pre 2.3.51; it hasn't been needed since.
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
In particular, I found I was hitting the max-file limit in the VFS,
and the EFILE was being magically transformed into ENOMEM. Confusion
reigns.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
These small changes should allow GEM to be used with non-shmem objects as
well as shmem objects. In the GMA500 case it allows the base framebuffer to
appear as a GEM object and thus acquire a handle and work with KMS.
For i915 it ought to be trivial to get back the wasted memory by putting the
system fb back into stolen RAM, and in general I can imagine it allowing the
use of GEM, and thus KMS, with all the older cards that have their framebuffer
firmly placed in video RAM.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Tested-by: Rob Clark <rob@ti.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Soon tmpfs will stop supporting ->readpage and read_cache_page_gfp(): once
"tmpfs: add shmem_read_mapping_page_gfp" has been applied, this patch can
be applied to ease the transition.
Make i915_gem_object_get_pages_gtt() use shmem_read_mapping_page_gfp() in
the one place it's needed; elsewhere use shmem_read_mapping_page(), with
the mapping's gfp_mask properly initialized.
Forget about __GFP_COLD: since tmpfs initializes its pages with memset,
asking for a cold page is counter-productive.
Include linux/shmem_fs.h also in drm_gem.c: with shmem_file_setup() now
declared there too, we shall remove the prototype from linux/mm.h later.
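A sketch of the transition on the GEM/i915 side (gfp values and index names are illustrative):

    struct address_space *mapping = obj->filp->f_mapping;
    struct page *page;

    /* the mapping's gfp_mask is initialized once, at object creation time */
    mapping_set_gfp_mask(mapping, GFP_HIGHUSER);

    /* ordinary callers: */
    page = shmem_read_mapping_page(mapping, page_index);

    /* the one gtt path that needs special flags: */
    page = shmem_read_mapping_page_gfp(mapping, page_index, gfp);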
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Keith Packard <keithp@keithp.com>
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Nouveau is going to use these hooks to map/unmap objects from a client's
private GPU address space.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Using an order 19 drm_ht for the mmap offsets is a little obscene. That
means that with a fully populated GTT, with every single object mmapped at
least once in its lifetime, there will be exactly one object in each
bucket.
Typically systems only have at most a few thousand objects, though you
may see a KDE desktop hit 50000. And most of those should never be
mapped... On my systems, just using an order 10 ht would still have an
average occupancy less than 1, so apply a small safety factor and
use an order 12 ht, like the other mmap offset ht.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is just an idea that might or might not be a good idea;
it basically adds two ioctls, to create a dumb buffer and to map a dumb buffer
suitable for scanout. The handle can be passed to the KMS ioctls to create
a framebuffer.
It looks to me like it would be useful in the following cases:
a) in development drivers - we can always provide a shadowfb fallback.
b) libkms users - we can clean up libkms a lot and avoid linking
to libdrm_*.
c) plymouth via libkms is a lot easier.
Userspace bits would be just calls + mmaps. We could probably
mark these handles somehow as not being suitable for acceleration
so as to stop people who are dumber than dumb.
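A minimal userspace sketch of the two new ioctls plus the mmap (error handling trimmed; header paths may vary):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <drm/drm.h>
    #include <drm/drm_mode.h>

    void *map_dumb_buffer(int fd, uint32_t w, uint32_t h,
                          uint32_t *handle, uint32_t *pitch)
    {
        struct drm_mode_create_dumb create = { .width = w, .height = h, .bpp = 32 };
        struct drm_mode_map_dumb map = { 0 };

        if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create))
            return MAP_FAILED;
        *handle = create.handle;        /* pass this handle to the KMS ADDFB ioctl */
        *pitch  = create.pitch;

        map.handle = create.handle;
        if (ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map))
            return MAP_FAILED;

        return mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    fd, map.offset);
    }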
Signed-off-by: Dave Airlie <airlied@redhat.com>
Only drm/i915 does the bookkeeping that makes the information useful,
and the information maintained is driver specific, so move it out of the
core and into its single user.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Airlie <airlied@redhat.com>
In order to be fully threadsafe we need to check that the drm_gem_object
refcount is still 0 after acquiring the mutex in order to call the free
function. Otherwise, we may encounter scenarios like:
Thread A:                      Thread B:
drm_gem_close
unreference_unlocked
kref_put                       mutex_lock
...                            i915_gem_evict
...                            kref_get -> BUG
...                            i915_gem_unbind
...                            kref_put
...                            i915_gem_object_free
...                            mutex_unlock
mutex_lock
i915_gem_object_free -> BUG
i915_gem_object_unbind
kfree
mutex_unlock
Note that no driver is currently using the free_unlocked vfunc and it is
scheduled for removal; this hastens that process.
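The later generic kref_put_mutex() helper captures the same pattern; a sketch of the shape of the fix:

    void drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
    {
        struct drm_device *dev = obj->dev;

        /* only drop the final reference with struct_mutex held, so a
         * concurrent kref_get() under the mutex can never race the free */
        if (kref_put_mutex(&obj->refcount, drm_gem_object_free, &dev->struct_mutex))
            mutex_unlock(&dev->struct_mutex);
    }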
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30454
Reported-and-Tested-by: Magnus Kessler <Magnus.Kessler@gmx.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
There were lots of places being inconsistent since handle count
looked like a kref but it really wasn't.
Fix this by just making handle count an atomic on the object,
and have it increase the normal object kref.
Now i915/radeon/nouveau drivers can drop the normal reference on
userspace object creation, and have the handle hold it.
This patch fixes a memory leak or corruption on unload, because
the driver had no way of knowing if a handle had been actually
added for this object, and the fbcon object needed to know this
to clean itself up properly.
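A sketch of the resulting helpers (the handle_free step stands in for dropping the flink name and similar per-handle state):

    static void drm_gem_object_handle_reference(struct drm_gem_object *obj)
    {
        drm_gem_object_reference(obj);  /* every handle now holds a real object ref */
        atomic_inc(&obj->handle_count);
    }

    static void drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
    {
        if (atomic_dec_and_test(&obj->handle_count))
            drm_gem_object_handle_free(obj);    /* e.g. drop the flink name */
        drm_gem_object_unreference_unlocked(obj);
    }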
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Hook the GEM vm open/close ops into the generic drm vm open/close so
that the private vma entries are created and destroy appropriately.
Fixes the leak of the drm_vma_entries during the lifetime of the filp.
Reported-by: Matt Mackall <mpm@selenic.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is consistent with trying to access a filename that does not exist
within a directory, which is a good analogy here. The main reason for the
change is that it is easy to confuse the error code EBADF as
performing an ioctl on an invalid file descriptor (rather than an
unknown object).
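In practice this is the lookup failure path in the GEM ioctls, roughly:

    obj = drm_gem_object_lookup(dev, file_priv, args->handle);
    if (obj == NULL)
        return -ENOENT;     /* was -EBADF: "no such object", not "bad fd" */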
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
/* A typical clean-up sequence for objects stored in an idr tree, will
* use idr_for_each() to free all objects, if necessary, then
* idr_remove_all() to remove all ids, and idr_destroy() to free
* up the cached idr_layers.
*/
We were missing the vital idr_remove_all() step and so were leaking
the used layers for every dri client:
unreferenced object 0xf32133c0 (size 148):
comm "plymouthd", pid 131, jiffies 4294678490 (age 2308.030s)
hex dump (first 32 bytes):
04 00 00 00 00 00 00 00 00 00 00 00 00 40 19 f3 .............@..
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<c04e5657>] create_object+0x124/0x1f1
[<c07cf100>] kmemleak_alloc+0x4c/0x90
[<c04db6a9>] kmem_cache_alloc+0xee/0x13c
[<c05c3d25>] idr_pre_get+0x24/0x61
[<f8315c9c>] drm_gem_handle_create+0x27/0x7f [drm]
[<f89925b2>] i915_gem_create_ioctl+0x4f/0x71 [i915]
[<f83148ac>] drm_ioctl+0x272/0x356 [drm]
[<c04f27c4>] vfs_ioctl+0x33/0x91
[<c04f31cf>] do_vfs_ioctl+0x46b/0x496
[<c04f3240>] sys_ioctl+0x46/0x66
[<c040325f>] sysenter_do_call+0x12/0x38
[<ffffffff>] 0xffffffff
Fixes https://bugzilla.kernel.org/show_bug.cgi?id=15803
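A sketch of the per-client teardown following the documented sequence quoted above:

    void drm_gem_release(struct drm_device *dev, struct drm_file *file_private)
    {
        idr_for_each(&file_private->object_idr,
                     &drm_gem_object_release_handle, file_private);
        idr_remove_all(&file_private->object_idr);      /* the missing step */
        idr_destroy(&file_private->object_idr);
    }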
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The pgoff option in mmap() is defined as an unsigned long
so the offset generated by DRM needs to fit into
BITS_PER_LONG for the CPU in question.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
When drivers embed the core gem object into their own structures,
they'll have to do this. Temporarily this results in an ugly
kfree(gem_obj);
in every gem driver.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This function can be used by drivers who allocate the drm gem object
on their own. No functional change in here, just preparation.
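A sketch of a driver embedding the GEM object and initializing it itself (struct foo_bo is hypothetical):

    struct foo_bo {
        struct drm_gem_object base;
        /* driver-private state follows */
    };

    static struct foo_bo *foo_bo_create(struct drm_device *dev, size_t size)
    {
        struct foo_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);

        if (!bo)
            return NULL;
        if (drm_gem_object_init(dev, &bo->base, size)) {
            kfree(bo);
            return NULL;
        }
        return bo;
    }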
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Mostly obvious simplifications.
The i915 pread/pwrite ioctls, intel_overlay_put_image and
nouveau_gem_new were incorrectly using the locked versions
without locking: this is also fixed in this patch.
Signed-off-by: Luca Barbieri <luca@luca-barbieri.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This patch introduces the drm_gem_object_unreference_unlocked
and drm_gem_object_handle_unreference_unlocked functions that
do not require holding struct_mutex.
drm_gem_object_unreference_unlocked calls the new
->gem_free_object_unlocked entry point if available, and
otherwise takes struct_mutex and calls ->gem_free_object.
Signed-off-by: Luca Barbieri <luca@luca-barbieri.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Having missed the ENOMEM return via i915_gem_fault(), there are probably
other paths that I also missed. By not enabling NORETRY by default these
paths can run the shrinker and take memory from the system (but not from
our own inactive lists because our shrinker can not run whilst we hold
the struct mutex) and this may allow the system to survive a little longer
whilst our drivers consume all available memory.
References:
OOM killer unexpectedly called with kernel 2.6.32
http://bugzilla.kernel.org/show_bug.cgi?id=14933
v2: Pass gfp into page mapping.
v3: Use new read_cache_page_gfp() instead of open-coding.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Eric Anholt <eric@anholt.net>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Some architectures compute ->vm_page_prot depending on ->vm_flags, so we
need to update the protections after adjusting the flags.
AFAIK this only affects running X under Xen; without this patch you get
lots of coloured blobs on the screen, or maybe a complete lockup. Or
anything really.
But that still depends on lots of out-of-tree stuff, so I don't think
there are any consequences for anyone else. But it is wrong in principle.
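The shape of the fix (the flag set shown is illustrative):

    vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND;
    /* some architectures derive the protection bits from vm_flags,
     * so recompute them after the flags change */
    vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);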
Reported-by: Jan Beulich <JBeulich@novell.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Due to the necessity of having to take the struct_mutex, the i915
shrinker can not free the inactive lists if we fail to allocate memory
whilst processing a batch buffer, triggering an OOM and an ENOMEM that
is reported back to userspace. In order to fare better under such
circumstances we need to manually retry a failed allocation after
evicting inactive buffers.
To do so involves 3 steps:
1. Marking the backing shm pages as NORETRY.
2. Updating the get_pages() callers to evict something on failure and then
retry.
3. Revamping the evict something logic to be smarter about the required
buffer size and prefer to use volatile or clean inactive pages.
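Step 1 might look roughly like this (the exact gfp flags are illustrative):

    struct address_space *mapping = obj->filp->f_mapping;

    /* a failed allocation now returns to the caller instead of invoking
     * the OOM killer, leaving us free to evict something and retry */
    mapping_set_gfp_mask(mapping,
                         mapping_gfp_mask(mapping) | __GFP_NORETRY | __GFP_NOWARN);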
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Several functions in the GEM kernel API used int as the handle type, but
the user API has it as __u32, which is also the intended type.
Replace int with u32.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Check kzalloc retval against NULL in drm_gem_object_alloc and bail out
appropriately.
While at it merge the fail paths and jump to them by gotos at the end
of the function.
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
It hasn't been used in ages, and having the user tell you how much
memory is being freed at free time is a recipe for disaster even if it
was ever used.
Signed-off-by: Eric Anholt <eric@anholt.net>
Calls to kcalloc() for a single element can be simplified to calls to
kzalloc().
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Otherwise, the PAGE_CACHE_WC would end up getting us a UC-only mapping, and
the write performance of GTT maps dropped 10x.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[anholt: cleaned up unused var]
Signed-off-by: Eric Anholt <eric@anholt.net>
Once upon a time, the DRM made the distinction between the drm_map
data structure exchanged with user space and the drm_local_map used
in the kernel.
For some reason, while the BSD port still has that "feature", the
linux part abused drm_map for kernel internal usage, as the local
map only existed as a typedef of the struct drm_map.
This patch fixes it by declaring struct drm_local_map separately
(though its content is currently identical to the userspace variant),
and changing the kernel code to only use that, except when it's a
user<->kernel interface (ie. ioctl).
This allows subsequent changes to the in-kernel format.
I've also replaced the use of drm_local_map_t with struct drm_local_map
in a couple of places. Mostly by accident, but they are the same (the
former is a typedef of the latter) and I have some remote plans and a
half-finished patch to completely kill the drm_local_map_t typedef,
so I left those bits in.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
This fixes a potential fault at fault time if the object was unreferenced
while the mapping still existed. Now, while the mmap_offset only lives
for the lifetime of the object, the object also stays alive while a vma
exists that needs it.
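Roughly what the close side of the GEM vm_operations becomes; the matching vm_open takes the reference, so the object outlives any vma that still maps it:

    void drm_gem_vm_close(struct vm_area_struct *vma)
    {
        struct drm_gem_object *obj = vma->vm_private_data;
        struct drm_device *dev = obj->dev;

        mutex_lock(&dev->struct_mutex);
        drm_gem_object_unreference(obj);
        mutex_unlock(&dev->struct_mutex);
    }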
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The name table should only hold a single reference, so avoid leaking
additional references for secondary calls to flink().
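A sketch of the fixed flink path (locking elided; new_name stands for the freshly allocated idr id):

    if (obj->name) {
        args->name = obj->name;         /* already named: reuse it, no extra ref */
    } else {
        obj->name = new_name;
        args->name = obj->name;
        drm_gem_object_reference(obj);  /* the name table holds exactly one ref */
    }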
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>