Commit Graph

11 Commits

Author SHA1 Message Date
Imre Deak e227330223 drm/i915: avoid leaking DMA mappings
We have 3 types of DMA mappings for GEM objects:
1. physically contiguous for stolen and for objects needing contiguous
   memory
2. DMA-buf mappings imported via a DMA-buf attach operation
3. SG DMA mappings for shmem backed and userptr objects

For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
corresponding backing pages and so in practice we create/release the
mapping in the object's get_pages/put_pages callback.

For 3. the lifetime of the mapping matches that of any existing GPU binding
of the object, so we'll create the mapping when the object is bound to
the first vma and release the mapping when the object is unbound from its
last vma.

Since the object can be bound to multiple vmas, we can end up creating a
new DMA mapping in the 3. case even if the object already had one. This
is not allowed by the DMA API and can lead to leaked mapping data and
IOMMU memory space starvation in certain cases. For example HW IOMMU
drivers (intel_iommu) allocate a new range from their memory space
whenever a mapping is created, silently overriding a pre-existing
mapping.

Fix this by moving the creation/removal of DMA mappings to the object's
get_pages/put_pages callbacks. These callbacks already check for and do
an early return in case of any nested calls. This way objects in the 3.
case are also handled more consistently with the other object types.
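
In sketch form (the example_* names are made up and stand in for the real
i915 helpers), the mapping is created in the get_pages path behind the
existing nested-call guard and torn down in put_pages, so repeated VMA
binds can never create a second mapping:

    /* Illustrative sketch only -- not the actual i915 implementation. */
    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/scatterlist.h>

    struct example_object {          /* hypothetical stand-in for the GEM object */
        struct device *dev;
        struct sg_table *pages;      /* NULL until get_pages has run */
    };

    /* hypothetical helpers that pin/unpin the shmem or userptr backing pages */
    int example_pin_backing_pages(struct example_object *obj);  /* fills obj->pages */
    void example_unpin_backing_pages(struct example_object *obj);

    static int example_get_pages(struct example_object *obj)
    {
        int ret;

        if (obj->pages)     /* nested call: pages and DMA mapping already exist */
            return 0;

        ret = example_pin_backing_pages(obj);
        if (ret)
            return ret;

        /* Create the DMA mapping together with the backing pages... */
        if (!dma_map_sg(obj->dev, obj->pages->sgl, obj->pages->orig_nents,
                        DMA_BIDIRECTIONAL)) {
            example_unpin_backing_pages(obj);
            obj->pages = NULL;
            return -ENOMEM;
        }
        return 0;
    }

    static void example_put_pages(struct example_object *obj)
    {
        /* ...and tear it down together with them, not on per-VMA unbind. */
        dma_unmap_sg(obj->dev, obj->pages->sgl, obj->pages->orig_nents,
                     DMA_BIDIRECTIONAL);
        example_unpin_backing_pages(obj);
        obj->pages = NULL;
    }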

I noticed this issue by enabling DMA debugging, which got disabled after
a while due to its internal mapping tables getting full. It also reported
errors in connection to random other drivers that did a DMA mapping for
an address that was previously mapped by i915 but was never released.
Besides these diagnostic messages and the memory space starvation
problem for IOMMUs, I'm not aware of this causing a real issue.

The fix is based on a patch from Chris.

v2:
- move the DMA mapping create/remove calls to the get_pages/put_pages
  callbacks instead of adding new callbacks for these (Chris)
v3:
- also fix the get_page cache logic on the userptr async path (Chris)

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-13 22:42:40 +02:00
Chris Wilson 281400ff0f drm/i915: Use uninterruptible mutex_lock for userptr bo creation
Mika encountered one pathological scenario under X where acquiring all
the mm locks (required to insert a mmu notifier) was very slow, so slow
that by the time we tried to lock the struct_mutex with the usual call
to i915_mutex_lock_interruptible(), X's signal timer had fired, causing
us to restart the ioctl (and so we looped indefinitely).

While I suspect this is the result of another bug (something leaking mm
perhaps?) we can forgo the error checking and interruptible nature of the
lock here so we only have to pay the expense once and get on with it.
This does expose the userptr creation routine to a driver livelock
though by not being interruptible.
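
In sketch form (the example_* names are made up; only the lock call mirrors
the change), the creation path now takes struct_mutex unconditionally
instead of via the interruptible helper:

    #include <linux/mutex.h>

    struct example_dev {
        struct mutex struct_mutex;
    };

    int example_register_notifier(struct example_dev *dev);    /* hypothetical */

    static int example_userptr_setup(struct example_dev *dev)
    {
        int ret;

        /* Before: ret = i915_mutex_lock_interruptible(dev); a pending signal
         * (e.g. X's timer) turned this into -ERESTARTSYS and replayed the
         * whole ioctl, repeating the expensive mm-lock phase each time. */
        mutex_lock(&dev->struct_mutex);     /* uninterruptible: pay the cost once */
        ret = example_register_notifier(dev);
        mutex_unlock(&dev->struct_mutex);

        return ret;
    }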

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Init ret to avoid issues reported by PRTS.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-05-20 11:26:03 +02:00
Maarten Lankhorst b588c92b66 drm/i915: get rid of -Iinclude/drm
This results in a warning when building out of tree:
"cc1: warning: include/drm: No such file or directory [enabled by default]"

Most code already uses #include <drm/foo.h> correctly, so fix the
instances that don't.
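
A typical instance of the fix looks like this (drm_crtc.h is just an
illustrative header name):

    /* Before (resolved only because of the extra -Iinclude/drm flag):
     *
     *     #include "drm_crtc.h"
     *
     * After (standard form; works for in-tree and out-of-tree builds alike): */
    #include <drm/drm_crtc.h>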

Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-05-13 11:28:21 +02:00
Michał Winiarski 460822b0b1 drm/i915: Prevent use-after-free in invalidate_range_start callback
It's possible for the invalidate_range_start mmu notifier callback to race
against userptr object release. If the GEM object was released prior to
obtaining the spinlock in invalidate_range_start, we hit a NULL
pointer dereference.
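
One way to close such a race, in sketch form (the example_* names are made
up and the locking is simplified): the release path detaches the object
under the same spinlock the notifier callback takes, so the callback can
skip entries whose object is already gone:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct example_notifier {
        spinlock_t lock;
        struct list_head objects;
    };

    struct example_mmu_object {
        struct example_notifier *mn;
        struct list_head link;
        void *obj;              /* NULLed once the GEM object is released */
    };

    void example_cancel_gpu_access(void *obj, unsigned long start,
                                   unsigned long end);      /* hypothetical */

    static void example_userptr_release(struct example_mmu_object *mo)
    {
        spin_lock(&mo->mn->lock);
        mo->obj = NULL;         /* detach before the object is freed */
        spin_unlock(&mo->mn->lock);
    }

    static void example_invalidate_range_start(struct example_notifier *mn,
                                               unsigned long start,
                                               unsigned long end)
    {
        struct example_mmu_object *mo;

        spin_lock(&mn->lock);
        list_for_each_entry(mo, &mn->objects, link) {
            if (!mo->obj)       /* raced with release: nothing to invalidate */
                continue;
            example_cancel_gpu_access(mo->obj, start, end);
        }
        spin_unlock(&mn->lock);
    }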

Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close
Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close-overlap
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
[Jani: added code comment suggested by Chris]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2015-02-05 16:31:30 +02:00
Tvrtko Ursulin c479f4383e drm/i915: Do not leak pages when freeing userptr objects
sg_alloc_table_from_pages() can build us a table with coalesced ranges which
means we need to iterate over pages and not sg table entries when releasing
page references.
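
The distinction in sketch form (the example_ helper is made up; the page
iterator is the standard scatterlist API): walking sg entries would drop
one reference per coalesced range, whereas the page iterator visits every
backing page:

    #include <linux/mm.h>
    #include <linux/scatterlist.h>

    static void example_release_pages(struct sg_table *st)
    {
        struct sg_page_iter sg_iter;

        /* for_each_sg()/sg_page() would only see the first page of each
         * coalesced entry; for_each_sg_page() visits every backing page. */
        for_each_sg_page(st, &sg_iter, st->nents, 0)
            put_page(sg_page_iter_page(&sg_iter));

        sg_free_table(st);
    }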

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Barbalho, Rafael" <rafael.barbalho@intel.com>
Tested-by: Rafael Barbalho <rafael.barbalho@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
[danvet: Remove unused local variable sg.]
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2014-09-29 15:31:01 +02:00
Chris Wilson e9681366ea drm/i915: Do not store the error pointer for a failed userptr registration
If we fail to create our mmu notifier, we report the error back and
currently store the error inside the i915_mm_struct. This not only causes
subsequent registrations of the same mm to fail (an issue if the first
was interrupted by a signal and needed to be restarted) but also causes
us to eventually try and free the error pointer.
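
In sketch form (the example_* names are made up, not the exact i915 fix):
on failure the error is reported to the caller but never cached, so a
restarted ioctl can retry and the free path only ever sees a valid pointer
or NULL:

    #include <linux/err.h>

    struct example_notifier;

    struct example_mm {
        struct example_notifier *mn;    /* valid pointer or NULL, never ERR_PTR */
    };

    struct example_notifier *example_create_notifier(struct example_mm *emm); /* hypothetical */
    void example_destroy_notifier(struct example_notifier *mn);               /* hypothetical */

    static struct example_notifier *example_get_notifier(struct example_mm *emm)
    {
        struct example_notifier *mn;

        if (emm->mn)
            return emm->mn;

        mn = example_create_notifier(emm);
        if (IS_ERR(mn))
            return mn;          /* report the error, but do NOT store it */

        emm->mn = mn;
        return mn;
    }

    static void example_free_mm(struct example_mm *emm)
    {
        if (emm->mn)            /* never an ERR_PTR, so safe to tear down */
            example_destroy_notifier(emm->mn);
    }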

[   73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c
[   73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[   73.420065] PGD 8650c067 PUD 870bb067 PMD 0
[   73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[   73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G        W      3.17.0-rc6+ #1561
[   73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011
[   73.421405] Workqueue: events __i915_mm_struct_free__worker
[   73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000
[   73.422051] RIP: 0010:[<ffffffff8114af33>]  [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[   73.422410] RSP: 0018:ffff88008816bd50  EFLAGS: 00010286
[   73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000
[   73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044
[   73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000
[   73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044
[   73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040
[   73.424682] FS:  0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000
[   73.425099] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0
[   73.426157] Stack:
[   73.426597]  ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008
[   73.427096]  ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28
[   73.427603]  ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af
[   73.428119] Call Trace:
[   73.428606]  [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80
[   73.429116]  [<ffffffff8106321a>] process_one_work+0x1ba/0x610
[   73.429632]  [<ffffffff810631af>] ? process_one_work+0x14f/0x610
[   73.430153]  [<ffffffff810636db>] worker_thread+0x6b/0x4a0
[   73.430671]  [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10
[   73.431501]  [<ffffffff81063670>] ? process_one_work+0x610/0x610
[   73.432030]  [<ffffffff8106a206>] kthread+0xf6/0x110
[   73.432561]  [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[   73.433100]  [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0
[   73.433644]  [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[   73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65
[   73.435942] RIP  [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[   73.437017]  RSP <ffff88008816bd50>
[   73.437704] CR2: 000000000000004c

Fixes regression from commit ad46cb533d
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Thu Aug 7 14:20:40 2014 +0100

    drm/i915: Prevent recursive deadlock on releasing a busy userptr

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84207
Testcase: igt/gem_render_copy_redux
Testcase: igt/gem_userptr_blits/create-destroy-sync
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jacek Danecki <jacek.danecki@intel.com>
Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Jacek Danecki <jacek.danecki@intel.com>
Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2014-09-29 15:19:59 +02:00
Chris Wilson ad46cb533d drm/i915: Prevent recursive deadlock on releasing a busy userptr
During release of the GEM object we hold the struct_mutex. As the
object may be holding onto the last reference for the task->mm,
calling mmput() may trigger exit_mmap(), which closes the vmas,
which in turn calls drm_gem_vm_close() and attempts to reacquire
the struct_mutex. In order to avoid that recursion, we have
to defer the mmput() until after we drop the struct_mutex,
i.e. we need to schedule a worker to do the clean up. A further issue
spotted by Tvrtko was caused when we took a GTT mmapping of a userptr
buffer object. In that case, we would never call mmput as the object
would be cyclically referenced by the GTT mmapping and not freed upon
process exit - keeping the entire process mm alive after the process
task was reaped. The fix employed is to replace the mm_users/mmput()
reference handling with mm_count/mmdrop() for the shared i915_mm_struct.
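
A sketch of the two ingredients (the example_* names are made up and the
locking is simplified): hold a long-term mm_count reference dropped with
mmdrop(), and push the final teardown to a workqueue so it never runs with
struct_mutex held:

    #include <linux/kernel.h>
    #include <linux/sched.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct example_mm_struct {
        struct mm_struct *mm;
        struct work_struct work;
    };

    static void example_mm_struct_get(struct example_mm_struct *emm)
    {
        /* Pin the mm_struct itself (mm_count) rather than the address space
         * (mm_users), so a long-lived GTT mmap cannot keep the whole process
         * mm alive after the task has been reaped. */
        atomic_inc(&emm->mm->mm_count);
    }

    static void example_mm_struct_free__worker(struct work_struct *work)
    {
        struct example_mm_struct *emm =
            container_of(work, struct example_mm_struct, work);

        /* Runs from a workqueue, outside struct_mutex, so the teardown can
         * take whatever locks it needs (the original recursion came from
         * mmput() -> exit_mmap() -> drm_gem_vm_close() -> struct_mutex). */
        mmdrop(emm->mm);
        kfree(emm);
    }

    static void example_mm_struct_put(struct example_mm_struct *emm)
    {
        /* Called with struct_mutex held: defer the final drop to a worker. */
        INIT_WORK(&emm->work, example_mm_struct_free__worker);
        schedule_work(&emm->work);
    }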

   INFO: task test_surfaces:1632 blocked for more than 120 seconds.
         Tainted: GF          O 3.14.5+ #1
   "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
   test_surfaces   D 0000000000000000     0  1632   1590 0x00000082
    ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010
    0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010
    ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824
   Call Trace:
    [<ffffffff81582499>] schedule+0x29/0x70
    [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10
    [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220
    [<ffffffff81583c53>] mutex_lock+0x23/0x40
    [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]
    [<ffffffff8115a483>] remove_vma+0x33/0x70
    [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170
    [<ffffffff8104d6eb>] mmput+0x6b/0x100
    [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]
    [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]
    [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]
    [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]
    [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]
    [<ffffffff8127ffeb>] idr_for_each+0xab/0x100
    [<ffffffffa005cc70>] ?  drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]
    [<ffffffff81583c46>] ? mutex_lock+0x16/0x40
    [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]
    [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]
    [<ffffffff8118d482>] __fput+0xb2/0x260
    [<ffffffff8118d6de>] ____fput+0xe/0x10
    [<ffffffff8106f27f>] task_work_run+0x8f/0xf0
    [<ffffffff81052228>] do_exit+0x1a8/0x480
    [<ffffffff81052551>] do_group_exit+0x51/0xc0
    [<ffffffff810525d7>] SyS_exit_group+0x17/0x20
    [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b

v2: Incorporate feedback from Tvrtko and remove the unnecessary mm
referencing when creating the i915_mm_struct and improve some of the
function names and comments.

Reported-by: Jacek Danecki <jacek.danecki@intel.com>
Test-case: igt/gem_userptr_blits/process-exit*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Jacek Danecki <jacek.danecki@intel.com>
Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-09-08 08:38:49 +03:00
Chris Wilson 487777673e drm/i915/userptr: Keep spin_lock/unlock in the same block
Move the code around in order to acquire and release the spinlock in the
same function and in the same block. This keeps static analysers happy
and the reader sane.
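
The pattern in sketch form (made-up example_ types, not the driver code):

    #include <linux/spinlock.h>

    struct example_ctx {
        spinlock_t lock;
        int value;
    };

    /* Before: the lock is taken here but must be released by the caller,
     * which is easy to get wrong and opaque to static analysers. */
    static int example_peek_locked(struct example_ctx *c)
    {
        spin_lock(&c->lock);
        return c->value;        /* caller is expected to spin_unlock() */
    }

    /* After: acquire and release in the same function and the same block. */
    static int example_peek(struct example_ctx *c)
    {
        int v;

        spin_lock(&c->lock);
        v = c->value;
        spin_unlock(&c->lock);

        return v;
    }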

Suggested-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-07-25 09:39:03 +02:00
Chris Wilson ec8b0dd51c drm/i915: Allow overlapping userptr objects
Whilst I strongly advise against doing so because of the implicit coherency
issues between the multiple buffer objects accessing the same backing
store, it nevertheless is a valid use case, akin to mmapping the same
file multiple times.

The reason why we forbade it earlier was that our use of the interval
tree for fast invalidation upon vma changes excluded overlapping
objects. So in the case where the user wishes to create such pairs of
overlapping objects, we degrade the range invalidation to walking the
linear list of objects associated with the mm.

A situation where overlapping objects could arise is the lax implementation
of MIT-SHM Pixmaps in the xserver. A second situation is where the user
wishes to have different access modes to a region of memory (e.g. access
through a read-only userptr buffer and through a normal userptr buffer).
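
A sketch of the degraded path (the example_ names are made up): once the mm
has any pair of overlapping objects, invalidation falls back from the
interval tree to a plain list walk with an explicit range check per object:

    #include <linux/list.h>
    #include <linux/types.h>

    struct example_mmu_object {
        struct list_head link;
        unsigned long start, end;   /* [start, end) in the user address space */
    };

    void example_cancel_gpu_access(struct example_mmu_object *mo); /* hypothetical */

    static void example_invalidate_range(struct list_head *objects,
                                         bool has_overlapping_objects,
                                         unsigned long start, unsigned long end)
    {
        struct example_mmu_object *mo;

        if (!has_overlapping_objects) {
            /* fast path: interval-tree lookup (omitted in this sketch) */
            return;
        }

        /* Overlapping objects defeat the interval tree, so degrade to a walk
         * of the whole list with an explicit range check per object. */
        list_for_each_entry(mo, objects, link)
            if (mo->start < end && mo->end > start)
                example_cancel_gpu_access(mo);
    }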

v2: Compile for mmu-notifiers after tweaking
v3: Rename is_linear/has_linear

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Li, Victor Y" <victor.y.li@intel.com>
Cc: "Kelley, Sean V" <sean.v.kelley@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Akash Goel <akash.goel@intel.com>
Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-07-24 11:00:00 +02:00
Chris Wilson 6c308fecb4 drm/i915: Initialise userptr mmu_notifier serial to 1
During the range invalidate, we walk the list of buffers associated with
the mmu_notifier and find the ones that overlap the range. An
optimisation is made to speed up the iteration by assuming the previous
iter is still valid whilst the tree is unmodified. This exposes a bug
when a range invalidate is triggered after we have just created the
mmu_notifier, but before attaching any buffers. In that case, we presume
we have an unmodified list and start walking from the last iter which is
NULL. Oops.

The easiest fix is then to initialise the serial of the tree to 1.
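
In sketch form (the example_ names are made up), the cached-iterator
optimisation and the non-zero starting serial look like this:

    #include <linux/slab.h>

    struct example_notifier {
        unsigned long serial;       /* bumped whenever the object tree changes */
        unsigned long last_serial;  /* serial at which last_iter was cached */
        void *last_iter;            /* cached walk position, NULL initially */
    };

    void *example_tree_first(struct example_notifier *mn);     /* hypothetical */

    static void *example_walk_start(struct example_notifier *mn)
    {
        /* Optimisation: if the tree is unmodified since the previous walk,
         * resume from the cached iterator.  With serial starting at 0 this
         * also matched a freshly created, never-walked notifier and handed
         * back a NULL iterator, the Oops described above. */
        if (mn->last_serial == mn->serial)
            return mn->last_iter;

        mn->last_serial = mn->serial;
        mn->last_iter = example_tree_first(mn);
        return mn->last_iter;
    }

    static struct example_notifier *example_notifier_create(void)
    {
        struct example_notifier *mn = kzalloc(sizeof(*mn), GFP_KERNEL);

        if (mn)
            mn->serial = 1;     /* differs from the zeroed last_serial, so the
                                 * first walk never trusts the NULL iterator */
        return mn;
    }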

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Testcase: igt/gem_userptr_blits/stress-mm
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-07-23 07:05:29 +02:00
Chris Wilson 5cc9ed4b9a drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl
By exporting the ability to map user addresses and inserting PTEs
representing their backing pages into the GTT, we can exploit UMA in order
to utilize normal application data as a texture source or even as a
render target (depending upon the capabilities of the chipset). This has
a number of uses, with zero-copy downloads to the GPU and efficient
readback making the intermixed streaming of CPU and GPU operations
fairly efficient. This ability has many widespread implications, from
faster rendering of client-side software rasterisers (chromium) and
mitigation of stalls due to readback (firefox), to faster pipelining
of texture data (such as pixel buffer objects in GL or data blobs in CL).
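
From userspace the result is a single ioctl that wraps an existing
page-aligned allocation in a GEM handle. A minimal usage sketch (the helper
name is made up; the struct and ioctl number come from the drm/i915_drm.h
uapi header; error handling trimmed):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    static uint32_t example_wrap_malloc_in_bo(int drm_fd, void *ptr, size_t size)
    {
        struct drm_i915_gem_userptr arg;

        /* ptr and size must be page aligned; any offset-in-page handling is
         * left entirely to userspace (v9 of the series). */
        memset(&arg, 0, sizeof(arg));
        arg.user_ptr = (uintptr_t)ptr;
        arg.user_size = size;
        arg.flags = 0;  /* I915_USERPTR_READ_ONLY is reserved, not yet supported */

        if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_USERPTR, &arg))
            return 0;

        return arg.handle;  /* a normal GEM handle backed by the malloc'd pages */
    }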

v2: Compile with CONFIG_MMU_NOTIFIER
v3: We can sleep while performing invalidate-range, which we can utilise
to drop our page references prior to the kernel manipulating the vma
(for either discard or cloning) and so protect normal users.
v4: Only run the invalidate notifier if the range intercepts the bo.
v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers
v6: Recheck after reacquiring the mutex for a lost mmu.
v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary.
v8: Fix rebasing error after forward porting the back port.
v9: Limit the userptr to page aligned entries. We now expect userspace
    to handle all the offset-in-page adjustments itself.
v10: Prevent vma from being copied across fork to avoid issues with cow.
v11: Drop vma behaviour changes -- locking is nigh on impossible.
     Use a worker to load user pages to avoid lock inversions.
v12: Use get_task_mm()/mmput() for correct refcounting of mm.
v13: Use a worker to release the mmu_notifier to avoid lock inversion
v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifier
     with its own locking and tree of objects for each mm/mmu_notifier.
v15: Prevent overlapping userptr objects, and invalidate all objects
     within the mmu_notifier range
v16: Fix a typo for iterating over multiple objects in the range and
     rearrange error path to destroy the mmu_notifier locklessly.
     Also close a race between invalidate_range and the get_pages_worker.
v17: Close a race between get_pages_worker/invalidate_range and fresh
     allocations of the same userptr range - and notice that
     struct_mutex was presumed to be held during creation when it wasn't.
v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory
     for the struct sg_table and to clear it before reporting an error.
v19: Always error out on read-only userptr requests as we don't have the
     hardware infrastructure to support them at the moment.
v20: Refuse to implement read-only support until we have the required
     infrastructure - but reserve the bit in flags for future use.
v21: use_mm() is not required for get_user_pages(). It is only meant to
     be used to fix up the kernel thread's current->mm for use with
     copy_user().
v22: Use sg_alloc_table_from_pages for that chunky feeling
v23: Export a function for sanity checking dma-buf rather than encode
     userptr details elsewhere, and clean up comments based on
     suggestions by Bradley.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Akash Goel <akash.goel@intel.com>
Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
[danvet: Frob ioctl allocation to pick the next one - will cause a bit
of fuss with create2 apparently, but such are the rules.]
[danvet2: oops, forgot to git add after manual patch application]
[danvet3: Appease sparse.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 19:31:29 +02:00