drm/i915: Rewrite some comments around RCU-deferred object free

Tvrtko noticed that the comments describing the interaction of RCU and
the deferred worker for freeing drm_i915_gem_object were a little
confusing, so attempt to bring some sense to them.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180115205759.13884-1-chris@chris-wilson.co.uk
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   2018-01-15 20:57:59 +00:00
commit 2ef1e729c7 (parent 2aa472c827)

1 file changed, 13 insertions(+), 6 deletions(-)


@@ -4699,7 +4699,8 @@ static void __i915_gem_free_work(struct work_struct *work)
 		container_of(work, struct drm_i915_private, mm.free_work);
 	struct llist_node *freed;
 
-	/* All file-owned VMA should have been released by this point through
+	/*
+	 * All file-owned VMA should have been released by this point through
 	 * i915_gem_close_object(), or earlier by i915_gem_context_close().
 	 * However, the object may also be bound into the global GTT (e.g.
 	 * older GPUs without per-process support, or for direct access through
@@ -4726,10 +4727,15 @@ static void __i915_gem_free_object_rcu(struct rcu_head *head)
 		container_of(head, typeof(*obj), rcu);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
-	/* We can't simply use call_rcu() from i915_gem_free_object()
-	 * as we need to block whilst unbinding, and the call_rcu
-	 * task may be called from softirq context. So we take a
-	 * detour through a worker.
+	/*
+	 * Since we require blocking on struct_mutex to unbind the freed
+	 * object from the GPU before releasing resources back to the
+	 * system, we can not do that directly from the RCU callback (which may
+	 * be a softirq context), but must instead then defer that work onto a
+	 * kthread. We use the RCU callback rather than move the freed object
+	 * directly onto the work queue so that we can mix between using the
+	 * worker and performing frees directly from subsequent allocations for
+	 * crude but effective memory throttling.
 	 */
 	if (llist_add(&obj->freed, &i915->mm.free_list))
 		queue_work(i915->wq, &i915->mm.free_work);
@@ -4745,7 +4751,8 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 	if (discard_backing_storage(obj))
 		obj->mm.madv = I915_MADV_DONTNEED;
 
-	/* Before we free the object, make sure any pure RCU-only
+	/*
+	 * Before we free the object, make sure any pure RCU-only
 	 * read-side critical sections are complete, e.g.
 	 * i915_gem_busy_ioctl(). For the corresponding synchronized
 	 * lookup see i915_gem_object_lookup_rcu().