Commit Graph

6 Commits

Author SHA1 Message Date
Matthew Auld f4b1b92f41 drm/i915/buddy: avoid double list_add
Be careful not to mark an already-free node as free again (a guard is
sketched below).

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200305204711.217783-1-matthew.auld@intel.com
2020-03-06 14:33:08 +00:00
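
A minimal sketch of the guard described in the fix above, assuming the
driver's i915_buddy_block_is_free() helper and the public i915_buddy_free();
safe_undo_free() is a hypothetical wrapper and the patch's real call site
differs:

    /* Sketch only: never hand a node back to the allocator twice. A block
     * that is already marked free already sits on a free list, so freeing
     * it again would list_add() the same list_head a second time and
     * corrupt the list.
     */
    static void safe_undo_free(struct i915_buddy_mm *mm,
                               struct i915_buddy_block *block)
    {
            if (i915_buddy_block_is_free(block))    /* already on a free list */
                    return;

            i915_buddy_free(mm, block);             /* marks free + list_add */
    }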
Chris Wilson a9e395a4ab drm/i915: Break up long i915_buddy_free_list() with a cond_resched()
In the selftests, we may feed very long lists of blocks to be freed at the
culmination of the tests. This, coupled with KASAN and other malloc tracing,
can make each kmem_cache_free() operation time consuming, and doing many of
them in a row triggers soft-lockup warnings. Break the list up with a
cond_resched(), as sketched below.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221144917.1040662-1-chris@chris-wilson.co.uk
2019-12-30 12:10:38 +00:00
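
The shape of the change above, assuming i915_buddy_free_list() as declared in
i915_buddy.h and the block's struct list_head link member; this is a trimmed
sketch, not the verbatim patch:

    void i915_buddy_free_list(struct i915_buddy_mm *mm, struct list_head *objects)
    {
            struct i915_buddy_block *block, *on;

            list_for_each_entry_safe(block, on, objects, link) {
                    i915_buddy_free(mm, block);
                    /* Long lists plus KASAN/kmemleak overhead can hog the CPU;
                     * yield between blocks so the soft-lockup watchdog stays quiet.
                     */
                    cond_resched();
            }
            INIT_LIST_HEAD(objects);
    }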
Matthew Auld 5d7f965e56 drm/i915/buddy: add missing call to i915_global_register
We are meant to register the kmem cache at init, such that the supplied exit
and shrink hooks can be called (see the sketch below).

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190905072921.7979-1-matthew.auld@intel.com
2019-09-09 10:58:20 +01:00
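
A sketch of the fixed init path, assuming the i915 "globals" scheme of that
era (struct i915_global carrying shrink/exit hooks, registered via
i915_global_register()); the i915_global_buddy container name and hook
wiring here are illustrative:

    #include "i915_buddy.h"
    #include "i915_globals.h"

    static struct i915_global_buddy {
            struct i915_global base;
            struct kmem_cache *slab_blocks;
    } global;

    static void i915_global_buddy_shrink(void)
    {
            kmem_cache_shrink(global.slab_blocks);
    }

    static void i915_global_buddy_exit(void)
    {
            kmem_cache_destroy(global.slab_blocks);
    }

    int __init i915_global_buddy_init(void)
    {
            global.base.shrink = i915_global_buddy_shrink;
            global.base.exit = i915_global_buddy_exit;

            global.slab_blocks = KMEM_CACHE(i915_buddy_block, 0);
            if (!global.slab_blocks)
                    return -ENOMEM;

            /* The missing call: without registering, the shrink/exit hooks
             * above are never invoked by the globals machinery.
             */
            i915_global_register(&global.base);
            return 0;
    }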
Matthew Auld 3ba09632ce drm/i915/buddy: use kmemleak_update_trace
Since nodes are cached in a free-list and can be marked as free without
actually being destroyed, allowing them to be opportunistically re-allocated,
we should apply kmemleak_update_trace() every time a node is given a new
owner and marked as allocated. This keeps kmemleak's recorded allocation
trace pointing at the current owner, which aids debugging (see the sketch
below).

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190816105357.14340-2-matthew.auld@intel.com
2019-08-16 16:28:41 +01:00
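
A sketch of the hook point for the change above, assuming the
driver-internal mark_allocated() helper and the state bits kept in
block->header; the essential addition is the kmemleak_update_trace() call:

    #include <linux/kmemleak.h>

    static void mark_allocated(struct i915_buddy_block *block)
    {
            /* Flip the state bits from FREE to ALLOCATED and unlink the block
             * from the free list it was cached on.
             */
            block->header &= ~I915_BUDDY_HEADER_STATE;
            block->header |= I915_BUDDY_ALLOCATED;
            list_del(&block->link);

            /* The block was not freshly kmem_cache_alloc()'ed, so kmemleak's
             * recorded allocation trace still points at the previous owner;
             * refresh it to the current call stack.
             */
            kmemleak_update_trace(block);
    }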
Matthew Auld 665c1c2166 drm/i915/buddy: tidy up i915_buddy_fini
If we are leaking nodes, don't hide it. Also stop trying to be "defensive"
and instead let KASAN et al. catch the misuse (see the sketch below).

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190816105357.14340-1-matthew.auld@intel.com
2019-08-16 16:28:41 +01:00
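
A sketch of the tidied teardown, under the assumption that struct
i915_buddy_mm keeps its root blocks in mm->roots[0..n_roots) plus per-order
free lists in mm->free_list, and that i915_block_free() returns a block to
the kmem cache; a plain WARN_ON stands in for the driver's own warn macro:

    void i915_buddy_fini(struct i915_buddy_mm *mm)
    {
            int i;

            for (i = 0; i < mm->n_roots; ++i) {
                    /* A still-allocated root means the caller leaked blocks;
                     * complain instead of quietly mopping up after them.
                     */
                    WARN_ON(!i915_buddy_block_is_free(mm->roots[i]));
                    i915_block_free(mm->roots[i]);
            }

            /* Leaked blocks are deliberately not reclaimed here; let KASAN
             * and kmemleak point the finger at whoever forgot to free them.
             */

            kfree(mm->roots);
            kfree(mm->free_list);
    }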
Matthew Auld 14d1b9a624 drm/i915: buddy allocator
Simple buddy allocator. We want to allocate properly aligned power-of-two
blocks to promote usage of huge pages for the GTT, so 64K, 2M and possibly
even 1G. While we do support allocating at a specific offset, that path is
intended more for preallocating portions of the address space, say for an
initial framebuffer; for other uses drm_mm is probably a much better fit.
Anyway, hopefully this can all be thrown away if we eventually move to having
the core MM manage device memory. A usage sketch of the interface follows
this entry.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190809202926.14545-2-matthew.auld@intel.com
2019-08-10 19:47:40 +01:00
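
A usage sketch against the interface added here (i915_buddy_init/alloc/
free_list/fini as declared in i915_buddy.h; the fixed-offset case goes
through i915_buddy_alloc_range(), not shown). The size constants, order
choice and buddy_usage_example() name are illustrative:

    static int buddy_usage_example(void)
    {
            struct i915_buddy_mm mm;
            struct i915_buddy_block *block;
            LIST_HEAD(blocks);
            int err;

            /* Manage a 1G space carved into minimum 64K chunks; every block is
             * a power-of-two multiple of the chunk size, which is what keeps
             * allocations aligned for 64K/2M/1G GTT huge pages.
             */
            err = i915_buddy_init(&mm, SZ_1G, SZ_64K);
            if (err)
                    return err;

            /* Order is relative to the chunk size: order 5 is 64K << 5 = 2M. */
            block = i915_buddy_alloc(&mm, 5);
            if (IS_ERR(block)) {
                    err = PTR_ERR(block);
                    goto out_fini;
            }
            list_add(&block->link, &blocks);

            /* Blocks are handed back in bulk via a list. */
            i915_buddy_free_list(&mm, &blocks);

    out_fini:
            i915_buddy_fini(&mm);
            return err;
    }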