zsmalloc: do not take class lock in zs_shrinker_count()

We can avoid taking class ->lock around zs_can_compact() in
zs_shrinker_count(), because the number we return is, by design, outdated
in the general case.  Several sources can change a class's state right
after zs_can_compact() returns -- ongoing I/O operations, manually
triggered compaction, or both happening simultaneously.
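
For context, zs_can_compact() only derives an estimate from the per-class
object counters; a minimal sketch of that computation, assuming stat helpers
along the lines of zsmalloc's zs_stat_get() (names approximate, not part of
this patch):

/*
 * Sketch: estimate how many pages compaction could free for a class.
 * The counters can change right after they are read, so the result is
 * a best-effort number whether or not class->lock is held.
 */
static unsigned long zs_can_compact_sketch(struct size_class *class)
{
	unsigned long obj_wasted;

	/* allocated-but-unused objects in this size class */
	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
		     zs_stat_get(class, OBJ_USED);

	/* convert wasted objects into whole zspages, then into pages */
	obj_wasted /= get_maxobj_per_zspage(class->size,
					    class->pages_per_zspage);

	return obj_wasted * class->pages_per_zspage;
}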

We re-do these calculations during compaction on a per-class basis
anyway.

zs_unregister_shrinker() will not return while the shrinker is active,
so classes won't unexpectedly disappear while zs_shrinker_count()
iterates over them.

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/zsmalloc.c | 4 ----
 1 file changed, 0 insertions(+), 4 deletions(-)

--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1710,8 +1710,6 @@ static struct page *isolate_source_page(struct size_class *class)
  *
  * Based on the number of unused allocated objects calculate
  * and return the number of pages that we can free.
- *
- * Should be called under class->lock.
  */
 static unsigned long zs_can_compact(struct size_class *class)
 {
@@ -1834,9 +1832,7 @@ static unsigned long zs_shrinker_count(struct shrinker *shrinker,
 		if (class->index != i)
 			continue;
 
-		spin_lock(&class->lock);
 		pages_to_free += zs_can_compact(class);
-		spin_unlock(&class->lock);
 	}
 
 	return pages_to_free;
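
Putting it together, after this patch the counting callback simply walks the
size classes and sums the per-class estimates with no locking; a rough sketch
of the resulting function (field and variable names approximate the zsmalloc
code of this period):

static unsigned long zs_shrinker_count(struct shrinker *shrinker,
				       struct shrink_control *sc)
{
	int i;
	struct size_class *class;
	unsigned long pages_to_free = 0;
	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
					    shrinker);

	for (i = zs_size_classes - 1; i >= 0; i--) {
		class = pool->size_class[i];
		if (!class)
			continue;
		/* skip entries that alias a merged size class */
		if (class->index != i)
			continue;

		/* read-only estimate; class->lock is no longer taken here */
		pages_to_free += zs_can_compact(class);
	}

	return pages_to_free;
}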