memcg: fix memory accounting scalability in shrink_page_list
I noticed that in a multi-process parallel file-reading benchmark I ran on an 8-socket machine, throughput slowed down by a factor of 8 when I ran the benchmark within a cgroup container.  I traced the problem to the following code path (see below) taken when we are trying to reclaim memory from the file cache.  The res_counter_uncharge function is called on every page that is reclaimed, which creates heavy lock contention.  The patch below allows the reclaimed pages to be uncharged from the resource counter in batch and recovers the regression.

Tim

    40.67%     usemem  [kernel.kallsyms]  [k] _raw_spin_lock
               |
               --- _raw_spin_lock
                  |
                  |--92.61%-- res_counter_uncharge
                  |          |
                  |          |--100.00%-- __mem_cgroup_uncharge_common
                  |          |          |
                  |          |          |--100.00%-- mem_cgroup_uncharge_cache_page
                  |          |          |          __remove_mapping
                  |          |          |          shrink_page_list
                  |          |          |          shrink_inactive_list
                  |          |          |          shrink_mem_cgroup_zone
                  |          |          |          shrink_zone
                  |          |          |          do_try_to_free_pages
                  |          |          |          try_to_free_pages
                  |          |          |          __alloc_pages_nodemask
                  |          |          |          alloc_pages_current

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
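To illustrate the batching idea, here is a minimal standalone userspace sketch. It is not the kernel's memcontrol.c implementation: the function names mirror mem_cgroup_uncharge_start()/end() from the patch, but the res_counter mock, its fields, and mem_cgroup_uncharge_page() are hypothetical stand-ins. While a batch is open, per-page uncharges only bump a per-thread counter; the shared lock is taken once when the batch closes.

    /*
     * Simplified sketch of uncharge batching (assumed names, not kernel code).
     * A pthread mutex stands in for the res_counter spinlock that was contended.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    struct res_counter {
            pthread_mutex_t lock;           /* stands in for the contended spinlock */
            unsigned long usage;            /* bytes currently charged */
    };

    static struct res_counter memcg_res = {
            .lock = PTHREAD_MUTEX_INITIALIZER,
            .usage = 1024 * PAGE_SIZE,      /* pretend 1024 pages are charged */
    };

    /* Per-thread batch state, analogous in spirit to a per-task batch. */
    static __thread struct {
            int do_batch;                   /* nesting depth of start/end pairs */
            unsigned long nr_pages;         /* uncharges deferred so far */
    } batch;

    static void res_counter_uncharge(struct res_counter *cnt, unsigned long bytes)
    {
            pthread_mutex_lock(&cnt->lock); /* the operation that was contended */
            cnt->usage -= bytes;
            pthread_mutex_unlock(&cnt->lock);
    }

    static void mem_cgroup_uncharge_start(void)
    {
            batch.do_batch++;               /* open (or nest) a batch */
    }

    /* Called once per reclaimed page: while batching, only count, take no lock. */
    static void mem_cgroup_uncharge_page(void)
    {
            if (batch.do_batch) {
                    batch.nr_pages++;
                    return;
            }
            res_counter_uncharge(&memcg_res, PAGE_SIZE);    /* unbatched slow path */
    }

    static void mem_cgroup_uncharge_end(void)
    {
            if (--batch.do_batch)
                    return;                 /* still inside a nested batch */
            if (batch.nr_pages)             /* one locked update for the whole pass */
                    res_counter_uncharge(&memcg_res, batch.nr_pages * PAGE_SIZE);
            batch.nr_pages = 0;
    }

    int main(void)
    {
            mem_cgroup_uncharge_start();
            for (int i = 0; i < 512; i++)   /* e.g. one shrink_page_list() pass */
                    mem_cgroup_uncharge_page();
            mem_cgroup_uncharge_end();      /* one lock acquisition instead of 512 */
            printf("charged: %lu pages\n", memcg_res.usage / PAGE_SIZE);
            return 0;
    }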
This commit is contained in:
commit 69980e3175
parent c1c9518331
mm/vmscan.c

@@ -687,6 +687,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 
 	cond_resched();
 
+	mem_cgroup_uncharge_start();
 	while (!list_empty(page_list)) {
 		enum page_references references;
 		struct address_space *mapping;
@@ -953,6 +954,7 @@ keep:
 
 	list_splice(&ret_pages, page_list);
 	count_vm_events(PGACTIVATE, pgactivate);
+	mem_cgroup_uncharge_end();
 	*ret_nr_dirty += nr_dirty;
 	*ret_nr_writeback += nr_writeback;
 	return nr_reclaimed;
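The placement in the diff is the whole fix: mem_cgroup_uncharge_start() is called once before the while loop over page_list and mem_cgroup_uncharge_end() once after it, so every cache page uncharged via __remove_mapping() during a single shrink_page_list() pass is folded into one res_counter update instead of one locked update per page.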