emm: workingset: simplify and use a more intuitive model

Upstream: pending

This basically removes workingset_activation() and reduces the number
of calls to workingset_age_nonresident().

The idea behind this change is a new way to calculate the refault
distance, preparing for refault-distance-based file page protection
for multi-gen LRU.

Currently, refault distance re-activation for the active/inactive LRU
helps keep working set pages in memory. It works by estimating the
refault (re-access) distance of a page; if that distance is small
enough, the page is put on the active LRU instead of the inactive LRU.

The estimation, as described in mm/workingset.c, is based on two assumptions:

1. Activation of an inactive page will left-shift LRU pages (considering
   LRU starts from right).
2. Eviction of an inactive page will left-shift LRU pages.

Assumption 2 is correct, but assumption 1 is not always true: an activated
page could be anywhere in the LRU list (through mark_page_accessed), so it
only left-shifts the pages on its right side.

Besides, one page can get activated/deactivated multiple times.

And multi-gen LRU doesn't fit this model well: pages are aged in
generations and promoted frequently between generations.

So instead we introduce a simpler idea here: just presume the evicted
pages are still in memory, each with a corresponding eviction timestamp
(nonresistence_age) that is increased and recorded upon each eviction.
These timestamps logically form a "Shadow LRU", a read-only imaginary
LRU. Abbreviating `nonresistence_age` as NA, we have:

  Let SP = ((NA's reading @ current) - (NA's reading @ eviction))

                           +-memory available to cache-+
                           |                           |
 +-------------------------+===============+===========+
 | *   shadows  O O  O     |   INACTIVE    |   ACTIVE  |
 +-+-----------------------+===============+===========+
   |                       |
   +-----------------------+
   |         SP
 fault page         O -> Hole left by refaulted-in pages.
                         Entries are supposed to be removed
                         upon access, but this is not a real
                         LRU, so it can't really be updated.
                    * -> The page corresponding to SP
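The NA bookkeeping above can be illustrated with a minimal userspace
sketch; the helper names (shadow_on_evict, shadow_distance) are
illustrative, not the kernel's actual API:

```c
#include <assert.h>

/* NA: a monotonic counter, bumped on every eviction. */
static unsigned long nonresistence_age;

/* On eviction: advance NA by the number of evicted pages and return
 * the current reading, to be stored in the page's shadow entry. */
static unsigned long shadow_on_evict(unsigned long nr_pages)
{
	nonresistence_age += nr_pages;
	return nonresistence_age;
}

/* On refault: SP = (NA's reading @ current) - (NA's reading @ eviction) */
static unsigned long shadow_distance(unsigned long shadow)
{
	return nonresistence_age - shadow;
}
```

Since NA only ever grows, a shadow entry never needs updating after it
is written, which is why the "Shadow LRU" can stay read-only.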

It can be easily seen that SP stands for the offset of a page in the
imaginary LRU, which is also how far the current workflow could push
a page out of available memory. Since every evicted page was once the
head of the INACTIVE list, the estimated minimum refault distance is:

  SP + NR_INACTIVE

On refault, the page *may* get activated and stay in memory if we put
it on the active LRU, which is justified if:

  SP + NR_INACTIVE < NR_INACTIVE + NR_ACTIVE

Which can be simplified to:

  SP < NR_ACTIVE

Then the page is worth re-activating to start from the active LRU,
since its access distance is smaller than the total memory.

Since this is only an estimation based on several hypotheses, it could
break the LRU's ability to distinguish a working set from caches; in
extreme cases, every refault causing an activation would lead to worse
thrashing. So throttle this by two factors:

1. Notice that previously refaulted-in pages may leave "holes" in the
   shadow part of the LRU. That part is left unhandled on purpose, to
   decrease the re-activation rate for pages that have a large SP value
   (the larger a page's SP value, the more likely it is to be affected
   by such holes).
2. When the active LRU is long enough, challenging active pages by
   re-activating a one-time-accessed, previously evicted/inactive page
   may not be a good idea, so throttle the re-activation when
   NR_ACTIVE > NR_INACTIVE, by comparing with NR_INACTIVE instead.

Another effect of the refault activation throttling worth noticing:
when the cache size is larger than total memory and hotness is similar
among all cache pages, it can help hold a portion of the caches
(possibly those with slightly higher hotness) in memory, instead of
letting the caches get evicted in turn due to the nature of LRU.
That's because the established working set (active LRU) will tend to
stay, since we throttle reactivation when NR_ACTIVE is high.

This side effect is actually similar to the previous algorithm, which
achieved it by increasing nonresistence_age in extra call paths,
throttling re-activation when activation/reactivation was happening
massively.

Combining all the above, we have the following simple rules:

  Upon refault, if either of the following conditions is met, mark the
  page as active:

- If active LRU is low (NR_ACTIVE < NR_INACTIVE), check if:
  SP < NR_ACTIVE

- If active LRU is high (NR_ACTIVE >= NR_INACTIVE), check if:
  SP < NR_INACTIVE
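The two rules collapse to comparing SP against min(NR_ACTIVE,
NR_INACTIVE). A hedged sketch of that decision, with an illustrative
function name (in the patch itself this is the final comparison in
workingset_test_recent()):

```c
#include <assert.h>
#include <stdbool.h>

/* Activate a refaulted page when SP is below min(NR_ACTIVE, NR_INACTIVE).
 * should_activate() is a made-up name for illustration only. */
static bool should_activate(unsigned long sp,
			    unsigned long nr_active,
			    unsigned long nr_inactive)
{
	/* Low active LRU (NR_ACTIVE < NR_INACTIVE): compare against
	 * NR_ACTIVE; high active LRU: throttle by comparing against
	 * NR_INACTIVE instead. */
	unsigned long limit = nr_active < nr_inactive ? nr_active : nr_inactive;

	return sp < limit;
}
```
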

Code-wise, this is simpler than before, since there is no longer any
need to update lruvec workingset data when activating a page. So far,
a few benchmarks show similar or better results under memory pressure.
Performance should also be better when there is no memory pressure,
since some memcg iterations and atomic operations are no longer needed.

When combined with multi-gen LRU (in later commits) it shows a measurable
performance gain for some workloads.

Using the memtier and fio tests from commit ac35a49023, but scaled down
to fit my test environment, plus some other tests:

  memtier test (with 16G ramdisk as swap and 4G memcg limit on an i7-9700):
  memcached -u nobody -m 16384 -s /tmp/memcached.socket \
    -a 0766 -t 12 -B binary &
  memtier_benchmark -S /tmp/memcached.socket -P memcache_binary -n allkeys\
    --key-minimum=1 --key-maximum=32000000 --key-pattern=P:P -c 1 \
    -t 12 --ratio 1:0 --pipeline 8 -d 2000 -x 6

  fio test 1 (with 16G ramdisk on 28G VM on an i7-9700):
  fio -name=refault --numjobs=12 --directory=/mnt --size=1024m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=random --norandommap \
    --time_based --ramp_time=5m --runtime=5m --group_reporting

  fio test 2 (with 16G ramdisk on 28G VM on an i7-9700):
  fio -name=mglru --numjobs=10 --directory=/mnt --size=1536m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=zipf:1.2 --norandommap \
    --time_based --ramp_time=10m --runtime=5m --group_reporting

  mysql (using oltp_read_only from sysbench, with 12G of buffer pool
  in a 10G memcg):
  sysbench /usr/share/sysbench/oltp_read_only.lua <auth and db params> \
    --tables=36 --table-size=2000000 --threads=12 --time=1800

  kernel build test done with 3G memcg limit on an i7-9700.

Before (Average of 6 test runs):
fio: IOPS=5125.5k
fio2: IOPS=7291.16k
memcached: 57600.926 ops/s
mysql: 6280.08 tps
kernel-build: 1817.13499 seconds

After (Average of 6 test runs):
fio: IOPS=5137.5k (+0.2%)
fio2: IOPS=7300.67k (+0.1%)
memcached: 57878.422 ops/s (+0.5%)
mysql: 6312.06 tps (+0.5%)
kernel-build: 1813.66231 seconds (-0.2% time)

Signed-off-by: Kairui Song <kasong@tencent.com>
Kairui Song 2023-12-15 10:45:43 +08:00
parent c5b45ef797
commit 4564eafa9e
4 changed files with 60 additions and 100 deletions

@@ -349,10 +349,8 @@ static inline swp_entry_t page_swap_entry(struct page *page)
/* linux/mm/workingset.c */
bool workingset_test_recent(void *shadow, bool file, bool *workingset);
void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
void workingset_refault(struct folio *folio, void *shadow);
void workingset_activation(struct folio *folio);
/* Only track the nodes of mappings with shadow entries */
void workingset_update_node(struct xa_node *node);


@@ -482,7 +482,6 @@ void folio_mark_accessed(struct folio *folio)
else
__lru_cache_activate_folio(folio);
folio_clear_referenced(folio);
workingset_activation(folio);
}
if (folio_test_idle(folio))
folio_clear_idle(folio);


@@ -2580,8 +2580,6 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
lruvec_add_folio(lruvec, folio);
nr_pages = folio_nr_pages(folio);
nr_moved += nr_pages;
if (folio_test_active(folio))
workingset_age_nonresident(lruvec, nr_pages);
}
/*


@@ -64,74 +64,64 @@
* thrashing on the inactive list, after which refaulting pages can be
* activated optimistically to compete with the existing active pages.
*
* Approximating inactive page access frequency - Observations:
* For such approximation, we introduce a counter `nonresistence_age` (NA)
* here. This counter increases each time a page is evicted, and each evicted
* page will have a shadow that stores the counter reading at the eviction
* time as a timestamp. So when an evicted page was faulted again, we have:
*
* 1. When a page is accessed for the first time, it is added to the
* head of the inactive list, slides every existing inactive page
* towards the tail by one slot, and pushes the current tail page
* out of memory.
* Let SP = ((NA's reading @ current) - (NA's reading @ eviction))
*
* 2. When a page is accessed for the second time, it is promoted to
* the active list, shrinking the inactive list by one slot. This
* also slides all inactive pages that were faulted into the cache
* more recently than the activated page towards the tail of the
* inactive list.
* +-memory available to cache-+
* | |
* +-------------------------+===============+===========+
* | * shadows O O O | INACTIVE | ACTIVE |
* +-+-----------------------+===============+===========+
* | |
* +-----------------------+
* | SP
* fault page O -> Hole left by previously faulted-in pages
* * -> The page corresponding to SP
*
* Thus:
* Here SP stands for how far the current workflow could push a page
* out of available memory. Since every evicted page was once the head
* of the INACTIVE list, the page could have an access distance of:
*
* 1. The sum of evictions and activations between any two points in
* time indicate the minimum number of inactive pages accessed in
* between.
* SP + NR_INACTIVE
*
* 2. Moving one inactive page N page slots towards the tail of the
* list requires at least N inactive page accesses.
* So if:
*
* Combining these:
* SP + NR_INACTIVE < NR_INACTIVE + NR_ACTIVE
*
* 1. When a page is finally evicted from memory, the number of
* inactive pages accessed while the page was in cache is at least
* the number of page slots on the inactive list.
* Which can be simplified to:
*
* 2. In addition, measuring the sum of evictions and activations (E)
* at the time of a page's eviction, and comparing it to another
* reading (R) at the time the page faults back into memory tells
* the minimum number of accesses while the page was not cached.
* This is called the refault distance.
* SP < NR_ACTIVE
*
* Because the first access of the page was the fault and the second
* access the refault, we combine the in-cache distance with the
* out-of-cache distance to get the complete minimum access distance
* of this page:
* Then the page is worth re-activating to start from the ACTIVE part,
* since its access distance is shorter than the total memory.
*
* NR_inactive + (R - E)
* Since this is only an estimation based on several hypotheses, and
* it could break the ability of LRU to distinguish a workingset out of
* caches, throttle this by two factors:
*
* And knowing the minimum access distance of a page, we can easily
* tell if the page would be able to stay in cache assuming all page
* slots in the cache were available:
* 1. Notice that refaulted-in pages may leave "holes" on the shadow
* part of LRU, that part is left unhandled on purpose to decrease
* re-activate rate for pages that have a large SP value (the larger
* SP value a page has, the more likely it will be affected by such
* holes).
* 2. When the ACTIVE part of LRU is long enough, challenging ACTIVE pages
* by re-activating a one-time faulted previously INACTIVE page may not
* be a good idea, so throttle the re-activation when ACTIVE > INACTIVE
* by comparing with INACTIVE instead.
*
* NR_inactive + (R - E) <= NR_inactive + NR_active
* Combining all the above, we have:
* Upon refault, if any of the following conditions is met, mark the page
* as active:
*
* If we have swap we should consider about NR_inactive_anon and
* NR_active_anon, so for page cache and anonymous respectively:
*
* NR_inactive_file + (R - E) <= NR_inactive_file + NR_active_file
* + NR_inactive_anon + NR_active_anon
*
* NR_inactive_anon + (R - E) <= NR_inactive_anon + NR_active_anon
* + NR_inactive_file + NR_active_file
*
* Which can be further simplified to:
*
* (R - E) <= NR_active_file + NR_inactive_anon + NR_active_anon
*
* (R - E) <= NR_active_anon + NR_inactive_file + NR_active_file
*
* Put into words, the refault distance (out-of-cache) can be seen as
* a deficit in inactive list space (in-cache). If the inactive list
* had (R - E) more page slots, the page would not have been evicted
* in between accesses, but activated instead. And on a full system,
* the only thing eating into inactive list space is active pages.
* - If ACTIVE LRU is low (NR_ACTIVE < NR_INACTIVE), check if:
* SP < NR_ACTIVE
*
* - If ACTIVE LRU is high (NR_ACTIVE >= NR_INACTIVE), check if:
* SP < NR_INACTIVE
*
* Refaulting inactive pages
*
@@ -352,7 +342,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
* to the in-memory dimensions. This function allows reclaim and LRU
* operations to drive the non-resident aging along in parallel.
*/
void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
static void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
{
/*
* Reclaiming a cgroup means reclaiming all its children in a
@@ -418,9 +408,9 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
{
struct mem_cgroup *eviction_memcg;
struct lruvec *eviction_lruvec;
unsigned long refault_distance;
unsigned long workingset_size;
unsigned long refault;
unsigned long refault_distance, refault;
unsigned long inactive;
unsigned long active;
int memcgid;
struct pglist_data *pgdat;
unsigned long eviction;
@@ -499,22 +489,22 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
* workingset competition needs to consider anon or not depends
* on having free swap space.
*/
workingset_size = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
if (!file) {
workingset_size += lruvec_page_state(eviction_lruvec,
NR_INACTIVE_FILE);
}
active = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
inactive = lruvec_page_state(eviction_lruvec, NR_INACTIVE_FILE);
if (mem_cgroup_get_nr_swap_pages(eviction_memcg) > 0) {
workingset_size += lruvec_page_state(eviction_lruvec,
NR_ACTIVE_ANON);
if (file) {
workingset_size += lruvec_page_state(eviction_lruvec,
NR_INACTIVE_ANON);
}
active += lruvec_page_state(eviction_lruvec, NR_ACTIVE_ANON);
inactive += lruvec_page_state(eviction_lruvec, NR_INACTIVE_ANON);
}
mem_cgroup_put(eviction_memcg);
return refault_distance <= workingset_size;
/*
* When there are already enough active pages, be less aggressive
* about reactivating pages: challenging a large set of established
* active pages with a one-time refaulted page may not be a good idea.
*/
return refault_distance < min(active, inactive);
}
/**
@@ -561,7 +551,6 @@ void workingset_refault(struct folio *folio, void *shadow)
return;
folio_set_active(folio);
workingset_age_nonresident(lruvec, nr);
mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + file, nr);
/* Folio was active prior to eviction */
@@ -576,30 +565,6 @@ void workingset_refault(struct folio *folio, void *shadow)
}
}
/**
* workingset_activation - note a page activation
* @folio: Folio that is being activated.
*/
void workingset_activation(struct folio *folio)
{
struct mem_cgroup *memcg;
rcu_read_lock();
/*
* Filter non-memcg pages here, e.g. unmap can call
* mark_page_accessed() on VDSO pages.
*
* XXX: See workingset_refault() - this should return
* root_mem_cgroup even for !CONFIG_MEMCG.
*/
memcg = folio_memcg_rcu(folio);
if (!mem_cgroup_disabled() && !memcg)
goto out;
workingset_age_nonresident(folio_lruvec(folio), folio_nr_pages(folio));
out:
rcu_read_unlock();
}
/*
* Shadow entries reflect the share of the working set that does not
* fit into memory, so their number depends on the access pattern of