OpenCloudOS-Kernel/mm
Kairui Song 4564eafa9e emm: workingset: simplify and use a more intuitive model
Upstream: pending

This basically removes workingset_activation() and reduces the number
of calls to workingset_age_nonresident().

The idea behind this change is a new way to calculate the refault
distance, which also prepares for adapting refault-distance-based file
page protection to multi-gen LRU.

Currently, refault-distance-based re-activation for the active/inactive
LRUs helps keep working set pages in memory. It works by estimating the
refault (re-access) distance of a page; if the distance is small
enough, the page is put on the active LRU instead of the inactive LRU.
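
As a rough illustration of that existing model (a minimal standalone
sketch with simplified names, not the literal mm/workingset.c code),
the decision is essentially a distance-vs-working-set-size comparison:

  /*
   * Hedged sketch of the pre-change idea: estimate how far the page
   * was pushed out and re-activate it only if that distance still
   * fits within the protected working set.
   */
  static bool should_activate_old_model(unsigned long age_now,
                                        unsigned long age_at_eviction,
                                        unsigned long workingset_size)
  {
          /* Pages aged past this one since its eviction. */
          unsigned long refault_distance = age_now - age_at_eviction;

          return refault_distance <= workingset_size;
  }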

The estimation, as described in mm/workingset.c, is based on two assumptions:

1. Activation of an inactive page will left-shift LRU pages (considering
   LRU starts from right).
2. Eviction of an inactive page will left-shift LRU pages.

Assumption 2 is correct, but assumption 1 is not always true: an
activated page could be anywhere in the LRU list (through
mark_page_accessed), so it only left-shifts the pages on its right side.

Besides, one page can get activated/deactivated multiple times.

Multi-gen LRU doesn't fit this model well either: pages are aged in
generations and are frequently promoted between generations.

So instead we introduce a simpler idea here: just presume the evicted
pages are still in memory, each with a corresponding eviction timestamp
(nonresident_age) that is increased and recorded upon each eviction.
These timestamps logically form a "Shadow LRU", a read-only imaginary
LRU. Abbreviating `nonresident_age` as NA, we have:

  Let SP = ((NA's reading @ current) - (NA's reading @ eviction))

                           +-memory available to cache-+
                           |                           |
 +-------------------------+===============+===========+
 | *   shadows  O O  O     |   INACTIVE    |   ACTIVE  |
 +-+-----------------------+===============+===========+
   |                       |
   +-----------------------+
   |         SP
 fault page         O -> Hole left by refaulted in pages.
                          Entries are supposed to be removed
                          upon access, but this is not a real
                          LRU so it can't really be updated.
                    * -> The page corresponding to SP
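
The bookkeeping behind NA and SP could look roughly like the hedged,
simplified standalone sketch below (the real implementation keeps the
counter per lruvec and packs its reading into the shadow entry):

  /* Sketch only: a single counter standing in for nonresident_age. */
  static unsigned long nonresident_age;

  /*
   * On eviction: advance the counter and store its reading as the
   * page's shadow entry (the eviction timestamp).
   */
  static unsigned long record_eviction(void)
  {
          return ++nonresident_age;       /* NA's reading @ eviction */
  }

  /* On refault: SP = (NA's reading @ current) - (NA's reading @ eviction). */
  static unsigned long refault_sp(unsigned long shadow)
  {
          return nonresident_age - shadow;
  }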

It can be easily seen that SP stands for the offset of a page in the
imaginary LRU, which is also how far the current workload could push
the page out of available memory. Since every evicted page was once the
head of the INACTIVE list, the estimated minimum refault distance is:

  SP + NR_INACTIVE

On refault, putting the page on the active LRU *may* let it stay in
memory if:

  SP + NR_INACTIVE < NR_INACTIVE + NR_ACTIVE

Which can be simplified to:

  SP < NR_ACTIVE

In that case the page is worth re-activating to start from the active
LRU, since its access distance is smaller than the total memory
available to the cache.

Since this is only an estimation based on several hypotheses, it could
break the ability of the LRU to distinguish a working set from the rest
of the caches; in extreme cases, having every refault cause an
activation would lead to worse thrashing. So throttle this by two
factors:

1. Notice that previously refaulted-in pages may leave "holes" in the
   shadow part of the LRU. These holes are left unhandled on purpose to
   decrease the re-activation rate for pages with a large SP value (the
   larger a page's SP value, the more likely it is to be affected by
   such holes).
2. When the active LRU is long enough, challenging active pages by
   re-activating a previously evicted/inactive page that was accessed
   only once may not be a good idea, so throttle the re-activation when
   NR_ACTIVE > NR_INACTIVE by comparing SP with NR_INACTIVE instead.

Another effect of the refault activation throttling worth noticing:
when the cache size is larger than total memory and hotness is similar
among all cache pages, it helps hold a portion of the caches (possibly
the slightly hotter part) in memory, instead of letting all the caches
get rotated through and evicted due to the nature of LRU.
That's because the established working set (active LRU) will tend to
stay, since re-activation is throttled when NR_ACTIVE is high.

This side effect is actually similar to the algorithm before, which
introduced such an effect by increasing nonresident_age in extra call
paths, throttling the re-activation when activation/re-activation was
happening massively.

Combining all the above, we have the following simple rules:

  Upon refault, mark the page as active if either of the following
  conditions is met (a code sketch follows the list):

- If active LRU is low (NR_ACTIVE < NR_INACTIVE), check if:
  SP < NR_ACTIVE

- If active LRU is high (NR_ACTIVE >= NR_INACTIVE), check if:
  SP < NR_INACTIVE
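
In code, the combined check could look roughly like the hedged
standalone sketch below (simplified, not the exact patch; sp, nr_active
and nr_inactive are the quantities defined above):

  /*
   * Re-activate a refaulted page only if its estimated offset SP in
   * the shadow LRU is small enough, using the shorter of the two
   * lists as the bound.
   */
  static bool should_activate(unsigned long sp, unsigned long nr_active,
                              unsigned long nr_inactive)
  {
          if (nr_active < nr_inactive)
                  return sp < nr_active;    /* active LRU still short */

          return sp < nr_inactive;          /* active LRU long: stricter */
  }

Both branches together are simply SP < min(NR_ACTIVE, NR_INACTIVE).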

Code-wise, this is simpler than before since there is no longer a need
to update lruvec workingset data when activating a page, and so far a
few benchmarks show similar or better results under memory pressure.
The performance should also be better when there is no memory pressure,
since some memcg iteration and atomic operations are no longer needed.

When combined with multi-gen LRU (in later commits) it shows a measurable
performance gain for some workloads.

The tests below use the memtier and fio tests from commit ac35a49023,
scaled down to fit in my test environment, plus some other tests:

  memtier test (with 16G ramdisk as swap and 4G memcg limit on an i7-9700):
  memcached -u nobody -m 16384 -s /tmp/memcached.socket \
    -a 0766 -t 12 -B binary &
  memtier_benchmark -S /tmp/memcached.socket -P memcache_binary -n allkeys\
    --key-minimum=1 --key-maximum=32000000 --key-pattern=P:P -c 1 \
    -t 12 --ratio 1:0 --pipeline 8 -d 2000 -x 6

  fio test 1 (with 16G ramdisk on 28G VM on an i7-9700):
  fio -name=refault --numjobs=12 --directory=/mnt --size=1024m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=random --norandommap \
    --time_based --ramp_time=5m --runtime=5m --group_reporting

  fio test 2 (with 16G ramdisk on 28G VM on an i7-9700):
  fio -name=mglru --numjobs=10 --directory=/mnt --size=1536m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=zipf:1.2 --norandommap \
    --time_based --ramp_time=10m --runtime=5m --group_reporting

  mysql (using oltp_read_only from sysbench, with 12G of buffer pool
  in a 10G memcg):
  sysbench /usr/share/sysbench/oltp_read_only.lua <auth and db params> \
    --tables=36 --table-size=2000000 --threads=12 --time=1800

  kernel build test done with 3G memcg limit on an i7-9700.

Before (Average of 6 test runs):
fio: IOPS=5125.5k
fio2: IOPS=7291.16k
memcached: 57600.926 ops/s
mysql: 6280.08 tps
kernel-build: 1817.13499 seconds

After (Average of 6 test runs):
fio: IOPS=5137.5k (+0.2%)
fio2: IOPS=7300.67k (+0.1%)
memcached: 57878.422 ops/s (+0.5%)
mysql: 6312.06 tps (+0.5%)
kernel-build: 1813.66231 seconds (0.2% faster)

Signed-off-by: Kairui Song <kasong@tencent.com>
2024-04-03 16:58:55 +08:00
damon mm/damon/reclaim: fix quota status loss due to online tunings 2024-03-01 13:35:00 +01:00
kasan kasan: disable kasan_non_canonical_hook() for HW tags 2023-10-18 12:12:41 -07:00
kfence LoongArch changes for v6.6 2023-09-08 12:16:52 -07:00
kmsan mm: kmsan: use helper macros PAGE_ALIGN and PAGE_ALIGN_DOWN 2023-08-21 13:37:29 -07:00
Kconfig emm: block: introduce basic flags for ramdisk swap optimization 2024-04-03 16:58:52 +08:00
Kconfig.debug mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM 2023-05-29 16:14:28 +01:00
Makefile - Some swap cleanups from Ma Wupeng ("fix WARN_ON in add_to_avail_list") 2023-08-29 14:25:26 -07:00
backing-dev.c blk-wbt: Fix detection of dirty-throttled tasks 2024-02-23 09:25:16 +01:00
balloon_compaction.c
bootmem_info.c
cma.c mm/cma: use nth_page() in place of direct struct page manipulation 2023-11-28 17:20:06 +00:00
cma.h
cma_debug.c
cma_sysfs.c mm: cma: make kobj_type structure constant 2023-03-28 16:20:06 -07:00
compaction.c merge mm-hotfixes-stable into mm-stable to pick up depended-upon changes 2023-08-21 14:26:20 -07:00
debug.c mm: update validate_mm() to use vma iterator 2023-06-09 16:25:31 -07:00
debug_page_alloc.c mm: page_alloc: split out DEBUG_PAGEALLOC 2023-06-09 16:25:23 -07:00
debug_page_ref.c
debug_vm_pgtable.c Add x86 shadow stack support 2023-08-31 12:20:12 -07:00
dmapool.c dmapool: create/destroy cleanup 2023-06-09 16:25:17 -07:00
dmapool_test.c dmapool: add alloc/free performance test 2023-04-05 19:42:38 -07:00
early_ioremap.c mm/early_ioremap.c: improve the execution efficiency of early_ioremap_setup() 2023-06-09 16:25:56 -07:00
fadvise.c mm: remove unnecessary pagevec includes 2023-06-23 16:59:31 -07:00
fail_page_alloc.c mm: page_alloc: split out FAIL_PAGE_ALLOC 2023-06-09 16:25:23 -07:00
failslab.c mm: fix unexpected changes to {failslab|fail_page_alloc}.attr 2022-11-22 18:50:44 -08:00
filemap.c Merge remote-tracking branch 'stable/linux-6.6.y' into ocks-2401 2024-03-01 17:21:23 +08:00
folio-compat.c filemap: Add fgf_t typedef 2023-07-24 18:04:30 -04:00
gup.c Add x86 shadow stack support 2023-08-31 12:20:12 -07:00
gup_test.c Merge mm-hotfixes-stable into mm-stable to pick up depended-upon changes. 2023-06-23 16:58:19 -07:00
gup_test.h mm/gup_test: start/stop/read functionality for PIN LONGTERM test 2022-11-08 17:37:15 -08:00
highmem.c mm: ptep_get() conversion 2023-06-19 16:19:25 -07:00
hmm.c mm: enable page walking API to lock vmas during the walk 2023-08-21 13:07:20 -07:00
huge_memory.c mm: fix for negative counter: nr_file_hugepages 2023-11-28 17:20:13 +00:00
hugetlb.c hugetlb: fix null-ptr-deref in hugetlb_vma_lock_write 2023-12-13 18:45:24 +01:00
hugetlb_cgroup.c mm/hugetlb: increase use of folios in alloc_huge_page() 2023-02-13 15:54:27 -08:00
hugetlb_vmemmap.c mm: hugetlb_vmemmap: fix a race between vmemmap pmd split 2023-08-18 10:12:14 -07:00
hugetlb_vmemmap.h
hwpoison-inject.c mm/hwpoison: add __init/__exit annotations to module init/exit funcs 2022-10-03 14:03:05 -07:00
init-mm.c swap: expose required symbols for some 3rd part modules 2023-12-12 15:57:10 +08:00
internal.h Add x86 shadow stack support 2023-08-31 12:20:12 -07:00
interval_tree.c
io-mapping.c
ioremap.c mm: ioremap: remove unneeded ioremap_allowed and iounmap_allowed 2023-08-18 10:12:36 -07:00
khugepaged.c - Some swap cleanups from Ma Wupeng ("fix WARN_ON in add_to_avail_list") 2023-08-29 14:25:26 -07:00
kmemleak.c mm/kmemleak: move up cond_resched() call in page scanning loop 2023-09-02 15:17:34 -07:00
ksm.c mm: memory-failure: use rcu lock instead of tasklist_lock when collect_procs() 2023-09-05 11:11:52 -07:00
list_lru.c
maccess.c mm: Fix copy_from_user_nofault(). 2023-04-12 17:36:23 -07:00
madvise.c swap: remove remnants of polling from read_swap_cache_async 2023-08-24 16:20:16 -07:00
mapping_dirty_helpers.c mm: fix clean_record_shared_mapping_range kernel-doc 2023-08-24 16:20:30 -07:00
memblock.c x86/numa: Fix the address overlap check in numa_fill_memblks() 2024-03-01 13:35:06 +01:00
memcontrol.c emm: allow modules to access more mm data 2024-04-03 16:58:54 +08:00
memfd.c revert "memfd: improve userspace warnings for missing exec-related flags". 2023-09-05 11:11:52 -07:00
memory-failure.c mm/memory-failure: pass the folio and the page to collect_procs() 2024-01-10 17:16:54 +01:00
memory-tiers.c memory tier: use helper macro __ATTR_RW() 2023-08-18 10:12:38 -07:00
memory.c mm/swap: fix race when skipping swapcache 2024-03-01 13:35:00 +01:00
memory_hotplug.c mm/memory_hotplug: fix memmap_on_memory sysfs value retrieval 2024-01-20 11:51:49 +01:00
mempolicy.c numa: Generalize numa_map_to_online_node() 2023-11-20 11:58:51 +01:00
mempool.c mempool: do not use ksize() for poisoning 2022-11-30 15:58:41 -08:00
memremap.c mm/memremap.c: fix outdated comment in devm_memremap_pages 2023-02-09 16:51:46 -08:00
memtest.c mm: memtest: convert to memtest_report_meminfo() 2023-08-21 13:37:47 -07:00
migrate.c mm: migrate: fix getting incorrect page mapping during page migration 2024-01-31 16:19:10 -08:00
migrate_device.c Add x86 shadow stack support 2023-08-31 12:20:12 -07:00
mincore.c mm: enable page walking API to lock vmas during the walk 2023-08-21 13:07:20 -07:00
mlock.c merge mm-hotfixes-stable into mm-stable to pick up depended-upon changes 2023-08-21 14:26:20 -07:00
mm_init.c efi: disable mirror feature during crashkernel 2024-01-31 16:18:56 -08:00
mm_slot.h mm: introduce common struct mm_slot 2022-10-03 14:02:43 -07:00
mmap.c vm: isolate max_map_count by pid namespace 2023-12-12 15:56:56 +08:00
mmap_lock.c
mmu_gather.c mm: fix kernel-doc warning from tlb_flush_rmaps() 2023-08-24 16:20:30 -07:00
mmu_notifier.c mmu_notifiers: rename invalidate_range notifier 2023-08-18 10:12:41 -07:00
mmzone.c mm: multi-gen LRU: groundwork 2022-09-26 19:46:09 -07:00
mprotect.c Add x86 shadow stack support 2023-08-31 12:20:12 -07:00
mremap.c vm: isolate max_map_count by pid namespace 2023-12-12 15:56:56 +08:00
msync.c mm/msync: use vma_find() instead of vma linked list 2022-09-26 19:46:25 -07:00
nommu.c vm: isolate max_map_count by pid namespace 2023-12-12 15:56:56 +08:00
oom_kill.c mm: remove redundant K() macro definition 2023-08-21 13:37:44 -07:00
page-writeback.c Merge remote-tracking branch 'stable/linux-6.6.y' into ocks-2401 2024-03-01 17:21:23 +08:00
page_alloc.c mm: page_alloc: unreserve highatomic page blocks before oom 2024-01-31 16:18:57 -08:00
page_counter.c mm: page_counter: remove unneeded atomic ops for low/min 2022-09-11 20:26:01 -07:00
page_ext.c mm/page_ext: move functions around for minor cleanups to page_ext 2023-08-18 10:12:31 -07:00
page_idle.c mm: page_idle: convert page idle to use a folio 2023-01-18 17:12:52 -08:00
page_io.c swap: expose required symbols for some 3rd part modules 2023-12-12 15:57:10 +08:00
page_isolation.c mm/hugetlb: get rid of page_hstate() 2023-08-18 10:12:39 -07:00
page_owner.c mm/page_ext: use page_ext_data helper in page_owner 2023-08-21 13:37:27 -07:00
page_poison.c mm/page_poison: remove unused page_ext.h from page_poison 2023-08-21 13:37:30 -07:00
page_reporting.c mm, treewide: redefine MAX_ORDER sanely 2023-04-05 19:42:46 -07:00
page_reporting.h
page_table_check.c mm: convert page_table_check_pte_set() to page_table_check_ptes_set() 2023-08-24 16:20:18 -07:00
page_vma_mapped.c mm: correct stale comment of function check_pte 2023-08-18 10:12:13 -07:00
pagewalk.c mm/pagewalk: fix bootstopping regression from extra pte_unmap() 2023-09-02 08:39:21 -07:00
percpu-internal.h percpu-internal/pcpu_chunk: re-layout pcpu_chunk structure to reduce false sharing 2023-06-19 16:19:29 -07:00
percpu-km.c
percpu-stats.c
percpu-vm.c
percpu.c mm: Introduce flush_cache_vmap_early() 2024-02-16 19:10:52 +01:00
pgalloc-track.h
pgtable-generic.c swap: expose required symbols for some 3rd part modules 2023-12-12 15:57:10 +08:00
process_vm_access.c mm/gup: remove unused vmas parameter from pin_user_pages_remote() 2023-06-09 16:25:25 -07:00
ptdump.c mm: ptdump should use ptep_get_lockless() 2023-06-19 16:19:24 -07:00
readahead.c vfs: fix readahead(2) on block devices 2023-11-20 11:58:52 +01:00
rmap.c mm: hugetlb: add huge page size param to set_huge_pte_at() 2023-09-29 17:20:47 -07:00
rodata_test.c mm/rodata_test: use PAGE_ALIGNED() helper 2022-10-03 14:03:05 -07:00
secretmem.c mm/secretmem: use a folio in secretmem_fault() 2023-08-21 13:38:02 -07:00
shmem.c Merge remote-tracking branch 'stable/linux-6.6.y' into ocks-2401 2024-03-01 17:21:23 +08:00
shmem_quota.c shmem: Add default quota limit mount options 2023-08-09 09:15:40 +02:00
show_mem.c xfs: add kmem_alloc_by_vmalloc and kmem_alloc_large_dump_stack sysctl 2023-12-12 15:56:58 +08:00
shrinker_debug.c Revert "mm: shrinkers: make count and scan in shrinker debugfs lockless" 2023-06-19 13:19:34 -07:00
shuffle.c mm/shuffle: convert module_param_call to module_param_cb 2022-10-03 14:03:07 -07:00
shuffle.h mm, treewide: redefine MAX_ORDER sanely 2023-04-05 19:42:46 -07:00
slab.c Randomized slab caches for kmalloc() 2023-07-18 10:07:47 +02:00
slab.h Randomized slab caches for kmalloc() 2023-07-18 10:07:47 +02:00
slab_common.c mm: slab: Do not create kmalloc caches smaller than arch_slab_minalign() 2023-10-11 15:24:49 +02:00
slub.c mm/slub.c: sanitize freelist pointer assignment even more 2023-12-12 15:57:11 +08:00
sparse-vmemmap.c mm/vmemmap: allow architectures to override how vmemmap optimization works 2023-08-18 10:12:53 -07:00
sparse.c mm/sparsemem: fix race in accessing memory_section->usage 2024-01-31 16:18:56 -08:00
swap.c emm: workingset: simplify and use a more intuitive model 2024-04-03 16:58:55 +08:00
swap.h mm/swap: fix race when skipping swapcache 2024-03-01 13:35:00 +01:00
swap_cgroup.c mm: memcontrol: don't allocate cgroup swap arrays when memcg is disabled 2022-10-03 14:03:36 -07:00
swap_slots.c mm/swap: convert put_swap_page() to put_swap_folio() 2022-10-03 14:02:46 -07:00
swap_state.c emm: swap: introduce AS_RAM_SWAP flag 2024-04-03 16:58:52 +08:00
swapfile.c emm: swap: introduce AS_RAM_SWAP flag 2024-04-03 16:58:52 +08:00
truncate.c - Some swap cleanups from Ma Wupeng ("fix WARN_ON in add_to_avail_list") 2023-08-29 14:25:26 -07:00
usercopy.c mm: Fix copy_from_user_nofault(). 2023-04-12 17:36:23 -07:00
userfaultfd.c userfaultfd: fix mmap_changing checking in mfill_atomic_hugetlb 2024-02-23 09:24:53 +01:00
util.c parisc: fix mmap_base calculation when stack grows upwards 2023-11-28 17:20:08 +00:00
vmalloc.c mm: hugetlb: add huge page size param to set_huge_pte_at() 2023-09-29 17:20:47 -07:00
vmpressure.c net-memcg: Fix scope of sockmem pressure indicators 2023-08-16 12:21:32 +01:00
vmscan.c emm: workingset: simplify and use a more intuitive model 2024-04-03 16:58:55 +08:00
vmstat.c emm: allow modules to access more mm data 2024-04-03 16:58:54 +08:00
workingset.c emm: workingset: simplify and use a more intuitive model 2024-04-03 16:58:55 +08:00
z3fold.c mm/z3fold: remove obsolete comment for struct z3fold_pool 2023-08-21 13:37:51 -07:00
zbud.c mm: zswap: remove shrink from zpool interface 2023-06-19 16:19:27 -07:00
zpool.c mm: zswap: remove shrink from zpool interface 2023-06-19 16:19:27 -07:00
zsmalloc.c merge mm-hotfixes-stable into mm-stable to pick up depended-upon changes 2023-08-21 14:26:20 -07:00
zswap.c mm: zswap: fix missing folio cleanup in writeback race path 2024-03-01 13:35:10 +01:00