========================================
zram: Compressed RAM-based block devices
========================================

Introduction
============

The zram module creates RAM-based block devices named /dev/zram<id>
(<id> = 0, 1, ...). Pages written to these disks are compressed and stored
in memory itself. These disks allow very fast I/O and compression provides
good amounts of memory savings. Some of the use cases include /tmp storage,
use as swap disks, various caches under /var and maybe many more. :)

Statistics for individual zram devices are exported through sysfs nodes at
/sys/block/zram<id>/

Usage
=====

There are several ways to configure and manage zram devices:

a) using zram and zram_control sysfs attributes
b) using the zramctl utility, provided by util-linux (util-linux@vger.kernel.org).

In this document we describe only the 'manual' zram configuration steps,
IOW, the zram and zram_control sysfs attributes.

In order to get a better idea about zramctl please consult the util-linux
documentation, the zramctl man-page or `zramctl --help`. Please be informed
that the zram maintainers do not develop/maintain util-linux or zramctl; should
you have any questions, please contact util-linux@vger.kernel.org.

The following shows a typical sequence of steps for using zram.

WARNING
=======

For the sake of simplicity we skip the error checking parts in most of the
examples below. However, it is your sole responsibility to handle errors.

zram sysfs attributes always return negative values in case of errors.
The list of possible return codes:

======== =============================================================
-EBUSY   an attempt to modify an attribute that cannot be changed once
         the device has been initialised. Please reset the device first.
-ENOMEM  zram was not able to allocate enough memory to fulfil your
         needs.
-EINVAL  invalid input has been provided.
======== =============================================================

If you use 'echo', the returned value is set by the 'echo' utility,
and, in the general case, something like::

        echo 3 > /sys/block/zram0/max_comp_streams
        if [ $? -ne 0 ]; then
                handle_error
        fi

should suffice.
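
For scripts that set several attributes in a row, it can help to keep the
write and the error check in one place. A minimal sketch (the zram_set helper
name is hypothetical; the attribute paths are the ones documented here)::

        # write a value to a zram sysfs attribute and report failures
        zram_set() {
                local attr="$1" value="$2"
                if ! echo "$value" > "$attr"; then
                        echo "failed to write '$value' to $attr" >&2
                        return 1
                fi
        }

        zram_set /sys/block/zram0/comp_algorithm lzo || handle_error
        zram_set /sys/block/zram0/disksize 512M || handle_error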

1) Load Module
==============

::

        modprobe zram num_devices=4

This creates 4 devices: /dev/zram{0,1,2,3}

num_devices parameter is optional and tells zram how many devices should be
pre-created. Default: 1.
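
To get the same device count on every boot, the standard module configuration
mechanism can be used. A sketch (the file paths follow common distribution
conventions and are illustrative, not part of zram itself)::

        # load zram at boot
        echo zram > /etc/modules-load.d/zram.conf
        # pass the num_devices parameter whenever the module is loaded
        echo "options zram num_devices=4" > /etc/modprobe.d/zram.conf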

2) Set max number of compression streams
========================================

Regardless of the value passed to this attribute, ZRAM will always
allocate multiple compression streams - one per online CPU - thus
allowing several concurrent compression operations. The number of
allocated compression streams goes down when some of the CPUs
become offline. There is no single-compression-stream mode anymore,
unless you are running a UP system or have only 1 CPU online.

To find out how many streams are currently available::

        cat /sys/block/zram0/max_comp_streams
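
Since streams are allocated per online CPU, the reported value should
normally match the number of online CPUs. A quick sanity check (nproc is
part of coreutils, not of zram)::

        echo "online CPUs:      $(nproc)"
        echo "max_comp_streams: $(cat /sys/block/zram0/max_comp_streams)"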

3) Select compression algorithm
===============================

Using the comp_algorithm device attribute one can see the available and
currently selected (shown in square brackets) compression algorithms,
or change the selected compression algorithm (once the device is initialised
there is no way to change the compression algorithm).

Examples::

        #show supported compression algorithms
        cat /sys/block/zram0/comp_algorithm
        lzo [lz4]

        #select lzo compression algorithm
        echo lzo > /sys/block/zram0/comp_algorithm

For the time being, the `comp_algorithm` content does not necessarily
show every compression algorithm supported by the kernel. We keep this
list primarily to simplify device configuration: one can configure
a new device with a compression algorithm that is not listed in
`comp_algorithm`. The reason is that, internally, ZRAM uses the Crypto API
and, if some of the algorithms were built as modules, it's impossible
to list all of them using, for instance, /proc/crypto or any other
method. This, however, has the advantage of permitting the use of
custom crypto compression modules (implementing S/W or H/W compression).
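
For example, an algorithm whose crypto module has not been loaded yet may be
missing from the list, but selecting it can still succeed because zram asks
the Crypto API for it. A sketch (the listed algorithms and the availability
of lz4hc depend entirely on the kernel configuration)::

        cat /sys/block/zram0/comp_algorithm
        [lzo] lz4
        # lz4hc is not listed, but the Crypto API may still provide it
        echo lz4hc > /sys/block/zram0/comp_algorithm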

4) Set Disksize
===============

Set the disk size by writing the value to the sysfs node 'disksize'.
The value can be either in bytes or you can use mem suffixes.
Examples::

        # Initialize /dev/zram0 with 50MB disksize
        echo $((50*1024*1024)) > /sys/block/zram0/disksize

        # Using mem suffixes
        echo 256K > /sys/block/zram0/disksize
        echo 512M > /sys/block/zram0/disksize
        echo 1G > /sys/block/zram0/disksize

Note:
There is little point in creating a zram device larger than twice the size
of memory, since we expect a 2:1 compression ratio. Note that zram uses
about 0.1% of the size of the disk when not in use, so a huge zram is
wasteful.
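
A sizing rule that follows from the note above is to derive the disksize from
the amount of RAM. A sketch that sets it to half of MemTotal (the 50% figure
is just an example, not a recommendation)::

        # MemTotal is reported in kB in /proc/meminfo
        mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
        echo $((mem_kb * 1024 / 2)) > /sys/block/zram0/disksize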

5) Set memory limit: Optional
=============================

Set the memory limit by writing the value to the sysfs node 'mem_limit'.
The value can be either in bytes or you can use mem suffixes.
In addition, you can change the value at runtime.
Examples::

        # limit /dev/zram0 with 50MB memory
        echo $((50*1024*1024)) > /sys/block/zram0/mem_limit

        # Using mem suffixes
        echo 256K > /sys/block/zram0/mem_limit
        echo 512M > /sys/block/zram0/mem_limit
        echo 1G > /sys/block/zram0/mem_limit

        # To disable memory limit
        echo 0 > /sys/block/zram0/mem_limit
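
Whether the limit is being approached can be judged by comparing the memory
actually used with the configured limit; both are exported in mm_stat
(see section 8, where mem_used_total and mem_limit are documented as the 3rd
and 4th fields). A sketch::

        awk '{ printf "used: %d bytes, limit: %d bytes\n", $3, $4 }' \
                /sys/block/zram0/mm_stat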

6) Activate
===========

::

        mkswap /dev/zram0
        swapon /dev/zram0

        mkfs.ext4 /dev/zram1
        mount /dev/zram1 /tmp
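
When zram swap coexists with disk-based swap, it is common to give the zram
device a higher priority so that it is used first. A sketch (the priority
value 100 is arbitrary and /dev/sdb2 stands in for an existing disk swap
partition)::

        mkswap /dev/zram0
        swapon -p 100 /dev/zram0
        # disk swap keeps a lower priority and acts as overflow
        swapon -p 10 /dev/sdb2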

7) Add/remove zram devices
==========================

zram provides a control interface, which enables dynamic (on-demand) device
addition and removal.

In order to add a new /dev/zramX device, perform a read operation on the
hot_add attribute. This will return either the new device's device id
(meaning that you can use /dev/zram<id>) or an error code.

Example::

        cat /sys/class/zram-control/hot_add
        1

To remove the existing /dev/zramX device (where X is a device id)
execute::

        echo X > /sys/class/zram-control/hot_remove
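
Because the read of hot_add returns the new device id, a script can capture
it and configure the device in one go. A sketch (the mount point is
illustrative)::

        id=$(cat /sys/class/zram-control/hot_add)
        echo 512M > /sys/block/zram$id/disksize
        mkfs.ext4 /dev/zram$id
        mount /dev/zram$id /mnt/scratch
        # ... use the device, then tear it down again
        umount /mnt/scratch
        echo $id > /sys/class/zram-control/hot_remove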

8) Stats
========

Per-device statistics are exported as various nodes under /sys/block/zram<id>/

A brief description of exported device attributes follows. For more details
please read Documentation/ABI/testing/sysfs-block-zram.

====================== ====== ===============================================
Name                   access description
====================== ====== ===============================================
disksize               RW     show and set the device's disk size
initstate              RO     shows the initialization state of the device
reset                  WO     trigger device reset
mem_used_max           WO     reset the `mem_used_max` counter (see later)
mem_limit              WO     specifies the maximum amount of memory ZRAM can
                              use to store the compressed data
writeback_limit        WO     specifies the maximum amount of write IO zram
                              can write out to backing device as 4KB unit
writeback_limit_enable RW     show and set writeback_limit feature
max_comp_streams       RW     the number of possible concurrent compress
                              operations
comp_algorithm         RW     show and change the compression algorithm
compact                WO     trigger memory compaction
debug_stat             RO     this file is used for zram debugging purposes
backing_dev            RW     set up backend storage for zram to write out
idle                   WO     mark allocated slot as idle
====================== ====== ===============================================

User space is advised to use the following files to read the device statistics.

File /sys/block/zram<id>/stat

Represents block layer statistics. Read Documentation/block/stat.rst for
details.

File /sys/block/zram<id>/io_stat

The io_stat file represents the device's I/O statistics not accounted by the
block layer and, thus, not available in the zram<id>/stat file. It consists
of a single line of text and contains the following stats separated by
whitespace:

============= =============================================================
failed_reads  The number of failed reads
failed_writes The number of failed writes
invalid_io    The number of non-page-size-aligned I/O requests
notify_free   Depending on device usage scenario it may account
              a) the number of pages freed because of swap slot free
              notifications
              b) the number of pages freed because of
              REQ_OP_DISCARD requests sent by bio. The former ones are
              sent to a swap block device when a swap slot is freed,
              which implies that this disk is being used as a swap disk.
              The latter ones are sent by a filesystem mounted with the
              discard option, whenever some data blocks are getting
              discarded.
============= =============================================================

File /sys/block/zram<id>/mm_stat

The mm_stat file represents the device's mm statistics. It consists of a single
line of text and contains the following stats separated by whitespace:

================ =============================================================
orig_data_size   uncompressed size of data stored in this disk.
                 Unit: bytes
compr_data_size  compressed size of data stored in this disk
mem_used_total   the amount of memory allocated for this disk. This
                 includes allocator fragmentation and metadata overhead,
                 allocated for this disk. So, allocator space efficiency
                 can be calculated using compr_data_size and this statistic.
                 Unit: bytes
mem_limit        the maximum amount of memory ZRAM can use to store
                 the compressed data
mem_used_max     the maximum amount of memory zram has consumed to
                 store the data
same_pages       the number of same element filled pages written to this disk.
                 No memory is allocated for such pages.
pages_compacted  the number of pages freed during compaction
huge_pages       the number of incompressible pages
================ =============================================================
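
Since orig_data_size and compr_data_size are the first two fields of mm_stat,
the effective compression ratio can be computed directly from it. A sketch
(awk is only used for the arithmetic)::

        awk '{ if ($2 > 0) printf "compression ratio: %.2f\n", $1 / $2 }' \
                /sys/block/zram0/mm_stat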

File /sys/block/zram<id>/bd_stat

The bd_stat file represents a device's backing device statistics. It consists of
a single line of text and contains the following stats separated by whitespace:

============== =============================================================
bd_count       size of data written in backing device.
               Unit: 4K bytes
bd_reads       the number of reads from backing device
               Unit: 4K bytes
bd_writes      the number of writes to backing device
               Unit: 4K bytes
============== =============================================================

9) Deactivate
=============

::

        swapoff /dev/zram0
        umount /dev/zram1

10) Reset
=========

Write any positive value to the 'reset' sysfs node::

        echo 1 > /sys/block/zram0/reset
        echo 1 > /sys/block/zram1/reset

This frees all the memory allocated for the given device and
resets the disksize to zero. You must set the disksize again
before reusing the device.
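
A complete teardown of a swap-backed zram device therefore combines steps
9) and 10), and can optionally drop the device entirely. A sketch::

        swapoff /dev/zram0
        echo 1 > /sys/block/zram0/reset
        # optional: remove the device (see section 7)
        echo 0 > /sys/class/zram-control/hot_remove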

Optional Feature
================

writeback
---------

With CONFIG_ZRAM_WRITEBACK, zram can write idle/incompressible pages
to backing storage rather than keeping them in memory.
To use the feature, the admin should set up the backing device via::

        echo /dev/sda5 > /sys/block/zramX/backing_dev

before setting the disksize. Only a partition is supported at the moment.
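
Note that the ordering matters: the backing device has to be configured
before the disksize is set. A sketch of a full writeback-capable setup
(/dev/sda5 stands in for whatever partition is reserved for this)::

        echo /dev/sda5 > /sys/block/zram0/backing_dev
        echo 1G > /sys/block/zram0/disksize
        mkswap /dev/zram0
        swapon /dev/zram0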

If the admin wants to use incompressible page writeback, they can do so via::

        echo huge > /sys/block/zramX/writeback

To use idle page writeback, the user first needs to declare zram pages
as idle::

        echo all > /sys/block/zramX/idle

From now on, any pages on zram are idle pages. The idle mark
is removed only when someone requests access to the block.
IOW, unless there is an access request, those pages remain idle pages.

The admin can request writeback of those idle pages at the right time via::
|
|
|
|
2019-04-19 04:29:24 +08:00
|
|
|
    echo idle > /sys/block/zramX/writeback

With this command, zram writes back idle pages from memory to the storage.

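For example, the whole mark-then-writeback cycle can be scripted. The sketch
below is only illustrative and assumes a single zram0 device and a one-hour
idle window; adjust both to your workload::

    #!/bin/sh
    # Illustrative sketch: mark all allocated zram0 slots idle, give the
    # workload time to touch its hot pages, then write back what is left.
    DEV=/sys/block/zram0

    echo all > "$DEV/idle" || exit 1           # mark every allocated slot idle
    sleep 3600                                 # hot pages lose the idle mark here
    echo idle > "$DEV/writeback" || exit 1     # write back still-idle pages
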
If there is a lot of write I/O to a flash device, it can potentially suffer
from flash wearout, so the admin needs to design a write limitation to
guarantee storage health for the entire product life.

To overcome this concern, zram supports the "writeback_limit" feature.
The default value of "writeback_limit_enable" is 0, so it does not limit
any writeback. IOW, if admin wants to apply a writeback budget, they should
enable writeback_limit_enable via::

    $ echo 1 > /sys/block/zramX/writeback_limit_enable

Once writeback_limit_enable is set, zram doesn't allow any writeback
until admin sets the budget via /sys/block/zramX/writeback_limit.

(If admin doesn't enable writeback_limit_enable, the writeback_limit value
assigned via /sys/block/zramX/writeback_limit is meaningless.)

If admin wants to limit writeback to 400MB per day, they could do it
like below::

    $ MB_SHIFT=20
    $ FOURK_SHIFT=12
    $ echo $((400<<MB_SHIFT>>FOURK_SHIFT)) > \
        /sys/block/zram0/writeback_limit
    $ echo 1 > /sys/block/zram0/writeback_limit_enable

If admin wants to allow further writeback again once the budget is
exhausted, they could refill the budget like below::

    $ echo $((400<<MB_SHIFT>>FOURK_SHIFT)) > \
        /sys/block/zram0/writeback_limit

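To keep a daily budget like the 400MB example above in place over time, the
refill can be automated. This is a minimal sketch, assuming a zram0 device
and some scheduler (e.g. cron) invoking it once per day; the budget value
and the error message are illustrative::

    #!/bin/sh
    # Hypothetical daily refill job for the zram0 writeback budget.
    # writeback_limit takes the budget in units of 4K pages.
    MB_SHIFT=20
    FOURK_SHIFT=12
    BUDGET_MB=400

    echo $((BUDGET_MB<<MB_SHIFT>>FOURK_SHIFT)) > /sys/block/zram0/writeback_limit
    if [ $? -ne 0 ]; then
        echo "failed to refill zram writeback budget" >&2
        exit 1
    fi
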
If admin wants to see the remaining writeback budget since it was last
set::

    $ cat /sys/block/zramX/writeback_limit

If admin wants to disable the writeback limit, they could do::

    $ echo 0 > /sys/block/zramX/writeback_limit_enable

The writeback_limit count will reset whenever you reset zram (e.g.,
system reboot, echo 1 > /sys/block/zramX/reset), so it is the user's job
to keep track of how much writeback happened before the reset and to
allocate an extra writeback budget in the next setting accordingly.

If admin wants to measure the writeback count over a certain period, they
can read it from the 3rd column of /sys/block/zram0/bd_stat.

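As a minimal sketch of such a measurement, assuming zram0 is the device of
interest and the 3rd column of bd_stat is the cumulative writeback counter
mentioned above, the counter can be sampled before and after the period::

    #!/bin/sh
    # Illustrative measurement of writeback during a fixed interval.
    before=$(awk '{print $3}' /sys/block/zram0/bd_stat)
    sleep 86400                               # measurement period: one day
    after=$(awk '{print $3}' /sys/block/zram0/bd_stat)
    echo "writeback during period: $((after - before))"
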
memory tracking
===============

With CONFIG_ZRAM_MEMORY_TRACKING, the user can get information about each
zram block. It could be useful to catch cold or incompressible pages of
a process together with pagemap.

If you enable the feature, you can see the block state via
/sys/kernel/debug/zram/zram0/block_state. The output is as follows::

      300    75.033841 .wh.
      301    63.806904 s...
      302    63.806919 ..hi

First column
    zram's block index.
Second column
    access time since the system was booted
Third column
    state of the block:

    s:
        same page
    w:
        written page to backing store
    h:
        huge page
    i:
        idle page

The first line of the above example says the 300th block was accessed at
75.033841 sec and the block's state is huge, so it has been written back
to the backing storage. It's a debugging feature, so nobody should rely on
it to work properly.

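Even as a debugging aid, the flags can be summarised quickly. The following
is a minimal sketch, assuming the three-column block_state format shown
above with the state string in the third field::

    # Illustrative one-liner: count blocks per state flag in block_state.
    awk '{
        if ($3 ~ /s/) same++
        if ($3 ~ /w/) written++
        if ($3 ~ /h/) huge++
        if ($3 ~ /i/) idle++
    } END {
        printf "same: %d written: %d huge: %d idle: %d\n", same, written, huge, idle
    }' /sys/kernel/debug/zram/zram0/block_state
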
Nitin Gupta
ngupta@vflare.org