/*
 *  LZO1X Compressor from LZO
 *
 *  Copyright (C) 1996-2012 Markus F.X.J. Oberhumer <markus@oberhumer.com>
 *
 *  The full LZO package can be found at:
 *  http://www.oberhumer.com/opensource/lzo/
 *
 *  Changed for Linux kernel use by:
 *  Nitin Gupta <nitingupta910@gmail.com>
 *  Richard Purdie <rpurdie@openedhand.com>
 */

#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/unaligned.h>
#include <linux/lzo.h>
#include "lzodefs.h"
|
static noinline size_t
lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
		    unsigned char *out, size_t *out_len,
		    size_t ti, void *wrkmem, signed char *state_offset,
		    const unsigned char bitstream_version)
{
	const unsigned char *ip;
	unsigned char *op;
	const unsigned char * const in_end = in + in_len;
	const unsigned char * const ip_end = in + in_len - 20;
	const unsigned char *ii;
	lzo_dict_t * const dict = (lzo_dict_t *) wrkmem;

	op = out;
	ip = in;
	ii = ip;
	ip += ti < 4 ? 4 - ti : 0;

	for (;;) {
|
		const unsigned char *m_pos = NULL;
		size_t t, m_len, m_off;
		u32 dv;
		u32 run_length = 0;
literal:
		ip += 1 + ((ip - ii) >> 5);
next:
		if (unlikely(ip >= ip_end))
			break;
		dv = get_unaligned_le32(ip);

		/*
		 * bitstream_version != 0 enables run-length encoding: on a
		 * zero dword, scan ahead for the full run of zero bytes so
		 * it can be emitted as a single RLE instruction.
		 */
		if (dv == 0 && bitstream_version) {
			const unsigned char *ir = ip + 4;
			const unsigned char *limit = ip_end
				< (ip + MAX_ZERO_RUN_LENGTH + 1)
				? ip_end : ip + MAX_ZERO_RUN_LENGTH + 1;
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \
	defined(LZO_FAST_64BIT_MEMORY_ACCESS)
			u64 dv64;

			for (; (ir + 32) <= limit; ir += 32) {
				dv64 = get_unaligned((u64 *)ir);
				dv64 |= get_unaligned((u64 *)ir + 1);
				dv64 |= get_unaligned((u64 *)ir + 2);
				dv64 |= get_unaligned((u64 *)ir + 3);
				if (dv64)
					break;
			}
			for (; (ir + 8) <= limit; ir += 8) {
				dv64 = get_unaligned((u64 *)ir);
				if (dv64) {
# if defined(__LITTLE_ENDIAN)
					ir += __builtin_ctzll(dv64) >> 3;
# elif defined(__BIG_ENDIAN)
					ir += __builtin_clzll(dv64) >> 3;
# else
# error "missing endian definition"
# endif
					break;
				}
			}
#else
			while ((ir < (const unsigned char *)
					ALIGN((uintptr_t)ir, 4)) &&
					(ir < limit) && (*ir == 0))
				ir++;
			for (; (ir + 4) <= limit; ir += 4) {
				dv = *((u32 *)ir);
				if (dv) {
# if defined(__LITTLE_ENDIAN)
					ir += __builtin_ctz(dv) >> 3;
# elif defined(__BIG_ENDIAN)
					ir += __builtin_clz(dv) >> 3;
# else
# error "missing endian definition"
# endif
					break;
				}
			}
#endif
			while (likely(ir < limit) && unlikely(*ir == 0))
				ir++;
			run_length = ir - ip;
			if (run_length > MAX_ZERO_RUN_LENGTH)
				run_length = MAX_ZERO_RUN_LENGTH;
		} else {
			t = ((dv * 0x1824429d) >> (32 - D_BITS)) & D_MASK;
			m_pos = in + dict[t];
			dict[t] = (lzo_dict_t) (ip - in);
			if (unlikely(dv != get_unaligned_le32(m_pos)))
				goto literal;
		}

		ii -= ti;
		ti = 0;
		t = ip - ii;
		if (t != 0) {
			if (t <= 3) {
				op[*state_offset] |= t;
				COPY4(op, ii);
				op += t;
			} else if (t <= 16) {
				*op++ = (t - 3);
				COPY8(op, ii);
				COPY8(op + 8, ii + 8);
				op += t;
			} else {
				if (t <= 18) {
					*op++ = (t - 3);
				} else {
					size_t tt = t - 18;
					*op++ = 0;
					while (unlikely(tt > 255)) {
						tt -= 255;
						*op++ = 0;
					}
					*op++ = tt;
				}
				do {
					COPY8(op, ii);
					COPY8(op + 8, ii + 8);
					op += 16;
					ii += 16;
					t -= 16;
				} while (t >= 16);
				if (t > 0) do {
					*op++ = *ii++;
				} while (--t > 0);
			}
		}

		if (unlikely(run_length)) {
			/*
			 * Emit a single 4-byte RLE instruction for the zero
			 * run; the stored length is biased by
			 * MIN_ZERO_RUN_LENGTH.
			 */
			ip += run_length;
			run_length -= MIN_ZERO_RUN_LENGTH;
			put_unaligned_le32((run_length << 21) | 0xfffc18
					   | (run_length & 0x7), op);
			op += 4;
			run_length = 0;
			*state_offset = -3;
			goto finished_writing_instruction;
		}

		m_len = 4;
		{
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && defined(LZO_USE_CTZ64)
		u64 v;
		v = get_unaligned((const u64 *) (ip + m_len)) ^
		    get_unaligned((const u64 *) (m_pos + m_len));
		if (unlikely(v == 0)) {
			do {
				m_len += 8;
				v = get_unaligned((const u64 *) (ip + m_len)) ^
				    get_unaligned((const u64 *) (m_pos + m_len));
				if (unlikely(ip + m_len >= ip_end))
					goto m_len_done;
			} while (v == 0);
		}
# if defined(__LITTLE_ENDIAN)
		m_len += (unsigned) __builtin_ctzll(v) / 8;
# elif defined(__BIG_ENDIAN)
		m_len += (unsigned) __builtin_clzll(v) / 8;
# else
# error "missing endian definition"
# endif
#elif defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && defined(LZO_USE_CTZ32)
		u32 v;
		v = get_unaligned((const u32 *) (ip + m_len)) ^
		    get_unaligned((const u32 *) (m_pos + m_len));
		if (unlikely(v == 0)) {
			do {
				m_len += 4;
				v = get_unaligned((const u32 *) (ip + m_len)) ^
				    get_unaligned((const u32 *) (m_pos + m_len));
				if (v != 0)
					break;
				m_len += 4;
				v = get_unaligned((const u32 *) (ip + m_len)) ^
				    get_unaligned((const u32 *) (m_pos + m_len));
				if (unlikely(ip + m_len >= ip_end))
					goto m_len_done;
			} while (v == 0);
		}
# if defined(__LITTLE_ENDIAN)
		m_len += (unsigned) __builtin_ctz(v) / 8;
# elif defined(__BIG_ENDIAN)
		m_len += (unsigned) __builtin_clz(v) / 8;
# else
# error "missing endian definition"
# endif
#else
		if (unlikely(ip[m_len] == m_pos[m_len])) {
			do {
				m_len += 1;
				if (ip[m_len] != m_pos[m_len])
					break;
				m_len += 1;
				if (ip[m_len] != m_pos[m_len])
					break;
				m_len += 1;
				if (ip[m_len] != m_pos[m_len])
					break;
				m_len += 1;
				if (ip[m_len] != m_pos[m_len])
					break;
				m_len += 1;
				if (ip[m_len] != m_pos[m_len])
					break;
				m_len += 1;
				if (ip[m_len] != m_pos[m_len])
					break;
				m_len += 1;
				if (ip[m_len] != m_pos[m_len])
					break;
				m_len += 1;
				if (unlikely(ip + m_len >= ip_end))
					goto m_len_done;
			} while (ip[m_len] == m_pos[m_len]);
		}
#endif
		}
m_len_done:
|
2007-07-11 08:22:24 +08:00
|
|
|
|
2012-08-13 23:25:44 +08:00
|
|
|
m_off = ip - m_pos;
|
|
|
|
ip += m_len;
|
|
|
|
if (m_len <= M2_MAX_LEN && m_off <= M2_MAX_OFFSET) {
|
|
|
|
m_off -= 1;
|
|
|
|
*op++ = (((m_len - 1) << 5) | ((m_off & 7) << 2));
|
|
|
|
*op++ = (m_off >> 3);
|
|
|
|
} else if (m_off <= M3_MAX_OFFSET) {
|
|
|
|
m_off -= 1;
|
|
|
|
if (m_len <= M3_MAX_LEN)
|
2007-07-11 08:22:24 +08:00
|
|
|
*op++ = (M3_MARKER | (m_len - 2));
|
2012-08-13 23:25:44 +08:00
|
|
|
else {
|
|
|
|
m_len -= M3_MAX_LEN;
|
|
|
|
*op++ = M3_MARKER | 0;
|
|
|
|
while (unlikely(m_len > 255)) {
|
|
|
|
m_len -= 255;
|
|
|
|
*op++ = 0;
|
|
|
|
}
|
|
|
|
*op++ = (m_len);
|
2007-07-11 08:22:24 +08:00
|
|
|
}
|
2012-08-13 23:25:44 +08:00
|
|
|
*op++ = (m_off << 2);
|
|
|
|
*op++ = (m_off >> 6);
|
2007-07-11 08:22:24 +08:00
|
|
|
} else {
|
2012-08-13 23:25:44 +08:00
|
|
|
m_off -= 0x4000;
|
|
|
|
if (m_len <= M4_MAX_LEN)
|
|
|
|
*op++ = (M4_MARKER | ((m_off >> 11) & 8)
|
2007-07-11 08:22:24 +08:00
|
|
|
| (m_len - 2));
|
2012-08-13 23:25:44 +08:00
|
|
|
else {
|
|
|
|
m_len -= M4_MAX_LEN;
|
|
|
|
*op++ = (M4_MARKER | ((m_off >> 11) & 8));
|
|
|
|
while (unlikely(m_len > 255)) {
|
|
|
|
m_len -= 255;
|
|
|
|
*op++ = 0;
|
2007-07-11 08:22:24 +08:00
|
|
|
}
|
2012-08-13 23:25:44 +08:00
|
|
|
*op++ = (m_len);
|
2007-07-11 08:22:24 +08:00
|
|
|
}
|
2012-08-13 23:25:44 +08:00
|
|
|
*op++ = (m_off << 2);
|
2007-07-11 08:22:24 +08:00
|
|
|
*op++ = (m_off >> 6);
|
|
|
|
}
|
		*state_offset = -2;
finished_writing_instruction:
		ii = ip;
		goto next;
	}

	*out_len = op - out;
	return in_end - (ii - ti);
}

int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len,
			    unsigned char *out, size_t *out_len,
			    void *wrkmem, const unsigned char bitstream_version)
{
	const unsigned char *ip = in;
	unsigned char *op = out;
	size_t l = in_len;
	size_t t = 0;
	signed char state_offset = -2;
	unsigned int m4_max_offset;

	// LZO v0 will never write 17 as first byte,
	// so this is used to version the bitstream
	if (bitstream_version > 0) {
		*op++ = 17;
		*op++ = bitstream_version;
		m4_max_offset = M4_MAX_OFFSET_V1;
	} else {
		m4_max_offset = M4_MAX_OFFSET_V0;
	}

	while (l > 20) {
		size_t ll = l <= (m4_max_offset + 1) ? l : (m4_max_offset + 1);
		uintptr_t ll_end = (uintptr_t) ip + ll;
		if ((ll_end + ((t + ll) >> 5)) <= ll_end)
			break;
		BUILD_BUG_ON(D_SIZE * sizeof(lzo_dict_t) > LZO1X_1_MEM_COMPRESS);
		memset(wrkmem, 0, D_SIZE * sizeof(lzo_dict_t));
		t = lzo1x_1_do_compress(ip, ll, op, out_len, t, wrkmem,
					&state_offset, bitstream_version);
		ip += ll;
		op += *out_len;
		l  -= ll;
	}
	t += l;

	if (t > 0) {
		const unsigned char *ii = in + in_len - t;

		if (op == out && t <= 238) {
			*op++ = (17 + t);
		} else if (t <= 3) {
			op[state_offset] |= t;
		} else if (t <= 18) {
			*op++ = (t - 3);
		} else {
			size_t tt = t - 18;
			*op++ = 0;
			while (tt > 255) {
				tt -= 255;
				*op++ = 0;
			}
			*op++ = tt;
		}
		if (t >= 16) do {
			COPY8(op, ii);
			COPY8(op + 8, ii + 8);
			op += 16;
			ii += 16;
			t -= 16;
		} while (t >= 16);
		if (t > 0) do {
			*op++ = *ii++;
		} while (--t > 0);
	}

	*op++ = M4_MARKER | 1;
	*op++ = 0;
	*op++ = 0;

	*out_len = op - out;
	return LZO_E_OK;
}

int lzo1x_1_compress(const unsigned char *in, size_t in_len,
		     unsigned char *out, size_t *out_len,
		     void *wrkmem)
{
	return lzogeneric1x_1_compress(in, in_len, out, out_len, wrkmem, 0);
}

int lzorle1x_1_compress(const unsigned char *in, size_t in_len,
			unsigned char *out, size_t *out_len,
			void *wrkmem)
{
	return lzogeneric1x_1_compress(in, in_len, out, out_len,
				       wrkmem, LZO_VERSION);
}

EXPORT_SYMBOL_GPL(lzo1x_1_compress);
EXPORT_SYMBOL_GPL(lzorle1x_1_compress);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("LZO1X-1 Compressor");