/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2008 Oracle. All rights reserved.
 */

#ifndef BTRFS_COMPRESSION_H
#define BTRFS_COMPRESSION_H

#include <linux/sizes.h>

struct btrfs_inode;

/*
 * We want to make sure that the amount of RAM required to uncompress an
 * extent is reasonable, so we limit the total size in RAM of a compressed
 * extent to 128k. This is a crucial number because it also controls how
 * easily we can spread reads across CPUs for decompression.
 *
 * We also want to make sure the amount of IO required to do a random read is
 * reasonably small, so we limit the size of a compressed extent to 128k.
 */

/* Maximum length of compressed data stored on disk */
#define BTRFS_MAX_COMPRESSED		(SZ_128K)
/* Maximum size of data before compression */
#define BTRFS_MAX_UNCOMPRESSED		(SZ_128K)
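/*
 * Neither limit caps what a user can write: delalloc is expected to split
 * larger dirty ranges into chunks of at most BTRFS_MAX_UNCOMPRESSED bytes,
 * each becoming its own compressed extent.
 */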
#define BTRFS_ZLIB_DEFAULT_LEVEL		3

struct compressed_bio {
	/* Number of bios pending for this compressed extent */
	refcount_t pending_bios;

	/* The pages with the compressed data on them */
	struct page **compressed_pages;

	/* Inode that owns this data */
	struct inode *inode;

	/* Starting offset in the inode for our pages */
	u64 start;

	/* Number of bytes in the inode we're working on */
	unsigned long len;

	/* Number of bytes on disk */
	unsigned long compressed_len;

	/* The compression algorithm for this bio */
	int compress_type;

	/* Number of compressed pages in the array */
	unsigned long nr_pages;

	/* IO errors */
	int errors;
	int mirror_num;

	/* For reads, this is the bio we are copying the data into */
	struct bio *orig_bio;

	/*
	 * The start of a variable length array of checksums, only used by
	 * reads
	 */
	u8 sums[];
};
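/*
 * The flexible sums[] array means a compressed_bio is allocated in one shot,
 * along these lines (a sketch of what compression.c does; nr_sectors and
 * csum_size come from the on-disk extent length and the checksum type):
 *
 *	cb = kmalloc(sizeof(struct compressed_bio) +
 *		     nr_sectors * csum_size, GFP_NOFS);
 */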
static inline unsigned int btrfs_compress_type(unsigned int type_level)
{
	return (type_level & 0xF);
}

static inline unsigned int btrfs_compress_level(unsigned int type_level)
{
	return ((type_level & 0xF0) >> 4);
}
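/*
 * The two helpers above decode a packed type_level value: the compression
 * type sits in the low 4 bits and the level in the next 4, so callers build
 * it as ((level << 4) | type).
 */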
void __init btrfs_init_compress(void);
void __cold btrfs_exit_compress(void);

int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
			 u64 start, struct page **pages,
			 unsigned long *out_pages,
			 unsigned long *total_in,
			 unsigned long *total_out);
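/*
 * For btrfs_compress_pages(), *out_pages returns how many entries of pages[]
 * were filled, *total_in how many input bytes were consumed and *total_out
 * how many compressed bytes were produced; the authoritative contract is the
 * kerneldoc in compression.c.
 */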
int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
		     unsigned long start_byte, size_t srclen, size_t destlen);
int btrfs_decompress_buf2page(const char *buf, unsigned long buf_start,
			      unsigned long total_out, u64 disk_start,
			      struct bio *bio);

blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
				  unsigned long len, u64 disk_start,
				  unsigned long compressed_len,
				  struct page **compressed_pages,
				  unsigned long nr_pages,
				  unsigned int write_flags,
				  struct cgroup_subsys_state *blkcg_css);
blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
					  int mirror_num, unsigned long bio_flags);

unsigned int btrfs_compress_str2level(unsigned int type, const char *str);

enum btrfs_compression_type {
	BTRFS_COMPRESS_NONE  = 0,
	BTRFS_COMPRESS_ZLIB  = 1,
	BTRFS_COMPRESS_LZO   = 2,
	BTRFS_COMPRESS_ZSTD  = 3,
	BTRFS_NR_COMPRESS_TYPES = 4,
};
struct workspace_manager {
	struct list_head idle_ws;
	spinlock_t ws_lock;
	/* Number of free workspaces */
	int free_ws;
	/* Total number of allocated workspaces */
	atomic_t total_ws;
	/* Waiters for a free workspace */
	wait_queue_head_t ws_wait;
};

struct list_head *btrfs_get_workspace(int type, unsigned int level);
void btrfs_put_workspace(int type, struct list_head *ws);
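/*
 * A sketch of the intended get/put pattern (the real callers are in
 * compression.c):
 *
 *	struct list_head *ws = btrfs_get_workspace(BTRFS_COMPRESS_ZSTD, 3);
 *	... compress or decompress with the per-type workspace ...
 *	btrfs_put_workspace(BTRFS_COMPRESS_ZSTD, ws);
 *
 * btrfs_get_workspace() may sleep on ws_wait when all allocated workspaces
 * are busy, so it must not be called from atomic context.
 */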
struct btrfs_compress_op {
	struct workspace_manager *workspace_manager;
	/* Maximum level supported by the compression algorithm */
	unsigned int max_level;
	unsigned int default_level;
};

/* The heuristic workspaces are managed via the 0th workspace manager */
#define BTRFS_NR_WORKSPACE_MANAGERS	BTRFS_NR_COMPRESS_TYPES
extern const struct btrfs_compress_op btrfs_heuristic_compress;
extern const struct btrfs_compress_op btrfs_zlib_compress;
extern const struct btrfs_compress_op btrfs_lzo_compress;
extern const struct btrfs_compress_op btrfs_zstd_compress;

const char *btrfs_compress_type2str(enum btrfs_compression_type type);
bool btrfs_compress_is_valid_type(const char *str, size_t len);

int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end);
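/*
 * btrfs_compress_heuristic() samples the range [start, end) and returns
 * nonzero when the data looks worth compressing; the sampling details
 * (entropy and repetition estimates) live in compression.c.
 */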
#endif