ceph: fix blindly expanding the readahead windows

Blindly expanding the readahead window causes unnecessary
pagecache thrashing and adds needless network load. Don't expand
the window at all when readahead is disabled, and don't expand it
beyond the maximum readahead size.

Expand the window forward first, before expanding backward, to
better serve possible sequential reads.

Bound `rreq->len` to the actual file size to restore the previous page
cache usage.

Note that posix_fadvise may change a file's maximum readahead size.

Cc: stable@vger.kernel.org
Fixes: 4987005600 ("ceph: convert ceph_readpages to ceph_readahead")
Link: https://lore.kernel.org/ceph-devel/20230504082510.247-1-sehuww@mail.scut.edu.cn
Link: https://www.spinics.net/lists/ceph-users/msg76183.html
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Reviewed-and-tested-by: Hu Weiwen <sehuww@mail.scut.edu.cn>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Xiubo Li 2023-05-04 19:00:42 +08:00 committed by Ilya Dryomov
parent 23ee27dce3
commit dc94bb8f27
1 changed file with 33 additions and 7 deletions


@@ -187,16 +187,42 @@ static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq)
 	struct inode *inode = rreq->inode;
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	struct ceph_file_layout *lo = &ci->i_layout;
+	unsigned long max_pages = inode->i_sb->s_bdi->ra_pages;
+	loff_t end = rreq->start + rreq->len, new_end;
+	struct ceph_netfs_request_data *priv = rreq->netfs_priv;
+	unsigned long max_len;
 	u32 blockoff;
-	u64 blockno;
 
-	/* Expand the start downward */
-	blockno = div_u64_rem(rreq->start, lo->stripe_unit, &blockoff);
-	rreq->start = blockno * lo->stripe_unit;
-	rreq->len += blockoff;
+	if (priv) {
+		/* Readahead is disabled by posix_fadvise POSIX_FADV_RANDOM */
+		if (priv->file_ra_disabled)
+			max_pages = 0;
+		else
+			max_pages = priv->file_ra_pages;
+	}
 
-	/* Now, round up the length to the next block */
-	rreq->len = roundup(rreq->len, lo->stripe_unit);
+	/* Readahead is disabled */
+	if (!max_pages)
+		return;
+
+	max_len = max_pages << PAGE_SHIFT;
+
+	/*
+	 * Try to expand the length forward by rounding up it to the next
+	 * block, but do not exceed the file size, unless the original
+	 * request already exceeds it.
+	 */
+	new_end = min(round_up(end, lo->stripe_unit), rreq->i_size);
+	if (new_end > end && new_end <= rreq->start + max_len)
+		rreq->len = new_end - rreq->start;
+
+	/* Try to expand the start downward */
+	div_u64_rem(rreq->start, lo->stripe_unit, &blockoff);
+	if (rreq->len + blockoff <= max_len) {
+		rreq->start -= blockoff;
+		rreq->len += blockoff;
+	}
 }
 
 static bool ceph_netfs_clamp_length(struct netfs_io_subrequest *subreq)