License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- When neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non-*/uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was under a */uapi/* path, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- If a file had some form of licensing information in it and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- When the two scanners agreed on the detected license(s), that became
the concluded license(s).
- When there was disagreement between the two scanners (one detected a
license but the other didn't, or they detected different licenses),
the file was inspected manually.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, Kate, Philippe, and Thomas logged over 70 hours of manual
review on the spreadsheet to determine the SPDX license identifiers to
apply to the source files, with confirmation in some cases by lawyers
working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15,000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors and have been fixed to reflect
the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
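For reference, the tag the script adds is a single comment on the first
line of each file. A sketch of the two comment styles mentioned above
(not the script's literal output):
// SPDX-License-Identifier: GPL-2.0	(source .c files)
/* SPDX-License-Identifier: GPL-2.0 */	(header files)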
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0

#include <linux/ceph/ceph_debug.h>

#include <linux/module.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#ifdef CONFIG_BLOCK
#include <linux/bio.h>
#endif

#include <linux/ceph/ceph_features.h>
#include <linux/ceph/libceph.h>
#include <linux/ceph/osd_client.h>
#include <linux/ceph/messenger.h>
#include <linux/ceph/decode.h>
#include <linux/ceph/auth.h>
#include <linux/ceph/pagelist.h>
#include <linux/ceph/striper.h>

#define OSD_OPREPLY_FRONT_LEN	512

static struct kmem_cache	*ceph_osd_request_cache;

static const struct ceph_connection_operations osd_con_ops;

/*
 * Implement client access to distributed object storage cluster.
 *
 * All data objects are stored within a cluster/cloud of OSDs, or
 * "object storage devices."  (Note that Ceph OSDs have _nothing_ to
 * do with the T10 OSD extensions to SCSI.)  Ceph OSDs are simply
 * remote daemons serving up and coordinating consistent and safe
 * access to storage.
 *
 * Cluster membership and the mapping of data objects onto storage devices
 * are described by the osd map.
 *
 * We keep track of pending OSD requests (read, write), resubmit
 * requests to different OSDs when the cluster topology/data layout
 * change, or retry the affected requests when the communications
 * channel with an OSD is reset.
 */

static void link_request(struct ceph_osd *osd, struct ceph_osd_request *req);
static void unlink_request(struct ceph_osd *osd, struct ceph_osd_request *req);
static void link_linger(struct ceph_osd *osd,
			struct ceph_osd_linger_request *lreq);
static void unlink_linger(struct ceph_osd *osd,
			  struct ceph_osd_linger_request *lreq);
static void clear_backoffs(struct ceph_osd *osd);

#if 1
static inline bool rwsem_is_wrlocked(struct rw_semaphore *sem)
{
	bool wrlocked = true;

	if (unlikely(down_read_trylock(sem))) {
		wrlocked = false;
		up_read(sem);
	}

	return wrlocked;
}
static inline void verify_osdc_locked(struct ceph_osd_client *osdc)
{
	WARN_ON(!rwsem_is_locked(&osdc->lock));
}
static inline void verify_osdc_wrlocked(struct ceph_osd_client *osdc)
{
	WARN_ON(!rwsem_is_wrlocked(&osdc->lock));
}
static inline void verify_osd_locked(struct ceph_osd *osd)
{
	struct ceph_osd_client *osdc = osd->o_osdc;

	WARN_ON(!(mutex_is_locked(&osd->lock) &&
		  rwsem_is_locked(&osdc->lock)) &&
		!rwsem_is_wrlocked(&osdc->lock));
}
static inline void verify_lreq_locked(struct ceph_osd_linger_request *lreq)
{
	WARN_ON(!mutex_is_locked(&lreq->lock));
}
#else
static inline void verify_osdc_locked(struct ceph_osd_client *osdc) { }
static inline void verify_osdc_wrlocked(struct ceph_osd_client *osdc) { }
static inline void verify_osd_locked(struct ceph_osd *osd) { }
static inline void verify_lreq_locked(struct ceph_osd_linger_request *lreq) { }
#endif

/*
 * calculate the mapping of a file extent onto an object, and fill out the
 * request accordingly.  shorten extent as necessary if it crosses an
 * object boundary.
 *
 * fill osd op in request message.
 */
static int calc_layout(struct ceph_file_layout *layout, u64 off, u64 *plen,
		       u64 *objnum, u64 *objoff, u64 *objlen)
{
	u64 orig_len = *plen;
	u32 xlen;

	/* object extent? */
	ceph_calc_file_object_mapping(layout, off, orig_len, objnum,
				      objoff, &xlen);
	*objlen = xlen;
	if (*objlen < orig_len) {
		*plen = *objlen;
		dout(" skipping last %llu, final file extent %llu~%llu\n",
		     orig_len - *plen, off, *plen);
	}

	dout("calc_layout objnum=%llx %llu~%llu\n", *objnum, *objoff, *objlen);
	return 0;
}

static void ceph_osd_data_init(struct ceph_osd_data *osd_data)
{
	memset(osd_data, 0, sizeof (*osd_data));
	osd_data->type = CEPH_OSD_DATA_TYPE_NONE;
}

/*
 * Consumes @pages if @own_pages is true.
 */
static void ceph_osd_data_pages_init(struct ceph_osd_data *osd_data,
			struct page **pages, u64 length, u32 alignment,
			bool pages_from_pool, bool own_pages)
{
	osd_data->type = CEPH_OSD_DATA_TYPE_PAGES;
	osd_data->pages = pages;
	osd_data->length = length;
	osd_data->alignment = alignment;
	osd_data->pages_from_pool = pages_from_pool;
	osd_data->own_pages = own_pages;
}

/*
 * Consumes a ref on @pagelist.
 */
static void ceph_osd_data_pagelist_init(struct ceph_osd_data *osd_data,
			struct ceph_pagelist *pagelist)
{
	osd_data->type = CEPH_OSD_DATA_TYPE_PAGELIST;
	osd_data->pagelist = pagelist;
}

#ifdef CONFIG_BLOCK
static void ceph_osd_data_bio_init(struct ceph_osd_data *osd_data,
				   struct ceph_bio_iter *bio_pos,
				   u32 bio_length)
{
	osd_data->type = CEPH_OSD_DATA_TYPE_BIO;
	osd_data->bio_pos = *bio_pos;
	osd_data->bio_length = bio_length;
}
#endif /* CONFIG_BLOCK */

static void ceph_osd_data_bvecs_init(struct ceph_osd_data *osd_data,
				     struct ceph_bvec_iter *bvec_pos,
				     u32 num_bvecs)
{
	osd_data->type = CEPH_OSD_DATA_TYPE_BVECS;
	osd_data->bvec_pos = *bvec_pos;
	osd_data->num_bvecs = num_bvecs;
}

static struct ceph_osd_data *
osd_req_op_raw_data_in(struct ceph_osd_request *osd_req, unsigned int which)
{
	BUG_ON(which >= osd_req->r_num_ops);

	return &osd_req->r_ops[which].raw_data_in;
}

struct ceph_osd_data *
osd_req_op_extent_osd_data(struct ceph_osd_request *osd_req,
			   unsigned int which)
{
	return osd_req_op_data(osd_req, which, extent, osd_data);
}
EXPORT_SYMBOL(osd_req_op_extent_osd_data);

void osd_req_op_raw_data_in_pages(struct ceph_osd_request *osd_req,
			unsigned int which, struct page **pages,
			u64 length, u32 alignment,
			bool pages_from_pool, bool own_pages)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_raw_data_in(osd_req, which);
	ceph_osd_data_pages_init(osd_data, pages, length, alignment,
				pages_from_pool, own_pages);
}
EXPORT_SYMBOL(osd_req_op_raw_data_in_pages);

void osd_req_op_extent_osd_data_pages(struct ceph_osd_request *osd_req,
			unsigned int which, struct page **pages,
			u64 length, u32 alignment,
			bool pages_from_pool, bool own_pages)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, extent, osd_data);
	ceph_osd_data_pages_init(osd_data, pages, length, alignment,
				pages_from_pool, own_pages);
}
EXPORT_SYMBOL(osd_req_op_extent_osd_data_pages);
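
/*
 * A hedged usage sketch, not part of the original file: attaching a page
 * vector as the data buffer of a previously initialized extent op via the
 * helper above.  "req", "pages" and "len" are assumed to exist.
 *
 *	osd_req_op_extent_osd_data_pages(req, 0, pages, len, 0, false, false);
 */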

void osd_req_op_extent_osd_data_pagelist(struct ceph_osd_request *osd_req,
			unsigned int which, struct ceph_pagelist *pagelist)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, extent, osd_data);
	ceph_osd_data_pagelist_init(osd_data, pagelist);
}
EXPORT_SYMBOL(osd_req_op_extent_osd_data_pagelist);

#ifdef CONFIG_BLOCK
void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req,
				    unsigned int which,
				    struct ceph_bio_iter *bio_pos,
				    u32 bio_length)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, extent, osd_data);
	ceph_osd_data_bio_init(osd_data, bio_pos, bio_length);
}
EXPORT_SYMBOL(osd_req_op_extent_osd_data_bio);
#endif /* CONFIG_BLOCK */

void osd_req_op_extent_osd_data_bvecs(struct ceph_osd_request *osd_req,
				      unsigned int which,
				      struct bio_vec *bvecs, u32 num_bvecs,
				      u32 bytes)
{
	struct ceph_osd_data *osd_data;
	struct ceph_bvec_iter it = {
		.bvecs = bvecs,
		.iter = { .bi_size = bytes },
	};

	osd_data = osd_req_op_data(osd_req, which, extent, osd_data);
	ceph_osd_data_bvecs_init(osd_data, &it, num_bvecs);
}
EXPORT_SYMBOL(osd_req_op_extent_osd_data_bvecs);

void osd_req_op_extent_osd_data_bvec_pos(struct ceph_osd_request *osd_req,
					 unsigned int which,
					 struct ceph_bvec_iter *bvec_pos)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, extent, osd_data);
	ceph_osd_data_bvecs_init(osd_data, bvec_pos, 0);
}
EXPORT_SYMBOL(osd_req_op_extent_osd_data_bvec_pos);

static void osd_req_op_cls_request_info_pagelist(
			struct ceph_osd_request *osd_req,
			unsigned int which, struct ceph_pagelist *pagelist)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, cls, request_info);
	ceph_osd_data_pagelist_init(osd_data, pagelist);
}

void osd_req_op_cls_request_data_pagelist(
			struct ceph_osd_request *osd_req,
			unsigned int which, struct ceph_pagelist *pagelist)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, cls, request_data);
	ceph_osd_data_pagelist_init(osd_data, pagelist);
	osd_req->r_ops[which].cls.indata_len += pagelist->length;
	osd_req->r_ops[which].indata_len += pagelist->length;
}
EXPORT_SYMBOL(osd_req_op_cls_request_data_pagelist);

void osd_req_op_cls_request_data_pages(struct ceph_osd_request *osd_req,
			unsigned int which, struct page **pages, u64 length,
			u32 alignment, bool pages_from_pool, bool own_pages)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, cls, request_data);
	ceph_osd_data_pages_init(osd_data, pages, length, alignment,
				pages_from_pool, own_pages);
	osd_req->r_ops[which].cls.indata_len += length;
	osd_req->r_ops[which].indata_len += length;
}
EXPORT_SYMBOL(osd_req_op_cls_request_data_pages);

void osd_req_op_cls_request_data_bvecs(struct ceph_osd_request *osd_req,
				       unsigned int which,
				       struct bio_vec *bvecs, u32 num_bvecs,
				       u32 bytes)
{
	struct ceph_osd_data *osd_data;
	struct ceph_bvec_iter it = {
		.bvecs = bvecs,
		.iter = { .bi_size = bytes },
	};

	osd_data = osd_req_op_data(osd_req, which, cls, request_data);
	ceph_osd_data_bvecs_init(osd_data, &it, num_bvecs);
	osd_req->r_ops[which].cls.indata_len += bytes;
	osd_req->r_ops[which].indata_len += bytes;
}
EXPORT_SYMBOL(osd_req_op_cls_request_data_bvecs);

void osd_req_op_cls_response_data_pages(struct ceph_osd_request *osd_req,
			unsigned int which, struct page **pages, u64 length,
			u32 alignment, bool pages_from_pool, bool own_pages)
{
	struct ceph_osd_data *osd_data;

	osd_data = osd_req_op_data(osd_req, which, cls, response_data);
	ceph_osd_data_pages_init(osd_data, pages, length, alignment,
				pages_from_pool, own_pages);
}
EXPORT_SYMBOL(osd_req_op_cls_response_data_pages);

static u64 ceph_osd_data_length(struct ceph_osd_data *osd_data)
{
	switch (osd_data->type) {
	case CEPH_OSD_DATA_TYPE_NONE:
		return 0;
	case CEPH_OSD_DATA_TYPE_PAGES:
		return osd_data->length;
	case CEPH_OSD_DATA_TYPE_PAGELIST:
		return (u64)osd_data->pagelist->length;
#ifdef CONFIG_BLOCK
	case CEPH_OSD_DATA_TYPE_BIO:
		return (u64)osd_data->bio_length;
#endif /* CONFIG_BLOCK */
	case CEPH_OSD_DATA_TYPE_BVECS:
		return osd_data->bvec_pos.iter.bi_size;
	default:
		WARN(true, "unrecognized data type %d\n", (int)osd_data->type);
		return 0;
	}
}

static void ceph_osd_data_release(struct ceph_osd_data *osd_data)
{
	if (osd_data->type == CEPH_OSD_DATA_TYPE_PAGES && osd_data->own_pages) {
		int num_pages;

		num_pages = calc_pages_for((u64)osd_data->alignment,
						(u64)osd_data->length);
		ceph_release_page_vector(osd_data->pages, num_pages);
	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_PAGELIST) {
		ceph_pagelist_release(osd_data->pagelist);
	}
	ceph_osd_data_init(osd_data);
}

static void osd_req_op_data_release(struct ceph_osd_request *osd_req,
			unsigned int which)
{
	struct ceph_osd_req_op *op;

	BUG_ON(which >= osd_req->r_num_ops);
	op = &osd_req->r_ops[which];

	switch (op->op) {
	case CEPH_OSD_OP_READ:
	case CEPH_OSD_OP_WRITE:
	case CEPH_OSD_OP_WRITEFULL:
		ceph_osd_data_release(&op->extent.osd_data);
		break;
	case CEPH_OSD_OP_CALL:
		ceph_osd_data_release(&op->cls.request_info);
		ceph_osd_data_release(&op->cls.request_data);
		ceph_osd_data_release(&op->cls.response_data);
		break;
	case CEPH_OSD_OP_SETXATTR:
	case CEPH_OSD_OP_CMPXATTR:
		ceph_osd_data_release(&op->xattr.osd_data);
		break;
	case CEPH_OSD_OP_STAT:
		ceph_osd_data_release(&op->raw_data_in);
		break;
	case CEPH_OSD_OP_NOTIFY_ACK:
		ceph_osd_data_release(&op->notify_ack.request_data);
		break;
	case CEPH_OSD_OP_NOTIFY:
		ceph_osd_data_release(&op->notify.request_data);
		ceph_osd_data_release(&op->notify.response_data);
		break;
	case CEPH_OSD_OP_LIST_WATCHERS:
		ceph_osd_data_release(&op->list_watchers.response_data);
		break;
	case CEPH_OSD_OP_COPY_FROM:
		ceph_osd_data_release(&op->copy_from.osd_data);
		break;
	default:
		break;
	}
}

/*
 * Assumes @t is zero-initialized.
 */
static void target_init(struct ceph_osd_request_target *t)
{
	ceph_oid_init(&t->base_oid);
	ceph_oloc_init(&t->base_oloc);
	ceph_oid_init(&t->target_oid);
	ceph_oloc_init(&t->target_oloc);

	ceph_osds_init(&t->acting);
	ceph_osds_init(&t->up);
	t->size = -1;
	t->min_size = -1;

	t->osd = CEPH_HOMELESS_OSD;
}

static void target_copy(struct ceph_osd_request_target *dest,
			const struct ceph_osd_request_target *src)
{
	ceph_oid_copy(&dest->base_oid, &src->base_oid);
	ceph_oloc_copy(&dest->base_oloc, &src->base_oloc);
	ceph_oid_copy(&dest->target_oid, &src->target_oid);
	ceph_oloc_copy(&dest->target_oloc, &src->target_oloc);

	dest->pgid = src->pgid; /* struct */
	dest->spgid = src->spgid; /* struct */
	dest->pg_num = src->pg_num;
	dest->pg_num_mask = src->pg_num_mask;
	ceph_osds_copy(&dest->acting, &src->acting);
	ceph_osds_copy(&dest->up, &src->up);
	dest->size = src->size;
	dest->min_size = src->min_size;
	dest->sort_bitwise = src->sort_bitwise;

	dest->flags = src->flags;
	dest->paused = src->paused;

	dest->epoch = src->epoch;
	dest->last_force_resend = src->last_force_resend;

	dest->osd = src->osd;
}

static void target_destroy(struct ceph_osd_request_target *t)
{
	ceph_oid_destroy(&t->base_oid);
	ceph_oloc_destroy(&t->base_oloc);
	ceph_oid_destroy(&t->target_oid);
	ceph_oloc_destroy(&t->target_oloc);
}

/*
 * requests
 */
static void request_release_checks(struct ceph_osd_request *req)
{
	WARN_ON(!RB_EMPTY_NODE(&req->r_node));
	WARN_ON(!RB_EMPTY_NODE(&req->r_mc_node));
	WARN_ON(!list_empty(&req->r_private_item));
	WARN_ON(req->r_osd);
}

static void ceph_osdc_release_request(struct kref *kref)
{
	struct ceph_osd_request *req = container_of(kref,
					    struct ceph_osd_request, r_kref);
	unsigned int which;

	dout("%s %p (r_request %p r_reply %p)\n", __func__, req,
	     req->r_request, req->r_reply);
	request_release_checks(req);

	if (req->r_request)
		ceph_msg_put(req->r_request);
	if (req->r_reply)
		ceph_msg_put(req->r_reply);

	for (which = 0; which < req->r_num_ops; which++)
		osd_req_op_data_release(req, which);

	target_destroy(&req->r_t);
	ceph_put_snap_context(req->r_snapc);

	if (req->r_mempool)
		mempool_free(req, req->r_osdc->req_mempool);
	else if (req->r_num_ops <= CEPH_OSD_SLAB_OPS)
		kmem_cache_free(ceph_osd_request_cache, req);
	else
		kfree(req);
}

void ceph_osdc_get_request(struct ceph_osd_request *req)
{
	dout("%s %p (was %d)\n", __func__, req,
	     kref_read(&req->r_kref));
	kref_get(&req->r_kref);
}
EXPORT_SYMBOL(ceph_osdc_get_request);

void ceph_osdc_put_request(struct ceph_osd_request *req)
{
	if (req) {
		dout("%s %p (was %d)\n", __func__, req,
		     kref_read(&req->r_kref));
		kref_put(&req->r_kref, ceph_osdc_release_request);
	}
}
EXPORT_SYMBOL(ceph_osdc_put_request);

static void request_init(struct ceph_osd_request *req)
{
	/* req only, each op is zeroed in _osd_req_op_init() */
	memset(req, 0, sizeof(*req));

	kref_init(&req->r_kref);
	init_completion(&req->r_completion);
	RB_CLEAR_NODE(&req->r_node);
	RB_CLEAR_NODE(&req->r_mc_node);
	INIT_LIST_HEAD(&req->r_private_item);

	target_init(&req->r_t);
}

/*
 * This is ugly, but it allows us to reuse linger registration and ping
 * requests, keeping the structure of the code around send_linger{_ping}()
 * reasonable.  Setting up a min_nr=2 mempool for each linger request
 * and dealing with copying ops (this blasts req only, watch op remains
 * intact) isn't any better.
 */
static void request_reinit(struct ceph_osd_request *req)
{
	struct ceph_osd_client *osdc = req->r_osdc;
	bool mempool = req->r_mempool;
	unsigned int num_ops = req->r_num_ops;
	u64 snapid = req->r_snapid;
	struct ceph_snap_context *snapc = req->r_snapc;
	bool linger = req->r_linger;
	struct ceph_msg *request_msg = req->r_request;
	struct ceph_msg *reply_msg = req->r_reply;

	dout("%s req %p\n", __func__, req);
	WARN_ON(kref_read(&req->r_kref) != 1);
	request_release_checks(req);

	WARN_ON(kref_read(&request_msg->kref) != 1);
	WARN_ON(kref_read(&reply_msg->kref) != 1);
	target_destroy(&req->r_t);

	request_init(req);
	req->r_osdc = osdc;
	req->r_mempool = mempool;
	req->r_num_ops = num_ops;
	req->r_snapid = snapid;
	req->r_snapc = snapc;
	req->r_linger = linger;
	req->r_request = request_msg;
	req->r_reply = reply_msg;
}

struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
					       struct ceph_snap_context *snapc,
					       unsigned int num_ops,
					       bool use_mempool,
					       gfp_t gfp_flags)
{
	struct ceph_osd_request *req;

	if (use_mempool) {
		BUG_ON(num_ops > CEPH_OSD_SLAB_OPS);
		req = mempool_alloc(osdc->req_mempool, gfp_flags);
	} else if (num_ops <= CEPH_OSD_SLAB_OPS) {
		req = kmem_cache_alloc(ceph_osd_request_cache, gfp_flags);
	} else {
		BUG_ON(num_ops > CEPH_OSD_MAX_OPS);
		req = kmalloc(struct_size(req, r_ops, num_ops), gfp_flags);
	}
	if (unlikely(!req))
		return NULL;

	request_init(req);
	req->r_osdc = osdc;
	req->r_mempool = use_mempool;
	req->r_num_ops = num_ops;
	req->r_snapid = CEPH_NOSNAP;
	req->r_snapc = ceph_get_snap_context(snapc);

	dout("%s req %p\n", __func__, req);
	return req;
}
EXPORT_SYMBOL(ceph_osdc_alloc_request);

static int ceph_oloc_encoding_size(const struct ceph_object_locator *oloc)
{
	return 8 + 4 + 4 + 4 + (oloc->pool_ns ? oloc->pool_ns->len : 0);
}

static int __ceph_osdc_alloc_messages(struct ceph_osd_request *req, gfp_t gfp,
				      int num_request_data_items,
				      int num_reply_data_items)
{
	struct ceph_osd_client *osdc = req->r_osdc;
	struct ceph_msg *msg;
	int msg_size;

	WARN_ON(req->r_request || req->r_reply);
	WARN_ON(ceph_oid_empty(&req->r_base_oid));
	WARN_ON(ceph_oloc_empty(&req->r_base_oloc));

	/* create request message */
	msg_size = CEPH_ENCODING_START_BLK_LEN +
			CEPH_PGID_ENCODING_LEN + 1; /* spgid */
	msg_size += 4 + 4 + 4; /* hash, osdmap_epoch, flags */
	msg_size += CEPH_ENCODING_START_BLK_LEN +
			sizeof(struct ceph_osd_reqid); /* reqid */
	msg_size += sizeof(struct ceph_blkin_trace_info); /* trace */
	msg_size += 4 + sizeof(struct ceph_timespec); /* client_inc, mtime */
	msg_size += CEPH_ENCODING_START_BLK_LEN +
			ceph_oloc_encoding_size(&req->r_base_oloc); /* oloc */
	msg_size += 4 + req->r_base_oid.name_len; /* oid */
	msg_size += 2 + req->r_num_ops * sizeof(struct ceph_osd_op);
	msg_size += 8; /* snapid */
	msg_size += 8; /* snap_seq */
	msg_size += 4 + 8 * (req->r_snapc ? req->r_snapc->num_snaps : 0);
	msg_size += 4 + 8; /* retry_attempt, features */

	if (req->r_mempool)
		msg = ceph_msgpool_get(&osdc->msgpool_op, msg_size,
				       num_request_data_items);
	else
		msg = ceph_msg_new2(CEPH_MSG_OSD_OP, msg_size,
				    num_request_data_items, gfp, true);
	if (!msg)
		return -ENOMEM;

	memset(msg->front.iov_base, 0, msg->front.iov_len);
	req->r_request = msg;

	/* create reply message */
	msg_size = OSD_OPREPLY_FRONT_LEN;
	msg_size += req->r_base_oid.name_len;
	msg_size += req->r_num_ops * sizeof(struct ceph_osd_op);

	if (req->r_mempool)
		msg = ceph_msgpool_get(&osdc->msgpool_op_reply, msg_size,
				       num_reply_data_items);
	else
		msg = ceph_msg_new2(CEPH_MSG_OSD_OPREPLY, msg_size,
				    num_reply_data_items, gfp, true);
	if (!msg)
		return -ENOMEM;

	req->r_reply = msg;

	return 0;
}

static bool osd_req_opcode_valid(u16 opcode)
{
	switch (opcode) {
#define GENERATE_CASE(op, opcode, str)	case CEPH_OSD_OP_##op: return true;
__CEPH_FORALL_OSD_OPS(GENERATE_CASE)
#undef GENERATE_CASE
	default:
		return false;
	}
}

static void get_num_data_items(struct ceph_osd_request *req,
			       int *num_request_data_items,
			       int *num_reply_data_items)
{
	struct ceph_osd_req_op *op;

	*num_request_data_items = 0;
	*num_reply_data_items = 0;

	for (op = req->r_ops; op != &req->r_ops[req->r_num_ops]; op++) {
		switch (op->op) {
		/* request */
		case CEPH_OSD_OP_WRITE:
		case CEPH_OSD_OP_WRITEFULL:
		case CEPH_OSD_OP_SETXATTR:
		case CEPH_OSD_OP_CMPXATTR:
		case CEPH_OSD_OP_NOTIFY_ACK:
		case CEPH_OSD_OP_COPY_FROM:
			*num_request_data_items += 1;
			break;

		/* reply */
		case CEPH_OSD_OP_STAT:
		case CEPH_OSD_OP_READ:
		case CEPH_OSD_OP_LIST_WATCHERS:
			*num_reply_data_items += 1;
			break;

		/* both */
		case CEPH_OSD_OP_NOTIFY:
			*num_request_data_items += 1;
			*num_reply_data_items += 1;
			break;
		case CEPH_OSD_OP_CALL:
			*num_request_data_items += 2;
			*num_reply_data_items += 1;
			break;

		default:
			WARN_ON(!osd_req_opcode_valid(op->op));
			break;
		}
	}
}

/*
 * oid, oloc and OSD op opcode(s) must be filled in before this function
 * is called.
 */
int ceph_osdc_alloc_messages(struct ceph_osd_request *req, gfp_t gfp)
{
	int num_request_data_items, num_reply_data_items;

	get_num_data_items(req, &num_request_data_items, &num_reply_data_items);
	return __ceph_osdc_alloc_messages(req, gfp, num_request_data_items,
					  num_reply_data_items);
}
EXPORT_SYMBOL(ceph_osdc_alloc_messages);
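
/*
 * A hedged end-to-end sketch, not part of the original file, of the
 * allocation path above: allocate a request, describe one extent op,
 * fill in the target oid/oloc, then allocate the wire messages (which,
 * per the comment above, must happen after oid, oloc and opcodes are
 * set).  Error handling is elided; "osdc", "oid", "oloc", "off" and
 * "len" are assumed to exist.
 *
 *	struct ceph_osd_request *req;
 *
 *	req = ceph_osdc_alloc_request(osdc, NULL, 1, false, GFP_NOIO);
 *	osd_req_op_extent_init(req, 0, CEPH_OSD_OP_READ, off, len, 0, 0);
 *	ceph_oid_copy(&req->r_base_oid, oid);
 *	ceph_oloc_copy(&req->r_base_oloc, oloc);
 *	ceph_osdc_alloc_messages(req, GFP_NOIO);
 */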
|
|
|
|
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
/*
|
|
|
|
* This is an osd op init function for opcodes that have no data or
|
|
|
|
* other information associated with them. It also serves as a
|
|
|
|
* common init routine for all the other init functions, below.
|
|
|
|
*/
|
2013-04-05 14:27:11 +08:00
|
|
|
static struct ceph_osd_req_op *
|
2013-02-12 02:33:24 +08:00
|
|
|
_osd_req_op_init(struct ceph_osd_request *osd_req, unsigned int which,
|
2015-04-27 11:09:54 +08:00
|
|
|
u16 opcode, u32 flags)
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
{
|
2013-04-05 14:27:11 +08:00
|
|
|
struct ceph_osd_req_op *op;
|
|
|
|
|
|
|
|
BUG_ON(which >= osd_req->r_num_ops);
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
BUG_ON(!osd_req_opcode_valid(opcode));
|
|
|
|
|
2013-04-05 14:27:11 +08:00
|
|
|
op = &osd_req->r_ops[which];
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
memset(op, 0, sizeof (*op));
|
|
|
|
op->op = opcode;
|
2015-04-27 11:09:54 +08:00
|
|
|
op->flags = flags;
|
2013-04-05 14:27:11 +08:00
|
|
|
|
|
|
|
return op;
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
}
|
|
|
|
|
2013-02-12 02:33:24 +08:00
|
|
|
void osd_req_op_init(struct ceph_osd_request *osd_req,
|
2015-04-27 11:09:54 +08:00
|
|
|
unsigned int which, u16 opcode, u32 flags)
|
2013-02-12 02:33:24 +08:00
|
|
|
{
|
2015-04-27 11:09:54 +08:00
|
|
|
(void)_osd_req_op_init(osd_req, which, opcode, flags);
|
2013-02-12 02:33:24 +08:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(osd_req_op_init);
|
|
|
|
|
2013-04-05 14:27:11 +08:00
|
|
|
void osd_req_op_extent_init(struct ceph_osd_request *osd_req,
|
|
|
|
unsigned int which, u16 opcode,
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
u64 offset, u64 length,
|
|
|
|
u64 truncate_size, u32 truncate_seq)
|
|
|
|
{
|
2015-04-27 11:09:54 +08:00
|
|
|
struct ceph_osd_req_op *op = _osd_req_op_init(osd_req, which,
|
|
|
|
opcode, 0);
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
size_t payload_len = 0;
|
|
|
|
|
2013-08-15 11:51:44 +08:00
|
|
|
BUG_ON(opcode != CEPH_OSD_OP_READ && opcode != CEPH_OSD_OP_WRITE &&
|
2015-10-07 23:27:17 +08:00
|
|
|
opcode != CEPH_OSD_OP_WRITEFULL && opcode != CEPH_OSD_OP_ZERO &&
|
|
|
|
opcode != CEPH_OSD_OP_TRUNCATE);
|
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now now use a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add it back again later
if needed, when we know it's been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 09:50:00 +08:00
|
|
|
|
|
|
|
op->extent.offset = offset;
|
|
|
|
op->extent.length = length;
|
|
|
|
op->extent.truncate_size = truncate_size;
|
|
|
|
op->extent.truncate_seq = truncate_seq;
|
2015-10-07 23:27:17 +08:00
|
|
|
if (opcode == CEPH_OSD_OP_WRITE || opcode == CEPH_OSD_OP_WRITEFULL)
|
		payload_len += length;

	op->indata_len = payload_len;
}
EXPORT_SYMBOL(osd_req_op_extent_init);
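
For illustration, a minimal caller sketch (hypothetical, not part of this
file): assuming a request obtained from ceph_osdc_alloc_request() with at
least one op slot, an extent op can be set up and later shrunk like this:

	/* hypothetical caller sketch, using only the helpers above */
	osd_req_op_extent_init(req, 0, CEPH_OSD_OP_READ,
			       0, 4 << 20,	/* offset 0, length 4 MB */
			       0, 0);		/* no truncation state */
	osd_req_op_extent_update(req, 0, 1 << 20);	/* shrink to 1 MB */
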
void osd_req_op_extent_update(struct ceph_osd_request *osd_req,
				unsigned int which, u64 length)
{
	struct ceph_osd_req_op *op;
	u64 previous;

	BUG_ON(which >= osd_req->r_num_ops);
	op = &osd_req->r_ops[which];
	previous = op->extent.length;

	if (length == previous)
		return; /* Nothing to do */
	BUG_ON(length > previous);

	op->extent.length = length;
	if (op->op == CEPH_OSD_OP_WRITE || op->op == CEPH_OSD_OP_WRITEFULL)
		op->indata_len -= previous - length;
}
EXPORT_SYMBOL(osd_req_op_extent_update);

void osd_req_op_extent_dup_last(struct ceph_osd_request *osd_req,
				unsigned int which, u64 offset_inc)
{
	struct ceph_osd_req_op *op, *prev_op;

	BUG_ON(which + 1 >= osd_req->r_num_ops);

	prev_op = &osd_req->r_ops[which];
	op = _osd_req_op_init(osd_req, which + 1, prev_op->op, prev_op->flags);
	/* dup previous one */
	op->indata_len = prev_op->indata_len;
	op->outdata_len = prev_op->outdata_len;
	op->extent = prev_op->extent;
	/* adjust offset */
	op->extent.offset += offset_inc;
	op->extent.length -= offset_inc;

	if (op->op == CEPH_OSD_OP_WRITE || op->op == CEPH_OSD_OP_WRITEFULL)
		op->indata_len -= offset_inc;
}
EXPORT_SYMBOL(osd_req_op_extent_dup_last);
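
A hedged sketch of how osd_req_op_extent_dup_last() composes with the
initializer (the variables off, len and done are hypothetical; the request
needs num_ops >= 2):

	/* op 0 covers [off, off + len) */
	osd_req_op_extent_init(req, 0, CEPH_OSD_OP_WRITE, off, len, 0, 0);
	/* op 1 becomes a copy covering [off + done, off + len) */
	osd_req_op_extent_dup_last(req, 0, done);
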
int osd_req_op_cls_init(struct ceph_osd_request *osd_req, unsigned int which,
			const char *class, const char *method)
{
	struct ceph_osd_req_op *op;
	struct ceph_pagelist *pagelist;
	size_t payload_len = 0;
	size_t size;
	int ret;

	op = _osd_req_op_init(osd_req, which, CEPH_OSD_OP_CALL, 0);

	pagelist = ceph_pagelist_alloc(GFP_NOFS);
	if (!pagelist)
		return -ENOMEM;
	op->cls.class_name = class;
	size = strlen(class);
	BUG_ON(size > (size_t) U8_MAX);
	op->cls.class_len = size;
	ret = ceph_pagelist_append(pagelist, class, size);
	if (ret)
		goto err_pagelist_free;
	payload_len += size;

	op->cls.method_name = method;
	size = strlen(method);
	BUG_ON(size > (size_t) U8_MAX);
	op->cls.method_len = size;
	ret = ceph_pagelist_append(pagelist, method, size);
	if (ret)
		goto err_pagelist_free;
	payload_len += size;

	osd_req_op_cls_request_info_pagelist(osd_req, which, pagelist);
	op->indata_len = payload_len;
	return 0;

err_pagelist_free:
	ceph_pagelist_release(pagelist);
	return ret;
}
EXPORT_SYMBOL(osd_req_op_cls_init);
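
Usage sketch (hypothetical): invoke an object class method, e.g. a rados
class such as "rbd"; the names are appended to a pagelist that carries the
request data, and the op is encoded as CEPH_OSD_OP_CALL:

	int ret;

	ret = osd_req_op_cls_init(req, 0, "rbd", "get_size");
	if (ret)	/* -ENOMEM, or a pagelist append failure */
		return ret;
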
int osd_req_op_xattr_init(struct ceph_osd_request *osd_req, unsigned int which,
			  u16 opcode, const char *name, const void *value,
			  size_t size, u8 cmp_op, u8 cmp_mode)
{
	struct ceph_osd_req_op *op = _osd_req_op_init(osd_req, which,
						      opcode, 0);
	struct ceph_pagelist *pagelist;
	size_t payload_len;
	int ret;

	BUG_ON(opcode != CEPH_OSD_OP_SETXATTR && opcode != CEPH_OSD_OP_CMPXATTR);

	pagelist = ceph_pagelist_alloc(GFP_NOFS);
	if (!pagelist)
		return -ENOMEM;

	payload_len = strlen(name);
	op->xattr.name_len = payload_len;
	ret = ceph_pagelist_append(pagelist, name, payload_len);
	if (ret)
		goto err_pagelist_free;

	op->xattr.value_len = size;
	ret = ceph_pagelist_append(pagelist, value, size);
	if (ret)
		goto err_pagelist_free;
	payload_len += size;

	op->xattr.cmp_op = cmp_op;
	op->xattr.cmp_mode = cmp_mode;

	ceph_osd_data_pagelist_init(&op->xattr.osd_data, pagelist);
	op->indata_len = payload_len;
	return 0;

err_pagelist_free:
	ceph_pagelist_release(pagelist);
	return ret;
}
EXPORT_SYMBOL(osd_req_op_xattr_init);
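
Usage sketch (hypothetical names): set an xattr on the object; cmp_op and
cmp_mode only matter for CEPH_OSD_OP_CMPXATTR, so zeros are passed for a
plain set:

	int ret;

	ret = osd_req_op_xattr_init(req, 0, CEPH_OSD_OP_SETXATTR,
				    "user.example", value, value_len,
				    0, 0);	/* cmp_op/cmp_mode unused */
	if (ret)
		return ret;
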
/*
 * @watch_opcode: CEPH_OSD_WATCH_OP_*
 */
static void osd_req_op_watch_init(struct ceph_osd_request *req, int which,
				  u64 cookie, u8 watch_opcode)
{
	struct ceph_osd_req_op *op;

	op = _osd_req_op_init(req, which, CEPH_OSD_OP_WATCH, 0);
	op->watch.cookie = cookie;
	op->watch.op = watch_opcode;
	op->watch.gen = 0;
}

void osd_req_op_alloc_hint_init(struct ceph_osd_request *osd_req,
				unsigned int which,
				u64 expected_object_size,
				u64 expected_write_size)
{
	struct ceph_osd_req_op *op = _osd_req_op_init(osd_req, which,
						      CEPH_OSD_OP_SETALLOCHINT,
						      0);

	op->alloc_hint.expected_object_size = expected_object_size;
	op->alloc_hint.expected_write_size = expected_write_size;

	/*
	 * CEPH_OSD_OP_SETALLOCHINT op is advisory and therefore deemed
	 * not worth a feature bit. Set FAILOK per-op flag to make
	 * sure older osds don't trip over an unsupported opcode.
	 */
	op->flags |= CEPH_OSD_OP_FLAG_FAILOK;
}
EXPORT_SYMBOL(osd_req_op_alloc_hint_init);
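
Usage sketch (hypothetical): hint that an object will be written in 4 MB
chunks and grow to at most 4 MB, as an rbd-style caller might; because of
the FAILOK flag above, OSDs that don't understand the op simply ignore it:

	osd_req_op_alloc_hint_init(req, 0, 4 << 20, 4 << 20);
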
static void ceph_osdc_msg_data_add(struct ceph_msg *msg,
				   struct ceph_osd_data *osd_data)
{
	u64 length = ceph_osd_data_length(osd_data);

	if (osd_data->type == CEPH_OSD_DATA_TYPE_PAGES) {
		BUG_ON(length > (u64) SIZE_MAX);
		if (length)
			ceph_msg_data_add_pages(msg, osd_data->pages,
					length, osd_data->alignment);
	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_PAGELIST) {
		BUG_ON(!length);
		ceph_msg_data_add_pagelist(msg, osd_data->pagelist);
#ifdef CONFIG_BLOCK
	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_BIO) {
		ceph_msg_data_add_bio(msg, &osd_data->bio_pos, length);
#endif
	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_BVECS) {
		ceph_msg_data_add_bvecs(msg, &osd_data->bvec_pos);
	} else {
		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_NONE);
	}
}

static u32 osd_req_encode_op(struct ceph_osd_op *dst,
			     const struct ceph_osd_req_op *src)
{
	switch (src->op) {
	case CEPH_OSD_OP_STAT:
		break;
	case CEPH_OSD_OP_READ:
	case CEPH_OSD_OP_WRITE:
	case CEPH_OSD_OP_WRITEFULL:
	case CEPH_OSD_OP_ZERO:
	case CEPH_OSD_OP_TRUNCATE:
		dst->extent.offset = cpu_to_le64(src->extent.offset);
		dst->extent.length = cpu_to_le64(src->extent.length);
		dst->extent.truncate_size =
			cpu_to_le64(src->extent.truncate_size);
		dst->extent.truncate_seq =
			cpu_to_le32(src->extent.truncate_seq);
		break;
	case CEPH_OSD_OP_CALL:
		dst->cls.class_len = src->cls.class_len;
		dst->cls.method_len = src->cls.method_len;
		dst->cls.indata_len = cpu_to_le32(src->cls.indata_len);
		break;
	case CEPH_OSD_OP_WATCH:
		dst->watch.cookie = cpu_to_le64(src->watch.cookie);
		dst->watch.ver = cpu_to_le64(0);
		dst->watch.op = src->watch.op;
		dst->watch.gen = cpu_to_le32(src->watch.gen);
		break;
	case CEPH_OSD_OP_NOTIFY_ACK:
		break;
	case CEPH_OSD_OP_NOTIFY:
		dst->notify.cookie = cpu_to_le64(src->notify.cookie);
		break;
	case CEPH_OSD_OP_LIST_WATCHERS:
		break;
	case CEPH_OSD_OP_SETALLOCHINT:
		dst->alloc_hint.expected_object_size =
		    cpu_to_le64(src->alloc_hint.expected_object_size);
		dst->alloc_hint.expected_write_size =
		    cpu_to_le64(src->alloc_hint.expected_write_size);
		break;
	case CEPH_OSD_OP_SETXATTR:
	case CEPH_OSD_OP_CMPXATTR:
		dst->xattr.name_len = cpu_to_le32(src->xattr.name_len);
		dst->xattr.value_len = cpu_to_le32(src->xattr.value_len);
		dst->xattr.cmp_op = src->xattr.cmp_op;
		dst->xattr.cmp_mode = src->xattr.cmp_mode;
		break;
	case CEPH_OSD_OP_CREATE:
	case CEPH_OSD_OP_DELETE:
		break;
	case CEPH_OSD_OP_COPY_FROM:
		dst->copy_from.snapid = cpu_to_le64(src->copy_from.snapid);
		dst->copy_from.src_version =
			cpu_to_le64(src->copy_from.src_version);
		dst->copy_from.flags = src->copy_from.flags;
		dst->copy_from.src_fadvise_flags =
			cpu_to_le32(src->copy_from.src_fadvise_flags);
		break;
	default:
		pr_err("unsupported osd opcode %s\n",
			ceph_osd_op_name(src->op));
		WARN_ON(1);

		return 0;
	}

	dst->op = cpu_to_le16(src->op);
	dst->flags = cpu_to_le32(src->flags);
	dst->payload_len = cpu_to_le32(src->indata_len);

	return src->indata_len;
}

libceph: don't assign page info in ceph_osdc_new_request()
Currently ceph_osdc_new_request() assigns an osd request's
r_num_pages and r_alignment fields. The only thing it does
after that is call ceph_osdc_build_request(), and that doesn't
need those fields to be assigned.
Move the assignment of those fields out of ceph_osdc_new_request()
and into its caller. As a result, the page_align parameter is no
longer used, so get rid of it.
Note that in ceph_sync_write(), the value for req->r_num_pages had
already been calculated earlier (as num_pages, and fortunately
it was computed the same way). So don't bother recomputing it,
but because it's not needed earlier, move that calculation after the
call to ceph_osdc_new_request(). Hold off making the assignment to
r_alignment, doing it instead when r_pages and r_num_pages are
getting set.
Similarly, in start_read(), nr_pages already holds the number of
pages in the array (and is calculated the same way), so there's no
need to recompute it. Move the assignment of the page alignment
down with the others there as well.
This and the next few patches are preparation work for:
http://tracker.ceph.com/issues/4127
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>

/*
 * build new request AND message, calculate layout, and adjust file
 * extent as needed.
 *
 * if the file was recently truncated, we include information about its
 * old and new size so that the object can be updated appropriately. (we
 * avoid synchronously deleting truncated objects because it's slow.)
 */
struct ceph_osd_request *ceph_osdc_new_request(struct ceph_osd_client *osdc,
					       struct ceph_file_layout *layout,
					       struct ceph_vino vino,
					       u64 off, u64 *plen,
					       unsigned int which, int num_ops,
					       int opcode, int flags,
					       struct ceph_snap_context *snapc,
					       u32 truncate_seq,
					       u64 truncate_size,
					       bool use_mempool)
{
	struct ceph_osd_request *req;
	u64 objnum = 0;
	u64 objoff = 0;
	u64 objlen = 0;
	int r;

	BUG_ON(opcode != CEPH_OSD_OP_READ && opcode != CEPH_OSD_OP_WRITE &&
	       opcode != CEPH_OSD_OP_ZERO && opcode != CEPH_OSD_OP_TRUNCATE &&
	       opcode != CEPH_OSD_OP_CREATE && opcode != CEPH_OSD_OP_DELETE);

	req = ceph_osdc_alloc_request(osdc, snapc, num_ops, use_mempool,
				      GFP_NOFS);
	if (!req) {
		r = -ENOMEM;
		goto fail;
	}

	/* calculate max write size */
	r = calc_layout(layout, off, plen, &objnum, &objoff, &objlen);
	if (r)
		goto fail;

	if (opcode == CEPH_OSD_OP_CREATE || opcode == CEPH_OSD_OP_DELETE) {
		osd_req_op_init(req, which, opcode, 0);
	} else {
		u32 object_size = layout->object_size;
		u32 object_base = off - objoff;
		if (!(truncate_seq == 1 && truncate_size == -1ULL)) {
			if (truncate_size <= object_base) {
				truncate_size = 0;
			} else {
				truncate_size -= object_base;
				if (truncate_size > object_size)
					truncate_size = object_size;
			}
		}
		osd_req_op_extent_init(req, which, opcode, objoff, objlen,
				       truncate_size, truncate_seq);
	}

	req->r_flags = flags;
	req->r_base_oloc.pool = layout->pool_id;
	req->r_base_oloc.pool_ns = ceph_try_get_string(layout->pool_ns);
	ceph_oid_printf(&req->r_base_oid, "%llx.%08llx", vino.ino, objnum);

	req->r_snapid = vino.snap;
	if (flags & CEPH_OSD_FLAG_WRITE)
		req->r_data_offset = off;

	if (num_ops > 1)
		/*
		 * This is a special case for ceph_writepages_start(), but it
		 * also covers ceph_uninline_data(). If more multi-op request
		 * use cases emerge, we will need a separate helper.
		 */
		r = __ceph_osdc_alloc_messages(req, GFP_NOFS, num_ops, 0);
	else
		r = ceph_osdc_alloc_messages(req, GFP_NOFS);
	if (r)
		goto fail;

	return req;

fail:
	ceph_osdc_put_request(req);
	return ERR_PTR(r);
}
EXPORT_SYMBOL(ceph_osdc_new_request);
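
A hedged caller sketch: build a single-op read request (osdc, layout, vino,
off and len are assumed to be in scope; truncate_seq/truncate_size of 0
mean no truncation state is carried):

	struct ceph_osd_request *req;

	req = ceph_osdc_new_request(osdc, layout, vino, off, &len,
				    0, 1,		/* which, num_ops */
				    CEPH_OSD_OP_READ, CEPH_OSD_FLAG_READ,
				    NULL,		/* no snap context */
				    0, 0,		/* truncate seq/size */
				    false);		/* no mempool */
	if (IS_ERR(req))
		return PTR_ERR(req);
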
/*
 * We keep osd requests in an rbtree, sorted by ->r_tid.
 */
DEFINE_RB_FUNCS(request, struct ceph_osd_request, r_tid, r_node)
DEFINE_RB_FUNCS(request_mc, struct ceph_osd_request, r_tid, r_mc_node)
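
DEFINE_RB_FUNCS is the libceph helper macro (from linux/ceph/libceph.h)
that generates the lookup/insert/erase helpers used throughout this file;
roughly, the first line above provides something like this (a sketch, not
the exact expansion):

	/* sketch of the generated helpers */
	static void insert_request(struct rb_root *root,
				   struct ceph_osd_request *req);
	static void erase_request(struct rb_root *root,
				  struct ceph_osd_request *req);
	static struct ceph_osd_request *lookup_request(struct rb_root *root,
						       u64 tid);
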
/*
 * Call @fn on each OSD request as long as @fn returns 0.
 */
static void for_each_request(struct ceph_osd_client *osdc,
			int (*fn)(struct ceph_osd_request *req, void *arg),
			void *arg)
{
	struct rb_node *n, *p;

	for (n = rb_first(&osdc->osds); n; n = rb_next(n)) {
		struct ceph_osd *osd = rb_entry(n, struct ceph_osd, o_node);

		for (p = rb_first(&osd->o_requests); p; ) {
			struct ceph_osd_request *req =
			    rb_entry(p, struct ceph_osd_request, r_node);

			p = rb_next(p);
			if (fn(req, arg))
				return;
		}
	}

	for (p = rb_first(&osdc->homeless_osd.o_requests); p; ) {
		struct ceph_osd_request *req =
		    rb_entry(p, struct ceph_osd_request, r_node);

		p = rb_next(p);
		if (fn(req, arg))
			return;
	}
}
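
A hedged example of the callback contract (hypothetical helper): returning
0 continues the walk over every in-flight request, nonzero aborts it:

	/* hypothetical callback: count every in-flight request */
	static int count_one(struct ceph_osd_request *req, void *arg)
	{
		int *count = arg;

		(*count)++;
		return 0;	/* keep iterating */
	}

	/* caller: int count = 0; for_each_request(osdc, count_one, &count); */
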
static bool osd_homeless(struct ceph_osd *osd)
{
	return osd->o_osd == CEPH_HOMELESS_OSD;
}

static bool osd_registered(struct ceph_osd *osd)
{
	verify_osdc_locked(osd->o_osdc);

	return !RB_EMPTY_NODE(&osd->o_node);
}

/*
 * Assumes @osd is zero-initialized.
 */
static void osd_init(struct ceph_osd *osd)
{
	refcount_set(&osd->o_ref, 1);
	RB_CLEAR_NODE(&osd->o_node);
	osd->o_requests = RB_ROOT;
	osd->o_linger_requests = RB_ROOT;
	osd->o_backoff_mappings = RB_ROOT;
	osd->o_backoffs_by_id = RB_ROOT;
	INIT_LIST_HEAD(&osd->o_osd_lru);
	INIT_LIST_HEAD(&osd->o_keepalive_item);
	osd->o_incarnation = 1;
	mutex_init(&osd->lock);
}

static void osd_cleanup(struct ceph_osd *osd)
{
	WARN_ON(!RB_EMPTY_NODE(&osd->o_node));
	WARN_ON(!RB_EMPTY_ROOT(&osd->o_requests));
	WARN_ON(!RB_EMPTY_ROOT(&osd->o_linger_requests));
	WARN_ON(!RB_EMPTY_ROOT(&osd->o_backoff_mappings));
	WARN_ON(!RB_EMPTY_ROOT(&osd->o_backoffs_by_id));
	WARN_ON(!list_empty(&osd->o_osd_lru));
	WARN_ON(!list_empty(&osd->o_keepalive_item));

	if (osd->o_auth.authorizer) {
		WARN_ON(osd_homeless(osd));
		ceph_auth_destroy_authorizer(osd->o_auth.authorizer);
	}
}

/*
 * Track open sessions with osds.
 */
static struct ceph_osd *create_osd(struct ceph_osd_client *osdc, int onum)
{
	struct ceph_osd *osd;

	WARN_ON(onum == CEPH_HOMELESS_OSD);

	osd = kzalloc(sizeof(*osd), GFP_NOIO | __GFP_NOFAIL);
	osd_init(osd);
	osd->o_osdc = osdc;
	osd->o_osd = onum;

	ceph_con_init(&osd->o_con, osd, &osd_con_ops, &osdc->client->msgr);

	return osd;
}

static struct ceph_osd *get_osd(struct ceph_osd *osd)
{
	if (refcount_inc_not_zero(&osd->o_ref)) {
		dout("get_osd %p %d -> %d\n", osd, refcount_read(&osd->o_ref)-1,
		     refcount_read(&osd->o_ref));
		return osd;
	} else {
		dout("get_osd %p FAIL\n", osd);
		return NULL;
	}
}

static void put_osd(struct ceph_osd *osd)
{
	dout("put_osd %p %d -> %d\n", osd, refcount_read(&osd->o_ref),
	     refcount_read(&osd->o_ref) - 1);
	if (refcount_dec_and_test(&osd->o_ref)) {
		osd_cleanup(osd);
		kfree(osd);
	}
}

DEFINE_RB_FUNCS(osd, struct ceph_osd, o_osd, o_node)

static void __move_osd_to_lru(struct ceph_osd *osd)
{
	struct ceph_osd_client *osdc = osd->o_osdc;

	dout("%s osd %p osd%d\n", __func__, osd, osd->o_osd);
	BUG_ON(!list_empty(&osd->o_osd_lru));

	spin_lock(&osdc->osd_lru_lock);
	list_add_tail(&osd->o_osd_lru, &osdc->osd_lru);
	spin_unlock(&osdc->osd_lru_lock);

	osd->lru_ttl = jiffies + osdc->client->options->osd_idle_ttl;
}

static void maybe_move_osd_to_lru(struct ceph_osd *osd)
{
	if (RB_EMPTY_ROOT(&osd->o_requests) &&
	    RB_EMPTY_ROOT(&osd->o_linger_requests))
		__move_osd_to_lru(osd);
}

static void __remove_osd_from_lru(struct ceph_osd *osd)
{
	struct ceph_osd_client *osdc = osd->o_osdc;

	dout("%s osd %p osd%d\n", __func__, osd, osd->o_osd);

	spin_lock(&osdc->osd_lru_lock);
	if (!list_empty(&osd->o_osd_lru))
		list_del_init(&osd->o_osd_lru);
	spin_unlock(&osdc->osd_lru_lock);
}

/*
 * Close the connection and assign any leftover requests to the
 * homeless session.
 */
static void close_osd(struct ceph_osd *osd)
{
	struct ceph_osd_client *osdc = osd->o_osdc;
	struct rb_node *n;

	verify_osdc_wrlocked(osdc);
	dout("%s osd %p osd%d\n", __func__, osd, osd->o_osd);

	ceph_con_close(&osd->o_con);

	for (n = rb_first(&osd->o_requests); n; ) {
		struct ceph_osd_request *req =
		    rb_entry(n, struct ceph_osd_request, r_node);

		n = rb_next(n); /* unlink_request() */

		dout(" reassigning req %p tid %llu\n", req, req->r_tid);
		unlink_request(osd, req);
		link_request(&osdc->homeless_osd, req);
	}
	for (n = rb_first(&osd->o_linger_requests); n; ) {
		struct ceph_osd_linger_request *lreq =
		    rb_entry(n, struct ceph_osd_linger_request, node);

		n = rb_next(n); /* unlink_linger() */

		dout(" reassigning lreq %p linger_id %llu\n", lreq,
		     lreq->linger_id);
		unlink_linger(osd, lreq);
		link_linger(&osdc->homeless_osd, lreq);
	}
	clear_backoffs(osd);

	__remove_osd_from_lru(osd);
	erase_osd(&osdc->osds, osd);
	put_osd(osd);
}

ceph: avoid reopening osd connections when address hasn't changed
We get a fault callback on _every_ tcp connection fault. Normally, we
want to reopen the connection when that happens. If the address we have
is bad, however, and connection attempts always result in a connection
refused or similar error, explicitly closing and reopening the msgr
connection just prevents the messenger's backoff logic from kicking in.
The result can be a console full of
[ 3974.417106] ceph: osd11 10.3.14.138:6800 connection failed
[ 3974.423295] ceph: osd11 10.3.14.138:6800 connection failed
[ 3974.429709] ceph: osd11 10.3.14.138:6800 connection failed
Instead, if we get a fault, and have outstanding requests, but the osd
address hasn't changed and the connection never successfully connected in
the first place, do nothing to the osd connection. The messenger layer
will back off and retry periodically, because we never connected and thus
the lossy bit is not set.
Instead, touch each request's r_stamp so that handle_timeout can tell the
request is still alive and kicking.
Signed-off-by: Sage Weil <sage@newdream.net>

/*
 * reset osd connect
 */
static int reopen_osd(struct ceph_osd *osd)
{
	struct ceph_entity_addr *peer_addr;

	dout("%s osd %p osd%d\n", __func__, osd, osd->o_osd);

	if (RB_EMPTY_ROOT(&osd->o_requests) &&
	    RB_EMPTY_ROOT(&osd->o_linger_requests)) {
		close_osd(osd);
		return -ENODEV;
	}

	peer_addr = &osd->o_osdc->osdmap->osd_addr[osd->o_osd];
	if (!memcmp(peer_addr, &osd->o_con.peer_addr, sizeof (*peer_addr)) &&
	    !ceph_con_opened(&osd->o_con)) {
		struct rb_node *n;

		dout("osd addr hasn't changed and connection never opened, "
		     "letting msgr retry\n");
		/* touch each r_stamp for handle_timeout()'s benefit */
		for (n = rb_first(&osd->o_requests); n; n = rb_next(n)) {
			struct ceph_osd_request *req =
			    rb_entry(n, struct ceph_osd_request, r_node);

			req->r_stamp = jiffies;
		}

		return -EAGAIN;
	}

	ceph_con_close(&osd->o_con);
	ceph_con_open(&osd->o_con, CEPH_ENTITY_TYPE_OSD, osd->o_osd, peer_addr);
	osd->o_incarnation++;

	return 0;
}

static struct ceph_osd *lookup_create_osd(struct ceph_osd_client *osdc, int o,
					  bool wrlocked)
{
	struct ceph_osd *osd;

	if (wrlocked)
		verify_osdc_wrlocked(osdc);
	else
		verify_osdc_locked(osdc);

	if (o != CEPH_HOMELESS_OSD)
		osd = lookup_osd(&osdc->osds, o);
	else
		osd = &osdc->homeless_osd;
	if (!osd) {
		if (!wrlocked)
			return ERR_PTR(-EAGAIN);

		osd = create_osd(osdc, o);
		insert_osd(&osdc->osds, osd);
		ceph_con_open(&osd->o_con, CEPH_ENTITY_TYPE_OSD, osd->o_osd,
			      &osdc->osdmap->osd_addr[osd->o_osd]);
	}

	dout("%s osdc %p osd%d -> osd %p\n", __func__, osdc, o, osd);
	return osd;
}

/*
 * Create request <-> OSD session relation.
 *
 * @req has to be assigned a tid, @osd may be homeless.
 */
static void link_request(struct ceph_osd *osd, struct ceph_osd_request *req)
{
	verify_osd_locked(osd);
	WARN_ON(!req->r_tid || req->r_osd);
	dout("%s osd %p osd%d req %p tid %llu\n", __func__, osd, osd->o_osd,
	     req, req->r_tid);

	if (!osd_homeless(osd))
		__remove_osd_from_lru(osd);
	else
		atomic_inc(&osd->o_osdc->num_homeless);

	get_osd(osd);
	insert_request(&osd->o_requests, req);
	req->r_osd = osd;
}

static void unlink_request(struct ceph_osd *osd, struct ceph_osd_request *req)
{
	verify_osd_locked(osd);
	WARN_ON(req->r_osd != osd);
	dout("%s osd %p osd%d req %p tid %llu\n", __func__, osd, osd->o_osd,
	     req, req->r_tid);

	req->r_osd = NULL;
	erase_request(&osd->o_requests, req);
	put_osd(osd);

	if (!osd_homeless(osd))
		maybe_move_osd_to_lru(osd);
	else
		atomic_dec(&osd->o_osdc->num_homeless);
}

static bool __pool_full(struct ceph_pg_pool_info *pi)
{
	return pi->flags & CEPH_POOL_FLAG_FULL;
}

static bool have_pool_full(struct ceph_osd_client *osdc)
{
	struct rb_node *n;

	for (n = rb_first(&osdc->osdmap->pg_pools); n; n = rb_next(n)) {
		struct ceph_pg_pool_info *pi =
		    rb_entry(n, struct ceph_pg_pool_info, node);

		if (__pool_full(pi))
			return true;
	}

	return false;
}

static bool pool_full(struct ceph_osd_client *osdc, s64 pool_id)
{
	struct ceph_pg_pool_info *pi;

	pi = ceph_pg_pool_by_id(osdc->osdmap, pool_id);
	if (!pi)
		return false;

	return __pool_full(pi);
}

libceph: block I/O when PAUSE or FULL osd map flags are set
The PAUSEWR and PAUSERD flags are meant to stop the cluster from
processing writes and reads, respectively. The FULL flag is set when
the cluster determines that it is out of space, and will no longer
process writes. PAUSEWR and PAUSERD are purely client-side settings
already implemented in userspace clients. The osd does nothing special
with these flags.
When the FULL flag is set, however, the osd responds to all writes
with -ENOSPC. For cephfs, this makes sense, but for rbd the block
layer translates this into EIO. If a cluster goes from full to
non-full quickly, a filesystem on top of rbd will not behave well,
since some writes succeed while others get EIO.
Fix this by blocking any writes when the FULL flag is set in the osd
client. This is the same strategy used by userspace, so apply it by
default. A follow-on patch makes this configurable.
__map_request() is called to re-target osd requests in case the
available osds changed. Add a paused field to a ceph_osd_request, and
set it whenever an appropriate osd map flag is set. Avoid queueing
paused requests in __map_request(), but force them to be resent if
they become unpaused.
Also subscribe to the next osd map from the monitor if any of these
flags are set, so paused requests can be unblocked as soon as
possible.
Fixes: http://tracker.ceph.com/issues/6079
Reviewed-by: Sage Weil <sage@inktank.com>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>

/*
 * Returns whether a request should be blocked from being sent
 * based on the current osdmap and osd_client settings.
 */
static bool target_should_be_paused(struct ceph_osd_client *osdc,
				    const struct ceph_osd_request_target *t,
				    struct ceph_pg_pool_info *pi)
{
	bool pauserd = ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD);
	bool pausewr = ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSEWR) ||
		       ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
		       __pool_full(pi);

	WARN_ON(pi->id != t->target_oloc.pool);
	return ((t->flags & CEPH_OSD_FLAG_READ) && pauserd) ||
	       ((t->flags & CEPH_OSD_FLAG_WRITE) && pausewr) ||
	       (osdc->osdmap->epoch < osdc->epoch_barrier);
}

enum calc_target_result {
|
|
|
|
CALC_TARGET_NO_ACTION = 0,
|
|
|
|
CALC_TARGET_NEED_RESEND,
|
|
|
|
CALC_TARGET_POOL_DNE,
|
|
|
|
};
|
|
|
|
|
|
|
|
static enum calc_target_result calc_target(struct ceph_osd_client *osdc,
|
|
|
|
struct ceph_osd_request_target *t,
|
|
|
|
bool any_change)
|
|
|
|
{
|
|
|
|
struct ceph_pg_pool_info *pi;
|
|
|
|
struct ceph_pg pgid, last_pgid;
|
|
|
|
struct ceph_osds up, acting;
|
|
|
|
bool force_resend = false;
|
2017-06-15 22:30:54 +08:00
|
|
|
bool unpaused = false;
|
2019-08-20 22:40:33 +08:00
|
|
|
bool legacy_change = false;
|
2017-06-15 22:30:54 +08:00
|
|
|
bool split = false;
|
2016-04-28 22:07:25 +08:00
|
|
|
bool sort_bitwise = ceph_osdmap_flag(osdc, CEPH_OSDMAP_SORTBITWISE);
|
2017-07-27 23:59:14 +08:00
|
|
|
bool recovery_deletes = ceph_osdmap_flag(osdc,
|
|
|
|
CEPH_OSDMAP_RECOVERY_DELETES);
|
2016-04-28 22:07:23 +08:00
|
|
|
enum calc_target_result ct_res;
|
|
|
|
|
2017-06-15 22:30:55 +08:00
|
|
|
t->epoch = osdc->osdmap->epoch;
|
2016-04-28 22:07:23 +08:00
|
|
|
pi = ceph_pg_pool_by_id(osdc->osdmap, t->base_oloc.pool);
|
|
|
|
if (!pi) {
|
|
|
|
t->osd = CEPH_HOMELESS_OSD;
|
|
|
|
ct_res = CALC_TARGET_POOL_DNE;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (osdc->osdmap->epoch == pi->last_force_request_resend) {
|
2017-06-05 20:45:00 +08:00
|
|
|
if (t->last_force_resend < pi->last_force_request_resend) {
|
|
|
|
t->last_force_resend = pi->last_force_request_resend;
|
2016-04-28 22:07:23 +08:00
|
|
|
force_resend = true;
|
2017-06-05 20:45:00 +08:00
|
|
|
} else if (t->last_force_resend == 0) {
|
2016-04-28 22:07:23 +08:00
|
|
|
force_resend = true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-06-15 22:30:55 +08:00
|
|
|
/* apply tiering */
|
|
|
|
ceph_oid_copy(&t->target_oid, &t->base_oid);
|
|
|
|
ceph_oloc_copy(&t->target_oloc, &t->base_oloc);
|
|
|
|
if ((t->flags & CEPH_OSD_FLAG_IGNORE_OVERLAY) == 0) {
|
2016-04-28 22:07:23 +08:00
|
|
|
if (t->flags & CEPH_OSD_FLAG_READ && pi->read_tier >= 0)
|
|
|
|
t->target_oloc.pool = pi->read_tier;
|
|
|
|
if (t->flags & CEPH_OSD_FLAG_WRITE && pi->write_tier >= 0)
|
|
|
|
t->target_oloc.pool = pi->write_tier;
|
2017-06-15 22:30:55 +08:00
|
|
|
|
|
|
|
pi = ceph_pg_pool_by_id(osdc->osdmap, t->target_oloc.pool);
|
|
|
|
if (!pi) {
|
|
|
|
t->osd = CEPH_HOMELESS_OSD;
|
|
|
|
ct_res = CALC_TARGET_POOL_DNE;
|
|
|
|
goto out;
|
|
|
|
}
|
2016-04-28 22:07:23 +08:00
|
|
|
}
|
|
|
|
|
2018-05-23 20:46:53 +08:00
|
|
|
__ceph_object_locator_to_pg(pi, &t->target_oid, &t->target_oloc, &pgid);
|
2016-04-28 22:07:23 +08:00
|
|
|
last_pgid.pool = pgid.pool;
|
|
|
|
last_pgid.seed = ceph_stable_mod(pgid.seed, t->pg_num, t->pg_num_mask);
|
|
|
|
|
2017-06-15 22:30:56 +08:00
|
|
|
ceph_pg_to_up_acting_osds(osdc->osdmap, pi, &pgid, &up, &acting);
|
2016-04-28 22:07:23 +08:00
|
|
|
if (any_change &&
|
|
|
|
ceph_is_new_interval(&t->acting,
|
|
|
|
&acting,
|
|
|
|
&t->up,
|
|
|
|
&up,
|
|
|
|
t->size,
|
|
|
|
pi->size,
|
|
|
|
t->min_size,
|
|
|
|
pi->min_size,
|
|
|
|
t->pg_num,
|
|
|
|
pi->pg_num,
|
|
|
|
t->sort_bitwise,
|
|
|
|
sort_bitwise,
|
2017-07-27 23:59:14 +08:00
|
|
|
t->recovery_deletes,
|
|
|
|
recovery_deletes,
|
2016-04-28 22:07:23 +08:00
|
|
|
&last_pgid))
|
|
|
|
force_resend = true;
|
|
|
|
|
|
|
|
if (t->paused && !target_should_be_paused(osdc, t, pi)) {
|
|
|
|
t->paused = false;
|
2017-06-15 22:30:54 +08:00
|
|
|
unpaused = true;
|
2016-04-28 22:07:23 +08:00
|
|
|
}
|
2017-06-15 22:30:54 +08:00
|
|
|
legacy_change = ceph_pg_compare(&t->pgid, &pgid) ||
|
|
|
|
ceph_osds_changed(&t->acting, &acting, any_change);
|
2017-06-15 22:30:54 +08:00
|
|
|
if (t->pg_num)
|
|
|
|
split = ceph_pg_is_split(&last_pgid, t->pg_num, pi->pg_num);
|
2016-04-28 22:07:23 +08:00
|
|
|
|
2017-06-15 22:30:54 +08:00
|
|
|
if (legacy_change || force_resend || split) {
|
2016-04-28 22:07:23 +08:00
|
|
|
t->pgid = pgid; /* struct */
|
2017-06-15 22:30:56 +08:00
|
|
|
ceph_pg_to_primary_shard(osdc->osdmap, pi, &pgid, &t->spgid);
|
2016-04-28 22:07:23 +08:00
|
|
|
ceph_osds_copy(&t->acting, &acting);
|
|
|
|
ceph_osds_copy(&t->up, &up);
|
|
|
|
t->size = pi->size;
|
|
|
|
t->min_size = pi->min_size;
|
|
|
|
t->pg_num = pi->pg_num;
|
|
|
|
t->pg_num_mask = pi->pg_num_mask;
|
|
|
|
t->sort_bitwise = sort_bitwise;
|
2017-07-27 23:59:14 +08:00
|
|
|
t->recovery_deletes = recovery_deletes;
|
2016-04-28 22:07:23 +08:00
|
|
|
|
|
|
|
t->osd = acting.primary;
|
|
|
|
}
|
|
|
|
|
2019-08-20 22:40:33 +08:00
|
|
|
if (unpaused || legacy_change || force_resend || split)
|
2017-06-15 22:30:54 +08:00
|
|
|
ct_res = CALC_TARGET_NEED_RESEND;
|
|
|
|
else
|
|
|
|
ct_res = CALC_TARGET_NO_ACTION;
|
|
|
|
|
2016-04-28 22:07:23 +08:00
|
|
|
out:
|
2019-08-20 22:40:33 +08:00
|
|
|
dout("%s t %p -> %d%d%d%d ct_res %d osd%d\n", __func__, t, unpaused,
|
|
|
|
legacy_change, force_resend, split, ct_res, t->osd);
|
2016-04-28 22:07:23 +08:00
|
|
|
return ct_res;
|
|
|
|
}
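
calc_target() folds the raw placement seed into the pool's PG count with ceph_stable_mod() when computing last_pgid, so interval and split detection work for pg_num values that are not powers of two. The sketch below follows Ceph's stable-mod definition (bmask is the next power of two minus one); treat it as a reference sketch of the idea rather than a quote of the kernel header:

#include <stdio.h>

/* stable modulo: take x mod 2^k, falling back one bit when that
 * lands beyond the actual bucket count b (bmask = 2^k - 1 >= b - 1) */
static int stable_mod(int x, int b, int bmask)
{
	if ((x & bmask) < b)
		return x & bmask;
	else
		return x & (bmask >> 1);
}

int main(void)
{
	/* pool with pg_num = 12, so bmask = 15 */
	printf("%d\n", stable_mod(9, 12, 15));	/* 9  -> PG 9 */
	printf("%d\n", stable_mod(13, 12, 15));	/* 13 -> PG 5 (13 & 7) */
	return 0;
}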

static struct ceph_spg_mapping *alloc_spg_mapping(void)
{
	struct ceph_spg_mapping *spg;

	spg = kmalloc(sizeof(*spg), GFP_NOIO);
	if (!spg)
		return NULL;

	RB_CLEAR_NODE(&spg->node);
	spg->backoffs = RB_ROOT;
	return spg;
}

static void free_spg_mapping(struct ceph_spg_mapping *spg)
{
	WARN_ON(!RB_EMPTY_NODE(&spg->node));
	WARN_ON(!RB_EMPTY_ROOT(&spg->backoffs));

	kfree(spg);
}

/*
 * rbtree of ceph_spg_mapping for handling map<spg_t, ...>, similar to
 * ceph_pg_mapping.  Used to track OSD backoffs -- a backoff [range] is
 * defined only within a specific spgid; it does not pass anything to
 * children on split, or to another primary.
 */
DEFINE_RB_FUNCS2(spg_mapping, struct ceph_spg_mapping, spgid, ceph_spg_compare,
		 RB_BYPTR, const struct ceph_spg *, node)

static u64 hoid_get_bitwise_key(const struct ceph_hobject_id *hoid)
{
	return hoid->is_max ? 0x100000000ull : hoid->hash_reverse_bits;
}

static void hoid_get_effective_key(const struct ceph_hobject_id *hoid,
				   void **pkey, size_t *pkey_len)
{
	if (hoid->key_len) {
		*pkey = hoid->key;
		*pkey_len = hoid->key_len;
	} else {
		*pkey = hoid->oid;
		*pkey_len = hoid->oid_len;
	}
}

static int compare_names(const void *name1, size_t name1_len,
			 const void *name2, size_t name2_len)
{
	int ret;

	ret = memcmp(name1, name2, min(name1_len, name2_len));
	if (!ret) {
		if (name1_len < name2_len)
			ret = -1;
		else if (name1_len > name2_len)
			ret = 1;
	}
	return ret;
}

static int hoid_compare(const struct ceph_hobject_id *lhs,
			const struct ceph_hobject_id *rhs)
{
	void *effective_key1, *effective_key2;
	size_t effective_key1_len, effective_key2_len;
	int ret;

	if (lhs->is_max < rhs->is_max)
		return -1;
	if (lhs->is_max > rhs->is_max)
		return 1;

	if (lhs->pool < rhs->pool)
		return -1;
	if (lhs->pool > rhs->pool)
		return 1;

	if (hoid_get_bitwise_key(lhs) < hoid_get_bitwise_key(rhs))
		return -1;
	if (hoid_get_bitwise_key(lhs) > hoid_get_bitwise_key(rhs))
		return 1;

	ret = compare_names(lhs->nspace, lhs->nspace_len,
			    rhs->nspace, rhs->nspace_len);
	if (ret)
		return ret;

	hoid_get_effective_key(lhs, &effective_key1, &effective_key1_len);
	hoid_get_effective_key(rhs, &effective_key2, &effective_key2_len);
	ret = compare_names(effective_key1, effective_key1_len,
			    effective_key2, effective_key2_len);
	if (ret)
		return ret;

	ret = compare_names(lhs->oid, lhs->oid_len, rhs->oid, rhs->oid_len);
	if (ret)
		return ret;

	if (lhs->snapid < rhs->snapid)
		return -1;
	if (lhs->snapid > rhs->snapid)
		return 1;

	return 0;
}
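
compare_names() orders names the way length-prefixed strings sort: compare the common prefix bytewise, then let the shorter name win ties. A standalone illustration of the same rule:

#include <stdio.h>
#include <string.h>

static size_t min_len(size_t a, size_t b) { return a < b ? a : b; }

/* same ordering as compare_names(): prefix first, length as tie-break */
static int cmp_names(const void *n1, size_t l1, const void *n2, size_t l2)
{
	int ret = memcmp(n1, n2, min_len(l1, l2));

	if (!ret)
		ret = (l1 < l2) ? -1 : (l1 > l2) ? 1 : 0;
	return ret;
}

int main(void)
{
	printf("%d\n", cmp_names("foo", 3, "foobar", 6));	/* < 0: prefix */
	printf("%d\n", cmp_names("foo", 3, "foo", 3));		/* == 0 */
	printf("%d\n", cmp_names("fop", 3, "foobar", 6));	/* > 0 */
	return 0;
}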

/*
 * For decoding ->begin and ->end of MOSDBackoff only -- no MIN/MAX
 * compat stuff here.
 *
 * Assumes @hoid is zero-initialized.
 */
static int decode_hoid(void **p, void *end, struct ceph_hobject_id *hoid)
{
	u8 struct_v;
	u32 struct_len;
	int ret;

	ret = ceph_start_decoding(p, end, 4, "hobject_t", &struct_v,
				  &struct_len);
	if (ret)
		return ret;

	if (struct_v < 4) {
		pr_err("got struct_v %d < 4 of hobject_t\n", struct_v);
		goto e_inval;
	}

	hoid->key = ceph_extract_encoded_string(p, end, &hoid->key_len,
						GFP_NOIO);
	if (IS_ERR(hoid->key)) {
		ret = PTR_ERR(hoid->key);
		hoid->key = NULL;
		return ret;
	}

	hoid->oid = ceph_extract_encoded_string(p, end, &hoid->oid_len,
						GFP_NOIO);
	if (IS_ERR(hoid->oid)) {
		ret = PTR_ERR(hoid->oid);
		hoid->oid = NULL;
		return ret;
	}

	ceph_decode_64_safe(p, end, hoid->snapid, e_inval);
	ceph_decode_32_safe(p, end, hoid->hash, e_inval);
	ceph_decode_8_safe(p, end, hoid->is_max, e_inval);

	hoid->nspace = ceph_extract_encoded_string(p, end, &hoid->nspace_len,
						   GFP_NOIO);
	if (IS_ERR(hoid->nspace)) {
		ret = PTR_ERR(hoid->nspace);
		hoid->nspace = NULL;
		return ret;
	}

	ceph_decode_64_safe(p, end, hoid->pool, e_inval);

	ceph_hoid_build_hash_cache(hoid);
	return 0;

e_inval:
	return -EINVAL;
}

static int hoid_encoding_size(const struct ceph_hobject_id *hoid)
{
	return 8 + 4 + 1 + 8 +	/* snapid, hash, is_max, pool */
	       4 + hoid->key_len + 4 + hoid->oid_len + 4 + hoid->nspace_len;
}

static void encode_hoid(void **p, void *end, const struct ceph_hobject_id *hoid)
{
	ceph_start_encoding(p, 4, 3, hoid_encoding_size(hoid));
	ceph_encode_string(p, end, hoid->key, hoid->key_len);
	ceph_encode_string(p, end, hoid->oid, hoid->oid_len);
	ceph_encode_64(p, hoid->snapid);
	ceph_encode_32(p, hoid->hash);
	ceph_encode_8(p, hoid->is_max);
	ceph_encode_string(p, end, hoid->nspace, hoid->nspace_len);
	ceph_encode_64(p, hoid->pool);
}
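
hoid_encoding_size() is simple arithmetic: 21 bytes of fixed fields plus a 4-byte length prefix for each of the three strings. A small sketch with hypothetical name lengths:

#include <stdio.h>
#include <string.h>

/* mirrors hoid_encoding_size(): fixed fields plus three
 * length-prefixed strings (4-byte length each) */
static size_t hoid_size(size_t key_len, size_t oid_len, size_t nspace_len)
{
	return 8 + 4 + 1 + 8 +	/* snapid, hash, is_max, pool */
	       4 + key_len + 4 + oid_len + 4 + nspace_len;
}

int main(void)
{
	/* e.g. empty key, 10-byte object name, 3-byte namespace -> 46 */
	printf("%zu\n", hoid_size(0, strlen("rbd_header"), strlen("ns1")));
	return 0;
}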

static void free_hoid(struct ceph_hobject_id *hoid)
{
	if (hoid) {
		kfree(hoid->key);
		kfree(hoid->oid);
		kfree(hoid->nspace);
		kfree(hoid);
	}
}

static struct ceph_osd_backoff *alloc_backoff(void)
{
	struct ceph_osd_backoff *backoff;

	backoff = kzalloc(sizeof(*backoff), GFP_NOIO);
	if (!backoff)
		return NULL;

	RB_CLEAR_NODE(&backoff->spg_node);
	RB_CLEAR_NODE(&backoff->id_node);
	return backoff;
}

static void free_backoff(struct ceph_osd_backoff *backoff)
{
	WARN_ON(!RB_EMPTY_NODE(&backoff->spg_node));
	WARN_ON(!RB_EMPTY_NODE(&backoff->id_node));

	free_hoid(backoff->begin);
	free_hoid(backoff->end);
	kfree(backoff);
}

/*
 * Within a specific spgid, backoffs are managed by ->begin hoid.
 */
DEFINE_RB_INSDEL_FUNCS2(backoff, struct ceph_osd_backoff, begin, hoid_compare,
			RB_BYVAL, spg_node);

static struct ceph_osd_backoff *
lookup_containing_backoff(struct rb_root *root,
			  const struct ceph_hobject_id *hoid)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct ceph_osd_backoff *cur =
		    rb_entry(n, struct ceph_osd_backoff, spg_node);
		int cmp;

		cmp = hoid_compare(hoid, cur->begin);
		if (cmp < 0) {
			n = n->rb_left;
		} else if (cmp > 0) {
			if (hoid_compare(hoid, cur->end) < 0)
				return cur;

			n = n->rb_right;
		} else {
			return cur;
		}
	}

	return NULL;
}
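
Because backoff ranges within an spgid are disjoint and the tree is keyed by ->begin, containment needs only one descent: go left while the target sorts before ->begin, and whenever it sorts after, a single check against ->end decides between a hit and the right subtree. The same test over a sorted array, as a runnable sketch with made-up integer keys in place of hobjects:

#include <stdio.h>

struct backoff { int begin, end; };	/* half-open [begin, end) */

/* same idea as lookup_containing_backoff(): intervals are disjoint and
 * ordered by begin, so search on begin and then check end */
static const struct backoff *find_containing(const struct backoff *v,
					     int n, int key)
{
	int lo = 0, hi = n - 1;

	while (lo <= hi) {
		int mid = (lo + hi) / 2;

		if (key < v[mid].begin)
			hi = mid - 1;
		else if (key >= v[mid].end)	/* past this interval */
			lo = mid + 1;
		else
			return &v[mid];		/* begin <= key < end */
	}
	return NULL;
}

int main(void)
{
	static const struct backoff v[] = { {10, 20}, {30, 40}, {50, 60} };
	const struct backoff *b = find_containing(v, 3, 35);

	printf("%s\n", b ? "hit" : "miss");	/* hit: 35 in [30,40) */
	return 0;
}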

/*
 * Each backoff has a unique id within its OSD session.
 */
DEFINE_RB_FUNCS(backoff_by_id, struct ceph_osd_backoff, id, id_node)

static void clear_backoffs(struct ceph_osd *osd)
{
	while (!RB_EMPTY_ROOT(&osd->o_backoff_mappings)) {
		struct ceph_spg_mapping *spg =
		    rb_entry(rb_first(&osd->o_backoff_mappings),
			     struct ceph_spg_mapping, node);

		while (!RB_EMPTY_ROOT(&spg->backoffs)) {
			struct ceph_osd_backoff *backoff =
			    rb_entry(rb_first(&spg->backoffs),
				     struct ceph_osd_backoff, spg_node);

			erase_backoff(&spg->backoffs, backoff);
			erase_backoff_by_id(&osd->o_backoffs_by_id, backoff);
			free_backoff(backoff);
		}
		erase_spg_mapping(&osd->o_backoff_mappings, spg);
		free_spg_mapping(spg);
	}
}

/*
 * Set up a temporary, non-owning view into @t.
 */
static void hoid_fill_from_target(struct ceph_hobject_id *hoid,
				  const struct ceph_osd_request_target *t)
{
	hoid->key = NULL;
	hoid->key_len = 0;
	hoid->oid = t->target_oid.name;
	hoid->oid_len = t->target_oid.name_len;
	hoid->snapid = CEPH_NOSNAP;
	hoid->hash = t->pgid.seed;
	hoid->is_max = false;
	if (t->target_oloc.pool_ns) {
		hoid->nspace = t->target_oloc.pool_ns->str;
		hoid->nspace_len = t->target_oloc.pool_ns->len;
	} else {
		hoid->nspace = NULL;
		hoid->nspace_len = 0;
	}
	hoid->pool = t->target_oloc.pool;
	ceph_hoid_build_hash_cache(hoid);
}

static bool should_plug_request(struct ceph_osd_request *req)
{
	struct ceph_osd *osd = req->r_osd;
	struct ceph_spg_mapping *spg;
	struct ceph_osd_backoff *backoff;
	struct ceph_hobject_id hoid;

	spg = lookup_spg_mapping(&osd->o_backoff_mappings, &req->r_t.spgid);
	if (!spg)
		return false;

	hoid_fill_from_target(&hoid, &req->r_t);
	backoff = lookup_containing_backoff(&spg->backoffs, &hoid);
	if (!backoff)
		return false;

	dout("%s req %p tid %llu backoff osd%d spgid %llu.%xs%d id %llu\n",
	     __func__, req, req->r_tid, osd->o_osd, backoff->spgid.pgid.pool,
	     backoff->spgid.pgid.seed, backoff->spgid.shard, backoff->id);
	return true;
}

/*
 * Keep get_num_data_items() in sync with this function.
 */
static void setup_request_data(struct ceph_osd_request *req)
{
	struct ceph_msg *request_msg = req->r_request;
	struct ceph_msg *reply_msg = req->r_reply;
	struct ceph_osd_req_op *op;

	if (req->r_request->num_data_items || req->r_reply->num_data_items)
		return;

	WARN_ON(request_msg->data_length || reply_msg->data_length);
	for (op = req->r_ops; op != &req->r_ops[req->r_num_ops]; op++) {
		switch (op->op) {
		/* request */
		case CEPH_OSD_OP_WRITE:
		case CEPH_OSD_OP_WRITEFULL:
			WARN_ON(op->indata_len != op->extent.length);
			ceph_osdc_msg_data_add(request_msg,
					       &op->extent.osd_data);
			break;
		case CEPH_OSD_OP_SETXATTR:
		case CEPH_OSD_OP_CMPXATTR:
			WARN_ON(op->indata_len != op->xattr.name_len +
						  op->xattr.value_len);
			ceph_osdc_msg_data_add(request_msg,
					       &op->xattr.osd_data);
			break;
		case CEPH_OSD_OP_NOTIFY_ACK:
			ceph_osdc_msg_data_add(request_msg,
					       &op->notify_ack.request_data);
			break;
		case CEPH_OSD_OP_COPY_FROM:
			ceph_osdc_msg_data_add(request_msg,
					       &op->copy_from.osd_data);
			break;

		/* reply */
		case CEPH_OSD_OP_STAT:
			ceph_osdc_msg_data_add(reply_msg,
					       &op->raw_data_in);
			break;
		case CEPH_OSD_OP_READ:
			ceph_osdc_msg_data_add(reply_msg,
					       &op->extent.osd_data);
			break;
		case CEPH_OSD_OP_LIST_WATCHERS:
			ceph_osdc_msg_data_add(reply_msg,
					       &op->list_watchers.response_data);
			break;

		/* both */
		case CEPH_OSD_OP_CALL:
			WARN_ON(op->indata_len != op->cls.class_len +
						  op->cls.method_len +
						  op->cls.indata_len);
			ceph_osdc_msg_data_add(request_msg,
					       &op->cls.request_info);
			/* optional, can be NONE */
			ceph_osdc_msg_data_add(request_msg,
					       &op->cls.request_data);
			/* optional, can be NONE */
			ceph_osdc_msg_data_add(reply_msg,
					       &op->cls.response_data);
			break;
		case CEPH_OSD_OP_NOTIFY:
			ceph_osdc_msg_data_add(request_msg,
					       &op->notify.request_data);
			ceph_osdc_msg_data_add(reply_msg,
					       &op->notify.response_data);
			break;
		}
	}
}

static void encode_pgid(void **p, const struct ceph_pg *pgid)
{
	ceph_encode_8(p, 1);
	ceph_encode_64(p, pgid->pool);
	ceph_encode_32(p, pgid->seed);
	ceph_encode_32(p, -1); /* preferred */
}

static void encode_spgid(void **p, const struct ceph_spg *spgid)
{
	ceph_start_encoding(p, 1, 1, CEPH_PGID_ENCODING_LEN + 1);
	encode_pgid(p, &spgid->pgid);
	ceph_encode_8(p, spgid->shard);
}

static void encode_oloc(void **p, void *end,
			const struct ceph_object_locator *oloc)
{
	ceph_start_encoding(p, 5, 4, ceph_oloc_encoding_size(oloc));
	ceph_encode_64(p, oloc->pool);
	ceph_encode_32(p, -1); /* preferred */
	ceph_encode_32(p, 0);  /* key len */
	if (oloc->pool_ns)
		ceph_encode_string(p, end, oloc->pool_ns->str,
				   oloc->pool_ns->len);
	else
		ceph_encode_32(p, 0);
}

static void encode_request_partial(struct ceph_osd_request *req,
				   struct ceph_msg *msg)
{
	void *p = msg->front.iov_base;
	void *const end = p + msg->front_alloc_len;
	u32 data_len = 0;
	int i;

	if (req->r_flags & CEPH_OSD_FLAG_WRITE) {
		/* snapshots aren't writeable */
		WARN_ON(req->r_snapid != CEPH_NOSNAP);
	} else {
		WARN_ON(req->r_mtime.tv_sec || req->r_mtime.tv_nsec ||
			req->r_data_offset || req->r_snapc);
	}

	setup_request_data(req);

	encode_spgid(&p, &req->r_t.spgid); /* actual spg */
	ceph_encode_32(&p, req->r_t.pgid.seed); /* raw hash */
	ceph_encode_32(&p, req->r_osdc->osdmap->epoch);
	ceph_encode_32(&p, req->r_flags);

	/* reqid */
	ceph_start_encoding(&p, 2, 2, sizeof(struct ceph_osd_reqid));
	memset(p, 0, sizeof(struct ceph_osd_reqid));
	p += sizeof(struct ceph_osd_reqid);

	/* trace */
	memset(p, 0, sizeof(struct ceph_blkin_trace_info));
	p += sizeof(struct ceph_blkin_trace_info);

	ceph_encode_32(&p, 0); /* client_inc, always 0 */
	ceph_encode_timespec64(p, &req->r_mtime);
	p += sizeof(struct ceph_timespec);

	encode_oloc(&p, end, &req->r_t.target_oloc);
	ceph_encode_string(&p, end, req->r_t.target_oid.name,
			   req->r_t.target_oid.name_len);

	/* ops, can imply data */
	ceph_encode_16(&p, req->r_num_ops);
	for (i = 0; i < req->r_num_ops; i++) {
		data_len += osd_req_encode_op(p, &req->r_ops[i]);
		p += sizeof(struct ceph_osd_op);
	}

libceph: change how "safe" callback is used
An osd request currently has two callbacks. They inform the
initiator of the request when we've received confirmation for the
target osd that a request was received, and when the osd indicates
all changes described by the request are durable.
The only time the second callback is used is in the ceph file system
for a synchronous write. There's a race that makes some handling of
this case unsafe. This patch addresses this problem. The error
handling for this callback is also kind of gross, and this patch
changes that as well.
In ceph_sync_write(), if a safe callback is requested we want to add
the request on the ceph inode's unsafe items list. Because items on
this list must have their tid set (by ceph_osd_start_request()), the
request is added *after* the call to that function returns. The
problem with this is that there's a race between starting the
request and adding it to the unsafe items list; the request may
already be complete before ceph_sync_write() even begins to put it
on the list.
To address this, we change the way the "safe" callback is used.
Rather than just calling it when the request is "safe", we use it to
notify the initiator of the bounds (start and end) of the period during
which the request is *unsafe*. So the initiator gets notified just
before the request gets sent to the osd (when it is "unsafe"), and
again when it's known the results are durable (it's no longer
unsafe). The first call will get made in __send_request(), just
before the request message gets sent to the messenger for the first
time. That function is only called by __send_queued(), which is
always called with the osd client's request mutex held.
We then have this callback function insert the request on the ceph
inode's unsafe list when we're told the request is unsafe. This
will avoid the race because this call will be made under protection
of the osd client's request mutex. It also nicely groups the setup
and cleanup of the state associated with managing unsafe requests.
The name of the "safe" callback field is changed to "unsafe" to
better reflect its new purpose. It has a Boolean "unsafe" parameter
to indicate whether the request is becoming unsafe or is now safe.
Because the "msg" parameter wasn't used, we drop that.
This resolves the original problem reported in:
http://tracker.ceph.com/issues/4706
Reported-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
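
A compact model of the bracketing scheme the message describes: the same callback fires with unsafe=true just before the request first goes out and with unsafe=false once the result is durable. All names and types below are illustrative stand-ins, not the kernel's:

#include <stdbool.h>
#include <stdio.h>

struct request;

typedef void (*unsafe_cb_t)(struct request *req, bool unsafe);

struct request {
	unsafe_cb_t unsafe_callback;
};

static void track_unsafe(struct request *req, bool unsafe)
{
	/* e.g. add to / remove from an inode's unsafe list */
	printf(unsafe ? "request now unsafe\n" : "request now safe\n");
}

static void send_to_osd(struct request *req)
{
	if (req->unsafe_callback)
		req->unsafe_callback(req, true);	/* entering window */
	/* ... hand the message to the messenger ... */
}

static void handle_commit_reply(struct request *req)
{
	if (req->unsafe_callback)
		req->unsafe_callback(req, false);	/* durable now */
}

int main(void)
{
	struct request req = { .unsafe_callback = track_unsafe };

	send_to_osd(&req);
	handle_commit_reply(&req);
	return 0;
}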

	ceph_encode_64(&p, req->r_snapid); /* snapid */
	if (req->r_snapc) {
		ceph_encode_64(&p, req->r_snapc->seq);
		ceph_encode_32(&p, req->r_snapc->num_snaps);
		for (i = 0; i < req->r_snapc->num_snaps; i++)
			ceph_encode_64(&p, req->r_snapc->snaps[i]);
	} else {
		ceph_encode_64(&p, 0); /* snap_seq */
		ceph_encode_32(&p, 0); /* snaps len */
	}

	ceph_encode_32(&p, req->r_attempts); /* retry_attempt */
	BUG_ON(p > end - 8); /* space for features */

	msg->hdr.version = cpu_to_le16(8); /* MOSDOp v8 */
	/* front_len is finalized in encode_request_finish() */
	msg->front.iov_len = p - msg->front.iov_base;
	msg->hdr.front_len = cpu_to_le32(msg->front.iov_len);
	msg->hdr.data_len = cpu_to_le32(data_len);
	/*
	 * The header "data_off" is a hint to the receiver allowing it
	 * to align received data into its buffers such that there's no
	 * need to re-copy it before writing it to disk (direct I/O).
	 */
	msg->hdr.data_off = cpu_to_le16(req->r_data_offset);

	dout("%s req %p msg %p oid %s oid_len %d\n", __func__, req, msg,
	     req->r_t.target_oid.name, req->r_t.target_oid.name_len);
}

static void encode_request_finish(struct ceph_msg *msg)
{
	void *p = msg->front.iov_base;
	void *const partial_end = p + msg->front.iov_len;
	void *const end = p + msg->front_alloc_len;

	if (CEPH_HAVE_FEATURE(msg->con->peer_features, RESEND_ON_SPLIT)) {
		/* luminous OSD -- encode features and be done */
		p = partial_end;
		ceph_encode_64(&p, msg->con->peer_features);
	} else {
		struct {
			char spgid[CEPH_ENCODING_START_BLK_LEN +
				   CEPH_PGID_ENCODING_LEN + 1];
			__le32 hash;
			__le32 epoch;
			__le32 flags;
			char reqid[CEPH_ENCODING_START_BLK_LEN +
				   sizeof(struct ceph_osd_reqid)];
			char trace[sizeof(struct ceph_blkin_trace_info)];
			__le32 client_inc;
			struct ceph_timespec mtime;
		} __packed head;
		struct ceph_pg pgid;
		void *oloc, *oid, *tail;
		int oloc_len, oid_len, tail_len;
		int len;

		/*
		 * Pre-luminous OSD -- reencode v8 into v4 using @head
		 * as a temporary buffer.  Encode the raw PG; the rest
		 * is just a matter of moving oloc, oid and tail blobs
		 * around.
		 */
		memcpy(&head, p, sizeof(head));
		p += sizeof(head);

		oloc = p;
		p += CEPH_ENCODING_START_BLK_LEN;
		pgid.pool = ceph_decode_64(&p);
		p += 4 + 4; /* preferred, key len */
		len = ceph_decode_32(&p);
		p += len; /* nspace */
		oloc_len = p - oloc;

		oid = p;
		len = ceph_decode_32(&p);
		p += len;
		oid_len = p - oid;

		tail = p;
		tail_len = partial_end - p;

		p = msg->front.iov_base;
		ceph_encode_copy(&p, &head.client_inc, sizeof(head.client_inc));
		ceph_encode_copy(&p, &head.epoch, sizeof(head.epoch));
		ceph_encode_copy(&p, &head.flags, sizeof(head.flags));
		ceph_encode_copy(&p, &head.mtime, sizeof(head.mtime));

		/* reassert_version */
		memset(p, 0, sizeof(struct ceph_eversion));
		p += sizeof(struct ceph_eversion);

		BUG_ON(p >= oloc);
		memmove(p, oloc, oloc_len);
		p += oloc_len;

		pgid.seed = le32_to_cpu(head.hash);
		encode_pgid(&p, &pgid); /* raw pg */

		BUG_ON(p >= oid);
		memmove(p, oid, oid_len);
		p += oid_len;

		/* tail -- ops, snapid, snapc, retry_attempt */
		BUG_ON(p >= tail);
		memmove(p, tail, tail_len);
		p += tail_len;

		msg->hdr.version = cpu_to_le16(4); /* MOSDOp v4 */
	}

	BUG_ON(p > end);
	msg->front.iov_len = p - msg->front.iov_base;
	msg->hdr.front_len = cpu_to_le32(msg->front.iov_len);

	dout("%s msg %p tid %llu %u+%u+%u v%d\n", __func__, msg,
	     le64_to_cpu(msg->hdr.tid), le32_to_cpu(msg->hdr.front_len),
	     le32_to_cpu(msg->hdr.middle_len), le32_to_cpu(msg->hdr.data_len),
	     le16_to_cpu(msg->hdr.version));
}
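
The pre-luminous branch above works because every move target lies strictly before its source (the BUG_ON()s assert this), so blobs can be shuffled leftward in place without a second buffer. A tiny standalone model of that in-place reshuffle, with a made-up layout:

#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * The message was encoded with a large "new" head; for an old peer the
 * head shrinks, so the variable-length blobs behind it are memmove()d
 * leftward in place.  Every destination precedes its source, so
 * in-buffer data stays intact until it is consumed.
 */
int main(void)
{
	char buf[16] = "HHHHooooTTT";	/* 4-byte head, oloc, tail */
	char *oloc = buf + 4, *tail = buf + 8;
	int oloc_len = 4, tail_len = 3;
	char *p = buf;

	*p++ = 'h';			/* re-encode a 1-byte head */
	assert(p < oloc);
	memmove(p, oloc, oloc_len);	/* dest < src: safe */
	p += oloc_len;
	assert(p < tail);
	memmove(p, tail, tail_len);
	p += tail_len;

	printf("%.*s\n", (int)(p - buf), buf);	/* hooooTTT */
	return 0;
}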

/*
 * @req has to be assigned a tid and registered.
 */
static void send_request(struct ceph_osd_request *req)
{
	struct ceph_osd *osd = req->r_osd;

	verify_osd_locked(osd);
	WARN_ON(osd->o_osd != req->r_t.osd);

	/* backoff? */
	if (should_plug_request(req))
		return;

	/*
	 * We may have a previously queued request message hanging
	 * around.  Cancel it to avoid corrupting the msgr.
	 */
	if (req->r_sent)
		ceph_msg_revoke(req->r_request);

	req->r_flags |= CEPH_OSD_FLAG_KNOWN_REDIR;
	if (req->r_attempts)
		req->r_flags |= CEPH_OSD_FLAG_RETRY;
	else
		WARN_ON(req->r_flags & CEPH_OSD_FLAG_RETRY);

	encode_request_partial(req, req->r_request);

	dout("%s req %p tid %llu to pgid %llu.%x spgid %llu.%xs%d osd%d e%u flags 0x%x attempt %d\n",
	     __func__, req, req->r_tid, req->r_t.pgid.pool, req->r_t.pgid.seed,
	     req->r_t.spgid.pgid.pool, req->r_t.spgid.pgid.seed,
	     req->r_t.spgid.shard, osd->o_osd, req->r_t.epoch, req->r_flags,
	     req->r_attempts);

	req->r_t.paused = false;
	req->r_stamp = jiffies;
	req->r_attempts++;

	req->r_sent = osd->o_incarnation;
	req->r_request->hdr.tid = cpu_to_le64(req->r_tid);
	ceph_con_send(&osd->o_con, ceph_msg_get(req->r_request));
}

static void maybe_request_map(struct ceph_osd_client *osdc)
{
	bool continuous = false;

	verify_osdc_locked(osdc);
	WARN_ON(!osdc->osdmap->epoch);

	if (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
	    ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD) ||
	    ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSEWR)) {
		dout("%s osdc %p continuous\n", __func__, osdc);
		continuous = true;
	} else {
		dout("%s osdc %p onetime\n", __func__, osdc);
	}

	if (ceph_monc_want_map(&osdc->client->monc, CEPH_SUB_OSDMAP,
			       osdc->osdmap->epoch + 1, continuous))
		ceph_monc_renew_subs(&osdc->client->monc);
}

static void complete_request(struct ceph_osd_request *req, int err);
static void send_map_check(struct ceph_osd_request *req);

static void __submit_request(struct ceph_osd_request *req, bool wrlocked)
{
	struct ceph_osd_client *osdc = req->r_osdc;
	struct ceph_osd *osd;
	enum calc_target_result ct_res;
	int err = 0;
	bool need_send = false;
	bool promoted = false;

	WARN_ON(req->r_tid);
	dout("%s req %p wrlocked %d\n", __func__, req, wrlocked);

again:
	ct_res = calc_target(osdc, &req->r_t, false);
	if (ct_res == CALC_TARGET_POOL_DNE && !wrlocked)
		goto promote;

	osd = lookup_create_osd(osdc, req->r_t.osd, wrlocked);
	if (IS_ERR(osd)) {
		WARN_ON(PTR_ERR(osd) != -EAGAIN || wrlocked);
		goto promote;
	}

	if (osdc->abort_err) {
		dout("req %p abort_err %d\n", req, osdc->abort_err);
		err = osdc->abort_err;
	} else if (osdc->osdmap->epoch < osdc->epoch_barrier) {
		dout("req %p epoch %u barrier %u\n", req, osdc->osdmap->epoch,
		     osdc->epoch_barrier);
		req->r_t.paused = true;
		maybe_request_map(osdc);
	} else if ((req->r_flags & CEPH_OSD_FLAG_WRITE) &&
		   ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSEWR)) {
		dout("req %p pausewr\n", req);
		req->r_t.paused = true;
		maybe_request_map(osdc);
	} else if ((req->r_flags & CEPH_OSD_FLAG_READ) &&
		   ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD)) {
		dout("req %p pauserd\n", req);
		req->r_t.paused = true;
		maybe_request_map(osdc);
	} else if ((req->r_flags & CEPH_OSD_FLAG_WRITE) &&
		   !(req->r_flags & (CEPH_OSD_FLAG_FULL_TRY |
				     CEPH_OSD_FLAG_FULL_FORCE)) &&
		   (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
		    pool_full(osdc, req->r_t.base_oloc.pool))) {
		dout("req %p full/pool_full\n", req);
		if (ceph_test_opt(osdc->client, ABORT_ON_FULL)) {
			err = -ENOSPC;
		} else {
			pr_warn_ratelimited("FULL or reached pool quota\n");
			req->r_t.paused = true;
			maybe_request_map(osdc);
		}
	} else if (!osd_homeless(osd)) {
		need_send = true;
	} else {
		maybe_request_map(osdc);
	}

	mutex_lock(&osd->lock);
	/*
	 * Assign the tid atomically with send_request() to protect
	 * multiple writes to the same object from racing with each
	 * other, resulting in out of order ops on the OSDs.
	 */
	req->r_tid = atomic64_inc_return(&osdc->last_tid);
	link_request(osd, req);
	if (need_send)
		send_request(req);
	else if (err)
		complete_request(req, err);
	mutex_unlock(&osd->lock);

	if (!err && ct_res == CALC_TARGET_POOL_DNE)
		send_map_check(req);

	if (promoted)
		downgrade_write(&osdc->lock);
	return;

promote:
	up_read(&osdc->lock);
	down_write(&osdc->lock);
	wrlocked = true;
	promoted = true;
	goto again;
}
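
__submit_request() cannot upgrade osdc->lock in place, so it "promotes" by dropping the shared lock, taking the exclusive one and retrying from the top -- any state observed before the promotion must be recomputed. A minimal pthreads sketch of the same retry pattern (names are illustrative; link with -lpthread):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

/* pretend check that may require exclusive access to fix up state */
static bool needs_exclusive_work(void) { return true; }

static void submit(void)
{
	bool wrlocked = false;

	pthread_rwlock_rdlock(&lock);
again:
	if (needs_exclusive_work() && !wrlocked) {
		/*
		 * Cannot upgrade in place: drop the read lock, take the
		 * write lock, and redo the check -- the world may have
		 * changed in the window between unlock and lock.
		 */
		pthread_rwlock_unlock(&lock);
		pthread_rwlock_wrlock(&lock);
		wrlocked = true;
		goto again;
	}
	printf("working, wrlocked=%d\n", wrlocked);
	pthread_rwlock_unlock(&lock);
}

int main(void)
{
	submit();
	return 0;
}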

static void account_request(struct ceph_osd_request *req)
{
	WARN_ON(req->r_flags & (CEPH_OSD_FLAG_ACK | CEPH_OSD_FLAG_ONDISK));
	WARN_ON(!(req->r_flags & (CEPH_OSD_FLAG_READ | CEPH_OSD_FLAG_WRITE)));

	req->r_flags |= CEPH_OSD_FLAG_ONDISK;
	atomic_inc(&req->r_osdc->num_requests);

	req->r_start_stamp = jiffies;
}

static void submit_request(struct ceph_osd_request *req, bool wrlocked)
{
	ceph_osdc_get_request(req);
	account_request(req);
	__submit_request(req, wrlocked);
}

static void finish_request(struct ceph_osd_request *req)
{
	struct ceph_osd_client *osdc = req->r_osdc;

	WARN_ON(lookup_request_mc(&osdc->map_checks, req->r_tid));
	dout("%s req %p tid %llu\n", __func__, req, req->r_tid);

	if (req->r_osd)
		unlink_request(req->r_osd, req);
	atomic_dec(&osdc->num_requests);

	/*
	 * If an OSD has failed or returned and a request has been sent
	 * twice, it's possible to get a reply and end up here while the
	 * request message is queued for delivery.  We will ignore the
	 * reply, so not a big deal, but better to try and catch it.
	 */
	ceph_msg_revoke(req->r_request);
	ceph_msg_revoke_incoming(req->r_reply);
}

static void __complete_request(struct ceph_osd_request *req)
{
	dout("%s req %p tid %llu cb %ps result %d\n", __func__, req,
	     req->r_tid, req->r_callback, req->r_result);

	if (req->r_callback)
		req->r_callback(req);
	complete_all(&req->r_completion);
	ceph_osdc_put_request(req);
}

static void complete_request_workfn(struct work_struct *work)
{
	struct ceph_osd_request *req =
	    container_of(work, struct ceph_osd_request, r_complete_work);

	__complete_request(req);
}

/*
 * This is open-coded in handle_reply().
 */
static void complete_request(struct ceph_osd_request *req, int err)
{
	dout("%s req %p tid %llu err %d\n", __func__, req, req->r_tid, err);

	req->r_result = err;
	finish_request(req);

	INIT_WORK(&req->r_complete_work, complete_request_workfn);
	queue_work(req->r_osdc->completion_wq, &req->r_complete_work);
}

static void cancel_map_check(struct ceph_osd_request *req)
{
	struct ceph_osd_client *osdc = req->r_osdc;
	struct ceph_osd_request *lookup_req;

	verify_osdc_wrlocked(osdc);

	lookup_req = lookup_request_mc(&osdc->map_checks, req->r_tid);
	if (!lookup_req)
		return;

	WARN_ON(lookup_req != req);
	erase_request_mc(&osdc->map_checks, req);
	ceph_osdc_put_request(req);
}

static void cancel_request(struct ceph_osd_request *req)
{
	dout("%s req %p tid %llu\n", __func__, req, req->r_tid);

	cancel_map_check(req);
	finish_request(req);
	complete_all(&req->r_completion);
libceph: always signal completion when done
r_safe_completion is currently, and has always been, signaled only if
on-disk ack was requested. It's there for fsync and syncfs, which wait
for in-flight writes to flush - all data write requests set ONDISK.
However, the pool perm check code introduced in 4.2 sends a write
request with only ACK set. An unfortunately timed syncfs can then hang
forever: r_safe_completion won't be signaled because only an unsafe
reply was requested.
We could patch ceph_osdc_sync() to skip !ONDISK write requests, but
that is somewhat incomplete and yet another special case. Instead,
rename this completion to r_done_completion and always signal it when
the OSD client is done with the request, whether unsafe, safe, or
error. This is a bit cleaner and helps with the cancellation code.
Reported-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
	ceph_osdc_put_request(req);
}

static void abort_request(struct ceph_osd_request *req, int err)
{
	dout("%s req %p tid %llu err %d\n", __func__, req, req->r_tid, err);

	cancel_map_check(req);
	complete_request(req, err);
}

static int abort_fn(struct ceph_osd_request *req, void *arg)
{
	int err = *(int *)arg;

	abort_request(req, err);
	return 0; /* continue iteration */
}

/*
 * Abort all in-flight requests with @err and arrange for all future
 * requests to be failed immediately.
 */
void ceph_osdc_abort_requests(struct ceph_osd_client *osdc, int err)
{
	dout("%s osdc %p err %d\n", __func__, osdc, err);
	down_write(&osdc->lock);
	for_each_request(osdc, abort_fn, &err);
	osdc->abort_err = err;
	up_write(&osdc->lock);
}
EXPORT_SYMBOL(ceph_osdc_abort_requests);

void ceph_osdc_clear_abort_err(struct ceph_osd_client *osdc)
{
	down_write(&osdc->lock);
	osdc->abort_err = 0;
	up_write(&osdc->lock);
}
EXPORT_SYMBOL(ceph_osdc_clear_abort_err);

static void update_epoch_barrier(struct ceph_osd_client *osdc, u32 eb)
{
	if (likely(eb > osdc->epoch_barrier)) {
		dout("updating epoch_barrier from %u to %u\n",
		     osdc->epoch_barrier, eb);
		osdc->epoch_barrier = eb;
		/* Request map if we're not to the barrier yet */
		if (eb > osdc->osdmap->epoch)
			maybe_request_map(osdc);
	}
}

void ceph_osdc_update_epoch_barrier(struct ceph_osd_client *osdc, u32 eb)
{
	down_read(&osdc->lock);
	if (unlikely(eb > osdc->epoch_barrier)) {
		up_read(&osdc->lock);
		down_write(&osdc->lock);
		update_epoch_barrier(osdc, eb);
		up_write(&osdc->lock);
	} else {
		up_read(&osdc->lock);
	}
}
EXPORT_SYMBOL(ceph_osdc_update_epoch_barrier);
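
ceph_osdc_update_epoch_barrier() is the classic double-checked upgrade: peek under the read lock, and only re-check and write after taking the write lock, since another writer may have moved the barrier in the unlocked window. A userspace sketch of that shape (link with -lpthread):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned int barrier;

static void update_barrier(unsigned int eb)
{
	pthread_rwlock_rdlock(&lock);
	if (eb > barrier) {
		pthread_rwlock_unlock(&lock);
		pthread_rwlock_wrlock(&lock);
		if (eb > barrier) {	/* re-check after the upgrade */
			printf("barrier %u -> %u\n", barrier, eb);
			barrier = eb;
		}
	}
	pthread_rwlock_unlock(&lock);
}

int main(void)
{
	update_barrier(5);
	update_barrier(3);	/* no-op: the barrier only moves forward */
	return 0;
}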

/*
 * We can end up releasing caps as a result of abort_request().
 * In that case, we probably want to ensure that the cap release message
 * has an updated epoch barrier in it, so set the epoch barrier prior to
 * aborting the first request.
 */
static int abort_on_full_fn(struct ceph_osd_request *req, void *arg)
{
	struct ceph_osd_client *osdc = req->r_osdc;
	bool *victims = arg;

	if ((req->r_flags & CEPH_OSD_FLAG_WRITE) &&
	    (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
	     pool_full(osdc, req->r_t.base_oloc.pool))) {
		if (!*victims) {
			update_epoch_barrier(osdc, osdc->osdmap->epoch);
			*victims = true;
		}
		abort_request(req, -ENOSPC);
	}

	return 0; /* continue iteration */
}

/*
 * Drop all pending requests that are stalled waiting on a full condition to
 * clear, and complete them with ENOSPC as the return code.  Set the
 * osdc->epoch_barrier to the latest map epoch that we've seen if any were
 * cancelled.
 */
static void ceph_osdc_abort_on_full(struct ceph_osd_client *osdc)
{
	bool victims = false;

	if (ceph_test_opt(osdc->client, ABORT_ON_FULL) &&
	    (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) || have_pool_full(osdc)))
		for_each_request(osdc, abort_on_full_fn, &victims);
}

static void check_pool_dne(struct ceph_osd_request *req)
{
	struct ceph_osd_client *osdc = req->r_osdc;
	struct ceph_osdmap *map = osdc->osdmap;

	verify_osdc_wrlocked(osdc);
	WARN_ON(!map->epoch);

	if (req->r_attempts) {
		/*
		 * We sent a request earlier, which means that
		 * previously the pool existed, and now it does not
		 * (i.e., it was deleted).
		 */
		req->r_map_dne_bound = map->epoch;
		dout("%s req %p tid %llu pool disappeared\n", __func__, req,
		     req->r_tid);
	} else {
		dout("%s req %p tid %llu map_dne_bound %u have %u\n", __func__,
		     req, req->r_tid, req->r_map_dne_bound, map->epoch);
	}

	if (req->r_map_dne_bound) {
		if (map->epoch >= req->r_map_dne_bound) {
			/* we had a new enough map */
			pr_info_ratelimited("tid %llu pool does not exist\n",
					    req->r_tid);
			complete_request(req, -ENOENT);
		}
	} else {
		send_map_check(req);
	}
}

static void map_check_cb(struct ceph_mon_generic_request *greq)
{
	struct ceph_osd_client *osdc = &greq->monc->client->osdc;
	struct ceph_osd_request *req;
	u64 tid = greq->private_data;

	WARN_ON(greq->result || !greq->u.newest);

	down_write(&osdc->lock);
	req = lookup_request_mc(&osdc->map_checks, tid);
	if (!req) {
		dout("%s tid %llu dne\n", __func__, tid);
		goto out_unlock;
	}

	dout("%s req %p tid %llu map_dne_bound %u newest %llu\n", __func__,
	     req, req->r_tid, req->r_map_dne_bound, greq->u.newest);
	if (!req->r_map_dne_bound)
		req->r_map_dne_bound = greq->u.newest;
	erase_request_mc(&osdc->map_checks, req);
	check_pool_dne(req);

	ceph_osdc_put_request(req);
out_unlock:
	up_write(&osdc->lock);
}

static void send_map_check(struct ceph_osd_request *req)
{
	struct ceph_osd_client *osdc = req->r_osdc;
	struct ceph_osd_request *lookup_req;
	int ret;

	verify_osdc_wrlocked(osdc);

	lookup_req = lookup_request_mc(&osdc->map_checks, req->r_tid);
	if (lookup_req) {
		WARN_ON(lookup_req != req);
		return;
	}

	ceph_osdc_get_request(req);
	insert_request_mc(&osdc->map_checks, req);
	ret = ceph_monc_get_version_async(&osdc->client->monc, "osdmap",
					  map_check_cb, req->r_tid);
	WARN_ON(ret);
}

/*
 * lingering requests, watch/notify v2 infrastructure
 */
static void linger_release(struct kref *kref)
{
	struct ceph_osd_linger_request *lreq =
	    container_of(kref, struct ceph_osd_linger_request, kref);

	dout("%s lreq %p reg_req %p ping_req %p\n", __func__, lreq,
	     lreq->reg_req, lreq->ping_req);
	WARN_ON(!RB_EMPTY_NODE(&lreq->node));
	WARN_ON(!RB_EMPTY_NODE(&lreq->osdc_node));
	WARN_ON(!RB_EMPTY_NODE(&lreq->mc_node));
	WARN_ON(!list_empty(&lreq->scan_item));
	WARN_ON(!list_empty(&lreq->pending_lworks));
	WARN_ON(lreq->osd);

	if (lreq->reg_req)
		ceph_osdc_put_request(lreq->reg_req);
	if (lreq->ping_req)
		ceph_osdc_put_request(lreq->ping_req);
	target_destroy(&lreq->t);
	kfree(lreq);
}

static void linger_put(struct ceph_osd_linger_request *lreq)
{
	if (lreq)
		kref_put(&lreq->kref, linger_release);
}

static struct ceph_osd_linger_request *
linger_get(struct ceph_osd_linger_request *lreq)
{
	kref_get(&lreq->kref);
	return lreq;
}
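
linger requests follow the usual kref discipline: the allocator hands out the first reference, linger_get() returns the object so call sites can chain it, and the final linger_put() runs the release function. A plain-C sketch of that idiom using C11 atomics instead of struct kref:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct lreq {
	atomic_int refcount;
	/* ... request state ... */
};

static struct lreq *lreq_alloc(void)
{
	struct lreq *l = calloc(1, sizeof(*l));

	if (l)
		atomic_init(&l->refcount, 1);	/* caller holds first ref */
	return l;
}

static struct lreq *lreq_get(struct lreq *l)
{
	atomic_fetch_add(&l->refcount, 1);
	return l;	/* returning the object makes call sites chainable */
}

static void lreq_put(struct lreq *l)
{
	if (l && atomic_fetch_sub(&l->refcount, 1) == 1) {
		printf("releasing\n");	/* cf. linger_release() */
		free(l);
	}
}

int main(void)
{
	struct lreq *l = lreq_alloc();
	struct lreq *ref = lreq_get(l);	/* second user */

	lreq_put(l);
	lreq_put(ref);	/* last put frees */
	return 0;
}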

static struct ceph_osd_linger_request *
linger_alloc(struct ceph_osd_client *osdc)
{
	struct ceph_osd_linger_request *lreq;

	lreq = kzalloc(sizeof(*lreq), GFP_NOIO);
	if (!lreq)
		return NULL;

	kref_init(&lreq->kref);
	mutex_init(&lreq->lock);
	RB_CLEAR_NODE(&lreq->node);
	RB_CLEAR_NODE(&lreq->osdc_node);
	RB_CLEAR_NODE(&lreq->mc_node);
	INIT_LIST_HEAD(&lreq->scan_item);
	INIT_LIST_HEAD(&lreq->pending_lworks);
	init_completion(&lreq->reg_commit_wait);
	init_completion(&lreq->notify_finish_wait);

	lreq->osdc = osdc;
	target_init(&lreq->t);

	dout("%s lreq %p\n", __func__, lreq);
	return lreq;
}

DEFINE_RB_INSDEL_FUNCS(linger, struct ceph_osd_linger_request, linger_id, node)
DEFINE_RB_FUNCS(linger_osdc, struct ceph_osd_linger_request, linger_id, osdc_node)
DEFINE_RB_FUNCS(linger_mc, struct ceph_osd_linger_request, linger_id, mc_node)

/*
 * Create linger request <-> OSD session relation.
 *
 * @lreq has to be registered, @osd may be homeless.
 */
static void link_linger(struct ceph_osd *osd,
			struct ceph_osd_linger_request *lreq)
{
	verify_osd_locked(osd);
	WARN_ON(!lreq->linger_id || lreq->osd);
	dout("%s osd %p osd%d lreq %p linger_id %llu\n", __func__, osd,
	     osd->o_osd, lreq, lreq->linger_id);

	if (!osd_homeless(osd))
		__remove_osd_from_lru(osd);
	else
		atomic_inc(&osd->o_osdc->num_homeless);

	get_osd(osd);
	insert_linger(&osd->o_linger_requests, lreq);
	lreq->osd = osd;
}

static void unlink_linger(struct ceph_osd *osd,
			  struct ceph_osd_linger_request *lreq)
{
	verify_osd_locked(osd);
	WARN_ON(lreq->osd != osd);
	dout("%s osd %p osd%d lreq %p linger_id %llu\n", __func__, osd,
	     osd->o_osd, lreq, lreq->linger_id);

	lreq->osd = NULL;
	erase_linger(&osd->o_linger_requests, lreq);
	put_osd(osd);

	if (!osd_homeless(osd))
		maybe_move_osd_to_lru(osd);
	else
		atomic_dec(&osd->o_osdc->num_homeless);
}

static bool __linger_registered(struct ceph_osd_linger_request *lreq)
{
	verify_osdc_locked(lreq->osdc);

	return !RB_EMPTY_NODE(&lreq->osdc_node);
}

static bool linger_registered(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;
	bool registered;

	down_read(&osdc->lock);
	registered = __linger_registered(lreq);
	up_read(&osdc->lock);

	return registered;
}

static void linger_register(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;

	verify_osdc_wrlocked(osdc);
	WARN_ON(lreq->linger_id);

	linger_get(lreq);
	lreq->linger_id = ++osdc->last_linger_id;
	insert_linger_osdc(&osdc->linger_requests, lreq);
}

static void linger_unregister(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;

	verify_osdc_wrlocked(osdc);

	erase_linger_osdc(&osdc->linger_requests, lreq);
	linger_put(lreq);
}

static void cancel_linger_request(struct ceph_osd_request *req)
{
	struct ceph_osd_linger_request *lreq = req->r_priv;

	WARN_ON(!req->r_linger);
	cancel_request(req);
	linger_put(lreq);
}

struct linger_work {
	struct work_struct work;
	struct ceph_osd_linger_request *lreq;
	struct list_head pending_item;
	unsigned long queued_stamp;

	union {
		struct {
			u64 notify_id;
			u64 notifier_id;
			void *payload; /* points into @msg front */
			size_t payload_len;

			struct ceph_msg *msg; /* for ceph_msg_put() */
		} notify;
		struct {
			int err;
		} error;
	};
};

static struct linger_work *lwork_alloc(struct ceph_osd_linger_request *lreq,
				       work_func_t workfn)
{
	struct linger_work *lwork;

	lwork = kzalloc(sizeof(*lwork), GFP_NOIO);
	if (!lwork)
		return NULL;

	INIT_WORK(&lwork->work, workfn);
	INIT_LIST_HEAD(&lwork->pending_item);
	lwork->lreq = linger_get(lreq);

	return lwork;
}

static void lwork_free(struct linger_work *lwork)
{
	struct ceph_osd_linger_request *lreq = lwork->lreq;

	mutex_lock(&lreq->lock);
	list_del(&lwork->pending_item);
	mutex_unlock(&lreq->lock);

	linger_put(lreq);
	kfree(lwork);
}

static void lwork_queue(struct linger_work *lwork)
{
	struct ceph_osd_linger_request *lreq = lwork->lreq;
	struct ceph_osd_client *osdc = lreq->osdc;

	verify_lreq_locked(lreq);
	WARN_ON(!list_empty(&lwork->pending_item));

	lwork->queued_stamp = jiffies;
	list_add_tail(&lwork->pending_item, &lreq->pending_lworks);
	queue_work(osdc->notify_wq, &lwork->work);
}

static void do_watch_notify(struct work_struct *w)
{
	struct linger_work *lwork = container_of(w, struct linger_work, work);
	struct ceph_osd_linger_request *lreq = lwork->lreq;

	if (!linger_registered(lreq)) {
		dout("%s lreq %p not registered\n", __func__, lreq);
		goto out;
	}

	WARN_ON(!lreq->is_watch);
	dout("%s lreq %p notify_id %llu notifier_id %llu payload_len %zu\n",
	     __func__, lreq, lwork->notify.notify_id, lwork->notify.notifier_id,
	     lwork->notify.payload_len);
	lreq->wcb(lreq->data, lwork->notify.notify_id, lreq->linger_id,
		  lwork->notify.notifier_id, lwork->notify.payload,
		  lwork->notify.payload_len);

out:
	ceph_msg_put(lwork->notify.msg);
	lwork_free(lwork);
}

static void do_watch_error(struct work_struct *w)
{
	struct linger_work *lwork = container_of(w, struct linger_work, work);
	struct ceph_osd_linger_request *lreq = lwork->lreq;

	if (!linger_registered(lreq)) {
		dout("%s lreq %p not registered\n", __func__, lreq);
		goto out;
	}

	dout("%s lreq %p err %d\n", __func__, lreq, lwork->error.err);
	lreq->errcb(lreq->data, lreq->linger_id, lwork->error.err);

out:
	lwork_free(lwork);
}

static void queue_watch_error(struct ceph_osd_linger_request *lreq)
{
	struct linger_work *lwork;

	lwork = lwork_alloc(lreq, do_watch_error);
	if (!lwork) {
		pr_err("failed to allocate error-lwork\n");
		return;
	}

	lwork->error.err = lreq->last_error;
	lwork_queue(lwork);
}
|
|
|
|
|
|
|
|
static void linger_reg_commit_complete(struct ceph_osd_linger_request *lreq,
|
|
|
|
int result)
|
|
|
|
{
|
|
|
|
if (!completion_done(&lreq->reg_commit_wait)) {
|
|
|
|
lreq->reg_commit_error = (result <= 0 ? result : 0);
|
|
|
|
complete_all(&lreq->reg_commit_wait);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
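
/*
 * Completion callback for the initial registration request: record
 * the commit result and, for notifies, extract the notify_id from the
 * reply.
 */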
static void linger_commit_cb(struct ceph_osd_request *req)
{
	struct ceph_osd_linger_request *lreq = req->r_priv;

	mutex_lock(&lreq->lock);
	dout("%s lreq %p linger_id %llu result %d\n", __func__, lreq,
	     lreq->linger_id, req->r_result);
	linger_reg_commit_complete(lreq, req->r_result);
	lreq->committed = true;

	if (!lreq->is_watch) {
		struct ceph_osd_data *osd_data =
		    osd_req_op_data(req, 0, notify, response_data);
		void *p = page_address(osd_data->pages[0]);

		WARN_ON(req->r_ops[0].op != CEPH_OSD_OP_NOTIFY ||
			osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);

		/* make note of the notify_id */
		if (req->r_ops[0].outdata_len >= sizeof(u64)) {
			lreq->notify_id = ceph_decode_64(&p);
			dout("lreq %p notify_id %llu\n", lreq,
			     lreq->notify_id);
		} else {
			dout("lreq %p no notify_id\n", lreq);
		}
	}

	mutex_unlock(&lreq->lock);
	linger_put(lreq);
}

static int normalize_watch_error(int err)
{
	/*
	 * Translate ENOENT -> ENOTCONN so that a delete->disconnection
	 * notification and a failure to reconnect because we raced with
	 * the delete appear the same to the user.
	 */
	if (err == -ENOENT)
		err = -ENOTCONN;

	return err;
}

static void linger_reconnect_cb(struct ceph_osd_request *req)
{
	struct ceph_osd_linger_request *lreq = req->r_priv;

	mutex_lock(&lreq->lock);
	dout("%s lreq %p linger_id %llu result %d last_error %d\n", __func__,
	     lreq, lreq->linger_id, req->r_result, lreq->last_error);
	if (req->r_result < 0) {
		if (!lreq->last_error) {
			lreq->last_error = normalize_watch_error(req->r_result);
			queue_watch_error(lreq);
		}
	}

	mutex_unlock(&lreq->lock);
	linger_put(lreq);
}
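
/*
 * (Re)send the registration request for @lreq: a WATCH reconnect with
 * a bumped register_gen if the watch has already committed, a fresh
 * WATCH or NOTIFY registration otherwise.
 */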
static void send_linger(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_request *req = lreq->reg_req;
	struct ceph_osd_req_op *op = &req->r_ops[0];

	verify_osdc_wrlocked(req->r_osdc);
	dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id);

	if (req->r_osd)
		cancel_linger_request(req);

	request_reinit(req);
	ceph_oid_copy(&req->r_base_oid, &lreq->t.base_oid);
	ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc);
	req->r_flags = lreq->t.flags;
	req->r_mtime = lreq->mtime;

	mutex_lock(&lreq->lock);
	if (lreq->is_watch && lreq->committed) {
		WARN_ON(op->op != CEPH_OSD_OP_WATCH ||
			op->watch.cookie != lreq->linger_id);
		op->watch.op = CEPH_OSD_WATCH_OP_RECONNECT;
		op->watch.gen = ++lreq->register_gen;
		dout("lreq %p reconnect register_gen %u\n", lreq,
		     op->watch.gen);
		req->r_callback = linger_reconnect_cb;
	} else {
		if (!lreq->is_watch)
			lreq->notify_id = 0;
		else
			WARN_ON(op->watch.op != CEPH_OSD_WATCH_OP_WATCH);
		dout("lreq %p register\n", lreq);
		req->r_callback = linger_commit_cb;
	}
	mutex_unlock(&lreq->lock);

	req->r_priv = linger_get(lreq);
	req->r_linger = true;

	submit_request(req, true);
}

static void linger_ping_cb(struct ceph_osd_request *req)
{
	struct ceph_osd_linger_request *lreq = req->r_priv;

	mutex_lock(&lreq->lock);
	dout("%s lreq %p linger_id %llu result %d ping_sent %lu last_error %d\n",
	     __func__, lreq, lreq->linger_id, req->r_result, lreq->ping_sent,
	     lreq->last_error);
	if (lreq->register_gen == req->r_ops[0].watch.gen) {
		if (!req->r_result) {
			lreq->watch_valid_thru = lreq->ping_sent;
		} else if (!lreq->last_error) {
			lreq->last_error = normalize_watch_error(req->r_result);
			queue_watch_error(lreq);
		}
	} else {
		dout("lreq %p register_gen %u ignoring old pong %u\n", lreq,
		     lreq->register_gen, req->r_ops[0].watch.gen);
	}

	mutex_unlock(&lreq->lock);
	linger_put(lreq);
}
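
/*
 * Ping the OSD serving a committed watch to verify that the connection
 * is still live.  The pong is matched against register_gen in
 * linger_ping_cb() so that stale replies are ignored.
 */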
static void send_linger_ping(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;
	struct ceph_osd_request *req = lreq->ping_req;
	struct ceph_osd_req_op *op = &req->r_ops[0];

	if (ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD)) {
		dout("%s PAUSERD\n", __func__);
		return;
	}

	lreq->ping_sent = jiffies;
	dout("%s lreq %p linger_id %llu ping_sent %lu register_gen %u\n",
	     __func__, lreq, lreq->linger_id, lreq->ping_sent,
	     lreq->register_gen);

	if (req->r_osd)
		cancel_linger_request(req);

	request_reinit(req);
	target_copy(&req->r_t, &lreq->t);

	WARN_ON(op->op != CEPH_OSD_OP_WATCH ||
		op->watch.cookie != lreq->linger_id ||
		op->watch.op != CEPH_OSD_WATCH_OP_PING);
	op->watch.gen = lreq->register_gen;
	req->r_callback = linger_ping_cb;
	req->r_priv = linger_get(lreq);
	req->r_linger = true;

	ceph_osdc_get_request(req);
	account_request(req);
	req->r_tid = atomic64_inc_return(&osdc->last_tid);
	link_request(lreq->osd, req);
	send_request(req);
}
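
/*
 * Register @lreq with the osdc, fill in the watch/notify cookie(s),
 * map it to an OSD and send the registration request.
 */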
static void linger_submit(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;
	struct ceph_osd *osd;

	down_write(&osdc->lock);
	linger_register(lreq);
	if (lreq->is_watch) {
		lreq->reg_req->r_ops[0].watch.cookie = lreq->linger_id;
		lreq->ping_req->r_ops[0].watch.cookie = lreq->linger_id;
	} else {
		lreq->reg_req->r_ops[0].notify.cookie = lreq->linger_id;
	}

	calc_target(osdc, &lreq->t, false);
	osd = lookup_create_osd(osdc, lreq->t.osd, true);
	link_linger(osd, lreq);

	send_linger(lreq);
	up_write(&osdc->lock);
}

static void cancel_linger_map_check(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;
	struct ceph_osd_linger_request *lookup_lreq;

	verify_osdc_wrlocked(osdc);

	lookup_lreq = lookup_linger_mc(&osdc->linger_map_checks,
				       lreq->linger_id);
	if (!lookup_lreq)
		return;

	WARN_ON(lookup_lreq != lreq);
	erase_linger_mc(&osdc->linger_map_checks, lreq);
	linger_put(lreq);
}

/*
 * @lreq has to be both registered and linked.
 */
static void __linger_cancel(struct ceph_osd_linger_request *lreq)
{
	if (lreq->is_watch && lreq->ping_req->r_osd)
		cancel_linger_request(lreq->ping_req);
	if (lreq->reg_req->r_osd)
		cancel_linger_request(lreq->reg_req);
	cancel_linger_map_check(lreq);
	unlink_linger(lreq->osd, lreq);
	linger_unregister(lreq);
}

static void linger_cancel(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;

	down_write(&osdc->lock);
	if (__linger_registered(lreq))
		__linger_cancel(lreq);
	up_write(&osdc->lock);
}

static void send_linger_map_check(struct ceph_osd_linger_request *lreq);
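
/*
 * Handle a linger request whose pool may have been deleted: bound the
 * decision by the current epoch (if the watch was established at some
 * point) or by the newest epoch reported by the monitors, and fail
 * the registration with -ENOENT once a new enough map is seen.
 */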
static void check_linger_pool_dne(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;
	struct ceph_osdmap *map = osdc->osdmap;

	verify_osdc_wrlocked(osdc);
	WARN_ON(!map->epoch);

	if (lreq->register_gen) {
		lreq->map_dne_bound = map->epoch;
		dout("%s lreq %p linger_id %llu pool disappeared\n", __func__,
		     lreq, lreq->linger_id);
	} else {
		dout("%s lreq %p linger_id %llu map_dne_bound %u have %u\n",
		     __func__, lreq, lreq->linger_id, lreq->map_dne_bound,
		     map->epoch);
	}

	if (lreq->map_dne_bound) {
		if (map->epoch >= lreq->map_dne_bound) {
			/* we had a new enough map */
			pr_info("linger_id %llu pool does not exist\n",
				lreq->linger_id);
			linger_reg_commit_complete(lreq, -ENOENT);
			__linger_cancel(lreq);
		}
	} else {
		send_linger_map_check(lreq);
	}
}

static void linger_map_check_cb(struct ceph_mon_generic_request *greq)
{
	struct ceph_osd_client *osdc = &greq->monc->client->osdc;
	struct ceph_osd_linger_request *lreq;
	u64 linger_id = greq->private_data;

	WARN_ON(greq->result || !greq->u.newest);

	down_write(&osdc->lock);
	lreq = lookup_linger_mc(&osdc->linger_map_checks, linger_id);
	if (!lreq) {
		dout("%s linger_id %llu dne\n", __func__, linger_id);
		goto out_unlock;
	}

	dout("%s lreq %p linger_id %llu map_dne_bound %u newest %llu\n",
	     __func__, lreq, lreq->linger_id, lreq->map_dne_bound,
	     greq->u.newest);
	if (!lreq->map_dne_bound)
		lreq->map_dne_bound = greq->u.newest;
	erase_linger_mc(&osdc->linger_map_checks, lreq);
	check_linger_pool_dne(lreq);

	linger_put(lreq);
out_unlock:
	up_write(&osdc->lock);
}

static void send_linger_map_check(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;
	struct ceph_osd_linger_request *lookup_lreq;
	int ret;

	verify_osdc_wrlocked(osdc);

	lookup_lreq = lookup_linger_mc(&osdc->linger_map_checks,
				       lreq->linger_id);
	if (lookup_lreq) {
		WARN_ON(lookup_lreq != lreq);
		return;
	}

	linger_get(lreq);
	insert_linger_mc(&osdc->linger_map_checks, lreq);
	ret = ceph_monc_get_version_async(&osdc->client->monc, "osdmap",
					  linger_map_check_cb, lreq->linger_id);
	WARN_ON(ret);
}

static int linger_reg_commit_wait(struct ceph_osd_linger_request *lreq)
{
	int ret;

	dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id);
	ret = wait_for_completion_interruptible(&lreq->reg_commit_wait);
	return ret ?: lreq->reg_commit_error;
}

static int linger_notify_finish_wait(struct ceph_osd_linger_request *lreq)
{
	int ret;

	dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id);
	ret = wait_for_completion_interruptible(&lreq->notify_finish_wait);
	return ret ?: lreq->notify_finish_error;
}

/*
 * Timeout callback, called every N seconds.  When one or more OSD
 * requests have been active for more than N seconds, we send a
 * keepalive (tag + timestamp) to their OSDs to ensure any
 * communications channel reset is detected.
 */
static void handle_timeout(struct work_struct *work)
{
	struct ceph_osd_client *osdc =
		container_of(work, struct ceph_osd_client, timeout_work.work);
	struct ceph_options *opts = osdc->client->options;
	unsigned long cutoff = jiffies - opts->osd_keepalive_timeout;
	unsigned long expiry_cutoff = jiffies - opts->osd_request_timeout;
	LIST_HEAD(slow_osds);
	struct rb_node *n, *p;

	dout("%s osdc %p\n", __func__, osdc);
	down_write(&osdc->lock);

	/*
	 * Ping osds that are a bit slow.  This ensures that if there
	 * is a break in the TCP connection we will notice, and reopen
	 * a connection with that osd (from the fault callback).
	 */
	for (n = rb_first(&osdc->osds); n; n = rb_next(n)) {
		struct ceph_osd *osd = rb_entry(n, struct ceph_osd, o_node);
		bool found = false;

		for (p = rb_first(&osd->o_requests); p; ) {
			struct ceph_osd_request *req =
			    rb_entry(p, struct ceph_osd_request, r_node);

			p = rb_next(p); /* abort_request() */

			if (time_before(req->r_stamp, cutoff)) {
				dout(" req %p tid %llu on osd%d is laggy\n",
				     req, req->r_tid, osd->o_osd);
				found = true;
			}
			if (opts->osd_request_timeout &&
			    time_before(req->r_start_stamp, expiry_cutoff)) {
				pr_err_ratelimited("tid %llu on osd%d timeout\n",
				       req->r_tid, osd->o_osd);
				abort_request(req, -ETIMEDOUT);
			}
		}
		for (p = rb_first(&osd->o_linger_requests); p; p = rb_next(p)) {
			struct ceph_osd_linger_request *lreq =
			    rb_entry(p, struct ceph_osd_linger_request, node);

			dout(" lreq %p linger_id %llu is served by osd%d\n",
			     lreq, lreq->linger_id, osd->o_osd);
			found = true;

			mutex_lock(&lreq->lock);
			if (lreq->is_watch && lreq->committed && !lreq->last_error)
				send_linger_ping(lreq);
			mutex_unlock(&lreq->lock);
		}

		if (found)
			list_move_tail(&osd->o_keepalive_item, &slow_osds);
	}

	if (opts->osd_request_timeout) {
		for (p = rb_first(&osdc->homeless_osd.o_requests); p; ) {
			struct ceph_osd_request *req =
			    rb_entry(p, struct ceph_osd_request, r_node);

			p = rb_next(p); /* abort_request() */

			if (time_before(req->r_start_stamp, expiry_cutoff)) {
				pr_err_ratelimited("tid %llu on osd%d timeout\n",
				       req->r_tid, osdc->homeless_osd.o_osd);
				abort_request(req, -ETIMEDOUT);
			}
		}
	}

	if (atomic_read(&osdc->num_homeless) || !list_empty(&slow_osds))
		maybe_request_map(osdc);

	while (!list_empty(&slow_osds)) {
		struct ceph_osd *osd = list_first_entry(&slow_osds,
							struct ceph_osd,
							o_keepalive_item);
		list_del_init(&osd->o_keepalive_item);
		ceph_con_keepalive(&osd->o_con);
	}

	up_write(&osdc->lock);
	schedule_delayed_work(&osdc->timeout_work,
			      osdc->client->options->osd_keepalive_timeout);
}

static void handle_osds_timeout(struct work_struct *work)
{
	struct ceph_osd_client *osdc =
		container_of(work, struct ceph_osd_client,
			     osds_timeout_work.work);
	unsigned long delay = osdc->client->options->osd_idle_ttl / 4;
	struct ceph_osd *osd, *nosd;

	dout("%s osdc %p\n", __func__, osdc);
	down_write(&osdc->lock);
	list_for_each_entry_safe(osd, nosd, &osdc->osd_lru, o_osd_lru) {
		if (time_before(jiffies, osd->lru_ttl))
			break;

		WARN_ON(!RB_EMPTY_ROOT(&osd->o_requests));
		WARN_ON(!RB_EMPTY_ROOT(&osd->o_linger_requests));
		close_osd(osd);
	}

	up_write(&osdc->lock);
	schedule_delayed_work(&osdc->osds_timeout_work,
			      round_jiffies_relative(delay));
}
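
/*
 * Decode an object locator from a redirect reply.  Only pool
 * redirects are supported: a set key, a changed namespace or a set
 * hash in the encoding is rejected with -EINVAL.
 */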
static int ceph_oloc_decode(void **p, void *end,
			    struct ceph_object_locator *oloc)
{
	u8 struct_v, struct_cv;
	u32 len;
	void *struct_end;
	int ret = 0;

	ceph_decode_need(p, end, 1 + 1 + 4, e_inval);
	struct_v = ceph_decode_8(p);
	struct_cv = ceph_decode_8(p);
	if (struct_v < 3) {
		pr_warn("got v %d < 3 cv %d of ceph_object_locator\n",
			struct_v, struct_cv);
		goto e_inval;
	}
	if (struct_cv > 6) {
		pr_warn("got v %d cv %d > 6 of ceph_object_locator\n",
			struct_v, struct_cv);
		goto e_inval;
	}
	len = ceph_decode_32(p);
	ceph_decode_need(p, end, len, e_inval);
	struct_end = *p + len;

	oloc->pool = ceph_decode_64(p);
	*p += 4; /* skip preferred */

	len = ceph_decode_32(p);
	if (len > 0) {
		pr_warn("ceph_object_locator::key is set\n");
		goto e_inval;
	}

	if (struct_v >= 5) {
		bool changed = false;

		len = ceph_decode_32(p);
		if (len > 0) {
			ceph_decode_need(p, end, len, e_inval);
			if (!oloc->pool_ns ||
			    ceph_compare_string(oloc->pool_ns, *p, len))
				changed = true;
			*p += len;
		} else {
			if (oloc->pool_ns)
				changed = true;
		}
		if (changed) {
			/* redirect changes namespace */
			pr_warn("ceph_object_locator::nspace is changed\n");
			goto e_inval;
		}
	}

	if (struct_v >= 6) {
		s64 hash = ceph_decode_64(p);
		if (hash != -1) {
			pr_warn("ceph_object_locator::hash is set\n");
			goto e_inval;
		}
	}

	/* skip the rest */
	*p = struct_end;
out:
	return ret;

e_inval:
	ret = -EINVAL;
	goto out;
}

static int ceph_redirect_decode(void **p, void *end,
				struct ceph_request_redirect *redir)
{
	u8 struct_v, struct_cv;
	u32 len;
	void *struct_end;
	int ret;

	ceph_decode_need(p, end, 1 + 1 + 4, e_inval);
	struct_v = ceph_decode_8(p);
	struct_cv = ceph_decode_8(p);
	if (struct_cv > 1) {
		pr_warn("got v %d cv %d > 1 of ceph_request_redirect\n",
			struct_v, struct_cv);
		goto e_inval;
	}
	len = ceph_decode_32(p);
	ceph_decode_need(p, end, len, e_inval);
	struct_end = *p + len;

	ret = ceph_oloc_decode(p, end, &redir->oloc);
	if (ret)
		goto out;

	len = ceph_decode_32(p);
	if (len > 0) {
		pr_warn("ceph_request_redirect::object_name is set\n");
		goto e_inval;
	}

	len = ceph_decode_32(p);
	*p += len; /* skip osd_instructions */

	/* skip the rest */
	*p = struct_end;
out:
	return ret;

e_inval:
	ret = -EINVAL;
	goto out;
}

struct MOSDOpReply {
	struct ceph_pg pgid;
	u64 flags;
	int result;
	u32 epoch;
	int num_ops;
	u32 outdata_len[CEPH_OSD_MAX_OPS];
	s32 rval[CEPH_OSD_MAX_OPS];
	int retry_attempt;
	struct ceph_eversion replay_version;
	u64 user_version;
	struct ceph_request_redirect redirect;
};
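
/*
 * Decode an MOSDOpReply into @m, tolerating older encodings: replies
 * before v5 lack replay_version/user_version and replies before v6
 * lack the redirect.
 */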
static int decode_MOSDOpReply(const struct ceph_msg *msg, struct MOSDOpReply *m)
{
	void *p = msg->front.iov_base;
	void *const end = p + msg->front.iov_len;
	u16 version = le16_to_cpu(msg->hdr.version);
	struct ceph_eversion bad_replay_version;
	u8 decode_redir;
	u32 len;
	int ret;
	int i;

	ceph_decode_32_safe(&p, end, len, e_inval);
	ceph_decode_need(&p, end, len, e_inval);
	p += len; /* skip oid */

	ret = ceph_decode_pgid(&p, end, &m->pgid);
	if (ret)
		return ret;

	ceph_decode_64_safe(&p, end, m->flags, e_inval);
	ceph_decode_32_safe(&p, end, m->result, e_inval);
	ceph_decode_need(&p, end, sizeof(bad_replay_version), e_inval);
	memcpy(&bad_replay_version, p, sizeof(bad_replay_version));
	p += sizeof(bad_replay_version);
	ceph_decode_32_safe(&p, end, m->epoch, e_inval);

	ceph_decode_32_safe(&p, end, m->num_ops, e_inval);
	if (m->num_ops > ARRAY_SIZE(m->outdata_len))
		goto e_inval;

	ceph_decode_need(&p, end, m->num_ops * sizeof(struct ceph_osd_op),
			 e_inval);
	for (i = 0; i < m->num_ops; i++) {
		struct ceph_osd_op *op = p;

		m->outdata_len[i] = le32_to_cpu(op->payload_len);
		p += sizeof(*op);
	}

	ceph_decode_32_safe(&p, end, m->retry_attempt, e_inval);
	for (i = 0; i < m->num_ops; i++)
		ceph_decode_32_safe(&p, end, m->rval[i], e_inval);

	if (version >= 5) {
		ceph_decode_need(&p, end, sizeof(m->replay_version), e_inval);
		memcpy(&m->replay_version, p, sizeof(m->replay_version));
		p += sizeof(m->replay_version);
		ceph_decode_64_safe(&p, end, m->user_version, e_inval);
	} else {
		m->replay_version = bad_replay_version; /* struct */
		m->user_version = le64_to_cpu(m->replay_version.version);
	}

	if (version >= 6) {
		if (version >= 7)
			ceph_decode_8_safe(&p, end, decode_redir, e_inval);
		else
			decode_redir = 1;
	} else {
		decode_redir = 0;
	}

	if (decode_redir) {
		ret = ceph_redirect_decode(&p, end, &m->redirect);
		if (ret)
			return ret;
	} else {
		ceph_oloc_init(&m->redirect.oloc);
	}

	return 0;

e_inval:
	return -EINVAL;
}

/*
 * Handle MOSDOpReply.  Set ->r_result and call the callback if it is
 * specified.
 */
static void handle_reply(struct ceph_osd *osd, struct ceph_msg *msg)
{
	struct ceph_osd_client *osdc = osd->o_osdc;
	struct ceph_osd_request *req;
	struct MOSDOpReply m;
	u64 tid = le64_to_cpu(msg->hdr.tid);
	u32 data_len = 0;
	int ret;
	int i;

	dout("%s msg %p tid %llu\n", __func__, msg, tid);

	down_read(&osdc->lock);
	if (!osd_registered(osd)) {
		dout("%s osd%d unknown\n", __func__, osd->o_osd);
		goto out_unlock_osdc;
	}
	WARN_ON(osd->o_osd != le64_to_cpu(msg->hdr.src.num));

	mutex_lock(&osd->lock);
	req = lookup_request(&osd->o_requests, tid);
	if (!req) {
		dout("%s osd%d tid %llu unknown\n", __func__, osd->o_osd, tid);
		goto out_unlock_session;
	}

	m.redirect.oloc.pool_ns = req->r_t.target_oloc.pool_ns;
	ret = decode_MOSDOpReply(msg, &m);
	m.redirect.oloc.pool_ns = NULL;
	if (ret) {
		pr_err("failed to decode MOSDOpReply for tid %llu: %d\n",
		       req->r_tid, ret);
		ceph_msg_dump(msg);
		goto fail_request;
	}
	dout("%s req %p tid %llu flags 0x%llx pgid %llu.%x epoch %u attempt %d v %u'%llu uv %llu\n",
	     __func__, req, req->r_tid, m.flags, m.pgid.pool, m.pgid.seed,
	     m.epoch, m.retry_attempt, le32_to_cpu(m.replay_version.epoch),
	     le64_to_cpu(m.replay_version.version), m.user_version);

	if (m.retry_attempt >= 0) {
		if (m.retry_attempt != req->r_attempts - 1) {
			dout("req %p tid %llu retry_attempt %d != %d, ignoring\n",
			     req, req->r_tid, m.retry_attempt,
			     req->r_attempts - 1);
			goto out_unlock_session;
		}
	} else {
		WARN_ON(1); /* MOSDOpReply v4 is assumed */
	}

	if (!ceph_oloc_empty(&m.redirect.oloc)) {
		dout("req %p tid %llu redirect pool %lld\n", req, req->r_tid,
		     m.redirect.oloc.pool);
		unlink_request(osd, req);
		mutex_unlock(&osd->lock);

		/*
		 * Not ceph_oloc_copy() - changing pool_ns is not
		 * supported.
		 */
		req->r_t.target_oloc.pool = m.redirect.oloc.pool;
		req->r_flags |= CEPH_OSD_FLAG_REDIRECTED;
		req->r_tid = 0;
		__submit_request(req, false);
		goto out_unlock_osdc;
	}

	if (m.num_ops != req->r_num_ops) {
		pr_err("num_ops %d != %d for tid %llu\n", m.num_ops,
		       req->r_num_ops, req->r_tid);
		goto fail_request;
	}
	for (i = 0; i < req->r_num_ops; i++) {
		dout(" req %p tid %llu op %d rval %d len %u\n", req,
		     req->r_tid, i, m.rval[i], m.outdata_len[i]);
		req->r_ops[i].rval = m.rval[i];
		req->r_ops[i].outdata_len = m.outdata_len[i];
		data_len += m.outdata_len[i];
	}
	if (data_len != le32_to_cpu(msg->hdr.data_len)) {
		pr_err("sum of lens %u != %u for tid %llu\n", data_len,
		       le32_to_cpu(msg->hdr.data_len), req->r_tid);
		goto fail_request;
	}
	dout("%s req %p tid %llu result %d data_len %u\n", __func__,
	     req, req->r_tid, m.result, data_len);

	/*
	 * Since we only ever request ONDISK, we should only ever get
	 * one (type of) reply back.
	 */
	WARN_ON(!(m.flags & CEPH_OSD_FLAG_ONDISK));
	req->r_result = m.result ?: data_len;
	finish_request(req);
	mutex_unlock(&osd->lock);
	up_read(&osdc->lock);

	__complete_request(req);
	return;

fail_request:
	complete_request(req, -EIO);
out_unlock_session:
	mutex_unlock(&osd->lock);
out_unlock_osdc:
	up_read(&osdc->lock);
}

static void set_pool_was_full(struct ceph_osd_client *osdc)
{
	struct rb_node *n;

	for (n = rb_first(&osdc->osdmap->pg_pools); n; n = rb_next(n)) {
		struct ceph_pg_pool_info *pi =
		    rb_entry(n, struct ceph_pg_pool_info, node);

		pi->was_full = __pool_full(pi);
	}
}

static bool pool_cleared_full(struct ceph_osd_client *osdc, s64 pool_id)
{
	struct ceph_pg_pool_info *pi;

	pi = ceph_pg_pool_by_id(osdc->osdmap, pool_id);
	if (!pi)
		return false;

	return pi->was_full && !__pool_full(pi);
}

static enum calc_target_result
recalc_linger_target(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_client *osdc = lreq->osdc;
	enum calc_target_result ct_res;

	ct_res = calc_target(osdc, &lreq->t, true);
	if (ct_res == CALC_TARGET_NEED_RESEND) {
		struct ceph_osd *osd;

		osd = lookup_create_osd(osdc, lreq->t.osd, true);
		if (osd != lreq->osd) {
			unlink_linger(lreq->osd, lreq);
			link_linger(osd, lreq);
		}
	}

	return ct_res;
}

/*
 * Requeue requests whose mapping to an OSD has changed.
 */
static void scan_requests(struct ceph_osd *osd,
			  bool force_resend,
			  bool cleared_full,
			  bool check_pool_cleared_full,
			  struct rb_root *need_resend,
			  struct list_head *need_resend_linger)
{
	struct ceph_osd_client *osdc = osd->o_osdc;
	struct rb_node *n;
	bool force_resend_writes;

	for (n = rb_first(&osd->o_linger_requests); n; ) {
		struct ceph_osd_linger_request *lreq =
		    rb_entry(n, struct ceph_osd_linger_request, node);
		enum calc_target_result ct_res;

		n = rb_next(n); /* recalc_linger_target() */

		dout("%s lreq %p linger_id %llu\n", __func__, lreq,
		     lreq->linger_id);
		ct_res = recalc_linger_target(lreq);
		switch (ct_res) {
		case CALC_TARGET_NO_ACTION:
			force_resend_writes = cleared_full ||
			    (check_pool_cleared_full &&
			     pool_cleared_full(osdc, lreq->t.base_oloc.pool));
			if (!force_resend && !force_resend_writes)
				break;

			/* fall through */
		case CALC_TARGET_NEED_RESEND:
			cancel_linger_map_check(lreq);
			/*
			 * scan_requests() for the previous epoch(s)
			 * may have already added it to the list, since
			 * it's not unlinked here.
			 */
			if (list_empty(&lreq->scan_item))
				list_add_tail(&lreq->scan_item, need_resend_linger);
			break;
		case CALC_TARGET_POOL_DNE:
			list_del_init(&lreq->scan_item);
			check_linger_pool_dne(lreq);
			break;
		}
	}

	for (n = rb_first(&osd->o_requests); n; ) {
		struct ceph_osd_request *req =
		    rb_entry(n, struct ceph_osd_request, r_node);
		enum calc_target_result ct_res;

		n = rb_next(n); /* unlink_request(), check_pool_dne() */

		dout("%s req %p tid %llu\n", __func__, req, req->r_tid);
		ct_res = calc_target(osdc, &req->r_t, false);
		switch (ct_res) {
		case CALC_TARGET_NO_ACTION:
			force_resend_writes = cleared_full ||
			    (check_pool_cleared_full &&
			     pool_cleared_full(osdc, req->r_t.base_oloc.pool));
			if (!force_resend &&
			    (!(req->r_flags & CEPH_OSD_FLAG_WRITE) ||
			     !force_resend_writes))
				break;

			/* fall through */
		case CALC_TARGET_NEED_RESEND:
			cancel_map_check(req);
			unlink_request(osd, req);
			insert_request(need_resend, req);
			break;
		case CALC_TARGET_POOL_DNE:
			check_pool_dne(req);
			break;
		}
	}
}
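
/*
 * Apply a single incremental or full map from an MOSDMap message,
 * collecting requests that now map elsewhere in @need_resend /
 * @need_resend_linger and closing OSD sessions that went away.
 */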
static int handle_one_map(struct ceph_osd_client *osdc,
			  void *p, void *end, bool incremental,
			  struct rb_root *need_resend,
			  struct list_head *need_resend_linger)
{
	struct ceph_osdmap *newmap;
	struct rb_node *n;
	bool skipped_map = false;
	bool was_full;

	was_full = ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL);
	set_pool_was_full(osdc);

	if (incremental)
		newmap = osdmap_apply_incremental(&p, end, osdc->osdmap);
	else
		newmap = ceph_osdmap_decode(&p, end);
	if (IS_ERR(newmap))
		return PTR_ERR(newmap);

	if (newmap != osdc->osdmap) {
		/*
		 * Preserve ->was_full before destroying the old map.
		 * For pools that weren't in the old map, ->was_full
		 * should be false.
		 */
		for (n = rb_first(&newmap->pg_pools); n; n = rb_next(n)) {
			struct ceph_pg_pool_info *pi =
			    rb_entry(n, struct ceph_pg_pool_info, node);
			struct ceph_pg_pool_info *old_pi;

			old_pi = ceph_pg_pool_by_id(osdc->osdmap, pi->id);
			if (old_pi)
				pi->was_full = old_pi->was_full;
			else
				WARN_ON(pi->was_full);
		}

		if (osdc->osdmap->epoch &&
		    osdc->osdmap->epoch + 1 < newmap->epoch) {
			WARN_ON(incremental);
			skipped_map = true;
		}

		ceph_osdmap_destroy(osdc->osdmap);
		osdc->osdmap = newmap;
	}

	was_full &= !ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL);
	scan_requests(&osdc->homeless_osd, skipped_map, was_full, true,
		      need_resend, need_resend_linger);

	for (n = rb_first(&osdc->osds); n; ) {
		struct ceph_osd *osd = rb_entry(n, struct ceph_osd, o_node);

		n = rb_next(n); /* close_osd() */

		scan_requests(osd, skipped_map, was_full, true, need_resend,
			      need_resend_linger);
		if (!ceph_osd_is_up(osdc->osdmap, osd->o_osd) ||
		    memcmp(&osd->o_con.peer_addr,
			   ceph_osd_addr(osdc->osdmap, osd->o_osd),
			   sizeof(struct ceph_entity_addr)))
			close_osd(osd);
	}

	return 0;
}
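
/*
 * Resend everything that scan_requests() queued up, remapping each
 * request to its (possibly new) OSD first.
 */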
static void kick_requests(struct ceph_osd_client *osdc,
			  struct rb_root *need_resend,
			  struct list_head *need_resend_linger)
{
	struct ceph_osd_linger_request *lreq, *nlreq;
	enum calc_target_result ct_res;
	struct rb_node *n;

	/* make sure need_resend targets reflect latest map */
	for (n = rb_first(need_resend); n; ) {
		struct ceph_osd_request *req =
		    rb_entry(n, struct ceph_osd_request, r_node);

		n = rb_next(n);

		if (req->r_t.epoch < osdc->osdmap->epoch) {
			ct_res = calc_target(osdc, &req->r_t, false);
			if (ct_res == CALC_TARGET_POOL_DNE) {
				erase_request(need_resend, req);
				check_pool_dne(req);
			}
		}
	}

	for (n = rb_first(need_resend); n; ) {
		struct ceph_osd_request *req =
		    rb_entry(n, struct ceph_osd_request, r_node);
		struct ceph_osd *osd;

		n = rb_next(n);
		erase_request(need_resend, req); /* before link_request() */

		osd = lookup_create_osd(osdc, req->r_t.osd, true);
		link_request(osd, req);
		if (!req->r_linger) {
			if (!osd_homeless(osd) && !req->r_t.paused)
				send_request(req);
		} else {
			cancel_linger_request(req);
		}
	}

	list_for_each_entry_safe(lreq, nlreq, need_resend_linger, scan_item) {
		if (!osd_homeless(lreq->osd))
			send_linger(lreq);

		list_del_init(&lreq->scan_item);
	}
}

/*
 * Process updated osd map.
 *
 * The message contains any number of incremental and full maps, normally
 * indicating some sort of topology change in the cluster.  Kick requests
 * off to different OSDs as needed.
 */
void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg)
{
	void *p = msg->front.iov_base;
	void *const end = p + msg->front.iov_len;
	u32 nr_maps, maplen;
	u32 epoch;
	struct ceph_fsid fsid;
	struct rb_root need_resend = RB_ROOT;
	LIST_HEAD(need_resend_linger);
	bool handled_incremental = false;
	bool was_pauserd, was_pausewr;
	bool pauserd, pausewr;
	int err;

	dout("%s have %u\n", __func__, osdc->osdmap->epoch);
	down_write(&osdc->lock);

	/* verify fsid */
	ceph_decode_need(&p, end, sizeof(fsid), bad);
	ceph_decode_copy(&p, &fsid, sizeof(fsid));
	if (ceph_check_fsid(osdc->client, &fsid) < 0)
		goto bad;

	was_pauserd = ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD);
	was_pausewr = ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSEWR) ||
		      ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
		      have_pool_full(osdc);

	/* incremental maps */
	ceph_decode_32_safe(&p, end, nr_maps, bad);
	dout(" %d inc maps\n", nr_maps);
	while (nr_maps > 0) {
		ceph_decode_need(&p, end, 2*sizeof(u32), bad);
		epoch = ceph_decode_32(&p);
		maplen = ceph_decode_32(&p);
		ceph_decode_need(&p, end, maplen, bad);
		if (osdc->osdmap->epoch &&
		    osdc->osdmap->epoch + 1 == epoch) {
			dout("applying incremental map %u len %d\n",
			     epoch, maplen);
			err = handle_one_map(osdc, p, p + maplen, true,
					     &need_resend, &need_resend_linger);
			if (err)
				goto bad;
			handled_incremental = true;
		} else {
			dout("ignoring incremental map %u len %d\n",
			     epoch, maplen);
		}
		p += maplen;
		nr_maps--;
	}
	if (handled_incremental)
		goto done;

	/* full maps */
	ceph_decode_32_safe(&p, end, nr_maps, bad);
	dout(" %d full maps\n", nr_maps);
	while (nr_maps) {
		ceph_decode_need(&p, end, 2*sizeof(u32), bad);
		epoch = ceph_decode_32(&p);
		maplen = ceph_decode_32(&p);
		ceph_decode_need(&p, end, maplen, bad);
		if (nr_maps > 1) {
			dout("skipping non-latest full map %u len %d\n",
			     epoch, maplen);
		} else if (osdc->osdmap->epoch >= epoch) {
			dout("skipping full map %u len %d, "
			     "older than our %u\n", epoch, maplen,
			     osdc->osdmap->epoch);
		} else {
			dout("taking full map %u len %d\n", epoch, maplen);
			err = handle_one_map(osdc, p, p + maplen, false,
					     &need_resend, &need_resend_linger);
			if (err)
				goto bad;
		}
		p += maplen;
		nr_maps--;
	}

done:
	/*
	 * subscribe to subsequent osdmap updates if full to ensure
	 * we find out when we are no longer full and stop returning
	 * ENOSPC.
	 */
	pauserd = ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD);
	pausewr = ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSEWR) ||
		  ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
		  have_pool_full(osdc);
	if (was_pauserd || was_pausewr || pauserd || pausewr ||
	    osdc->osdmap->epoch < osdc->epoch_barrier)
		maybe_request_map(osdc);

	kick_requests(osdc, &need_resend, &need_resend_linger);

	ceph_osdc_abort_on_full(osdc);
	ceph_monc_got_map(&osdc->client->monc, CEPH_SUB_OSDMAP,
			  osdc->osdmap->epoch);
	up_write(&osdc->lock);
	wake_up_all(&osdc->client->auth_wq);
	return;

bad:
	pr_err("osdc handle_map corrupt msg\n");
	ceph_msg_dump(msg);
	up_write(&osdc->lock);
}

/*
 * Resubmit requests pending on the given osd.
 */
static void kick_osd_requests(struct ceph_osd *osd)
{
	struct rb_node *n;

	clear_backoffs(osd);

	for (n = rb_first(&osd->o_requests); n; ) {
		struct ceph_osd_request *req =
		    rb_entry(n, struct ceph_osd_request, r_node);

		n = rb_next(n); /* cancel_linger_request() */

		if (!req->r_linger) {
			if (!req->r_t.paused)
				send_request(req);
		} else {
			cancel_linger_request(req);
		}
	}
	for (n = rb_first(&osd->o_linger_requests); n; n = rb_next(n)) {
		struct ceph_osd_linger_request *lreq =
		    rb_entry(n, struct ceph_osd_linger_request, node);

		send_linger(lreq);
	}
}

/*
 * If the osd connection drops, we need to resubmit all requests.
 */
static void osd_fault(struct ceph_connection *con)
{
	struct ceph_osd *osd = con->private;
	struct ceph_osd_client *osdc = osd->o_osdc;

	dout("%s osd %p osd%d\n", __func__, osd, osd->o_osd);

	down_write(&osdc->lock);
	if (!osd_registered(osd)) {
		dout("%s osd%d unknown\n", __func__, osd->o_osd);
		goto out_unlock;
	}

	if (!reopen_osd(osd))
		kick_osd_requests(osd);
	maybe_request_map(osdc);

out_unlock:
	up_write(&osdc->lock);
}
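
/*
 * Decoded form of an MOSDBackoff message: the OSD asks us to block or
 * unblock the range of objects [begin, end) within a PG.
 */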
struct MOSDBackoff {
	struct ceph_spg spgid;
	u32 map_epoch;
	u8 op;
	u64 id;
	struct ceph_hobject_id *begin;
	struct ceph_hobject_id *end;
};

static int decode_MOSDBackoff(const struct ceph_msg *msg, struct MOSDBackoff *m)
{
	void *p = msg->front.iov_base;
	void *const end = p + msg->front.iov_len;
	u8 struct_v;
	u32 struct_len;
	int ret;

	ret = ceph_start_decoding(&p, end, 1, "spg_t", &struct_v, &struct_len);
	if (ret)
		return ret;

	ret = ceph_decode_pgid(&p, end, &m->spgid.pgid);
	if (ret)
		return ret;

	ceph_decode_8_safe(&p, end, m->spgid.shard, e_inval);
	ceph_decode_32_safe(&p, end, m->map_epoch, e_inval);
	ceph_decode_8_safe(&p, end, m->op, e_inval);
	ceph_decode_64_safe(&p, end, m->id, e_inval);

	m->begin = kzalloc(sizeof(*m->begin), GFP_NOIO);
	if (!m->begin)
		return -ENOMEM;

	ret = decode_hoid(&p, end, m->begin);
	if (ret) {
		free_hoid(m->begin);
		return ret;
	}

	m->end = kzalloc(sizeof(*m->end), GFP_NOIO);
	if (!m->end) {
		free_hoid(m->begin);
		return -ENOMEM;
	}

	ret = decode_hoid(&p, end, m->end);
	if (ret) {
		free_hoid(m->begin);
		free_hoid(m->end);
		return ret;
	}

	return 0;

e_inval:
	return -EINVAL;
}
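
/*
 * Build a CEPH_MSG_OSD_BACKOFF ack-block message for @backoff, sized
 * exactly for the v1 encoding.
 */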
static struct ceph_msg *create_backoff_message(
				const struct ceph_osd_backoff *backoff,
				u32 map_epoch)
{
	struct ceph_msg *msg;
	void *p, *end;
	int msg_size;

	msg_size = CEPH_ENCODING_START_BLK_LEN +
			CEPH_PGID_ENCODING_LEN + 1; /* spgid */
	msg_size += 4 + 1 + 8; /* map_epoch, op, id */
	msg_size += CEPH_ENCODING_START_BLK_LEN +
			hoid_encoding_size(backoff->begin);
	msg_size += CEPH_ENCODING_START_BLK_LEN +
			hoid_encoding_size(backoff->end);

	msg = ceph_msg_new(CEPH_MSG_OSD_BACKOFF, msg_size, GFP_NOIO, true);
	if (!msg)
		return NULL;

	p = msg->front.iov_base;
	end = p + msg->front_alloc_len;

	encode_spgid(&p, &backoff->spgid);
	ceph_encode_32(&p, map_epoch);
	ceph_encode_8(&p, CEPH_OSD_BACKOFF_OP_ACK_BLOCK);
	ceph_encode_64(&p, backoff->id);
	encode_hoid(&p, end, backoff->begin);
	encode_hoid(&p, end, backoff->end);
	BUG_ON(p != end);

	msg->front.iov_len = p - msg->front.iov_base;
	msg->hdr.version = cpu_to_le16(1); /* MOSDBackoff v1 */
	msg->hdr.front_len = cpu_to_le32(msg->front.iov_len);

	return msg;
}

static void handle_backoff_block(struct ceph_osd *osd, struct MOSDBackoff *m)
{
	struct ceph_spg_mapping *spg;
	struct ceph_osd_backoff *backoff;
	struct ceph_msg *msg;

	dout("%s osd%d spgid %llu.%xs%d id %llu\n", __func__, osd->o_osd,
	     m->spgid.pgid.pool, m->spgid.pgid.seed, m->spgid.shard, m->id);

	spg = lookup_spg_mapping(&osd->o_backoff_mappings, &m->spgid);
	if (!spg) {
		spg = alloc_spg_mapping();
		if (!spg) {
			pr_err("%s failed to allocate spg\n", __func__);
			return;
		}
		spg->spgid = m->spgid; /* struct */
		insert_spg_mapping(&osd->o_backoff_mappings, spg);
	}

	backoff = alloc_backoff();
	if (!backoff) {
		pr_err("%s failed to allocate backoff\n", __func__);
		return;
	}
	backoff->spgid = m->spgid; /* struct */
	backoff->id = m->id;
	backoff->begin = m->begin;
	m->begin = NULL; /* backoff now owns this */
	backoff->end = m->end;
	m->end = NULL;   /* ditto */

	insert_backoff(&spg->backoffs, backoff);
	insert_backoff_by_id(&osd->o_backoffs_by_id, backoff);

	/*
	 * Ack with original backoff's epoch so that the OSD can
	 * discard this if there was a PG split.
	 */
	msg = create_backoff_message(backoff, m->map_epoch);
	if (!msg) {
		pr_err("%s failed to allocate msg\n", __func__);
		return;
	}
	ceph_con_send(&osd->o_con, msg);
}

static bool target_contained_by(const struct ceph_osd_request_target *t,
				const struct ceph_hobject_id *begin,
				const struct ceph_hobject_id *end)
{
	struct ceph_hobject_id hoid;
	int cmp;

	hoid_fill_from_target(&hoid, t);
	cmp = hoid_compare(&hoid, begin);
	return !cmp || (cmp > 0 && hoid_compare(&hoid, end) < 0);
}
static void handle_backoff_unblock(struct ceph_osd *osd,
|
|
|
|
const struct MOSDBackoff *m)
|
|
|
|
{
|
|
|
|
struct ceph_spg_mapping *spg;
|
|
|
|
struct ceph_osd_backoff *backoff;
|
|
|
|
struct rb_node *n;
|
|
|
|
|
|
|
|
dout("%s osd%d spgid %llu.%xs%d id %llu\n", __func__, osd->o_osd,
|
|
|
|
m->spgid.pgid.pool, m->spgid.pgid.seed, m->spgid.shard, m->id);
|
|
|
|
|
|
|
|
backoff = lookup_backoff_by_id(&osd->o_backoffs_by_id, m->id);
|
|
|
|
if (!backoff) {
|
|
|
|
pr_err("%s osd%d spgid %llu.%xs%d id %llu backoff dne\n",
|
|
|
|
__func__, osd->o_osd, m->spgid.pgid.pool,
|
|
|
|
m->spgid.pgid.seed, m->spgid.shard, m->id);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (hoid_compare(backoff->begin, m->begin) &&
|
|
|
|
hoid_compare(backoff->end, m->end)) {
|
|
|
|
pr_err("%s osd%d spgid %llu.%xs%d id %llu bad range?\n",
|
|
|
|
__func__, osd->o_osd, m->spgid.pgid.pool,
|
|
|
|
m->spgid.pgid.seed, m->spgid.shard, m->id);
|
|
|
|
/* unblock it anyway... */
|
|
|
|
}
|
|
|
|
|
|
|
|
spg = lookup_spg_mapping(&osd->o_backoff_mappings, &backoff->spgid);
|
|
|
|
BUG_ON(!spg);
|
|
|
|
|
|
|
|
erase_backoff(&spg->backoffs, backoff);
|
|
|
|
erase_backoff_by_id(&osd->o_backoffs_by_id, backoff);
|
|
|
|
free_backoff(backoff);
|
|
|
|
|
|
|
|
if (RB_EMPTY_ROOT(&spg->backoffs)) {
|
|
|
|
erase_spg_mapping(&osd->o_backoff_mappings, spg);
|
|
|
|
free_spg_mapping(spg);
|
|
|
|
}
|
|
|
|
|
|
|
|
for (n = rb_first(&osd->o_requests); n; n = rb_next(n)) {
|
|
|
|
struct ceph_osd_request *req =
|
|
|
|
rb_entry(n, struct ceph_osd_request, r_node);
|
|
|
|
|
|
|
|
if (!ceph_spg_compare(&req->r_t.spgid, &m->spgid)) {
|
|
|
|
/*
|
|
|
|
* Match against @m, not @backoff -- the PG may
|
|
|
|
* have split on the OSD.
|
|
|
|
*/
|
|
|
|
if (target_contained_by(&req->r_t, m->begin, m->end)) {
|
|
|
|
/*
|
|
|
|
* If no other installed backoff applies,
|
|
|
|
* resend.
|
|
|
|
*/
|
|
|
|
send_request(req);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void handle_backoff(struct ceph_osd *osd, struct ceph_msg *msg)
|
|
|
|
{
|
|
|
|
struct ceph_osd_client *osdc = osd->o_osdc;
|
|
|
|
struct MOSDBackoff m;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
down_read(&osdc->lock);
|
|
|
|
if (!osd_registered(osd)) {
|
|
|
|
dout("%s osd%d unknown\n", __func__, osd->o_osd);
|
|
|
|
up_read(&osdc->lock);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
WARN_ON(osd->o_osd != le64_to_cpu(msg->hdr.src.num));
|
|
|
|
|
|
|
|
mutex_lock(&osd->lock);
|
|
|
|
ret = decode_MOSDBackoff(msg, &m);
|
|
|
|
if (ret) {
|
|
|
|
pr_err("failed to decode MOSDBackoff: %d\n", ret);
|
|
|
|
ceph_msg_dump(msg);
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (m.op) {
|
|
|
|
case CEPH_OSD_BACKOFF_OP_BLOCK:
|
|
|
|
handle_backoff_block(osd, &m);
|
|
|
|
break;
|
|
|
|
case CEPH_OSD_BACKOFF_OP_UNBLOCK:
|
|
|
|
handle_backoff_unblock(osd, &m);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
pr_err("%s osd%d unknown op %d\n", __func__, osd->o_osd, m.op);
|
|
|
|
}
|
|
|
|
|
|
|
|
free_hoid(m.begin);
|
|
|
|
free_hoid(m.end);
|
|
|
|
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&osd->lock);
|
|
|
|
up_read(&osdc->lock);
|
|
|
|
}
|
|
|
|
|
2011-03-22 06:07:16 +08:00
|
|
|
/*
|
|
|
|
* Process osd watch notifications
|
|
|
|
*/
|
2013-02-16 01:42:30 +08:00
|
|
|
static void handle_watch_notify(struct ceph_osd_client *osdc,
|
|
|
|
struct ceph_msg *msg)
|
2011-03-22 06:07:16 +08:00
|
|
|
{
|
2016-05-26 07:15:02 +08:00
|
|
|
void *p = msg->front.iov_base;
|
|
|
|
void *const end = p + msg->front.iov_len;
|
|
|
|
struct ceph_osd_linger_request *lreq;
|
|
|
|
struct linger_work *lwork;
|
|
|
|
u8 proto_ver, opcode;
|
|
|
|
u64 cookie, notify_id;
|
|
|
|
u64 notifier_id = 0;
|
2016-04-28 22:07:27 +08:00
|
|
|
s32 return_code = 0;
|
2016-05-26 07:15:02 +08:00
|
|
|
void *payload = NULL;
|
|
|
|
u32 payload_len = 0;
|
2011-03-22 06:07:16 +08:00
|
|
|
|
|
|
|
ceph_decode_8_safe(&p, end, proto_ver, bad);
|
|
|
|
ceph_decode_8_safe(&p, end, opcode, bad);
|
|
|
|
ceph_decode_64_safe(&p, end, cookie, bad);
|
2016-05-26 07:15:02 +08:00
|
|
|
p += 8; /* skip ver */
|
2011-03-22 06:07:16 +08:00
|
|
|
ceph_decode_64_safe(&p, end, notify_id, bad);
|
|
|
|
|
2016-05-26 07:15:02 +08:00
|
|
|
if (proto_ver >= 1) {
|
|
|
|
ceph_decode_32_safe(&p, end, payload_len, bad);
|
|
|
|
ceph_decode_need(&p, end, payload_len, bad);
|
|
|
|
payload = p;
|
|
|
|
p += payload_len;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (le16_to_cpu(msg->hdr.version) >= 2)
|
2016-04-28 22:07:27 +08:00
|
|
|
ceph_decode_32_safe(&p, end, return_code, bad);
|
2016-05-26 07:15:02 +08:00
|
|
|
|
|
|
|
if (le16_to_cpu(msg->hdr.version) >= 3)
|
|
|
|
ceph_decode_64_safe(&p, end, notifier_id, bad);
|
|
|
|
|
|
|
|
down_read(&osdc->lock);
|
|
|
|
lreq = lookup_linger_osdc(&osdc->linger_requests, cookie);
|
|
|
|
if (!lreq) {
|
|
|
|
dout("%s opcode %d cookie %llu dne\n", __func__, opcode,
|
|
|
|
cookie);
|
|
|
|
goto out_unlock_osdc;
|
|
|
|
}
|
|
|
|
|
|
|
|
mutex_lock(&lreq->lock);
|
2016-04-28 22:07:27 +08:00
|
|
|
dout("%s opcode %d cookie %llu lreq %p is_watch %d\n", __func__,
|
|
|
|
opcode, cookie, lreq, lreq->is_watch);
|
2016-05-26 07:15:02 +08:00
|
|
|
if (opcode == CEPH_WATCH_EVENT_DISCONNECT) {
|
|
|
|
if (!lreq->last_error) {
|
|
|
|
lreq->last_error = -ENOTCONN;
|
|
|
|
queue_watch_error(lreq);
|
|
|
|
}
|
2016-04-28 22:07:27 +08:00
|
|
|
} else if (!lreq->is_watch) {
|
|
|
|
/* CEPH_WATCH_EVENT_NOTIFY_COMPLETE */
|
|
|
|
if (lreq->notify_id && lreq->notify_id != notify_id) {
|
|
|
|
dout("lreq %p notify_id %llu != %llu, ignoring\n", lreq,
|
|
|
|
lreq->notify_id, notify_id);
|
|
|
|
} else if (!completion_done(&lreq->notify_finish_wait)) {
|
|
|
|
struct ceph_msg_data *data =
|
2018-10-15 23:38:23 +08:00
|
|
|
msg->num_data_items ? &msg->data[0] : NULL;
|
2016-04-28 22:07:27 +08:00
|
|
|
|
|
|
|
if (data) {
|
|
|
|
if (lreq->preply_pages) {
|
|
|
|
WARN_ON(data->type !=
|
|
|
|
CEPH_MSG_DATA_PAGES);
|
|
|
|
*lreq->preply_pages = data->pages;
|
|
|
|
*lreq->preply_len = data->length;
|
|
|
|
} else {
|
|
|
|
ceph_release_page_vector(data->pages,
|
|
|
|
calc_pages_for(0, data->length));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
lreq->notify_finish_error = return_code;
|
|
|
|
complete_all(&lreq->notify_finish_wait);
|
|
|
|
}
|
2016-05-26 07:15:02 +08:00
|
|
|
} else {
|
|
|
|
/* CEPH_WATCH_EVENT_NOTIFY */
|
|
|
|
lwork = lwork_alloc(lreq, do_watch_notify);
|
|
|
|
if (!lwork) {
|
|
|
|
pr_err("failed to allocate notify-lwork\n");
|
|
|
|
goto out_unlock_lreq;
|
2011-03-22 06:07:16 +08:00
|
|
|
}
|
|
|
|
|
2016-05-26 07:15:02 +08:00
|
|
|
lwork->notify.notify_id = notify_id;
|
|
|
|
lwork->notify.notifier_id = notifier_id;
|
|
|
|
lwork->notify.payload = payload;
|
|
|
|
lwork->notify.payload_len = payload_len;
|
|
|
|
lwork->notify.msg = ceph_msg_get(msg);
|
|
|
|
lwork_queue(lwork);
|
2014-09-11 16:18:53 +08:00
|
|
|
}
|
2011-03-22 06:07:16 +08:00
|
|
|
|
2016-05-26 07:15:02 +08:00
|
|
|
out_unlock_lreq:
|
|
|
|
mutex_unlock(&lreq->lock);
|
|
|
|
out_unlock_osdc:
|
|
|
|
up_read(&osdc->lock);
|
2011-03-22 06:07:16 +08:00
|
|
|
return;
|
|
|
|
|
|
|
|
bad:
|
|
|
|
pr_err("osdc handle_watch_notify corrupt msg\n");
|
|
|
|
}
|
|
|
|
|
2013-03-05 08:29:06 +08:00
|
|
|
/*
|
|
|
|
* Register request, send initial attempt.
|
|
|
|
*/
|
|
|
|
int ceph_osdc_start_request(struct ceph_osd_client *osdc,
|
|
|
|
struct ceph_osd_request *req,
|
|
|
|
bool nofail)
|
|
|
|
{
|
2016-04-28 22:07:26 +08:00
|
|
|
down_read(&osdc->lock);
|
|
|
|
submit_request(req, false);
|
|
|
|
up_read(&osdc->lock);
|
2014-02-01 01:33:39 +08:00
|
|
|
|
2016-04-28 22:07:26 +08:00
|
|
|
return 0;
|
2009-10-07 02:31:10 +08:00
|
|
|
}
|
2010-04-07 06:14:15 +08:00
|
|
|
EXPORT_SYMBOL(ceph_osdc_start_request);
|
2009-10-07 02:31:10 +08:00
|
|
|
|
2014-06-19 15:38:13 +08:00
|
|
|
/*
|
libceph: always signal completion when done
r_safe_completion is currently, and has always been, signaled only if
on-disk ack was requested. It's there for fsync and syncfs, which wait
for in-flight writes to flush - all data write requests set ONDISK.
However, the pool perm check code introduced in 4.2 sends a write
request with only ACK set. An unfortunately timed syncfs can then hang
forever: r_safe_completion won't be signaled because only an unsafe
reply was requested.
We could patch ceph_osdc_sync() to skip !ONDISK write requests, but
that is somewhat incomplete and yet another special case. Instead,
rename this completion to r_done_completion and always signal it when
the OSD client is done with the request, whether unsafe, safe, or
error. This is a bit cleaner and helps with the cancellation code.
Reported-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2016-12-02 21:01:55 +08:00
|
|
|
* Unregister a registered request. The request is not completed:
|
|
|
|
* ->r_result isn't set and __complete_request() isn't called.
|
2014-06-19 15:38:13 +08:00
|
|
|
*/
|
|
|
|
void ceph_osdc_cancel_request(struct ceph_osd_request *req)
|
|
|
|
{
|
|
|
|
struct ceph_osd_client *osdc = req->r_osdc;
|
|
|
|
|
2016-04-28 22:07:26 +08:00
|
|
|
down_write(&osdc->lock);
|
|
|
|
if (req->r_osd)
|
|
|
|
cancel_request(req);
|
|
|
|
up_write(&osdc->lock);
|
2014-06-19 15:38:13 +08:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(ceph_osdc_cancel_request);
|
|
|
|
|
2009-10-07 02:31:10 +08:00
|
|
|
/*
|
2016-04-28 22:07:26 +08:00
|
|
|
* @timeout: in jiffies, 0 means "wait forever"
|
2009-10-07 02:31:10 +08:00
|
|
|
*/
|
2016-04-28 22:07:26 +08:00
|
|
|
static int wait_request_timeout(struct ceph_osd_request *req,
|
|
|
|
unsigned long timeout)
|
2009-10-07 02:31:10 +08:00
|
|
|
{
|
2016-04-28 22:07:26 +08:00
|
|
|
long left;
|
2014-06-19 15:38:13 +08:00
|
|
|
|
2016-04-28 22:07:26 +08:00
|
|
|
dout("%s req %p tid %llu\n", __func__, req, req->r_tid);
|
2016-05-13 11:04:33 +08:00
|
|
|
left = wait_for_completion_killable_timeout(&req->r_completion,
|
2016-04-28 22:07:26 +08:00
|
|
|
ceph_timeout_jiffies(timeout));
|
|
|
|
if (left <= 0) {
|
|
|
|
left = left ?: -ETIMEDOUT;
|
2014-06-19 15:38:13 +08:00
|
|
|
ceph_osdc_cancel_request(req);
|
2016-04-28 22:07:26 +08:00
|
|
|
} else {
|
|
|
|
left = req->r_result; /* completed */
|
2009-10-07 02:31:10 +08:00
|
|
|
}
|
|
|
|
|
2016-04-28 22:07:26 +08:00
|
|
|
return left;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* wait for a request to complete
|
|
|
|
*/
|
|
|
|
int ceph_osdc_wait_request(struct ceph_osd_client *osdc,
|
|
|
|
struct ceph_osd_request *req)
|
|
|
|
{
|
|
|
|
return wait_request_timeout(req, 0);
|
2009-10-07 02:31:10 +08:00
|
|
|
}
|
2010-04-07 06:14:15 +08:00
|
|
|
EXPORT_SYMBOL(ceph_osdc_wait_request);
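
/*
 * Typical submit-and-wait usage (a sketch only; it mirrors the pattern
 * used by ceph_osdc_call() and other synchronous helpers below, with
 * error handling elided):
 *
 *	ceph_osdc_start_request(osdc, req, false);
 *	ret = ceph_osdc_wait_request(osdc, req);  // ret is ->r_result
 *	ceph_osdc_put_request(req);
 */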

/*
 * sync - wait for all in-flight requests to flush.  avoid starvation.
 */
void ceph_osdc_sync(struct ceph_osd_client *osdc)
{
	struct rb_node *n, *p;
	u64 last_tid = atomic64_read(&osdc->last_tid);

again:
	down_read(&osdc->lock);
	for (n = rb_first(&osdc->osds); n; n = rb_next(n)) {
		struct ceph_osd *osd = rb_entry(n, struct ceph_osd, o_node);

		mutex_lock(&osd->lock);
		for (p = rb_first(&osd->o_requests); p; p = rb_next(p)) {
			struct ceph_osd_request *req =
			    rb_entry(p, struct ceph_osd_request, r_node);

			if (req->r_tid > last_tid)
				break;

			if (!(req->r_flags & CEPH_OSD_FLAG_WRITE))
				continue;

			ceph_osdc_get_request(req);
			mutex_unlock(&osd->lock);
			up_read(&osdc->lock);
			dout("%s waiting on req %p tid %llu last_tid %llu\n",
			     __func__, req, req->r_tid, last_tid);
			wait_for_completion(&req->r_completion);
			ceph_osdc_put_request(req);
			goto again;
		}

		mutex_unlock(&osd->lock);
	}

	up_read(&osdc->lock);
	dout("%s done last_tid %llu\n", __func__, last_tid);
}
EXPORT_SYMBOL(ceph_osdc_sync);

static struct ceph_osd_request *
alloc_linger_request(struct ceph_osd_linger_request *lreq)
{
	struct ceph_osd_request *req;

	req = ceph_osdc_alloc_request(lreq->osdc, NULL, 1, false, GFP_NOIO);
	if (!req)
		return NULL;

	ceph_oid_copy(&req->r_base_oid, &lreq->t.base_oid);
	ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc);
	return req;
}

static struct ceph_osd_request *
alloc_watch_request(struct ceph_osd_linger_request *lreq, u8 watch_opcode)
{
	struct ceph_osd_request *req;

	req = alloc_linger_request(lreq);
	if (!req)
		return NULL;

	/*
	 * Pass 0 for cookie because we don't know it yet, it will be
	 * filled in by linger_submit().
	 */
	osd_req_op_watch_init(req, 0, 0, watch_opcode);

	if (ceph_osdc_alloc_messages(req, GFP_NOIO)) {
		ceph_osdc_put_request(req);
		return NULL;
	}

	return req;
}

/*
 * Returns a handle, caller owns a ref.
 */
struct ceph_osd_linger_request *
ceph_osdc_watch(struct ceph_osd_client *osdc,
		struct ceph_object_id *oid,
		struct ceph_object_locator *oloc,
		rados_watchcb2_t wcb,
		rados_watcherrcb_t errcb,
		void *data)
{
	struct ceph_osd_linger_request *lreq;
	int ret;

	lreq = linger_alloc(osdc);
	if (!lreq)
		return ERR_PTR(-ENOMEM);

	lreq->is_watch = true;
	lreq->wcb = wcb;
	lreq->errcb = errcb;
	lreq->data = data;
	lreq->watch_valid_thru = jiffies;

	ceph_oid_copy(&lreq->t.base_oid, oid);
	ceph_oloc_copy(&lreq->t.base_oloc, oloc);
	lreq->t.flags = CEPH_OSD_FLAG_WRITE;
	ktime_get_real_ts64(&lreq->mtime);

	lreq->reg_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_WATCH);
	if (!lreq->reg_req) {
		ret = -ENOMEM;
		goto err_put_lreq;
	}

	lreq->ping_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_PING);
	if (!lreq->ping_req) {
		ret = -ENOMEM;
		goto err_put_lreq;
	}

	linger_submit(lreq);
	ret = linger_reg_commit_wait(lreq);
	if (ret) {
		linger_cancel(lreq);
		goto err_put_lreq;
	}

	return lreq;

err_put_lreq:
	linger_put(lreq);
	return ERR_PTR(ret);
}
EXPORT_SYMBOL(ceph_osdc_watch);
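
/*
 * Illustrative watch lifetime (a sketch, not lifted from a real caller;
 * my_wcb, my_errcb and priv are placeholder names):
 *
 *	handle = ceph_osdc_watch(osdc, oid, oloc, my_wcb, my_errcb, priv);
 *	if (IS_ERR(handle))
 *		return PTR_ERR(handle);
 *	...
 *	ceph_osdc_unwatch(osdc, handle);	// releases the ref
 */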

/*
 * Releases a ref.
 *
 * Times out after mount_timeout to preserve rbd unmap behaviour
 * introduced in 2894e1d76974 ("rbd: timeout watch teardown on unmap
 * with mount_timeout").
 */
int ceph_osdc_unwatch(struct ceph_osd_client *osdc,
		      struct ceph_osd_linger_request *lreq)
{
	struct ceph_options *opts = osdc->client->options;
	struct ceph_osd_request *req;
	int ret;

	req = ceph_osdc_alloc_request(osdc, NULL, 1, false, GFP_NOIO);
	if (!req)
		return -ENOMEM;

	ceph_oid_copy(&req->r_base_oid, &lreq->t.base_oid);
	ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc);
	req->r_flags = CEPH_OSD_FLAG_WRITE;
	ktime_get_real_ts64(&req->r_mtime);
	osd_req_op_watch_init(req, 0, lreq->linger_id,
			      CEPH_OSD_WATCH_OP_UNWATCH);

	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
	if (ret)
		goto out_put_req;

	ceph_osdc_start_request(osdc, req, false);
	linger_cancel(lreq);
	linger_put(lreq);
	ret = wait_request_timeout(req, opts->mount_timeout);

out_put_req:
	ceph_osdc_put_request(req);
	return ret;
}
EXPORT_SYMBOL(ceph_osdc_unwatch);

static int osd_req_op_notify_ack_init(struct ceph_osd_request *req, int which,
				      u64 notify_id, u64 cookie, void *payload,
				      u32 payload_len)
{
	struct ceph_osd_req_op *op;
	struct ceph_pagelist *pl;
	int ret;

	op = _osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY_ACK, 0);

	pl = ceph_pagelist_alloc(GFP_NOIO);
	if (!pl)
		return -ENOMEM;

	ret = ceph_pagelist_encode_64(pl, notify_id);
	ret |= ceph_pagelist_encode_64(pl, cookie);
	if (payload) {
		ret |= ceph_pagelist_encode_32(pl, payload_len);
		ret |= ceph_pagelist_append(pl, payload, payload_len);
	} else {
		ret |= ceph_pagelist_encode_32(pl, 0);
	}
	if (ret) {
		ceph_pagelist_release(pl);
		return -ENOMEM;
	}

	ceph_osd_data_pagelist_init(&op->notify_ack.request_data, pl);
	op->indata_len = pl->length;
	return 0;
}

int ceph_osdc_notify_ack(struct ceph_osd_client *osdc,
			 struct ceph_object_id *oid,
			 struct ceph_object_locator *oloc,
			 u64 notify_id,
			 u64 cookie,
			 void *payload,
			 u32 payload_len)
{
	struct ceph_osd_request *req;
	int ret;

	req = ceph_osdc_alloc_request(osdc, NULL, 1, false, GFP_NOIO);
	if (!req)
		return -ENOMEM;

	ceph_oid_copy(&req->r_base_oid, oid);
	ceph_oloc_copy(&req->r_base_oloc, oloc);
	req->r_flags = CEPH_OSD_FLAG_READ;

	ret = osd_req_op_notify_ack_init(req, 0, notify_id, cookie, payload,
					 payload_len);
	if (ret)
		goto out_put_req;

	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
	if (ret)
		goto out_put_req;

	ceph_osdc_start_request(osdc, req, false);
	ret = ceph_osdc_wait_request(osdc, req);

out_put_req:
	ceph_osdc_put_request(req);
	return ret;
}
EXPORT_SYMBOL(ceph_osdc_notify_ack);

static int osd_req_op_notify_init(struct ceph_osd_request *req, int which,
				  u64 cookie, u32 prot_ver, u32 timeout,
				  void *payload, u32 payload_len)
{
	struct ceph_osd_req_op *op;
	struct ceph_pagelist *pl;
	int ret;

	op = _osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY, 0);
	op->notify.cookie = cookie;

	pl = ceph_pagelist_alloc(GFP_NOIO);
	if (!pl)
		return -ENOMEM;

	ret = ceph_pagelist_encode_32(pl, 1); /* prot_ver */
	ret |= ceph_pagelist_encode_32(pl, timeout);
	ret |= ceph_pagelist_encode_32(pl, payload_len);
	ret |= ceph_pagelist_append(pl, payload, payload_len);
	if (ret) {
		ceph_pagelist_release(pl);
		return -ENOMEM;
	}

	ceph_osd_data_pagelist_init(&op->notify.request_data, pl);
	op->indata_len = pl->length;
	return 0;
}

/*
 * @timeout: in seconds
 *
 * @preply_{pages,len} are initialized both on success and error.
 * The caller is responsible for:
 *
 *     ceph_release_page_vector(reply_pages, calc_pages_for(0, reply_len))
 */
int ceph_osdc_notify(struct ceph_osd_client *osdc,
		     struct ceph_object_id *oid,
		     struct ceph_object_locator *oloc,
		     void *payload,
		     u32 payload_len,
		     u32 timeout,
		     struct page ***preply_pages,
		     size_t *preply_len)
{
	struct ceph_osd_linger_request *lreq;
	struct page **pages;
	int ret;

	WARN_ON(!timeout);
	if (preply_pages) {
		*preply_pages = NULL;
		*preply_len = 0;
	}

	lreq = linger_alloc(osdc);
	if (!lreq)
		return -ENOMEM;

	lreq->preply_pages = preply_pages;
	lreq->preply_len = preply_len;

	ceph_oid_copy(&lreq->t.base_oid, oid);
	ceph_oloc_copy(&lreq->t.base_oloc, oloc);
	lreq->t.flags = CEPH_OSD_FLAG_READ;

	lreq->reg_req = alloc_linger_request(lreq);
	if (!lreq->reg_req) {
		ret = -ENOMEM;
		goto out_put_lreq;
	}

	/*
	 * Pass 0 for cookie because we don't know it yet, it will be
	 * filled in by linger_submit().
	 */
	ret = osd_req_op_notify_init(lreq->reg_req, 0, 0, 1, timeout,
				     payload, payload_len);
	if (ret)
		goto out_put_lreq;

	/* for notify_id */
	pages = ceph_alloc_page_vector(1, GFP_NOIO);
	if (IS_ERR(pages)) {
		ret = PTR_ERR(pages);
		goto out_put_lreq;
	}
	ceph_osd_data_pages_init(osd_req_op_data(lreq->reg_req, 0, notify,
						 response_data),
				 pages, PAGE_SIZE, 0, false, true);

	ret = ceph_osdc_alloc_messages(lreq->reg_req, GFP_NOIO);
	if (ret)
		goto out_put_lreq;

	linger_submit(lreq);
	ret = linger_reg_commit_wait(lreq);
	if (!ret)
		ret = linger_notify_finish_wait(lreq);
	else
		dout("lreq %p failed to initiate notify %d\n", lreq, ret);

	linger_cancel(lreq);
out_put_lreq:
	linger_put(lreq);
	return ret;
}
EXPORT_SYMBOL(ceph_osdc_notify);
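
/*
 * Illustrative ceph_osdc_notify() call (a sketch; the payload and the
 * 10-second timeout are made up for the example):
 *
 *	struct page **reply_pages;
 *	size_t reply_len;
 *
 *	ret = ceph_osdc_notify(osdc, oid, oloc, payload, payload_len,
 *			       10, &reply_pages, &reply_len);
 *	...
 *	if (reply_pages)
 *		ceph_release_page_vector(reply_pages,
 *					 calc_pages_for(0, reply_len));
 */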

/*
 * Return the number of milliseconds since the watch was last
 * confirmed, or an error.  If there is an error, the watch is no
 * longer valid, and should be destroyed with ceph_osdc_unwatch().
 */
int ceph_osdc_watch_check(struct ceph_osd_client *osdc,
			  struct ceph_osd_linger_request *lreq)
{
	unsigned long stamp, age;
	int ret;

	down_read(&osdc->lock);
	mutex_lock(&lreq->lock);
	stamp = lreq->watch_valid_thru;
	if (!list_empty(&lreq->pending_lworks)) {
		struct linger_work *lwork =
		    list_first_entry(&lreq->pending_lworks,
				     struct linger_work,
				     pending_item);

		if (time_before(lwork->queued_stamp, stamp))
			stamp = lwork->queued_stamp;
	}
	age = jiffies - stamp;
	dout("%s lreq %p linger_id %llu age %lu last_error %d\n", __func__,
	     lreq, lreq->linger_id, age, lreq->last_error);
	/* we are truncating to msecs, so return a safe upper bound */
	ret = lreq->last_error ?: 1 + jiffies_to_msecs(age);

	mutex_unlock(&lreq->lock);
	up_read(&osdc->lock);
	return ret;
}

static int decode_watcher(void **p, void *end, struct ceph_watch_item *item)
{
	u8 struct_v;
	u32 struct_len;
	int ret;

	ret = ceph_start_decoding(p, end, 2, "watch_item_t",
				  &struct_v, &struct_len);
	if (ret)
		goto bad;

	ret = -EINVAL;
	ceph_decode_copy_safe(p, end, &item->name, sizeof(item->name), bad);
	ceph_decode_64_safe(p, end, item->cookie, bad);
	ceph_decode_skip_32(p, end, bad); /* skip timeout seconds */

	if (struct_v >= 2) {
		ret = ceph_decode_entity_addr(p, end, &item->addr);
		if (ret)
			goto bad;
	} else {
		ret = 0;
	}

	dout("%s %s%llu cookie %llu addr %s\n", __func__,
	     ENTITY_NAME(item->name), item->cookie,
	     ceph_pr_addr(&item->addr));
bad:
	return ret;
}

static int decode_watchers(void **p, void *end,
			   struct ceph_watch_item **watchers,
			   u32 *num_watchers)
{
	u8 struct_v;
	u32 struct_len;
	int i;
	int ret;

	ret = ceph_start_decoding(p, end, 1, "obj_list_watch_response_t",
				  &struct_v, &struct_len);
	if (ret)
		return ret;

	*num_watchers = ceph_decode_32(p);
	*watchers = kcalloc(*num_watchers, sizeof(**watchers), GFP_NOIO);
	if (!*watchers)
		return -ENOMEM;

	for (i = 0; i < *num_watchers; i++) {
		ret = decode_watcher(p, end, *watchers + i);
		if (ret) {
			kfree(*watchers);
			return ret;
		}
	}

	return 0;
}

/*
 * On success, the caller is responsible for:
 *
 *     kfree(watchers);
 */
int ceph_osdc_list_watchers(struct ceph_osd_client *osdc,
			    struct ceph_object_id *oid,
			    struct ceph_object_locator *oloc,
			    struct ceph_watch_item **watchers,
			    u32 *num_watchers)
{
	struct ceph_osd_request *req;
	struct page **pages;
	int ret;

	req = ceph_osdc_alloc_request(osdc, NULL, 1, false, GFP_NOIO);
	if (!req)
		return -ENOMEM;

	ceph_oid_copy(&req->r_base_oid, oid);
	ceph_oloc_copy(&req->r_base_oloc, oloc);
	req->r_flags = CEPH_OSD_FLAG_READ;

	pages = ceph_alloc_page_vector(1, GFP_NOIO);
	if (IS_ERR(pages)) {
		ret = PTR_ERR(pages);
		goto out_put_req;
	}

	osd_req_op_init(req, 0, CEPH_OSD_OP_LIST_WATCHERS, 0);
	ceph_osd_data_pages_init(osd_req_op_data(req, 0, list_watchers,
						 response_data),
				 pages, PAGE_SIZE, 0, false, true);

	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
	if (ret)
		goto out_put_req;

	ceph_osdc_start_request(osdc, req, false);
	ret = ceph_osdc_wait_request(osdc, req);
	if (ret >= 0) {
		void *p = page_address(pages[0]);
		void *const end = p + req->r_ops[0].outdata_len;

		ret = decode_watchers(&p, end, watchers, num_watchers);
	}

out_put_req:
	ceph_osdc_put_request(req);
	return ret;
}
EXPORT_SYMBOL(ceph_osdc_list_watchers);
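
/*
 * Illustrative use (a sketch; error handling elided):
 *
 *	struct ceph_watch_item *watchers;
 *	u32 num_watchers;
 *
 *	ret = ceph_osdc_list_watchers(osdc, oid, oloc, &watchers,
 *				      &num_watchers);
 *	if (!ret) {
 *		... iterate watchers[0..num_watchers-1] ...
 *		kfree(watchers);
 *	}
 */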

/*
 * Call all pending notify callbacks - for use after a watch is
 * unregistered, to make sure no more callbacks for it will be invoked
 */
void ceph_osdc_flush_notifies(struct ceph_osd_client *osdc)
{
	dout("%s osdc %p\n", __func__, osdc);
	flush_workqueue(osdc->notify_wq);
}
EXPORT_SYMBOL(ceph_osdc_flush_notifies);

void ceph_osdc_maybe_request_map(struct ceph_osd_client *osdc)
{
	down_read(&osdc->lock);
	maybe_request_map(osdc);
	up_read(&osdc->lock);
}
EXPORT_SYMBOL(ceph_osdc_maybe_request_map);

/*
 * Execute an OSD class method on an object.
 *
 * @flags: CEPH_OSD_FLAG_*
 * @resp_len: in/out param for reply length
 */
int ceph_osdc_call(struct ceph_osd_client *osdc,
		   struct ceph_object_id *oid,
		   struct ceph_object_locator *oloc,
		   const char *class, const char *method,
		   unsigned int flags,
		   struct page *req_page, size_t req_len,
		   struct page **resp_pages, size_t *resp_len)
{
	struct ceph_osd_request *req;
	int ret;

	if (req_len > PAGE_SIZE)
		return -E2BIG;

	req = ceph_osdc_alloc_request(osdc, NULL, 1, false, GFP_NOIO);
	if (!req)
		return -ENOMEM;

	ceph_oid_copy(&req->r_base_oid, oid);
	ceph_oloc_copy(&req->r_base_oloc, oloc);
	req->r_flags = flags;

	ret = osd_req_op_cls_init(req, 0, class, method);
	if (ret)
		goto out_put_req;

	if (req_page)
		osd_req_op_cls_request_data_pages(req, 0, &req_page, req_len,
						  0, false, false);
	if (resp_pages)
		osd_req_op_cls_response_data_pages(req, 0, resp_pages,
						   *resp_len, 0, false, false);

	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
	if (ret)
		goto out_put_req;

	ceph_osdc_start_request(osdc, req, false);
	ret = ceph_osdc_wait_request(osdc, req);
	if (ret >= 0) {
		ret = req->r_ops[0].rval;
		if (resp_pages)
			*resp_len = req->r_ops[0].outdata_len;
	}

out_put_req:
	ceph_osdc_put_request(req);
	return ret;
}
EXPORT_SYMBOL(ceph_osdc_call);
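
/*
 * Illustrative class-method call (a sketch; "myclass" and "mymethod"
 * are placeholder names, and reply_page/reply_len are assumed to be a
 * preallocated page and its size):
 *
 *	reply_len = PAGE_SIZE;
 *	ret = ceph_osdc_call(osdc, oid, oloc, "myclass", "mymethod",
 *			     CEPH_OSD_FLAG_READ, NULL, 0,
 *			     &reply_page, &reply_len);
 */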

/*
 * reset all osd connections
 */
void ceph_osdc_reopen_osds(struct ceph_osd_client *osdc)
{
	struct rb_node *n;

	down_write(&osdc->lock);
	for (n = rb_first(&osdc->osds); n; ) {
		struct ceph_osd *osd = rb_entry(n, struct ceph_osd, o_node);

		n = rb_next(n);
		if (!reopen_osd(osd))
			kick_osd_requests(osd);
	}
	up_write(&osdc->lock);
}

/*
 * init, shutdown
 */
int ceph_osdc_init(struct ceph_osd_client *osdc, struct ceph_client *client)
{
	int err;

	dout("init\n");
	osdc->client = client;
	init_rwsem(&osdc->lock);
	osdc->osds = RB_ROOT;
	INIT_LIST_HEAD(&osdc->osd_lru);
	spin_lock_init(&osdc->osd_lru_lock);
	osd_init(&osdc->homeless_osd);
	osdc->homeless_osd.o_osdc = osdc;
	osdc->homeless_osd.o_osd = CEPH_HOMELESS_OSD;
	osdc->last_linger_id = CEPH_LINGER_ID_START;
	osdc->linger_requests = RB_ROOT;
	osdc->map_checks = RB_ROOT;
	osdc->linger_map_checks = RB_ROOT;
	INIT_DELAYED_WORK(&osdc->timeout_work, handle_timeout);
	INIT_DELAYED_WORK(&osdc->osds_timeout_work, handle_osds_timeout);

	err = -ENOMEM;
	osdc->osdmap = ceph_osdmap_alloc();
	if (!osdc->osdmap)
		goto out;

	osdc->req_mempool = mempool_create_slab_pool(10,
						     ceph_osd_request_cache);
	if (!osdc->req_mempool)
		goto out_map;

	err = ceph_msgpool_init(&osdc->msgpool_op, CEPH_MSG_OSD_OP,
				PAGE_SIZE, CEPH_OSD_SLAB_OPS, 10, "osd_op");
	if (err < 0)
		goto out_mempool;
	err = ceph_msgpool_init(&osdc->msgpool_op_reply, CEPH_MSG_OSD_OPREPLY,
				PAGE_SIZE, CEPH_OSD_SLAB_OPS, 10,
				"osd_op_reply");
	if (err < 0)
		goto out_msgpool;

	err = -ENOMEM;
	osdc->notify_wq = create_singlethread_workqueue("ceph-watch-notify");
	if (!osdc->notify_wq)
		goto out_msgpool_reply;

	osdc->completion_wq = create_singlethread_workqueue("ceph-completion");
	if (!osdc->completion_wq)
		goto out_notify_wq;

	schedule_delayed_work(&osdc->timeout_work,
			      osdc->client->options->osd_keepalive_timeout);
	schedule_delayed_work(&osdc->osds_timeout_work,
	    round_jiffies_relative(osdc->client->options->osd_idle_ttl));

	return 0;

out_notify_wq:
	destroy_workqueue(osdc->notify_wq);
out_msgpool_reply:
	ceph_msgpool_destroy(&osdc->msgpool_op_reply);
out_msgpool:
	ceph_msgpool_destroy(&osdc->msgpool_op);
out_mempool:
	mempool_destroy(osdc->req_mempool);
out_map:
	ceph_osdmap_destroy(osdc->osdmap);
out:
	return err;
}

void ceph_osdc_stop(struct ceph_osd_client *osdc)
{
	destroy_workqueue(osdc->completion_wq);
	destroy_workqueue(osdc->notify_wq);
	cancel_delayed_work_sync(&osdc->timeout_work);
	cancel_delayed_work_sync(&osdc->osds_timeout_work);

	down_write(&osdc->lock);
	while (!RB_EMPTY_ROOT(&osdc->osds)) {
		struct ceph_osd *osd = rb_entry(rb_first(&osdc->osds),
						struct ceph_osd, o_node);
		close_osd(osd);
	}
	up_write(&osdc->lock);
	WARN_ON(refcount_read(&osdc->homeless_osd.o_ref) != 1);
	osd_cleanup(&osdc->homeless_osd);

	WARN_ON(!list_empty(&osdc->osd_lru));
	WARN_ON(!RB_EMPTY_ROOT(&osdc->linger_requests));
	WARN_ON(!RB_EMPTY_ROOT(&osdc->map_checks));
	WARN_ON(!RB_EMPTY_ROOT(&osdc->linger_map_checks));
	WARN_ON(atomic_read(&osdc->num_requests));
	WARN_ON(atomic_read(&osdc->num_homeless));

	ceph_osdmap_destroy(osdc->osdmap);
	mempool_destroy(osdc->req_mempool);
	ceph_msgpool_destroy(&osdc->msgpool_op);
	ceph_msgpool_destroy(&osdc->msgpool_op_reply);
}
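
/*
 * OSD client lifetime sketch (illustrative only; in-tree this is
 * driven from the ceph_client setup and teardown paths):
 *
 *	err = ceph_osdc_init(osdc, client);
 *	...issue requests...
 *	ceph_osdc_stop(osdc);
 *
 * Note ceph_osdc_stop() undoes ceph_osdc_init() in reverse order:
 * workqueues and delayed work first, then OSD sessions, then the
 * osdmap and the message/request pools.
 */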

/*
 * Read some contiguous pages.  If we cross a stripe boundary, shorten
 * *plen.  Return number of bytes read, or error.
 */
int ceph_osdc_readpages(struct ceph_osd_client *osdc,
			struct ceph_vino vino, struct ceph_file_layout *layout,
			u64 off, u64 *plen,
			u32 truncate_seq, u64 truncate_size,
			struct page **pages, int num_pages, int page_align)
{
	struct ceph_osd_request *req;
	int rc = 0;

	dout("readpages on ino %llx.%llx on %llu~%llu\n", vino.ino,
	     vino.snap, off, *plen);
	req = ceph_osdc_new_request(osdc, layout, vino, off, plen, 0, 1,
				    CEPH_OSD_OP_READ, CEPH_OSD_FLAG_READ,
				    NULL, truncate_seq, truncate_size,
				    false);
	if (IS_ERR(req))
		return PTR_ERR(req);

	/* it may be a short read due to an object boundary */
	osd_req_op_extent_osd_data_pages(req, 0,
				pages, *plen, page_align, false, false);

	dout("readpages final extent is %llu~%llu (%llu bytes align %d)\n",
	     off, *plen, *plen, page_align);

	rc = ceph_osdc_start_request(osdc, req, false);
	if (!rc)
		rc = ceph_osdc_wait_request(osdc, req);

	ceph_osdc_put_request(req);
	dout("readpages result %d\n", rc);
	return rc;
}
EXPORT_SYMBOL(ceph_osdc_readpages);

/*
 * do a synchronous write on N pages
 */
int ceph_osdc_writepages(struct ceph_osd_client *osdc, struct ceph_vino vino,
			 struct ceph_file_layout *layout,
			 struct ceph_snap_context *snapc,
			 u64 off, u64 len,
			 u32 truncate_seq, u64 truncate_size,
			 struct timespec64 *mtime,
			 struct page **pages, int num_pages)
{
	struct ceph_osd_request *req;
	int rc = 0;
	int page_align = off & ~PAGE_MASK;

	req = ceph_osdc_new_request(osdc, layout, vino, off, &len, 0, 1,
				    CEPH_OSD_OP_WRITE, CEPH_OSD_FLAG_WRITE,
				    snapc, truncate_seq, truncate_size,
				    true);
	if (IS_ERR(req))
		return PTR_ERR(req);

	/* it may be a short write due to an object boundary */
	osd_req_op_extent_osd_data_pages(req, 0, pages, len, page_align,
					 false, false);
	dout("writepages %llu~%llu (%llu bytes)\n", off, len, len);

	req->r_mtime = *mtime;
	rc = ceph_osdc_start_request(osdc, req, true);
	if (!rc)
		rc = ceph_osdc_wait_request(osdc, req);

	ceph_osdc_put_request(req);
	if (rc == 0)
		rc = len;
	dout("writepages result %d\n", rc);
	return rc;
}
EXPORT_SYMBOL(ceph_osdc_writepages);

static int osd_req_op_copy_from_init(struct ceph_osd_request *req,
				     u64 src_snapid, u64 src_version,
				     struct ceph_object_id *src_oid,
				     struct ceph_object_locator *src_oloc,
				     u32 src_fadvise_flags,
				     u32 dst_fadvise_flags,
				     u8 copy_from_flags)
{
	struct ceph_osd_req_op *op;
	struct page **pages;
	void *p, *end;

	pages = ceph_alloc_page_vector(1, GFP_KERNEL);
	if (IS_ERR(pages))
		return PTR_ERR(pages);

	op = _osd_req_op_init(req, 0, CEPH_OSD_OP_COPY_FROM, dst_fadvise_flags);
	op->copy_from.snapid = src_snapid;
	op->copy_from.src_version = src_version;
	op->copy_from.flags = copy_from_flags;
	op->copy_from.src_fadvise_flags = src_fadvise_flags;

	p = page_address(pages[0]);
	end = p + PAGE_SIZE;
	ceph_encode_string(&p, end, src_oid->name, src_oid->name_len);
	encode_oloc(&p, end, src_oloc);
	op->indata_len = PAGE_SIZE - (end - p);

	ceph_osd_data_pages_init(&op->copy_from.osd_data, pages,
				 op->indata_len, 0, false, true);
	return 0;
}

int ceph_osdc_copy_from(struct ceph_osd_client *osdc,
			u64 src_snapid, u64 src_version,
			struct ceph_object_id *src_oid,
			struct ceph_object_locator *src_oloc,
			u32 src_fadvise_flags,
			struct ceph_object_id *dst_oid,
			struct ceph_object_locator *dst_oloc,
			u32 dst_fadvise_flags,
			u8 copy_from_flags)
{
	struct ceph_osd_request *req;
	int ret;

	req = ceph_osdc_alloc_request(osdc, NULL, 1, false, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	req->r_flags = CEPH_OSD_FLAG_WRITE;

	ceph_oloc_copy(&req->r_t.base_oloc, dst_oloc);
	ceph_oid_copy(&req->r_t.base_oid, dst_oid);

	ret = osd_req_op_copy_from_init(req, src_snapid, src_version, src_oid,
					src_oloc, src_fadvise_flags,
					dst_fadvise_flags, copy_from_flags);
	if (ret)
		goto out;

	ret = ceph_osdc_alloc_messages(req, GFP_KERNEL);
	if (ret)
		goto out;

	ceph_osdc_start_request(osdc, req, false);
	ret = ceph_osdc_wait_request(osdc, req);

out:
	ceph_osdc_put_request(req);
	return ret;
}
EXPORT_SYMBOL(ceph_osdc_copy_from);
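
/*
 * Note (descriptive comment, derived from the helper above): the copy
 * is performed entirely on the OSD side.  Only the source object name
 * and locator travel over the wire, encoded into a single page by
 * osd_req_op_copy_from_init(); the object data itself is never pulled
 * through the client.
 */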
|
|
|
|
|
2018-03-10 20:32:05 +08:00
|
|
|
int __init ceph_osdc_setup(void)
|
2013-05-02 01:43:04 +08:00
|
|
|
{
|
2016-02-10 00:50:15 +08:00
|
|
|
size_t size = sizeof(struct ceph_osd_request) +
|
|
|
|
CEPH_OSD_SLAB_OPS * sizeof(struct ceph_osd_req_op);
|
|
|
|
|
2013-05-02 01:43:04 +08:00
|
|
|
BUG_ON(ceph_osd_request_cache);
|
2016-02-10 00:50:15 +08:00
|
|
|
ceph_osd_request_cache = kmem_cache_create("ceph_osd_request", size,
|
|
|
|
0, 0, NULL);
|
2013-05-02 01:43:04 +08:00
|
|
|
|
|
|
|
return ceph_osd_request_cache ? 0 : -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
void ceph_osdc_cleanup(void)
|
|
|
|
{
|
|
|
|
BUG_ON(!ceph_osd_request_cache);
|
|
|
|
kmem_cache_destroy(ceph_osd_request_cache);
|
|
|
|
ceph_osd_request_cache = NULL;
|
|
|
|
}
|
|
|
|
|
2009-10-07 02:31:10 +08:00
|
|
|
/*
|
|
|
|
* handle incoming message
|
|
|
|
*/
|
|
|
|
static void dispatch(struct ceph_connection *con, struct ceph_msg *msg)
|
|
|
|
{
|
|
|
|
struct ceph_osd *osd = con->private;
|
2016-04-28 22:07:26 +08:00
|
|
|
struct ceph_osd_client *osdc = osd->o_osdc;
|
2009-10-07 02:31:10 +08:00
|
|
|
int type = le16_to_cpu(msg->hdr.type);
|
|
|
|
|
|
|
|
switch (type) {
|
|
|
|
case CEPH_MSG_OSD_MAP:
|
|
|
|
ceph_osdc_handle_map(osdc, msg);
|
|
|
|
break;
|
|
|
|
case CEPH_MSG_OSD_OPREPLY:
|
2016-04-28 22:07:26 +08:00
|
|
|
handle_reply(osd, msg);
|
2009-10-07 02:31:10 +08:00
|
|
|
break;
|
2017-06-19 18:18:05 +08:00
|
|
|
case CEPH_MSG_OSD_BACKOFF:
|
|
|
|
handle_backoff(osd, msg);
|
|
|
|
break;
|
2011-03-22 06:07:16 +08:00
|
|
|
case CEPH_MSG_WATCH_NOTIFY:
|
|
|
|
handle_watch_notify(osdc, msg);
|
|
|
|
break;
|
2009-10-07 02:31:10 +08:00
|
|
|
|
|
|
|
default:
|
|
|
|
pr_err("received unknown message type %d %s\n", type,
|
|
|
|
ceph_msg_type_name(type));
|
|
|
|
}
|
2016-04-28 22:07:26 +08:00
|
|
|
|
2009-10-07 02:31:10 +08:00
|
|
|
ceph_msg_put(msg);
|
|
|
|
}
|
|
|
|
|
2010-02-20 13:43:23 +08:00
|
|
|
/*
|
libceph: check data_len in ->alloc_msg()
Only ->alloc_msg() should check data_len of the incoming message
against the preallocated ceph_msg, doing it in the messenger is not
right. The contract is that either ->alloc_msg() returns a ceph_msg
which will fit all of the portions of the incoming message, or it
returns NULL and possibly sets skip, signaling whether NULL is due to
an -ENOMEM. ->alloc_msg() should be the only place where we make the
skip/no-skip decision.
I stumbled upon this while looking at con/osd ref counting. Right now,
if we get a non-extent message with a larger data portion than we are
prepared for, ->alloc_msg() returns a ceph_msg, and then, when we skip
it in the messenger, we don't put the con/osd ref acquired in
ceph_con_in_msg_alloc() (which is normally put in process_message()),
so this also fixes a memory leak.
An existing BUG_ON in ceph_msg_data_cursor_init() ensures we don't
corrupt random memory should a buggy ->alloc_msg() return an unfit
ceph_msg.
While at it, I changed the "unknown tid" dout() to a pr_warn() to make
sure all skips are seen and unified format strings.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
2015-09-02 16:37:09 +08:00
|
|
|
* Lookup and return message for incoming reply. Don't try to do
|
|
|
|
* anything about a larger than preallocated data portion of the
|
|
|
|
* message at the moment - for now, just skip the message.
|
2010-02-20 13:43:23 +08:00
|
|
|
*/
|
|
|
|
static struct ceph_msg *get_reply(struct ceph_connection *con,
|
2010-01-09 05:58:34 +08:00
|
|
|
struct ceph_msg_header *hdr,
|
|
|
|
int *skip)
|
2009-10-07 02:31:10 +08:00
|
|
|
{
|
|
|
|
struct ceph_osd *osd = con->private;
|
|
|
|
struct ceph_osd_client *osdc = osd->o_osdc;
|
2016-04-28 22:07:26 +08:00
|
|
|
struct ceph_msg *m = NULL;
|
2010-01-12 06:47:13 +08:00
|
|
|
struct ceph_osd_request *req;
|
2014-01-10 02:08:21 +08:00
|
|
|
int front_len = le32_to_cpu(hdr->front_len);
|
2010-02-20 13:43:23 +08:00
|
|
|
int data_len = le32_to_cpu(hdr->data_len);
|
2016-04-28 22:07:26 +08:00
|
|
|
u64 tid = le64_to_cpu(hdr->tid);
|
2009-10-07 02:31:10 +08:00
|
|
|
|
2016-04-28 22:07:26 +08:00
|
|
|
down_read(&osdc->lock);
|
|
|
|
if (!osd_registered(osd)) {
|
|
|
|
dout("%s osd%d unknown, skipping\n", __func__, osd->o_osd);
|
|
|
|
*skip = 1;
|
|
|
|
goto out_unlock_osdc;
|
|
|
|
}
|
|
|
|
WARN_ON(osd->o_osd != le64_to_cpu(hdr->src.num));
|
|
|
|
|
|
|
|
mutex_lock(&osd->lock);
|
|
|
|
req = lookup_request(&osd->o_requests, tid);
|
2010-01-12 06:47:13 +08:00
|
|
|
if (!req) {
|
2016-02-19 18:38:57 +08:00
|
|
|
dout("%s osd%d tid %llu unknown, skipping\n", __func__,
|
|
|
|
osd->o_osd, tid);
|
libceph: check data_len in ->alloc_msg()
Only ->alloc_msg() should check data_len of the incoming message
against the preallocated ceph_msg, doing it in the messenger is not
right. The contract is that either ->alloc_msg() returns a ceph_msg
which will fit all of the portions of the incoming message, or it
returns NULL and possibly sets skip, signaling whether NULL is due to
an -ENOMEM. ->alloc_msg() should be the only place where we make the
skip/no-skip decision.
I stumbled upon this while looking at con/osd ref counting. Right now,
if we get a non-extent message with a larger data portion than we are
prepared for, ->alloc_msg() returns a ceph_msg, and then, when we skip
it in the messenger, we don't put the con/osd ref acquired in
ceph_con_in_msg_alloc() (which is normally put in process_message()),
so this also fixes a memory leak.
An existing BUG_ON in ceph_msg_data_cursor_init() ensures we don't
corrupt random memory should a buggy ->alloc_msg() return an unfit
ceph_msg.
While at it, I changed the "unknown tid" dout() to a pr_warn() to make
sure all skips are seen and unified format strings.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
2015-09-02 16:37:09 +08:00
|
|
|
*skip = 1;
|
2016-04-28 22:07:26 +08:00
|
|
|
goto out_unlock_session;
|
2010-01-12 06:47:13 +08:00
|
|
|
}
|
2010-03-02 05:02:00 +08:00
|
|
|
|
2013-04-02 05:12:14 +08:00
|
|
|
ceph_msg_revoke_incoming(req->r_reply);
|
2010-01-12 06:47:13 +08:00
|
|
|
|
2014-01-10 02:08:21 +08:00
|
|
|
if (front_len > req->r_reply->front_alloc_len) {
|
libceph: check data_len in ->alloc_msg()
Only ->alloc_msg() should check data_len of the incoming message
against the preallocated ceph_msg, doing it in the messenger is not
right. The contract is that either ->alloc_msg() returns a ceph_msg
which will fit all of the portions of the incoming message, or it
returns NULL and possibly sets skip, signaling whether NULL is due to
an -ENOMEM. ->alloc_msg() should be the only place where we make the
skip/no-skip decision.
I stumbled upon this while looking at con/osd ref counting. Right now,
if we get a non-extent message with a larger data portion than we are
prepared for, ->alloc_msg() returns a ceph_msg, and then, when we skip
it in the messenger, we don't put the con/osd ref acquired in
ceph_con_in_msg_alloc() (which is normally put in process_message()),
so this also fixes a memory leak.
An existing BUG_ON in ceph_msg_data_cursor_init() ensures we don't
corrupt random memory should a buggy ->alloc_msg() return an unfit
ceph_msg.
While at it, I changed the "unknown tid" dout() to a pr_warn() to make
sure all skips are seen and unified format strings.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
2015-09-02 16:37:09 +08:00
|
|
|
pr_warn("%s osd%d tid %llu front %d > preallocated %d\n",
|
|
|
|
__func__, osd->o_osd, req->r_tid, front_len,
|
|
|
|
req->r_reply->front_alloc_len);
|
2014-01-10 02:08:21 +08:00
|
|
|
m = ceph_msg_new(CEPH_MSG_OSD_OPREPLY, front_len, GFP_NOFS,
|
|
|
|
false);
|
2010-04-02 07:06:19 +08:00
|
|
|
if (!m)
|
2016-04-28 22:07:26 +08:00
|
|
|
goto out_unlock_session;
|
2010-03-02 05:02:00 +08:00
|
|
|
ceph_msg_put(req->r_reply);
|
|
|
|
req->r_reply = m;
|
|
|
|
}
|
2013-02-15 02:16:43 +08:00
|
|
|
|
libceph: check data_len in ->alloc_msg()
Only ->alloc_msg() should check data_len of the incoming message
against the preallocated ceph_msg, doing it in the messenger is not
right. The contract is that either ->alloc_msg() returns a ceph_msg
which will fit all of the portions of the incoming message, or it
returns NULL and possibly sets skip, signaling whether NULL is due to
an -ENOMEM. ->alloc_msg() should be the only place where we make the
skip/no-skip decision.
I stumbled upon this while looking at con/osd ref counting. Right now,
if we get a non-extent message with a larger data portion than we are
prepared for, ->alloc_msg() returns a ceph_msg, and then, when we skip
it in the messenger, we don't put the con/osd ref acquired in
ceph_con_in_msg_alloc() (which is normally put in process_message()),
so this also fixes a memory leak.
An existing BUG_ON in ceph_msg_data_cursor_init() ensures we don't
corrupt random memory should a buggy ->alloc_msg() return an unfit
ceph_msg.
While at it, I changed the "unknown tid" dout() to a pr_warn() to make
sure all skips are seen and unified format strings.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
2015-09-02 16:37:09 +08:00
|
|
|
if (data_len > req->r_reply->data_length) {
|
|
|
|
pr_warn("%s osd%d tid %llu data %d > preallocated %zu, skipping\n",
|
|
|
|
__func__, osd->o_osd, req->r_tid, data_len,
|
|
|
|
req->r_reply->data_length);
|
|
|
|
m = NULL;
|
|
|
|
*skip = 1;
|
2016-04-28 22:07:26 +08:00
|
|
|
goto out_unlock_session;
|
2010-01-12 06:47:13 +08:00
|
|
|
}
|
        m = ceph_msg_get(req->r_reply);
        dout("get_reply tid %lld %p\n", tid, m);

out_unlock_session:
        mutex_unlock(&osd->lock);
out_unlock_osdc:
        up_read(&osdc->lock);
        return m;
}

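The out_unlock_session/out_unlock_osdc labels above encode a strict lock
hierarchy: osdc->lock (outer, taken as a reader) before osd->lock
(inner), released in reverse order. A skeleton of the pattern with the
lookups elided (illustrative only, not the full get_reply()):

static struct ceph_msg *get_reply_skeleton(struct ceph_osd_client *osdc,
                                           struct ceph_osd *osd, u64 tid)
{
        struct ceph_msg *m = NULL;

        down_read(&osdc->lock);         /* outer lock first */
        if (!osd)                       /* e.g. no session for this con */
                goto out_unlock_osdc;
        mutex_lock(&osd->lock);         /* inner lock second */
        if (!tid)                       /* e.g. unknown tid -> skip */
                goto out_unlock_session;
        /* ... size-check and return the preallocated reply ... */
out_unlock_session:
        mutex_unlock(&osd->lock);       /* inner released first */
out_unlock_osdc:
        up_read(&osdc->lock);           /* outer released last */
        return m;
}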
/*
 * TODO: switch to a msg-owned pagelist
 */
static struct ceph_msg *alloc_msg_with_page_vector(struct ceph_msg_header *hdr)
{
        struct ceph_msg *m;
        int type = le16_to_cpu(hdr->type);
        u32 front_len = le32_to_cpu(hdr->front_len);
        u32 data_len = le32_to_cpu(hdr->data_len);

        m = ceph_msg_new2(type, front_len, 1, GFP_NOIO, false);
        if (!m)
                return NULL;

        if (data_len) {
                struct page **pages;
                struct ceph_osd_data osd_data;

                pages = ceph_alloc_page_vector(calc_pages_for(0, data_len),
                                               GFP_NOIO);
                if (IS_ERR(pages)) {
                        ceph_msg_put(m);
                        return NULL;
                }

                ceph_osd_data_pages_init(&osd_data, pages, data_len, 0, false,
                                         false);
                ceph_osdc_msg_data_add(m, &osd_data);
        }

        return m;
}

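calc_pages_for(0, data_len) above computes the number of pages the data
portion spans. For illustration, a userspace restatement of the same
arithmetic, assuming 4 KiB pages:

#include <stdio.h>

#define PAGE_SHIFT      12              /* assumes 4 KiB pages */
#define PAGE_SIZE       (1UL << PAGE_SHIFT)

/* pages spanned by the byte range [off, off + len),
 * mirroring the kernel's calc_pages_for() */
static unsigned long calc_pages_for(unsigned long off, unsigned long len)
{
        return ((off + len + PAGE_SIZE - 1) >> PAGE_SHIFT) -
               (off >> PAGE_SHIFT);
}

int main(void)
{
        printf("%lu\n", calc_pages_for(0, 10000));      /* 3 */
        printf("%lu\n", calc_pages_for(4000, 200));     /* 2: crosses a page boundary */
        return 0;
}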
static struct ceph_msg *alloc_msg(struct ceph_connection *con,
                                  struct ceph_msg_header *hdr,
                                  int *skip)
{
        struct ceph_osd *osd = con->private;
        int type = le16_to_cpu(hdr->type);

        *skip = 0;
        switch (type) {
        case CEPH_MSG_OSD_MAP:
        case CEPH_MSG_OSD_BACKOFF:
        case CEPH_MSG_WATCH_NOTIFY:
                return alloc_msg_with_page_vector(hdr);
        case CEPH_MSG_OSD_OPREPLY:
                return get_reply(con, hdr, skip);
        default:
                pr_warn("%s osd%d unknown msg type %d, skipping\n", __func__,
                        osd->o_osd, type);
                *skip = 1;
                return NULL;
        }
}
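For context, a sketch of how the messenger side consumes this return
convention, modeled loosely on ceph_con_in_msg_alloc() (simplified; not
the exact kernel code):

static int in_msg_alloc_sketch(struct ceph_connection *con,
                               struct ceph_msg_header *hdr)
{
        int skip = 0;
        struct ceph_msg *msg = con->ops->alloc_msg(con, hdr, &skip);

        if (!msg) {
                if (skip)
                        return 0;       /* deliberate skip: consume and discard */
                return -ENOMEM;         /* NULL without skip = allocation failure */
        }
        con->in_msg = msg;              /* messenger reads the payload into msg */
        return 0;
}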
/*
 * Wrappers to refcount containing ceph_osd struct
 */
static struct ceph_connection *get_osd_con(struct ceph_connection *con)
{
        struct ceph_osd *osd = con->private;
        if (get_osd(osd))
                return con;
        return NULL;
}

static void put_osd_con(struct ceph_connection *con)
{
        struct ceph_osd *osd = con->private;
        put_osd(osd);
}

/*
 * authentication
 */

/*
 * Note: returned pointer is the address of a structure that's
 * managed separately.  Caller must *not* attempt to free it.
 */
static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con,
                                                  int *proto, int force_new)
{
        struct ceph_osd *o = con->private;
        struct ceph_osd_client *osdc = o->o_osdc;
        struct ceph_auth_client *ac = osdc->client->monc.auth;
        struct ceph_auth_handshake *auth = &o->o_auth;

        if (force_new && auth->authorizer) {
                ceph_auth_destroy_authorizer(auth->authorizer);
                auth->authorizer = NULL;
        }
        if (!auth->authorizer) {
                int ret = ceph_auth_create_authorizer(ac, CEPH_ENTITY_TYPE_OSD,
                                                      auth);
                if (ret)
                        return ERR_PTR(ret);
        } else {
                int ret = ceph_auth_update_authorizer(ac, CEPH_ENTITY_TYPE_OSD,
                                                      auth);
                if (ret)
                        return ERR_PTR(ret);
        }
        *proto = ac->protocol;

        return auth;
}

static int add_authorizer_challenge(struct ceph_connection *con,
                                    void *challenge_buf, int challenge_buf_len)
{
        struct ceph_osd *o = con->private;
        struct ceph_osd_client *osdc = o->o_osdc;
        struct ceph_auth_client *ac = osdc->client->monc.auth;

        return ceph_auth_add_authorizer_challenge(ac, o->o_auth.authorizer,
                                                  challenge_buf, challenge_buf_len);
}

static int verify_authorizer_reply(struct ceph_connection *con)
{
        struct ceph_osd *o = con->private;
        struct ceph_osd_client *osdc = o->o_osdc;
        struct ceph_auth_client *ac = osdc->client->monc.auth;

        return ceph_auth_verify_authorizer_reply(ac, o->o_auth.authorizer);
}

static int invalidate_authorizer(struct ceph_connection *con)
{
        struct ceph_osd *o = con->private;
        struct ceph_osd_client *osdc = o->o_osdc;
        struct ceph_auth_client *ac = osdc->client->monc.auth;

        ceph_auth_invalidate_authorizer(ac, CEPH_ENTITY_TYPE_OSD);
        return ceph_monc_validate_auth(&osdc->client->monc);
}

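Taken together, the four authorizer hooks trace the messenger's
authentication handshake. An assumed call order during a (re)connect,
simplified from the cephx flow:

/*
 * 1. get_authorizer()           - build or update the authorizer
 * 2. add_authorizer_challenge() - only if the server sends a challenge
 *                                 (cephx protocol v2)
 * 3. verify_authorizer_reply()  - check the server's signed reply
 * 4. invalidate_authorizer()    - on authentication failure; drops the
 *                                 ticket and revalidates with the monitors
 */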
static void osd_reencode_message(struct ceph_msg *msg)
{
        int type = le16_to_cpu(msg->hdr.type);

        if (type == CEPH_MSG_OSD_OP)
                encode_request_finish(msg);
}

static int osd_sign_message(struct ceph_msg *msg)
{
        struct ceph_osd *o = msg->con->private;
        struct ceph_auth_handshake *auth = &o->o_auth;

        return ceph_auth_sign_message(auth, msg);
}

static int osd_check_message_signature(struct ceph_msg *msg)
{
        struct ceph_osd *o = msg->con->private;
        struct ceph_auth_handshake *auth = &o->o_auth;

        return ceph_auth_check_message_signature(auth, msg);
}

static const struct ceph_connection_operations osd_con_ops = {
        .get = get_osd_con,
        .put = put_osd_con,
        .dispatch = dispatch,
        .get_authorizer = get_authorizer,
        .add_authorizer_challenge = add_authorizer_challenge,
        .verify_authorizer_reply = verify_authorizer_reply,
        .invalidate_authorizer = invalidate_authorizer,
        .alloc_msg = alloc_msg,
        .reencode_message = osd_reencode_message,
        .sign_message = osd_sign_message,
        .check_message_signature = osd_check_message_signature,
        .fault = osd_fault,
};
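For reference, each OSD session's connection is bound to this ops table
when the session is created; a sketch of the hookup, assuming the
create_osd() path earlier in this file:

        ceph_con_init(&osd->o_con, osd, &osd_con_ops, &osdc->client->msgr);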