The wait_unsafe_requests() helper dropped the mdsc mutex to wait
for each request to complete, and then examined r_node to get the
next request after retaking the lock. But the request completion
removes the request from the tree, so r_node was always undefined
at this point. Since it's a small race, it usually led to a
valid request, but not always. The result was an occasional
crash in rb_next() while dereferencing node->rb_left.
Fix this by clearing the rb_node when removing the request from
the request tree, and not walking off into the weeds when we
are done waiting for a request. Since the request we waited on
will _always_ be out of the request tree, take a ref on the next
request, in the hopes that it won't be. But if it is, it's ok:
we can start over from the beginning (and traverse over older read
requests again).
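A rough sketch of the resulting pattern (illustrative only, NULL checks
omitted; the helper and field names follow the description above and are
not guaranteed to match the code exactly):

    /* when a request is unregistered */
    rb_erase(&req->r_node, &mdsc->request_tree);
    RB_CLEAR_NODE(&req->r_node);   /* now detectable as "not in the tree" */

    /* in wait_unsafe_requests(): pin the next request before sleeping */
    nextreq = rb_entry(rb_next(&req->r_node),
                       struct ceph_mds_request, r_node);
    ceph_mdsc_get_request(nextreq);
    mutex_unlock(&mdsc->mutex);
    wait_for_completion(&req->r_safe_completion);
    mutex_lock(&mdsc->mutex);
    if (RB_EMPTY_NODE(&nextreq->r_node)) {
            /* removed too while we slept; restart from rb_first() */
            ceph_mdsc_put_request(nextreq);
            goto restart;
    }
    ceph_mdsc_put_request(nextreq);
    req = nextreq;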
Signed-off-by: Sage Weil <sage@newdream.net>
We were releasing used caps (e.g. FILE_CACHE) from encode_inode_release
with MDS requests (e.g. setattr). We don't carry refs on most caps, so
this code worked most of the time, but for setattr (utimes) we try to
drop Fscr.
This causes cap state to get slightly out of sync with reality, and may
result in subsequent mds revoke messages getting ignored.
Fix by only releasing unused caps.
Signed-off-by: Sage Weil <sage@newdream.net>
Drop session mutex unconditionally in handle_cap_grant, and do the
check_caps from the handle_cap_grant helper. This avoids using a magic
return value.
Also avoid using a flag variable in the IMPORT case and call
check_caps at the appropriate point.
Signed-off-by: Sage Weil <sage@newdream.net>
Passing a session pointer to ceph_check_caps() used to mean it would leave
the session mutex locked. That wasn't always possible if it wasn't passed
CHECK_CAPS_AUTHONLY. It could unlock the passed session and lock a
different session mutex, which was clearly wrong, and also emitted a
warning when a racing CPU retook it and we did an unlock from the wrong
context.
This was only a problem when there was more than one MDS.
First, make ceph_check_caps unconditionally drop the session mutex, so that
it is free to lock other sessions as needed. Then adjust the one caller
that passes in a session (handle_cap_grant) accordingly.
Signed-off-by: Sage Weil <sage@newdream.net>
This causes an oops when debug output is enabled and we kick
an osd request with no current r_osd (sometime after an osd
failure). Check the pointer before dereferencing.
Signed-off-by: Sage Weil <sage@newdream.net>
Previously we would decode state directly into our current ticket_handler.
This is problematic if for some reason we fail to decode, because we end
up with half new state and half old state.
We are probably already in bad shape if we get an update we can't decode,
but we may as well be tidy anyway. Decode into new_* temporaries and
update the ticket_handler only on success.
Signed-off-by: Sage Weil <sage@newdream.net>
Release the old ticket_blob buffer when we get an updated service ticket
from the monitor. Previously these were getting leaked.
Signed-off-by: Sage Weil <sage@newdream.net>
The buffer size was incorrectly calculated for the ceph_x_encrypt()
encapsulated ticket blob. Use a helper (with correct arithmetic) and
BUG out if we were wrong.
Signed-off-by: Sage Weil <sage@newdream.net>
We were failing to reconnect to services due to an old authenticator, even
though we had the new ticket, because we weren't properly retrying the
connect handshake: we were calling an old/incorrect helper that left
in_base_pos wrong. The result was a failure to reconnect to the
OSD or MDS (with an authentication error) if the MDS restarted after the
service had been up for a few hours (long enough for the original
authenticator to become invalid). This was only a problem when AUTH_X
authentication was enabled.
Now that the 'negotiate' and 'connect' stages are fully separated, we
should use the prepare_read_connect() helper instead, and remove the
obsolete one.
Signed-off-by: Sage Weil <sage@newdream.net>
When an inode was dropped while being migrated between two MDSs,
i_cap_exporting_issued was non-zero such that the issued caps were non-zero and
__ceph_is_any_caps(ci) was true. This prevented the inode from being
removed from the snap realm, even as it was dropped from the cache.
Fix this by dropping any residual i_snap_realm ref in destroy_inode.
Signed-off-by: Sage Weil <sage@newdream.net>
All ci->i_snap_realm_item/realm->inodes_with_caps manipulation should be
protected by realm->inodes_with_caps_lock. This bug would only have bitten
us in a rare race with a realm split (during some snap creations).
Signed-off-by: Sage Weil <sage@newdream.net>
Added assertion, and cleared one case where the implemented caps were
not following the issued caps.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
This simplifies the process of timing out messages. We
keep an lru of the messages currently in flight. If a
timeout has passed, we reset the osd connection, so that
messages will be retransmitted. This is a failsafe in case
we hit some sort of problem sending out messages to the OSD.
Normally, we'll get notification via an updated osdmap if
there are problems.
If a request is older than the keepalive timeout, send a
keepalive to ensure we detect any breaks in the TCP connection.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
flush_dirty_caps() used to loop over the first entry of the cap_dirty
list on the assumption that after calling ceph_check_caps() it would
be removed from the list. This isn't true for caps that are being
migrated between MDSs, where we've received the EXPORT but not the IMPORT.
Instead, do a safe list iteration, and pin the next inode on the list via
the CEPH_I_NOFLUSH flag.
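Schematically, the new loop looks something like this (a simplified
sketch; inode refcounting and other details are omitted, and the names
are taken from the description above rather than the exact code):

    spin_lock(&mdsc->cap_dirty_lock);
    list_for_each_entry_safe(ci, nci, &mdsc->cap_dirty, i_dirty_item) {
            /* pin the *next* inode so it stays on cap_dirty while we
             * drop the lock to flush this one */
            if (&nci->i_dirty_item != &mdsc->cap_dirty)
                    nci->i_ceph_flags |= CEPH_I_NOFLUSH;
            spin_unlock(&mdsc->cap_dirty_lock);
            ceph_check_caps(ci, CHECK_CAPS_NODELAY | CHECK_CAPS_FLUSH, NULL);
            spin_lock(&mdsc->cap_dirty_lock);
            if (&nci->i_dirty_item != &mdsc->cap_dirty)
                    nci->i_ceph_flags &= ~CEPH_I_NOFLUSH;
    }
    spin_unlock(&mdsc->cap_dirty_lock);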
Signed-off-by: Sage Weil <sage@newdream.net>
We should include caps that are mid-migration (we've received the EXPORT,
but not the IMPORT) in the issued caps set.
Signed-off-by: Sage Weil <sage@newdream.net>
Verify the file is actually open for the given caps when we are
waiting for caps. This ensures we will wake up and return EBADF
if another thread closes the file out from under us.
Note that EBADF is also the correct return code from write(2)
when called on a file handle opened for reading (although the
vfs should catch that).
Signed-off-by: Sage Weil <sage@newdream.net>
We didn't set the front length correctly. When messages used
the message pool we ended up with the conservative max (4 KB), and
the rest of the time the slightly less conservative estimate. Even
though the OSD ignores the extra data, set it to the right value to avoid
sending extra data over the network.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Reset msg front len when a message is returned to the pool: the caller
may have changed it.
BUG if we try to send a message with a hdr.front_len that doesn't match
the front iov.
Signed-off-by: Sage Weil <sage@newdream.net>
This was simply broken. Apparently at some point we thought about putting
the snaptrace in the middle section, but didn't.
Signed-off-by: Sage Weil <sage@newdream.net>
Clear LOSSYTX bit, so that if/when we reconnect, said reconnect
will retry on failure.
Clear _PENDING bits too, to avoid polluting subsequent
connection state.
Drop unused REGISTERED bit.
Signed-off-by: Sage Weil <sage@newdream.net>
Move any out_sent messages to out_queue _before_ checking if
out_queue is empty and going to STANDBY, or else we may drop
something that was never acked.
And clean up the code a bit (less goto).
Signed-off-by: Sage Weil <sage@newdream.net>
This fixes an ABBA lock inversion, as the ->invalidate_authorizer()
op may need to take a lock (or even call back into the
messenger).
Signed-off-by: Sage Weil <sage@newdream.net>
The tid is in the message header, not body. Broken since 6df058c0.
No need to look at next mds session; just mark the request and be done.
(The old error path was broken too, but now it's gone.)
Signed-off-by: Sage Weil <sage@newdream.net>
Verify the mds session is currently registered before handling
incoming messages. Clean up message handlers to pull mds out
of session->s_mds instead of less trustworthy src field.
Clean up con_{get,put} debug output.
Signed-off-by: Sage Weil <sage@newdream.net>
The destroy_inode path needs no inode locks since there are no
inode references. Update __ceph_remove_cap comment to reflect
that it is called without cap->session->s_mutex in this case.
Signed-off-by: Sage Weil <sage@newdream.net>
There is no state in local vars that requires us to loop after temporarily
dropping i_lock.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Instead of truncating the whole range of pages, we skip those
pages that are dirty or in the middle of writeback. Those pages
will be cleared later when the writeback completes.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
This page should have been removed earlier when the cache cap was
revoked, but a writeback was in flight, so it was skipped. We truncate
it here just as the writeback finishes, while it's still locked.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
We need to know whether there was any page left behind, and not the
return value (the total number of pages invalidated). Look at the mapping
to see if we were successful or not.
Move it all into a helper to simplify the two callers.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Since we can now create and destroy pg pools, the pool ids will be sparse,
and an array no longer makes sense for looking up by pool id. Use an
rbtree instead.
The OSDMap encoding also no longer has a max pool count (previously used to
allocate the array). There is a new pool_max, which is the largest pool id
we've ever used, although we don't actually need it in the client.
Signed-off-by: Sage Weil <sage@newdream.net>
We need to be able to iterate over all caps on a session with a
possibly slow callback on each cap. To allow this, we used to
prevent cap reordering while we were iterating. However, we were
not safe from races with removal: removing the 'next' cap would
make the next pointer from list_for_each_entry_safe invalid, and
cause a lockup or similar badness.
Instead, we keep an iterator pointer in the session pointing to
the current cap. As before, we avoid reordering. For removal,
if the cap isn't the current cap we are iterating over, we are
fine. If it is, we clear cap->ci (to mark the cap as pending
removal) but leave it in the session list. In iterate_caps, we
can safely finish removal and get the next cap pointer.
While we're at it, clean up put_cap to not take a cap reservation
context, as it was never used.
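The removal handling described above looks roughly like this (a sketch;
names follow the description and may not match the code exactly):

    /* in the cap removal path */
    if (session->s_cap_iterator == cap) {
            /* iteration is currently visiting this cap: defer the
             * unlink from session->s_caps and just mark it dead */
            cap->ci = NULL;
    } else {
            list_del_init(&cap->session_caps);
            session->s_nr_caps--;
            ceph_put_cap(cap);
    }

    /* in iterate_session_caps(), after the callback returns */
    session->s_cap_iterator = next;
    if (cap->ci == NULL) {
            /* removal was deferred; finish it now that we've moved on */
            list_del_init(&cap->session_caps);
            session->s_nr_caps--;
            ceph_put_cap(cap);
    }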
Signed-off-by: Sage Weil <sage@newdream.net>
Use a global counter for the minimum number of allocated caps instead of
hard coding a check against readdir_max. This takes into account multiple
client instances, and avoids examining the superblock mount options when a
cap is dropped.
Signed-off-by: Sage Weil <sage@newdream.net>
Call __validate_auth() under monc->mutex, and use helper for
initial hello so that the pending_auth flag is set. This fixes
possible races in which we have an authentication request (hello
or otherwise) pending and send another one. In particular, with
auth_none, we _never_ want to call ceph_build_auth() from
__validate_auth(), since the ->build_request() method is NULL.
Signed-off-by: Sage Weil <sage@newdream.net>
An rbtree is lighter weight, particularly given we will generally have
very few in-flight statfs requests.
Signed-off-by: Sage Weil <sage@newdream.net>
Switch from radix tree to rbtree for snap realms. This is much more
appropriate given that realm keys are few and far between.
Signed-off-by: Sage Weil <sage@newdream.net>
The rbtree is a more appropriate data structure than a radix_tree. It
avoids extra memory usage and simplifies the code.
It also fixes a bug where the debugfs 'mdsc' file wasn't including the
most recent mds request.
Signed-off-by: Sage Weil <sage@newdream.net>
This ensures that if/when we reopen the connection, we can requeue work on
the connection immediately, without waiting for an old timer to expire.
Queue new delayed work inside con->mutex to avoid any race.
This fixes problems with clients failing to reconnect to the MDS due to
the client_reconnect message arriving too late (due to waiting for an old
delayed work timeout to expire).
Signed-off-by: Sage Weil <sage@newdream.net>
Fix the messenger to allow a ceph_con_open() during the fault callback.
Previously the work wasn't getting queued on the connection because the
fault path avoids requeued work (normally spurious). Loop on reopening by
checking for the OPENING state bit.
This fixes OSD reconnects when a TCP connection drops.
Signed-off-by: Sage Weil <sage@newdream.net>
A single osd connection fault (e.g. tcp disconnect) wasn't
reopening the connection, which caused all current and future
requests for that osd to hang.
Signed-off-by: Sage Weil <sage@newdream.net>
The test was backwards from commit b3d1dbbd: keep the message if the
connection _isn't_ lossy. This allows the client to continue when the
TCP connection drops for some reason (network glitch) but both ends
survive.
Signed-off-by: Sage Weil <sage@newdream.net>
We were invalidating mapping pages when dropping FILE_CACHE in
__send_cap(). But ceph_check_caps attempts to invalidate already, and
also checks for success, so we should never get to this point.
Signed-off-by: Sage Weil <sage@newdream.net>
If a sync read gets a short result from the OSD, it may need to do a
getattr to see if it is short due to reaching end-of-file. The getattr
was being done while holding a reference to FILE_RD, which can lead to
a deadlock if the MDS is revoking that capability bit and can't process
the getattr until it does.
We fix this by setting a flag if EOF size validation is needed, and doing
the getattr in ceph_aio_read, after the RD cap ref is dropped. If the
read needs to be continued, we loop and continue traversing the file.
Signed-off-by: Sage Weil <sage@newdream.net>
Try to invalidate pages in ceph_check_caps() if FILE_CACHE is being
revoked. If we fail, queue an immediate async invalidate if FILE_CACHE
is being revoked. (If it's not being revoked, we just queue the caps
for evaluation later, as per the old behavior.)
Signed-off-by: Sage Weil <sage@newdream.net>
In the cases where we either do a sync read or a write, we
need to make sure that everything in the page cache is flushed.
In the case of a sync write we invalidate the relevant pages,
so that subsequent read/write reflects the new data written.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
A truncation should occur when either we have the
specified caps for the file, or (in cases where we are
not the only ones referencing the file) when it is mapped
or when it is opened. The latter two cases were not
handled.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Originally ceph_page_mkwrite called ceph_write_begin, hoping that
the returned locked page would be the page that it was requested
to mkwrite. Factor out the relevant part so that ceph_page_mkwrite
locks the right page directly.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Zeroing of holes was not done correctly: page_off was miscalculated and
zeroing the tail did not adjust the 'read' value to include the zeroed
portion.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Instead of removing the osd connection immediately when the
requests list is empty, put the osd connection on an lru.
It is removed only if that osd has not been used for more
than a specified time.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
The auth_x protocol implements support for a kerberos-like mutual
authentication infrastructure used by Ceph. We do not simply use vanilla
kerberos because of scalability and performance issues when dealing with
a large cluster of nodes providing a single logical service.
Auth_x provides mutual authentication of client and server and protects
against replay and man-in-the-middle attacks. It does not encrypt
the full session over the wire, however, so data payload may still be
snooped.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Add infrastructure to allow the mon_client to periodically renew its auth
credentials. Also add a messenger callback that will force such a renewal
if a peer rejects our authenticator.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Helper for decoding into a ceph_buffer, and other misc decoding helpers
we will need.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
We release all the pages, even if the osd response was
different from the number of pages written. This could only
happen due to a truncation that arrives at the osd in a
different order, in which case we want the pages released anyway.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
This fixes a bug where the read/write ops arrive at the osd after
a later truncation request.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
We never truncate to a smaller size without contacting the MDS.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Include a type/version in ceph_entity_addr and filepath. Include an extra
byte in the filepath encoding as necessary.
Signed-off-by: Sage Weil <sage@newdream.net>
This includes handling all the data preallocation and revocation
in one place, without needing a special case for
the reserved pages.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
This is now done in the same callback that is also responsible for
allocating the 'front' part of the message. If we get a message
for which we have no corresponding tid, mark it for skipping.
Also move the mutex unlock/lock from the osd alloc_msg callback
to the calling function in the messenger.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Previously, if the MDS request was interrupted, we would unregister the
request and ignore any reply. This could cause the caps or other cache
state to become out of sync. (For instance, aborting dbench and doing
rm -r on clients would complain about a non-empty directory because the
client didn't realize its aborted file create request had completed.)
Even though we don't unregister, we still can't process the reply normally because
we are no longer holding the caller's locks (like the dir i_mutex).
So, mark aborted operations with r_aborted, and in the reply handler, be
sure to process all the caps. Do not process the namespace changes,
though, since we no longer will hold the dir i_mutex. The dentry lease
state can also be ignored as it's more forgiving.
Signed-off-by: Sage Weil <sage@newdream.net>
The variable client is initialized twice to the same (side effect-free)
expression. Drop one initialization.
A simplified version of the semantic match that finds this problem is:
(http://coccinelle.lip6.fr/)
// <smpl>
@forall@
idexpression *x;
identifier f!=ERR_PTR;
@@
x = f(...)
... when != x
(
x = f(...,<+...x...+>,...)
|
* x = f(...)
)
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Sage Weil <sage@newdream.net>
The ceph_entity_addr erank field is obsolete; remove it. Get rid of
trivial addr comparison helpers while we're at it.
Signed-off-by: Sage Weil <sage@newdream.net>
This fixes a bug where the parent list had dentries with
offsets that were not monotonically increasing, which caused the ceph
dcache_readdir to skip entries.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
The function was broken in the case where more than one page was
involved, which broke the ceph sync_write case.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Use the ceph_pagelist to encode the MDS reconnect message. We change the
message encoding (protocol change!) at the same time to make our life
easier (we don't know how many snaprealms we have when we start encoding).
An empty message implies the session is closed/does not exist.
Signed-off-by: Sage Weil <sage@newdream.net>
The ceph_pagelist is a simple list of whole pages, strung together via
their lru list_head. It facilitates encoding to a "buffer" of unknown
size. Allow its use in place of the ceph_msg page vector.
This will be used to fix the huge buffer preallocation woes of MDS
reconnection.
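Roughly, the structure looks like this (field names are illustrative):

    struct ceph_pagelist {
            struct list_head head;  /* whole pages, chained via page->lru */
            void *mapped_tail;      /* kmap of the current (last) page */
            size_t length;          /* total bytes appended so far */
            size_t room;            /* bytes still free in the last page */
    };

Appending copies into the tail page and allocates/maps a fresh page
whenever 'room' runs out, so the caller never has to size the buffer up
front.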
Signed-off-by: Sage Weil <sage@newdream.net>
Define supported and required feature set. Fail connection if the server
requires features we do not support (TAG_FEATURES), or if the server does
not support features we require.
Signed-off-by: Sage Weil <sage@newdream.net>
Many (most?) message types include a transaction id. By including it in
the fixed size header, we always have it available even when we are unable
to allocate memory for the (larger, variable sized) message body. This
will allow us to error out the appropriate request instead of (silently)
dropping the reply.
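An abbreviated sketch of the fixed-size header (not the full layout;
only the fields relevant here are shown):

    struct ceph_msg_header {
            __le64 seq;        /* message seq# for this session */
            __le64 tid;        /* transaction id */
            __le16 type;       /* message type */
            __le16 priority;
            __le32 front_len;  /* bytes in main payload */
            /* ... */
    } __attribute__ ((packed));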
Signed-off-by: Sage Weil <sage@newdream.net>
When we issue an OSD read, we specify a vector of pages that the data is to
be read into. The request may be sent multiple times, to multiple OSDs, if
the osdmap changes, which means we can get more than one reply.
Only read data into the page vector if the reply is coming from the
OSD we last sent the request to. Keep track of which connection is using
the vector by taking a reference. If another connection was already
using the vector before and a new reply comes in on the right connection,
revoke the pages from the other connection.
Signed-off-by: Sage Weil <sage@newdream.net>
Use a single mutex (previously out_mutex) to protect both read and write
activity from concurrent ceph_con_* calls. Drop the mutex when doing
callbacks to avoid nested locking (the callback may need to call something
like ceph_con_close).
Signed-off-by: Sage Weil <sage@newdream.net>
Canceled or timed out osd requests were getting left in the request list
and never deallocated (until umount). Unregister if they are canceled
(control-c) or time out.
Signed-off-by: Sage Weil <sage@newdream.net>
To avoid confusing iterate_session_caps(), flag the session while we are
iterating so that __touch_cap does not rearrange items on the list.
All other modifiers of session->s_caps do so under the protection of
s_mutex.
Signed-off-by: Sage Weil <sage@newdream.net>
An incremental pg_temp wasn't being decoded properly (wrong bound on
for loop).
Also remove unused local variable, while we're at it.
Signed-off-by: Sage Weil <sage@newdream.net>
We need to hold session s_mutex for __ceph_mdsc_drop_dentry_lease(), which
we don't, so skip it. It was purely an optimization.
Signed-off-by: Sage Weil <sage@newdream.net>
This works around a bug in vfs_rename_dir() that rehashes the target
dentry. Ensure such dentries always fail revalidation by timing out the
dentry lease and kicking it out of the current directory lease gen.
This can be reverted when the vfs bug is fixed.
Signed-off-by: Sage Weil <sage@newdream.net>
Set the bdi congestion bit when the amount of write data in flight exceeds
an adjustable threshold.
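For example, the writeback path can do something along these lines (a
sketch; the counter and threshold names are illustrative, not the exact
symbols):

    long inflight = atomic_long_read(&client->writeback_count);

    if (inflight > congestion_on_thresh(congestion_kb))
            set_bdi_congested(&client->backing_dev_info, BLK_RW_ASYNC);
    else if (inflight < congestion_off_thresh(congestion_kb))
            clear_bdi_congested(&client->backing_dev_info, BLK_RW_ASYNC);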
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
This fixes a deadlock triggered from kswapd while the page was locked
and iput couldn't tear down the address space.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
If we explicitly close a connection, or there is a socket error, we need
to drop any partially received message.
Signed-off-by: Sage Weil <sage@newdream.net>
For lossy connections we drop all state on socket errors, so there is no
reason to keep sent ceph_msg's around.
Signed-off-by: Sage Weil <sage@newdream.net>
The server indicates whether a connection is lossy; set our LOSSYTX bit
appropriately. Do not set lossy bit on outgoing connections.
Signed-off-by: Sage Weil <sage@newdream.net>
We never allocate the ceph_buffer and buffer separately, so use a single
constructor.
Disallow put on NULL buffer; make the caller check.
Signed-off-by: Sage Weil <sage@newdream.net>
There is certainly no reason not to report this.
The only real downside to allowing the user to set it is that you don't
get default values by zeroing the layout struct (the default is -1).
Signed-off-by: Sage Weil <sage@newdream.net>
We need to skip /.ceph in (cached) readdir results, and exclude "/.ceph"
from the cached ENOENT lookup check.
Signed-off-by: Sage Weil <sage@newdream.net>
ceph_lookup_snap_realm either returns a valid pointer or NULL; there is no
need to check IS_ERR(result).
Reported-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Sage Weil <sage@newdream.net>
If the NULL test is necessary, then the dereference should be moved below
the NULL test.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/).
// <smpl>
@@
type T;
expression E;
identifier i,fld;
statement S;
@@
- T i = E->fld;
+ T i;
... when != E
when != i
if (E == NULL) S
+ i = E->fld;
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Sage Weil <sage@newdream.net>
Reset the backoff delay when we reopen the connection, so that the delays
for any initial connection problems are reasonable. We were resetting only
after a successful handshake, which was of limited utility.
Signed-off-by: Sage Weil <sage@newdream.net>
The max_size increase request to the MDS can get lost during an MDS
restart and reconnect. Reset our requested value after the MDS recovers,
so that any blocked writes will re-request a larger max_size upon waking.
Also, explicitly wake session caps after the reconnect. Normally the cap
renewal catches this, but not in the cases where the caps didn't go stale
in the first place, which would leave writers waiting on max_size asleep.
Signed-off-by: Sage Weil <sage@newdream.net>
The mds map now uses the global_id as the 'key' (instead of the addr,
which was a poor choice).
This is a protocol change.
Signed-off-by: Sage Weil <sage@newdream.net>
We may first learn our fsid from any of the mon, osd, or mds maps
(whichever the monitor sends first). Consolidate checks in a single
helper. Initialize the client debugfs entry then, since we need the
fsid (and global_id) for the directory name.
Also remove dead mount code.
Signed-off-by: Sage Weil <sage@newdream.net>
When we open a monitor session, we send an initial AUTH message listing
the auth protocols we support, our entity name, and (possibly) a previously
assigned global_id. The monitor chooses a protocol and responds with an
initial message.
Initially implement AUTH_NONE, a dummy protocol that provides no security,
but works within the new framework. It generates 'authorizers' that are
used when connecting to (mds, osd) services that simply state our entity
name and global_id.
This is a wire protocol change.
Signed-off-by: Sage Weil <sage@newdream.net>
We require that ceph_con_close be called before we drop the connection,
so this is unneeded. Just BUG if con->sock != NULL.
Signed-off-by: Sage Weil <sage@newdream.net>
We want to ceph_con_close when we're done with the connection, before
the ref count reaches 0. Once it does, do not call ceph_con_shutdown,
as that takes the con mutex and may sleep, and besides that is
unnecessary.
Signed-off-by: Sage Weil <sage@newdream.net>
We occasionally want to make a best-effort attempt to invalidate cache
pages without fear of blocking. If this fails, we fall back to an async
invalidate in another thread.
Use invalidate_mapping_pages instead of invalidate_inode_pages2, as that
will skip locked pages, and not deadlock.
Signed-off-by: Sage Weil <sage@newdream.net>
This helps the user know what's going on during the (involved) reconnect
process. They already see when the mds fails and reconnect starts.
Signed-off-by: Sage Weil <sage@newdream.net>
We don't get an explicit affirmative confirmation that our caps reconnect,
nor do we necessarily want to pay that cost. So, take all this code out
for now.
Signed-off-by: Sage Weil <sage@newdream.net>
We need to make sure we only swab the address during the banner once. So
break process_banner out of process_connect, and clean up the surrounding
code so that these are distinct phases of the handshake.
Signed-off-by: Sage Weil <sage@newdream.net>
We were using the cap_gen to track both stale caps (caps that timed out
due to temporarily losing touch with the mds) and dead caps that did not
reconnect after an MDS failure. Introduce a recon_gen counter to track
reconnections to restarted MDSs and kill dead caps based on that instead.
Rename gen to cap_gen while we're at it to make it more clear which is
which.
Signed-off-by: Sage Weil <sage@newdream.net>
Make the integer hash function a property of the bucket it is used on. This
allows us to gracefully add support for new hash functions without starting
from scratch.
Signed-off-by: Sage Weil <sage@newdream.net>
The object will be hashed to a placement seed (ps) based on the pg_pool's
hash function. This allows new hashes to be introduced into an existing
object store, or selection of a hash appropriate to the objects that
will be stored in a particular pool.
Signed-off-by: Sage Weil <sage@newdream.net>
We were using the (weak) dcache hash function, but it was leaving lower
bits consecutive for consecutive (inode) objects. We really want to make
the object to pg mapping random and uniform, so use a proper hash function
here.
This is Robert Jenkins' public domain hash function (with some minor
cleanup):
http://burtleburtle.net/bob/hash/evahash.html
This is a protocol revision.
Signed-off-by: Sage Weil <sage@newdream.net>
The endian conversions don't quite work with the old union ceph_pg. Just
make it a regular struct, and make each field __le. This is simpler and it
has the added bonus of actually working.
Signed-off-by: Sage Weil <sage@newdream.net>
We exchange struct ceph_entity_addr over the wire and store it on disk.
The sockaddr_storage.ss_family field, however, is host endianness. So,
fix ss_family endianness to big endian when sending/receiving over the
wire.
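For example (a minimal sketch, assuming helper names; the real code may
differ):

    /* convert in place just before encoding to the wire ... */
    static void ceph_encode_addr(struct ceph_entity_addr *a)
    {
            a->in_addr.ss_family = htons(a->in_addr.ss_family);
    }

    /* ... and back right after decoding */
    static void ceph_decode_addr(struct ceph_entity_addr *a)
    {
            a->in_addr.ss_family = ntohs(a->in_addr.ss_family);
    }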
Signed-off-by: Sage Weil <sage@newdream.net>
Always return a value, even when we encounter a corrupt bucket (we
still BUG()). This fixes the warning
fs/ceph/crush/mapper.c: In function 'crush_choose':
fs/ceph/crush/mapper.c:352: warning: control may reach end of non-void function
'crush_bucket_choose' being inlined
Signed-off-by: Sage Weil <sage@newdream.net>
Fixes warning
fs/ceph/xattr.c: In function '__build_xattrs':
fs/ceph/xattr.c:353: warning: 'err' may be used uninitialized in this function
Signed-off-by: Sage Weil <sage@newdream.net>
Commit 645a102581 fixes calculation of object
offset for layouts with multiple stripes per object. This updates the
calculation of the length written to take into account multiple stripes per
object.
Signed-off-by: Noah Watkins <noah@noahdesu.com>
Signed-off-by: Sage Weil <sage@newdream.net>
We were incorrectly calculating the object offset. If we have multiple
stripe units per object, we need to shift to the start of the current
su in addition to the offset within the su.
Also rename bno to ono (object number) to avoid some variable naming
confusion.
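For illustration, the corrected mapping from a file offset to an object
number and object offset works roughly like this (plain C, not the
kernel code; su = stripe unit, sc = stripe count, os = object size):

    #include <stdint.h>

    static void map_file_offset(uint64_t off, uint32_t su, uint32_t sc,
                                uint32_t os, uint64_t *ono, uint64_t *oxoff)
    {
            uint32_t su_per_obj = os / su;
            uint64_t bl        = off / su;   /* which su in the file */
            uint64_t stripeno  = bl / sc;    /* which stripe (row) */
            uint32_t stripepos = bl % sc;    /* which object in the stripe */
            uint64_t objsetno  = stripeno / su_per_obj;

            *ono = objsetno * sc + stripepos;   /* object number */
            /* start of the current su within the object, plus offset in su */
            *oxoff = (stripeno % su_per_obj) * su + off % su;
    }

For example, with su=4096, sc=2 and os=8192, a file offset of 12288 maps
to object 1 at object offset 4096; without the shift to the start of the
current su, the object offset would wrongly come out as 0.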
Signed-off-by: Sage Weil <sage@newdream.net>
The object extent offset is the file offset _modulo_ the stripe unit.
The code was correct, the comment was wrong.
Reported-by: Noah Watkins <jayhawk@soe.ucsc.edu>
Signed-off-by: Sage Weil <sage@newdream.net>
Use the stripe unit size already calculated and saved on the stack to avoid
a redundant call to le32_to_cpu.
Signed-off-by: Noah Watkins <noah@noahdesu.com>
Signed-off-by: Sage Weil <sage@newdream.net>
Usage of list_entry() for generic container_of functionality (on
members that are not list.h lists) is replaced with direct use of
container_of().
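For reference, list_entry() is just container_of() under another name,
so calling container_of() directly is clearer when the member is not
part of a list.h list. A generic illustration (not the actual ceph call
sites):

    #include <stddef.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct foo {
            int refcount;
            int payload;
    };

    /* recover the enclosing struct from a pointer to one of its members */
    static struct foo *foo_from_payload(int *p)
    {
            return container_of(p, struct foo, payload);
    }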
Signed-off-by: Noah Watkins <noah@noahdesu.com>
Signed-off-by: Sage Weil <sage@newdream.net>
This simplifies much of the error handling during mount. It also means
that we have the mount args before client creation, and we can initialize
based on those options.
Signed-off-by: Sage Weil <sage@newdream.net>
Since we've increased the max mon count, we shouldn't put the addr array
on the parse_mount_args stack. Put it on the heap instead.
Signed-off-by: Sage Weil <sage@newdream.net>
Get rid of separate max mon limit; use the system limit instead. This
allows mounts when there are lots of mon addrs provided by mount.ceph (as
with a host with lots of A/AAAA records).
Signed-off-by: Sage Weil <sage@newdream.net>
We can't fill i_size with rbytes at the fill_file_size stage without
adding additional checks for directories. Notably, we want st_blocks
to remain 0 on directories so that 'du' still works.
Fill in i_blocks, i_size specially in ceph_getattr instead.
Signed-off-by: Sage Weil <sage@newdream.net>
Mix the preferred osd (if any) into the placement seed that is fed into
the CRUSH object placement calculation. This prevents all the placement
pgs from peering with the same osds.
Rev the osd client protocol with this change.
Signed-off-by: Sage Weil <sage@newdream.net>
Pass the front_len we need when pulling a message off a msgpool,
and WARN if it is greater than the pool's size. Then try to
allocate a new message (to continue without failing).
Signed-off-by: Sage Weil <sage@newdream.net>
Previously we were flushing dirty caps by passing an extra flag
when traversing the delayed caps list. Besides being a bit ugly,
that can also miss caps that are dirty but didn't result in a
cap requeue: notably, mark_caps_dirty().
Separate the flushing into a separate helper, and traverse the
cap_dirty list.
This also brings i_dirty_item in line with i_dirty_caps: we are
on the list IFF caps != 0. We carry an inode ref IFF
dirty_caps|flushing_caps != 0.
Lose the unused return value from __ceph_mark_caps_dirty().
Signed-off-by: Sage Weil <sage@newdream.net>
Writeback doesn't work without the bdi set, and writeback on
umount doesn't work if we unregister the bdi too early.
Signed-off-by: Sage Weil <sage@newdream.net>
This makes it easier for individual message types to indicate
their particular encoding, and to make future changes backward
compatible.
Signed-off-by: Sage Weil <sage@newdream.net>
This ensures we don't submit the same request twice if we are kicking a
specific osd (as with an osd_reset), or when we hit a transient error and
resend.
Signed-off-by: Sage Weil <sage@newdream.net>
The peer_reset just takes longer (until we reconnect and discover the osd
dropped the session... which it will).
Signed-off-by: Sage Weil <sage@newdream.net>
If an osd has failed or returned and a request has been sent twice, it's
possible to get a reply and unregister the request while the request
message is queued for delivery. Since the message references the caller's
page vector, we need to revoke it before completing.
Signed-off-by: Sage Weil <sage@newdream.net>
The osd request submission path registers the request, drops and retakes
the request_mutex, then sends it to the OSD. A racing kick_requests could
send it during that interval, causing the same msg to be sent twice and
BUGing in the msgr.
Fix by only sending the message if it hasn't been touched by other
threads.
Signed-off-by: Sage Weil <sage@newdream.net>
Be conservative: renew subscription once half the interval has expired.
Do not reuse sub expiration to control hunting.
Signed-off-by: Sage Weil <sage@newdream.net>
Basic state information is available via /sys/kernel/debug/ceph,
including instances of the client, fsids, current monitor, mds and osd
maps, outstanding server requests, and hooks to adjust debug levels.
Signed-off-by: Sage Weil <sage@newdream.net>
A few Ceph ioctls for getting and setting file layout (striping)
parameters, and learning the identity and network address of the OSD a
given region of a file is stored on.
Signed-off-by: Sage Weil <sage@newdream.net>
Basic NFS re-export support is included. This mostly works. However,
Ceph's MDS design precludes the ability to generate a (small)
filehandle that will be valid forever, so this is of limited utility.
Signed-off-by: Sage Weil <sage@newdream.net>
The msgpool is a basic mempool_t-like structure to preallocate
messages we expect to receive over the wire. This ensures we have the
necessary memory preallocated to process replies to requests, or to
process unsolicited messages from various servers.
Signed-off-by: Sage Weil <sage@newdream.net>
A generic message passing library is used to communicate with all
other components in the Ceph file system. The messenger library
provides ordered, reliable delivery of messages between two nodes in
the system.
This implementation is based on TCP.
Signed-off-by: Sage Weil <sage@newdream.net>
Ceph snapshots rely on client cooperation in determining which
operations apply to which snapshots, and appropriately flushing
snapshotted data and metadata back to the OSD and MDS clusters.
Because snapshots apply to subtrees of the file hierarchy and can be
created at any time, there is a fair bit of bookkeeping required to
make this work.
Portions of the hierarchy that belong to the same set of snapshots
are described by a single 'snap realm.' A 'snap context' describes
the set of snapshots that exist for a given file or directory.
Signed-off-by: Sage Weil <sage@newdream.net>
The Ceph metadata servers control client access to inode metadata and
file data by issuing capabilities, granting clients permission to read
and/or write both inode fields and file data to OSDs (storage nodes).
Each capability consists of a set of bits indicating which operations
are allowed.
If the client holds a *_SHARED cap, the client has a coherent value
that can be safely read from the cached inode.
In the case of *_EXCL (exclusive) or FILE_WR capabilities, the client
is allowed to change inode attributes (e.g., file size, mtime), note
its dirty state in the ceph_cap, and asynchronously flush that
metadata change to the MDS.
In the event of a conflicting operation (perhaps by another client),
the MDS will revoke the conflicting client capabilities.
In order for a client to cache an inode, it must hold a capability
with at least one MDS server. When inodes are released, release
notifications are batched and periodically sent en masse to the MDS
cluster to release server state.
Signed-off-by: Sage Weil <sage@newdream.net>
The monitor cluster is responsible for managing cluster membership
and state. The monitor client handles what minimal interaction
the Ceph client has with it: checking for updated versions of the
MDS and OSD maps, getting statfs() information, and unmounting.
Signed-off-by: Sage Weil <sage@newdream.net>
CRUSH is a pseudorandom data distribution function designed to map
inputs onto a dynamic hierarchy of devices, while minimizing the
extent to which inputs are remapped when the devices are added or
removed. It includes some features that are specifically useful for
storage, most notably the ability to map each input onto a set of N
devices that are separated across administrator-defined failure
domains. CRUSH is used to distribute data across the cluster of Ceph
storage nodes.
More information about CRUSH can be found in this paper:
http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf
Signed-off-by: Sage Weil <sage@newdream.net>
The OSD client is responsible for reading and writing data from/to the
object storage pool. This includes determining where objects are
stored in the cluster, and ensuring that requests are retried or
redirected in the event of a node failure or data migration.
If an OSD does not respond before a timeout expires, keepalive
messages are sent across the lossless, ordered communications channel
to ensure that any break in the TCP connection is discovered. If the session
does reset, a reconnection is attempted and affected requests are
resent (by the message transport layer).
Signed-off-by: Sage Weil <sage@newdream.net>
The MDS (metadata server) client is responsible for submitting
requests to the MDS cluster and parsing the response. We decide which
MDS to submit each request to based on cached information about the
current partition of the directory hierarchy across the cluster. A
stateful session is opened with each MDS before we submit requests to
it, and a mutex is used to control the ordering of messages within
each session.
An MDS request may generate two responses. The first indicates the
operation was a success and returns any result. A second reply is
sent when the operation commits to disk. Note that locking on the MDS
ensures that the results of updates are visible only to the updating
client before the operation commits. Requests are linked to the
containing directory so that an fsync will wait for them to commit.
If an MDS fails and/or recovers, we resubmit requests as needed. We
also reconnect existing capabilities to a recovering MDS to
reestablish that shared session state. Old dentry leases are
invalidated.
Signed-off-by: Sage Weil <sage@newdream.net>
The ceph address space methods are concerned primarily with managing
the dirty page accounting in the inode, which (among other things)
must keep track of which snapshot context each page was dirtied in,
and ensure that dirty data is written out to the OSDs in snapshot
order.
A writepage() on a page that is not currently writeable due to
snapshot writeback ordering constraints is ignored (it was presumably
called from kswapd).
Signed-off-by: Sage Weil <sage@newdream.net>
File open and close operations, and read and write methods that ensure
we have obtained the proper capabilities from the MDS cluster before
performing IO on a file. We take references on held capabilities for
the duration of the read/write to avoid prematurely releasing them
back to the MDS.
We implement two main paths for read and write: one that is buffered
(and uses generic_aio_{read,write}), and one that is fully synchronous
and blocking (operating either on a __user pointer or, if O_DIRECT,
directly on user pages).
Signed-off-by: Sage Weil <sage@newdream.net>
Directory operations, including lookup, are defined here. We take
advantage of lookup intents when possible. For the most part, we just
need to build the proper requests for the metadata server(s) and
pass things off to the mds_client.
The results of most operations are normally incorporated into the
client's cache when the reply is parsed by ceph_fill_trace().
However, if the MDS replies without a trace (e.g., when retrying an
update after an MDS failure recovery), some operation-specific cleanup
may be needed.
We can validate cached dentries in two ways. A per-dentry lease may
be issued by the MDS, or a per-directory cap may be issued that acts
as a lease on the entire directory. In the latter case, a 'gen' value
is used to determine which dentries belong to the currently leased
directory contents.
We normally prepopulate the dcache and icache with readdir results.
This makes subsequent lookups and getattrs avoid any server
interaction. It also lets us satisfy readdir operations by peeking at
the dcache IFF we hold the per-directory cap/lease, previously
performed a readdir, and haven't dropped any of the resulting
dentries.
Signed-off-by: Sage Weil <sage@newdream.net>
Inode cache and inode operations. We also include routines to
incorporate metadata structures returned by the MDS into the client
cache, and some helpers to deal with file capabilities and metadata
leases. The bulk of that work is done by fill_inode() and
fill_trace().
Signed-off-by: Sage Weil <sage@newdream.net>
struct ceph_buffer is a simple ref-counted buffer. We transparently
choose between kmalloc for small buffers and vmalloc for large ones.
This is currently used only for allocating memory for xattr data.
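A hedged sketch of the allocation policy (not the exact code):

    static void *buffer_alloc(size_t len, gfp_t gfp)
    {
            void *p = kmalloc(len, gfp | __GFP_NOWARN);  /* small buffers */
            if (!p)
                    p = vmalloc(len);                    /* large fallback */
            return p;
    }

    static void buffer_free(void *p)
    {
            if (is_vmalloc_addr(p))
                    vfree(p);
            else
                    kfree(p);
    }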
Signed-off-by: Sage Weil <sage@newdream.net>
We first define constants, types, and prototypes for the kernel client
proper.
A few subsystems are defined separately later: the MDS, OSD, and
monitor clients, and the messaging layer.
Signed-off-by: Sage Weil <sage@newdream.net>
These headers describe the types used to exchange messages between the
Ceph client and various servers. All types are little-endian and
packed. These headers are shared between the kernel and userspace, so
all types are in terms of e.g. __u32.
Additionally, we define a few magic values to identify the current
version of the protocol(s) in use, so that discrepancies can be
detected on mount.
Signed-off-by: Sage Weil <sage@newdream.net>