Canceled or timed out osd requests were getting left in the request list
and never deallocated (until umount). Unregister them if they are canceled
(control-c) or time out.
Signed-off-by: Sage Weil <sage@newdream.net>
To avoid confusing iterate_session_caps(), flag the session while we are
iterating so that __touch_cap does not rearrange items on the list.
All other modifiers of session->s_caps do so under the protection of
s_mutex.
Signed-off-by: Sage Weil <sage@newdream.net>
An incremental pg_temp wasn't being decoded properly (wrong bound on
for loop).
Also remove an unused local variable while we're at it.
Signed-off-by: Sage Weil <sage@newdream.net>
We need to hold session s_mutex for __ceph_mdsc_drop_dentry_lease(), which
we don't, so skip it. It was purely an optimization.
Signed-off-by: Sage Weil <sage@newdream.net>
This works around a bug in vfs_rename_dir() that rehashes the target
dentry. Ensure such dentries always fail revalidation by timing out the
dentry lease and kicking it out of the current directory lease gen.
This can be reverted when the vfs bug is fixed.
Signed-off-by: Sage Weil <sage@newdream.net>
Set bdi congestion bit when amount of write data in flight exceeds adjustable
threshold.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Fixes a deadlock triggered via kswapd: with the page locked, the iput
could not tear down the address space.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
If we explicitly close a connection, or there is a socket error, we need
to drop any partially received message.
Signed-off-by: Sage Weil <sage@newdream.net>
For lossy connections we drop all state on socket errors, so there is no
reason to keep sent ceph_msg's around.
Signed-off-by: Sage Weil <sage@newdream.net>
The server indicates whether a connection is lossy; set our LOSSYTX bit
appropriately. Do not set the lossy bit on outgoing connections.
Signed-off-by: Sage Weil <sage@newdream.net>
We never allocate the ceph_buffer and buffer separately, so use a single
constructor.
Disallow put on NULL buffer; make the caller check.
Signed-off-by: Sage Weil <sage@newdream.net>
There is certainly no reason not to report this.
The only real downside to allowing the user to set it is that you don't
get default values by zeroing the layout struct (the default is -1).
Signed-off-by: Sage Weil <sage@newdream.net>
We need to skip /.ceph in (cached) readdir results, and exclude "/.ceph"
from the cached ENOENT lookup check.
Signed-off-by: Sage Weil <sage@newdream.net>
ceph_lookup_snap_realm either returns a valid pointer or NULL; there is no
need to check IS_ERR(result).
Reported-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Sage Weil <sage@newdream.net>
If the NULL test is necessary, then the dereference should be moved below
the NULL test.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/).
// <smpl>
@@
type T;
expression E;
identifier i,fld;
statement S;
@@
- T i = E->fld;
+ T i;
... when != E
when != i
if (E == NULL) S
+ i = E->fld;
// </smpl>
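As a concrete illustration of the pattern on ordinary C code (the
identifiers below are invented for the example, not taken from the
affected file):

    Before:
            struct ceph_foo *fi = file->private_data;
            if (file == NULL)
                    return -EINVAL;
            /* ... use fi ... */

    After:
            struct ceph_foo *fi;
            if (file == NULL)
                    return -EINVAL;
            fi = file->private_data;
            /* ... use fi ... */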
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Sage Weil <sage@newdream.net>
Reset the backoff delay when we reopen the connection, so that the delays
for any initial connection problems are reasonable. We were resetting only
after a successful handshake, which was of limited utility.
Signed-off-by: Sage Weil <sage@newdream.net>
The max_size increase request to the MDS can get lost during an MDS
restart and reconnect. Reset our requested value after the MDS recovers,
so that any blocked writes will re-request a larger max_size upon waking.
Also, explicitly wake session caps after the reconnect. Normally the cap
renewal catches this, but not in the cases where the caps didn't go stale
in the first place, which would leave writers waiting on max_size asleep.
Signed-off-by: Sage Weil <sage@newdream.net>
The mds map now uses the global_id as the 'key' (instead of the addr,
which was a poor choice).
This is a protocol change.
Signed-off-by: Sage Weil <sage@newdream.net>
We may first learn our fsid from any of the mon, osd, or mds maps
(whichever the monitor sends first). Consolidate checks in a single
helper. Initialize the client debugfs entry then, since we need the
fsid (and global_id) for the directory name.
Also remove dead mount code.
Signed-off-by: Sage Weil <sage@newdream.net>
When we open a monitor session, we send an initial AUTH message listing
the auth protocols we support, our entity name, and (possibly) a previously
assigned global_id. The monitor chooses a protocol and responds with an
initial message.
Initially implement AUTH_NONE, a dummy protocol that provides no security,
but works within the new framework. It generates 'authorizers' that are
used when connecting to (mds, osd) services that simply state our entity
name and global_id.
This is a wire protocol change.
Signed-off-by: Sage Weil <sage@newdream.net>
We require that ceph_con_close be called before we drop the connection,
so this is unneeded. Just BUG if con->sock != NULL.
Signed-off-by: Sage Weil <sage@newdream.net>
We want to ceph_con_close when we're done with the connection, before
the ref count reaches 0. Once it does, do not call ceph_con_shutdown,
as that takes the con mutex and may sleep; besides, it is unnecessary.
Signed-off-by: Sage Weil <sage@newdream.net>
We occasionally want to make a best-effort attempt to invalidate cache
pages without fear of blocking. If this fails, we fall back to an async
invalidate in another thread.
Use invalidate_mapping_pages instead of invalidate_inode_pages2, as it
will skip locked pages and not deadlock.
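For reference, the best-effort call amounts to something like the following
(illustrative fragment; the surrounding locking and async fallback handling
live in the inode code):

    /*
     * Best-effort, non-blocking invalidation: invalidate_mapping_pages
     * skips locked, dirty, and mapped pages instead of waiting on them.
     */
    invalidate_mapping_pages(&inode->i_data, 0, -1);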
Signed-off-by: Sage Weil <sage@newdream.net>
This helps the user know what's going on during the (involved) reconnect
process. They already see when the mds fails and reconnect starts.
Signed-off-by: Sage Weil <sage@newdream.net>
We don't get an explicit affirmative confirmation that our caps reconnect,
nor do we necessarily want to pay that cost. So, take all this code out
for now.
Signed-off-by: Sage Weil <sage@newdream.net>
We need to make sure we only swab the address during the banner once. So
break process_banner out of process_connect, and clean up the surrounding
code so that these are distinct phases of the handshake.
Signed-off-by: Sage Weil <sage@newdream.net>
We were using the cap_gen to track both stale caps (caps that timed out
due to temporarily losing touch with the mds) and dead caps that did not
reconnect after an MDS failure. Introduce a recon_gen counter to track
reconnections to restarted MDSs and kill dead caps based on that instead.
Rename gen to cap_gen while we're at it to make it more clear which is
which.
Signed-off-by: Sage Weil <sage@newdream.net>
Make the integer hash function a property of the bucket it is used on. This
allows us to gracefully add support for new hash functions without starting
from scratch.
Signed-off-by: Sage Weil <sage@newdream.net>
The object will be hashed to a placement seed (ps) based on the pg_pool's
hash function. This allows new hashes to be introduced into an existing
object store, or selection of a hash appropriate to the objects that
will be stored in a particular pool.
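A sketch of the idea (the helper and field names here are assumptions, not
the exact code):

    /*
     * Hash the object name with whichever hash function the pool selects,
     * and use the result as the placement seed (ps) for this object.
     */
    ps = ceph_str_hash(pool->object_hash, oid, strlen(oid));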
Signed-off-by: Sage Weil <sage@newdream.net>
We were using the (weak) dcache hash function, but it was leaving lower
bits consecutive for consecutive (inode) objects. We really want to make
the object to pg mapping random and uniform, so use a proper hash function
here.
This is Robert Jenkins' public domain hash function (with some minor
cleanup):
http://burtleburtle.net/bob/hash/evahash.html
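Its core is the classic three-word integer mix from that page (reproduced
here for reference; the macro name is illustrative, not necessarily what
the tree uses):

    #define hashmix(a, b, c) do {                   \
            a -= b;  a -= c;  a ^= (c >> 13);       \
            b -= c;  b -= a;  b ^= (a << 8);        \
            c -= a;  c -= b;  c ^= (b >> 13);       \
            a -= b;  a -= c;  a ^= (c >> 12);       \
            b -= c;  b -= a;  b ^= (a << 16);       \
            c -= a;  c -= b;  c ^= (b >> 5);        \
            a -= b;  a -= c;  a ^= (c >> 3);        \
            b -= c;  b -= a;  b ^= (a << 10);       \
            c -= a;  c -= b;  c ^= (b >> 15);       \
    } while (0)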
This is a protocol revision.
Signed-off-by: Sage Weil <sage@newdream.net>
The endian conversions don't quite work with the old union ceph_pg. Just
make it a regular struct, and make each field __le. This is simpler and it
has the added bonus of actually working.
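The result is along these lines (field layout shown for illustration; the
header is authoritative):

    struct ceph_pg {
            __le16 preferred;   /* preferred primary osd, or -1 */
            __le16 ps;          /* placement seed */
            __le32 pool;        /* pg pool id */
    } __attribute__ ((packed));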
Signed-off-by: Sage Weil <sage@newdream.net>
We exchange struct ceph_entity_addr over the wire and store it on disk.
The sockaddr_storage.ss_family field, however, is in host byte order. So,
convert ss_family to big-endian when sending/receiving over the
wire.
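A sketch of the conversion this implies at the send/receive boundary
(helper and field names are illustrative):

    /* convert to wire format (big-endian ss_family) before sending */
    static inline void addr_to_wire(struct ceph_entity_addr *a)
    {
            a->in_addr.ss_family = htons(a->in_addr.ss_family);
    }

    /* convert back to host byte order after receiving */
    static inline void addr_from_wire(struct ceph_entity_addr *a)
    {
            a->in_addr.ss_family = ntohs(a->in_addr.ss_family);
    }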
Signed-off-by: Sage Weil <sage@newdream.net>
Even when we encounter a corrupt bucket, we still BUG(). This fixes
the warning:
fs/ceph/crush/mapper.c: In function 'crush_choose':
fs/ceph/crush/mapper.c:352: warning: control may reach end of non-void function
'crush_bucket_choose' being inlined
Signed-off-by: Sage Weil <sage@newdream.net>
Fixes the warning:
fs/ceph/xattr.c: In function '__build_xattrs':
fs/ceph/xattr.c:353: warning: 'err' may be used uninitialized in this function
Signed-off-by: Sage Weil <sage@newdream.net>
Commit 645a102581 fixes the calculation of the object
offset for layouts with multiple stripes per object. This updates the
calculation of the length written to take into account multiple stripes per
object.
Signed-off-by: Noah Watkins <noah@noahdesu.com>
Signed-off-by: Sage Weil <sage@newdream.net>
We were incorrectly calculating the object offset. If we have multiple
stripe units per object, we need to shift to the start of the current
su in addition to the offset within the su.
Also rename bno to ono (object number) to avoid some variable naming
confusion.
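The resulting mapping, roughly (variable names illustrative; su = stripe
unit, sc = stripe count, su_per_object = object size / su):

    u64 bl        = off / su;              /* which stripe unit of the file */
    u64 stripeno  = bl / sc;               /* which stripe */
    u64 stripepos = bl % sc;               /* object column within the stripe */
    u64 objsetno  = stripeno / su_per_object;
    u64 ono   = objsetno * sc + stripepos;            /* object number */
    u64 oxoff = su * (stripeno % su_per_object)       /* start of this su... */
              + (off % su);                           /* ...plus offset in the su */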
Signed-off-by: Sage Weil <sage@newdream.net>
The object extent offset is the file offset _modulo_ the stripe unit.
The code was correct, the comment was wrong.
Reported-by: Noah Watkins <jayhawk@soe.ucsc.edu>
Signed-off-by: Sage Weil <sage@newdream.net>
Use the stripe unit size calculated and saved on the stack to avoid
a redundant call to le32_to_cpu.
Signed-off-by: Noah Watkins <noah@noahdesu.com>
Signed-off-by: Sage Weil <sage@newdream.net>
Replace use of the non-list.h list_entry function for container_of
functionality with direct use of container_of.
Signed-off-by: Noah Watkins <noah@noahdesu.com>
Signed-off-by: Sage Weil <sage@newdream.net>
This simplifies much of the error handling during mount. It also means
that we have the mount args before client creation, and we can initialize
based on those options.
Signed-off-by: Sage Weil <sage@newdream.net>
Since we've increased the max mon count, we shouldn't put the addr array
on the parse_mount_args stack. Put it on the heap instead.
Signed-off-by: Sage Weil <sage@newdream.net>
Get rid of separate max mon limit; use the system limit instead. This
allows mounts when there are lots of mon addrs provided by mount.ceph (as
with a host with lots of A/AAAA records).
Signed-off-by: Sage Weil <sage@newdream.net>
We can't fill i_size with rbytes at the fill_file_size stage without
adding additional checks for directories. Notably, we want st_blocks
to remain 0 on directories so that 'du' still works.
Fill in i_blocks, i_size specially in ceph_getattr instead.
Signed-off-by: Sage Weil <sage@newdream.net>
Mix the preferred osd (if any) into the placement seed that is fed into
the CRUSH object placement calculation. This prevents all the placement
pgs from peering with the same osds.
Rev the osd client protocol with this change.
Signed-off-by: Sage Weil <sage@newdream.net>
Pass the front_len we need when pulling a message off a msgpool,
and WARN if it is greater than the pool's size. Then try to
allocate a new message (to continue without failing).
Signed-off-by: Sage Weil <sage@newdream.net>
Previously we were flushing dirty caps by passing an extra flag
when traversing the delayed caps list. Besides being a bit ugly,
that can also miss caps that are dirty but didn't result in a
cap requeue: notably, mark_caps_dirty().
Separate the flushing into a separate helper, and traverse the
cap_dirty list.
This also brings i_dirty_item in line with i_dirty_caps: we are
on the list IFF dirty caps != 0. We carry an inode ref IFF
dirty_caps|flushing_caps != 0.
Lose the unused return value from __ceph_mark_caps_dirty().
Signed-off-by: Sage Weil <sage@newdream.net>
Writeback doesn't work without the bdi set, and writeback on
umount doesn't work if we unregister the bdi too early.
Signed-off-by: Sage Weil <sage@newdream.net>
This makes it easier for individual message types to indicate
their particular encoding, and makes future changes backward
compatible.
Signed-off-by: Sage Weil <sage@newdream.net>
This ensures we don't submit the same request twice if we are kicking a
specific osd (as with an osd_reset), or when we hit a transient error and
resend.
Signed-off-by: Sage Weil <sage@newdream.net>
The peer_reset just takes longer (until we reconnect and discover the osd
dropped the session... which it will).
Signed-off-by: Sage Weil <sage@newdream.net>
If an osd has failed or returned and a request has been sent twice, it's
possible to get a reply and unregister the request while the request
message is queued for delivery. Since the message references the caller's
page vector, we need to revoke it before completing.
Signed-off-by: Sage Weil <sage@newdream.net>
The osd request submission path registers the request, drops and retakes
the request_mutex, then sends it to the OSD. A racing kick_requests could
send it during that interval, causing the same msg to be sent twice and
triggering a BUG in the msgr.
Fix by only sending the message if it hasn't been touched by other
threads.
Signed-off-by: Sage Weil <sage@newdream.net>
Be conservative: renew subscription once half the interval has expired.
Do not reuse sub expiration to control hunting.
Signed-off-by: Sage Weil <sage@newdream.net>
Basic state information is available via /sys/kernel/debug/ceph,
including instances of the client, fsids, current monitor, mds and osd
maps, outstanding server requests, and hooks to adjust debug levels.
Signed-off-by: Sage Weil <sage@newdream.net>
A few Ceph ioctls for getting and setting file layout (striping)
parameters, and learning the identity and network address of the OSD a
given region of a file is stored on.
Signed-off-by: Sage Weil <sage@newdream.net>
Basic NFS re-export support is included. This mostly works. However,
Ceph's MDS design precludes the ability to generate a (small)
filehandle that will be valid forever, so this is of limited utility.
Signed-off-by: Sage Weil <sage@newdream.net>
The msgpool is a basic mempool_t-like structure to preallocate
messages we expect to receive over the wire. This ensures we have the
necessary memory preallocated to process replies to requests, or to
process unsolicited messages from various servers.
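Conceptually the pool is just a lock-protected list of preallocated
messages plus a waitqueue for blocking callers; a sketch of the structure
(field names illustrative, not the exact definition):

    struct ceph_msgpool {
            spinlock_t lock;
            int front_len;            /* size of preallocated message fronts */
            struct list_head msgs;    /* preallocated, currently unused msgs */
            int num, min;             /* current and minimum pool size */
            bool blocking;            /* block (vs fail) when the pool is empty */
            wait_queue_head_t wait;   /* for blocking callers */
    };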
Signed-off-by: Sage Weil <sage@newdream.net>
A generic message passing library is used to communicate with all
other components in the Ceph file system. The messenger library
provides ordered, reliable delivery of messages between two nodes in
the system.
This implementation is based on TCP.
Signed-off-by: Sage Weil <sage@newdream.net>
Ceph snapshots rely on client cooperation in determining which
operations apply to which snapshots, and appropriately flushing
snapshotted data and metadata back to the OSD and MDS clusters.
Because snapshots apply to subtrees of the file hierarchy and can be
created at any time, there is a fair bit of bookkeeping required to
make this work.
Portions of the hierarchy that belong to the same set of snapshots
are described by a single 'snap realm.' A 'snap context' describes
the set of snapshots that exist for a given file or directory.
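The snap context itself is small: a ref-counted, descending list of
snapshot ids (sketch; the header has the authoritative definition):

    struct ceph_snap_context {
            atomic_t nref;
            u64 seq;          /* highest snapshot seq in this context */
            u32 num_snaps;
            u64 snaps[];      /* snapshot ids, newest (highest) first */
    };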
Signed-off-by: Sage Weil <sage@newdream.net>
The Ceph metadata servers control client access to inode metadata and
file data by issuing capabilities, granting clients permission to read
and/or write both inode fields and file data to OSDs (storage nodes).
Each capability consists of a set of bits indicating which operations
are allowed.
If the client holds a *_SHARED cap, the client has a coherent value
that can be safely read from the cached inode.
In the case of *_EXCL (exclusive) or FILE_WR capabilities, the client
is allowed to change inode attributes (e.g., file size, mtime), note
its dirty state in the ceph_cap, and asynchronously flush that
metadata change to the MDS.
In the event of a conflicting operation (perhaps by another client),
the MDS will revoke the conflicting client capabilities.
In order for a client to cache an inode, it must hold a capability
with at least one MDS server. When inodes are released, release
notifications are batched and periodically sent en masse to the MDS
cluster to release server state.
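For example, whether cached file data may be used without consulting the
MDS reduces to a bitmask test roughly like this (helper and flag names
follow the client code, but this is only a sketch):

    if (ceph_caps_issued(ci) & CEPH_CAP_FILE_CACHE) {
            /* FILE_CACHE is held: reads may be served from the page cache */
    } else {
            /* otherwise read synchronously through the osd client */
    }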
Signed-off-by: Sage Weil <sage@newdream.net>
The monitor cluster is responsible for managing cluster membership
and state. The monitor client handles what minimal interaction
the Ceph client has with it: checking for updated versions of the
MDS and OSD maps, getting statfs() information, and unmounting.
Signed-off-by: Sage Weil <sage@newdream.net>
CRUSH is a pseudorandom data distribution function designed to map
inputs onto a dynamic hierarchy of devices, while minimizing the
extent to which inputs are remapped when the devices are added or
removed. It includes some features that are specifically useful for
storage, most notably the ability to map each input onto a set of N
devices that are separated across administrator-defined failure
domains. CRUSH is used to distribute data across the cluster of Ceph
storage nodes.
More information about CRUSH can be found in this paper:
http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf
Signed-off-by: Sage Weil <sage@newdream.net>
The OSD client is responsible for reading and writing data from/to the
object storage pool. This includes determining where objects are
stored in the cluster, and ensuring that requests are retried or
redirected in the event of a node failure or data migration.
If an OSD does not respond before a timeout expires, keepalive
messages are sent across the lossless, ordered communications channel
to ensure that any break in the TCP connection is discovered. If the session
does reset, a reconnection is attempted and affected requests are
resent (by the message transport layer).
Signed-off-by: Sage Weil <sage@newdream.net>
The MDS (metadata server) client is responsible for submitting
requests to the MDS cluster and parsing the response. We decide which
MDS to submit each request to based on cached information about the
current partition of the directory hierarchy across the cluster. A
stateful session is opened with each MDS before we submit requests to
it, and a mutex is used to control the ordering of messages within
each session.
An MDS request may generate two responses. The first indicates the
operation was a success and returns any result. A second reply is
sent when the operation commits to disk. Note that locking on the MDS
ensures that the results of updates are visible only to the updating
client before the operation commits. Requests are linked to the
containing directory so that an fsync will wait for them to commit.
If an MDS fails and/or recovers, we resubmit requests as needed. We
also reconnect existing capabilities to a recovering MDS to
reestablish that shared session state. Old dentry leases are
invalidated.
Signed-off-by: Sage Weil <sage@newdream.net>
The ceph address space methods are concerned primarily with managing
the dirty page accounting in the inode, which (among other things)
must keep track of which snapshot context each page was dirtied in,
and ensure that dirty data is written out to the OSDs in snapshot
order.
A writepage() on a page that is not currently writeable due to
snapshot writeback ordering constraints is ignored (it was presumably
called from kswapd).
Signed-off-by: Sage Weil <sage@newdream.net>
File open and close operations, and read and write methods that ensure
we have obtained the proper capabilities from the MDS cluster before
performing IO on a file. We take references on held capabilities for
the duration of the read/write to avoid prematurely releasing them
back to the MDS.
We implement two main paths for read and write: one that is buffered
(and uses generic_aio_{read,write}), and one that is fully synchronous
and blocking (operating either on a __user pointer or, if O_DIRECT,
directly on user pages).
Signed-off-by: Sage Weil <sage@newdream.net>
Directory operations, including lookup, are defined here. We take
advantage of lookup intents when possible. For the most part, we just
need to build the proper requests for the metadata server(s) and
pass things off to the mds_client.
The results of most operations are normally incorporated into the
client's cache when the reply is parsed by ceph_fill_trace().
However, if the MDS replies without a trace (e.g., when retrying an
update after an MDS failure recovery), some operation-specific cleanup
may be needed.
We can validate cached dentries in two ways. A per-dentry lease may
be issued by the MDS, or a per-directory cap may be issued that acts
as a lease on the entire directory. In the latter case, a 'gen' value
is used to determine which dentries belong to the currently leased
directory contents.
We normally prepopulate the dcache and icache with readdir results.
This makes subsequent lookups and getattrs avoid any server
interaction. It also lets us satisfy readdir operations by peeking at
the dcache IFF we hold the per-directory cap/lease, previously
performed a readdir, and haven't dropped any of the resulting
dentries.
Signed-off-by: Sage Weil <sage@newdream.net>
Inode cache and inode operations. We also include routines to
incorporate metadata structures returned by the MDS into the client
cache, and some helpers to deal with file capabilities and metadata
leases. The bulk of that work is done by fill_inode() and
fill_trace().
Signed-off-by: Sage Weil <sage@newdream.net>
struct ceph_buffer is a simple ref-counted buffer. We transparently
choose between kmalloc for small buffers and vmalloc for large ones.
This is currently used only for allocating memory for xattr data.
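The small/large choice amounts to a kmalloc attempt with a vmalloc
fallback, along these lines (sketch, not the exact constructor):

    b->vec.iov_base = kmalloc(len, gfp | __GFP_NOWARN);
    if (b->vec.iov_base) {
            b->is_vmalloc = false;
    } else {
            b->vec.iov_base = __vmalloc(len, gfp, PAGE_KERNEL);
            b->is_vmalloc = true;
    }
    b->vec.iov_len = len;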
Signed-off-by: Sage Weil <sage@newdream.net>
We first define constants, types, and prototypes for the kernel client
proper.
A few subsystems are defined separately later: the MDS, OSD, and
monitor clients, and the messaging layer.
Signed-off-by: Sage Weil <sage@newdream.net>
These headers describe the types used to exchange messages between the
Ceph client and various servers. All types are little-endian and
packed. These headers are shared between the kernel and userspace, so
all types are in terms of e.g. __u32.
Additionally, we define a few magic values to identify the current
version of the protocol(s) in use, so that discrepancies can be
detected on mount.
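For example, a structure in these headers takes the following general form
(this one is made up to show the convention, not copied from the headers):

    struct ceph_example_header {
            __le64 tid;         /* transaction id */
            __le32 version;     /* encoding version */
            __u8   flags;
    } __attribute__ ((packed));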
Signed-off-by: Sage Weil <sage@newdream.net>