/* Management of Tx window, Tx resend, ACKs and out-of-sequence reception
 *
 * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
 * Written by David Howells (dhowells@redhat.com)
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/circ_buf.h>
#include <linux/net.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/udp.h>
#include <net/sock.h>
#include <net/af_rxrpc.h>
#include "ar-internal.h"

/*
 * propose an ACK be sent
 */
void __rxrpc_propose_ACK(struct rxrpc_call *call, u8 ack_reason,
			 u32 serial, bool immediate)
{
	unsigned long expiry;
	s8 prior = rxrpc_ack_priority[ack_reason];

	ASSERTCMP(prior, >, 0);

	_enter("{%d},%s,%%%x,%u",
	       call->debug_id, rxrpc_acks(ack_reason), serial, immediate);
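
	/* ACK reasons are ranked by priority; a lower-priority proposal
	 * must not displace one that is already pending
	 */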
	if (prior < rxrpc_ack_priority[call->ackr_reason]) {
		if (immediate)
			goto cancel_timer;
		return;
	}

	/* update DELAY, IDLE, REQUESTED and PING_RESPONSE ACK serial
	 * numbers */
	if (prior == rxrpc_ack_priority[call->ackr_reason]) {
		if (prior <= 4)
			call->ackr_serial = serial;
		if (immediate)
			goto cancel_timer;
		return;
	}

	call->ackr_reason = ack_reason;
	call->ackr_serial = serial;

	switch (ack_reason) {
	case RXRPC_ACK_DELAY:
		_debug("run delay timer");
		expiry = rxrpc_soft_ack_delay;
		goto run_timer;

	case RXRPC_ACK_IDLE:
		if (!immediate) {
			_debug("run defer timer");
			expiry = rxrpc_idle_ack_delay;
			goto run_timer;
		}
		goto cancel_timer;

	case RXRPC_ACK_REQUESTED:
		expiry = rxrpc_requested_ack_delay;
		if (!expiry)
			goto cancel_timer;
		if (!immediate || serial == 1) {
			_debug("run defer timer");
			goto run_timer;
		}

	default:
		_debug("immediate ACK");
		goto cancel_timer;
	}

run_timer:
	expiry += jiffies;
	if (!timer_pending(&call->ack_timer) ||
	    time_after(call->ack_timer.expires, expiry))
		mod_timer(&call->ack_timer, expiry);
	return;

cancel_timer:
	_debug("cancel timer %%%u", serial);
	try_to_del_timer_sync(&call->ack_timer);
	read_lock_bh(&call->state_lock);
	if (call->state <= RXRPC_CALL_COMPLETE &&
	    !test_and_set_bit(RXRPC_CALL_EV_ACK, &call->events))
		rxrpc_queue_call(call);
	read_unlock_bh(&call->state_lock);
}

/*
 * propose an ACK be sent, locking the call structure
 */
void rxrpc_propose_ACK(struct rxrpc_call *call, u8 ack_reason,
		       u32 serial, bool immediate)
{
	s8 prior = rxrpc_ack_priority[ack_reason];

	if (prior > rxrpc_ack_priority[call->ackr_reason]) {
		spin_lock_bh(&call->lock);
		__rxrpc_propose_ACK(call, ack_reason, serial, immediate);
		spin_unlock_bh(&call->lock);
	}
}

/*
 * set the resend timer
 */
static void rxrpc_set_resend(struct rxrpc_call *call, u8 resend,
			     unsigned long resend_at)
{
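	/* resend is a pair of flags: bit 0 means at least one packet needs
	 * retransmitting now, bit 1 means a future retransmission time was
	 * noted in resend_at
	 */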
	read_lock_bh(&call->state_lock);
	if (call->state >= RXRPC_CALL_COMPLETE)
		resend = 0;

	if (resend & 1) {
		_debug("SET RESEND");
		set_bit(RXRPC_CALL_EV_RESEND, &call->events);
	}

	if (resend & 2) {
		_debug("MODIFY RESEND TIMER");
		set_bit(RXRPC_CALL_RUN_RTIMER, &call->flags);
		mod_timer(&call->resend_timer, resend_at);
	} else {
		_debug("KILL RESEND TIMER");
		del_timer_sync(&call->resend_timer);
		clear_bit(RXRPC_CALL_EV_RESEND_TIMER, &call->events);
		clear_bit(RXRPC_CALL_RUN_RTIMER, &call->flags);
	}
	read_unlock_bh(&call->state_lock);
}

/*
 * resend packets
 */
static void rxrpc_resend(struct rxrpc_call *call)
{
	struct rxrpc_wire_header *whdr;
	struct rxrpc_skb_priv *sp;
	struct sk_buff *txb;
	unsigned long *p_txb, resend_at;
	bool stop;
	int loop;
	u8 resend;

	_enter("{%d,%d,%d,%d},",
	       call->acks_hard, call->acks_unacked,
	       atomic_read(&call->sequence),
	       CIRC_CNT(call->acks_head, call->acks_tail, call->acks_winsz));

	stop = false;
	resend = 0;
	resend_at = 0;

	for (loop = call->acks_tail;
	     loop != call->acks_head || stop;
	     loop = (loop + 1) & (call->acks_winsz - 1)
	     ) {
		p_txb = call->acks_window + loop;
		smp_read_barrier_depends();
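		/* bit 0 of a Tx window slot marks a packet that has already
		 * been soft-ACKed; the remaining bits carry the sk_buff
		 * pointer
		 */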
		if (*p_txb & 1)
			continue;

		txb = (struct sk_buff *) *p_txb;
		sp = rxrpc_skb(txb);

		if (sp->need_resend) {
			sp->need_resend = false;

			/* each Tx packet has a new serial number */
			sp->hdr.serial = atomic_inc_return(&call->conn->serial);

			whdr = (struct rxrpc_wire_header *)txb->head;
			whdr->serial = htonl(sp->hdr.serial);

			_proto("Tx DATA %%%u { #%d }",
			       sp->hdr.serial, sp->hdr.seq);
			if (rxrpc_send_packet(call->conn->trans, txb) < 0) {
				stop = true;
				sp->resend_at = jiffies + 3;
			} else {
				sp->resend_at =
					jiffies + rxrpc_resend_timeout;
			}
		}

		if (time_after_eq(jiffies + 1, sp->resend_at)) {
			sp->need_resend = true;
			resend |= 1;
		} else if (resend & 2) {
			if (time_before(sp->resend_at, resend_at))
				resend_at = sp->resend_at;
		} else {
			resend_at = sp->resend_at;
			resend |= 2;
		}
	}

	rxrpc_set_resend(call, resend, resend_at);
	_leave("");
}

/*
 * handle resend timer expiry
 */
static void rxrpc_resend_timer(struct rxrpc_call *call)
{
	struct rxrpc_skb_priv *sp;
	struct sk_buff *txb;
	unsigned long *p_txb, resend_at;
	int loop;
	u8 resend;

	_enter("%d,%d,%d",
	       call->acks_tail, call->acks_unacked, call->acks_head);

	if (call->state >= RXRPC_CALL_COMPLETE)
		return;

	resend = 0;
	resend_at = 0;

	for (loop = call->acks_unacked;
	     loop != call->acks_head;
	     loop = (loop + 1) & (call->acks_winsz - 1)
	     ) {
		p_txb = call->acks_window + loop;
		smp_read_barrier_depends();
		txb = (struct sk_buff *) (*p_txb & ~1);
		sp = rxrpc_skb(txb);

		ASSERT(!(*p_txb & 1));

		if (sp->need_resend) {
			;
		} else if (time_after_eq(jiffies + 1, sp->resend_at)) {
			sp->need_resend = true;
			resend |= 1;
		} else if (resend & 2) {
			if (time_before(sp->resend_at, resend_at))
				resend_at = sp->resend_at;
		} else {
			resend_at = sp->resend_at;
			resend |= 2;
		}
	}

	rxrpc_set_resend(call, resend, resend_at);
	_leave("");
}

/*
 * process soft ACKs of our transmitted packets
 * - these indicate packets the peer has or has not received, but hasn't yet
 *   given to the consumer, and so can still be discarded and re-requested
 */
static int rxrpc_process_soft_ACKs(struct rxrpc_call *call,
				   struct rxrpc_ackpacket *ack,
				   struct sk_buff *skb)
{
	struct rxrpc_skb_priv *sp;
	struct sk_buff *txb;
	unsigned long *p_txb, resend_at;
	int loop;
	u8 sacks[RXRPC_MAXACKS], resend;

	_enter("{%d,%d},{%d},",
	       call->acks_hard,
	       CIRC_CNT(call->acks_head, call->acks_tail, call->acks_winsz),
	       ack->nAcks);

	if (skb_copy_bits(skb, 0, sacks, ack->nAcks) < 0)
		goto protocol_error;

	resend = 0;
	resend_at = 0;
	for (loop = 0; loop < ack->nAcks; loop++) {
		p_txb = call->acks_window;
		p_txb += (call->acks_tail + loop) & (call->acks_winsz - 1);
		smp_read_barrier_depends();
		txb = (struct sk_buff *) (*p_txb & ~1);
		sp = rxrpc_skb(txb);

		switch (sacks[loop]) {
		case RXRPC_ACK_TYPE_ACK:
			sp->need_resend = false;
			*p_txb |= 1;
			break;
		case RXRPC_ACK_TYPE_NACK:
			sp->need_resend = true;
			*p_txb &= ~1;
			resend = 1;
			break;
		default:
			_debug("Unsupported ACK type %d", sacks[loop]);
			goto protocol_error;
		}
	}

	smp_mb();
	call->acks_unacked = (call->acks_tail + loop) & (call->acks_winsz - 1);

	/* anything not explicitly ACK'd is implicitly NACK'd, but may just not
	 * have been received or processed yet by the far end */
	for (loop = call->acks_unacked;
	     loop != call->acks_head;
	     loop = (loop + 1) & (call->acks_winsz - 1)
	     ) {
		p_txb = call->acks_window + loop;
		smp_read_barrier_depends();
		txb = (struct sk_buff *) (*p_txb & ~1);
		sp = rxrpc_skb(txb);

		if (*p_txb & 1) {
			/* packet must have been discarded */
			sp->need_resend = true;
			*p_txb &= ~1;
			resend |= 1;
		} else if (sp->need_resend) {
			;
		} else if (time_after_eq(jiffies + 1, sp->resend_at)) {
			sp->need_resend = true;
			resend |= 1;
		} else if (resend & 2) {
			if (time_before(sp->resend_at, resend_at))
				resend_at = sp->resend_at;
		} else {
			resend_at = sp->resend_at;
			resend |= 2;
		}
	}

	rxrpc_set_resend(call, resend, resend_at);
	_leave(" = 0");
	return 0;

protocol_error:
	_leave(" = -EPROTO");
	return -EPROTO;
}

/*
 * discard hard-ACK'd packets from the Tx window
 */
static void rxrpc_rotate_tx_window(struct rxrpc_call *call, u32 hard)
{
	unsigned long _skb;
	int tail = call->acks_tail, old_tail;
	int win = CIRC_CNT(call->acks_head, tail, call->acks_winsz);

	_enter("{%u,%u},%u", call->acks_hard, win, hard);

	ASSERTCMP(hard - call->acks_hard, <=, win);
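
	/* packets up to the hard-ACK point have been handed to the peer's
	 * consumer, so the copies retained for retransmission can be freed
	 */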
	while (call->acks_hard < hard) {
		smp_read_barrier_depends();
		_skb = call->acks_window[tail] & ~1;
		rxrpc_free_skb((struct sk_buff *) _skb);
		old_tail = tail;
		tail = (tail + 1) & (call->acks_winsz - 1);
		call->acks_tail = tail;
		if (call->acks_unacked == old_tail)
			call->acks_unacked = tail;
		call->acks_hard++;
	}

	wake_up(&call->tx_waitq);
}

/*
 * clear the Tx window in the event of a failure
 */
static void rxrpc_clear_tx_window(struct rxrpc_call *call)
{
	rxrpc_rotate_tx_window(call, atomic_read(&call->sequence));
}

/*
 * drain the out of sequence received packet queue into the packet Rx queue
 */
static int rxrpc_drain_rx_oos_queue(struct rxrpc_call *call)
{
	struct rxrpc_skb_priv *sp;
	struct sk_buff *skb;
	bool terminal;
	int ret;

	_enter("{%d,%d}", call->rx_data_post, call->rx_first_oos);

	spin_lock_bh(&call->lock);

	ret = -ECONNRESET;
	if (test_bit(RXRPC_CALL_RELEASED, &call->flags))
		goto socket_unavailable;

	skb = skb_dequeue(&call->rx_oos_queue);
	if (skb) {
		sp = rxrpc_skb(skb);

		_debug("drain OOS packet %d [%d]",
		       sp->hdr.seq, call->rx_first_oos);

		if (sp->hdr.seq != call->rx_first_oos) {
			skb_queue_head(&call->rx_oos_queue, skb);
			call->rx_first_oos = rxrpc_skb(skb)->hdr.seq;
			_debug("requeue %p {%u}", skb, call->rx_first_oos);
		} else {
			skb->mark = RXRPC_SKB_MARK_DATA;
			terminal = ((sp->hdr.flags & RXRPC_LAST_PACKET) &&
				    !(sp->hdr.flags & RXRPC_CLIENT_INITIATED));
			ret = rxrpc_queue_rcv_skb(call, skb, true, terminal);
			BUG_ON(ret < 0);
			_debug("drain #%u", call->rx_data_post);
			call->rx_data_post++;

			/* find out what the next packet is */
			skb = skb_peek(&call->rx_oos_queue);
			if (skb)
				call->rx_first_oos = rxrpc_skb(skb)->hdr.seq;
			else
				call->rx_first_oos = 0;
			_debug("peek %p {%u}", skb, call->rx_first_oos);
		}
	}

	ret = 0;
socket_unavailable:
	spin_unlock_bh(&call->lock);
	_leave(" = %d", ret);
	return ret;
}

/*
 * insert an out of sequence packet into the buffer
 */
static void rxrpc_insert_oos_packet(struct rxrpc_call *call,
				    struct sk_buff *skb)
{
	struct rxrpc_skb_priv *sp, *psp;
	struct sk_buff *p;
	u32 seq;

	sp = rxrpc_skb(skb);
	seq = sp->hdr.seq;
	_enter(",,{%u}", seq);

	skb->destructor = rxrpc_packet_destructor;
	ASSERTCMP(sp->call, ==, NULL);
	sp->call = call;
	rxrpc_get_call(call);

	/* insert into the buffer in sequence order */
	spin_lock_bh(&call->lock);

	skb_queue_walk(&call->rx_oos_queue, p) {
		psp = rxrpc_skb(p);
		if (psp->hdr.seq > seq) {
			_debug("insert oos #%u before #%u", seq, psp->hdr.seq);
			skb_insert(p, skb, &call->rx_oos_queue);
			goto inserted;
		}
	}

	_debug("append oos #%u", seq);
	skb_queue_tail(&call->rx_oos_queue, skb);
inserted:

	/* we might now have a new front to the queue */
	if (call->rx_first_oos == 0 || seq < call->rx_first_oos)
		call->rx_first_oos = seq;

	read_lock(&call->state_lock);
	if (call->state < RXRPC_CALL_COMPLETE &&
	    call->rx_data_post == call->rx_first_oos) {
		_debug("drain rx oos now");
		set_bit(RXRPC_CALL_EV_DRAIN_RX_OOS, &call->events);
	}
	read_unlock(&call->state_lock);

	spin_unlock_bh(&call->lock);
	_leave(" [stored #%u]", call->rx_first_oos);
}

/*
 * clear the Tx window on final ACK reception
 */
static void rxrpc_zap_tx_window(struct rxrpc_call *call)
{
	struct rxrpc_skb_priv *sp;
	struct sk_buff *skb;
	unsigned long _skb, *acks_window;
	u8 winsz = call->acks_winsz;
	int tail;

	acks_window = call->acks_window;
	call->acks_window = NULL;

	while (CIRC_CNT(call->acks_head, call->acks_tail, winsz) > 0) {
		tail = call->acks_tail;
		smp_read_barrier_depends();
		_skb = acks_window[tail] & ~1;
		smp_mb();
		call->acks_tail = (call->acks_tail + 1) & (winsz - 1);

		skb = (struct sk_buff *) _skb;
		sp = rxrpc_skb(skb);
		_debug("+++ clear Tx %u", sp->hdr.seq);
		rxrpc_free_skb(skb);
	}

	kfree(acks_window);
}

/*
 * process the extra information that may be appended to an ACK packet
 */
static void rxrpc_extract_ackinfo(struct rxrpc_call *call, struct sk_buff *skb,
				  unsigned int latest, int nAcks)
{
	struct rxrpc_ackinfo ackinfo;
	struct rxrpc_peer *peer;
	unsigned int mtu;
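
	/* the ackinfo trails the soft-ACK table and three bytes of padding
	 * in the ACK packet, hence the nAcks + 3 offset
	 */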
	if (skb_copy_bits(skb, nAcks + 3, &ackinfo, sizeof(ackinfo)) < 0) {
		_leave(" [no ackinfo]");
		return;
	}

	_proto("Rx ACK %%%u Info { rx=%u max=%u rwin=%u jm=%u }",
	       latest,
	       ntohl(ackinfo.rxMTU), ntohl(ackinfo.maxMTU),
	       ntohl(ackinfo.rwind), ntohl(ackinfo.jumbo_max));

	mtu = min(ntohl(ackinfo.rxMTU), ntohl(ackinfo.maxMTU));

	peer = call->conn->trans->peer;
	if (mtu < peer->maxdata) {
		spin_lock_bh(&peer->lock);
		peer->maxdata = mtu;
		peer->mtu = mtu + peer->hdrsize;
		spin_unlock_bh(&peer->lock);
		_net("Net MTU %u (maxdata %u)", peer->mtu, peer->maxdata);
	}
}

/*
 * process packets in the reception queue
 */
static int rxrpc_process_rx_queue(struct rxrpc_call *call,
				  u32 *_abort_code)
{
	struct rxrpc_ackpacket ack;
	struct rxrpc_skb_priv *sp;
	struct sk_buff *skb;
	bool post_ACK;
	int latest;
	u32 hard, tx;

	_enter("");

process_further:
	skb = skb_dequeue(&call->rx_queue);
	if (!skb)
		return -EAGAIN;

	_net("deferred skb %p", skb);

	sp = rxrpc_skb(skb);

	_debug("process %s [st %d]", rxrpc_pkts[sp->hdr.type], call->state);

	post_ACK = false;

	switch (sp->hdr.type) {
		/* data packets that wind up here have been received out of
		 * order, need security processing or are jumbo packets */
	case RXRPC_PACKET_TYPE_DATA:
		_proto("OOSQ DATA %%%u { #%u }", sp->hdr.serial, sp->hdr.seq);

		/* secured packets must be verified and possibly decrypted */
		if (call->conn->security->verify_packet(call, skb,
							_abort_code) < 0)
			goto protocol_error;

		rxrpc_insert_oos_packet(call, skb);
		goto process_further;

		/* partial ACK to process */
	case RXRPC_PACKET_TYPE_ACK:
		if (skb_copy_bits(skb, 0, &ack, sizeof(ack)) < 0) {
			_debug("extraction failure");
			goto protocol_error;
		}
		if (!skb_pull(skb, sizeof(ack)))
			BUG();

		latest = sp->hdr.serial;
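		/* firstPacket names the first packet the peer has not yet
		 * hard-ACKed, so everything before it can be dropped from
		 * the Tx window
		 */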
		hard = ntohl(ack.firstPacket);
		tx = atomic_read(&call->sequence);

		_proto("Rx ACK %%%u { m=%hu f=#%u p=#%u s=%%%u r=%s n=%u }",
		       latest,
		       ntohs(ack.maxSkew),
		       hard,
		       ntohl(ack.previousPacket),
		       ntohl(ack.serial),
		       rxrpc_acks(ack.reason),
		       ack.nAcks);

		rxrpc_extract_ackinfo(call, skb, latest, ack.nAcks);

		if (ack.reason == RXRPC_ACK_PING) {
			_proto("Rx ACK %%%u PING Request", latest);
			rxrpc_propose_ACK(call, RXRPC_ACK_PING_RESPONSE,
					  sp->hdr.serial, true);
		}

		/* discard any out-of-order or duplicate ACKs */
		if (latest - call->acks_latest <= 0) {
			_debug("discard ACK %d <= %d",
			       latest, call->acks_latest);
			goto discard;
		}
		call->acks_latest = latest;

		if (call->state != RXRPC_CALL_CLIENT_SEND_REQUEST &&
		    call->state != RXRPC_CALL_CLIENT_AWAIT_REPLY &&
		    call->state != RXRPC_CALL_SERVER_SEND_REPLY &&
		    call->state != RXRPC_CALL_SERVER_AWAIT_ACK)
			goto discard;

		_debug("Tx=%d H=%u S=%d", tx, call->acks_hard, call->state);

		if (hard > 0) {
			if (hard - 1 > tx) {
				_debug("hard-ACK'd packet %d not transmitted"
				       " (%d top)",
				       hard - 1, tx);
				goto protocol_error;
			}

			if ((call->state == RXRPC_CALL_CLIENT_AWAIT_REPLY ||
			     call->state == RXRPC_CALL_SERVER_AWAIT_ACK) &&
			    hard > tx) {
				call->acks_hard = tx;
				goto all_acked;
			}

			smp_rmb();
			rxrpc_rotate_tx_window(call, hard - 1);
		}

		if (ack.nAcks > 0) {
			if (hard - 1 + ack.nAcks > tx) {
				_debug("soft-ACK'd packet %d+%d not"
				       " transmitted (%d top)",
				       hard - 1, ack.nAcks, tx);
				goto protocol_error;
			}

			if (rxrpc_process_soft_ACKs(call, &ack, skb) < 0)
				goto protocol_error;
		}
		goto discard;

		/* complete ACK to process */
	case RXRPC_PACKET_TYPE_ACKALL:
		goto all_acked;

		/* abort and busy are handled elsewhere */
	case RXRPC_PACKET_TYPE_BUSY:
	case RXRPC_PACKET_TYPE_ABORT:
		BUG();

		/* connection level events - also handled elsewhere */
	case RXRPC_PACKET_TYPE_CHALLENGE:
	case RXRPC_PACKET_TYPE_RESPONSE:
	case RXRPC_PACKET_TYPE_DEBUG:
		BUG();
	}

	/* if we've had a hard ACK that covers all the packets we've sent, then
	 * that ends that phase of the operation */
all_acked:
	write_lock_bh(&call->state_lock);
	_debug("ack all %d", call->state);

	switch (call->state) {
	case RXRPC_CALL_CLIENT_AWAIT_REPLY:
		call->state = RXRPC_CALL_CLIENT_RECV_REPLY;
		break;
	case RXRPC_CALL_SERVER_AWAIT_ACK:
		_debug("srv complete");
		call->state = RXRPC_CALL_COMPLETE;
		post_ACK = true;
		break;
	case RXRPC_CALL_CLIENT_SEND_REQUEST:
	case RXRPC_CALL_SERVER_RECV_REQUEST:
		goto protocol_error_unlock; /* can't occur yet */
	default:
		write_unlock_bh(&call->state_lock);
		goto discard; /* assume packet left over from earlier phase */
	}

	write_unlock_bh(&call->state_lock);

	/* if all the packets we sent are hard-ACK'd, then we can discard
	 * whatever we've got left */
	_debug("clear Tx %d",
	       CIRC_CNT(call->acks_head, call->acks_tail, call->acks_winsz));

	del_timer_sync(&call->resend_timer);
	clear_bit(RXRPC_CALL_RUN_RTIMER, &call->flags);
	clear_bit(RXRPC_CALL_EV_RESEND_TIMER, &call->events);

	if (call->acks_window)
		rxrpc_zap_tx_window(call);

	if (post_ACK) {
		/* post the final ACK message for userspace to pick up */
		_debug("post ACK");
		skb->mark = RXRPC_SKB_MARK_FINAL_ACK;
		sp->call = call;
		rxrpc_get_call(call);
		spin_lock_bh(&call->lock);
		if (rxrpc_queue_rcv_skb(call, skb, true, true) < 0)
			BUG();
		spin_unlock_bh(&call->lock);
		goto process_further;
	}

discard:
	rxrpc_free_skb(skb);
	goto process_further;

protocol_error_unlock:
	write_unlock_bh(&call->state_lock);
protocol_error:
	rxrpc_free_skb(skb);
	_leave(" = -EPROTO");
	return -EPROTO;
}

/*
 * post a message to the socket Rx queue for recvmsg() to pick up
 */
static int rxrpc_post_message(struct rxrpc_call *call, u32 mark, u32 error,
			      bool fatal)
{
	struct rxrpc_skb_priv *sp;
	struct sk_buff *skb;
	int ret;

	_enter("{%d,%lx},%u,%u,%d",
	       call->debug_id, call->flags, mark, error, fatal);

	/* remove timers and things for fatal messages */
	if (fatal) {
		del_timer_sync(&call->resend_timer);
		del_timer_sync(&call->ack_timer);
		clear_bit(RXRPC_CALL_RUN_RTIMER, &call->flags);
	}

	if (mark != RXRPC_SKB_MARK_NEW_CALL &&
	    !test_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
		_leave("[no userid]");
		return 0;
	}

	if (!test_bit(RXRPC_CALL_TERMINAL_MSG, &call->flags)) {
		skb = alloc_skb(0, GFP_NOFS);
		if (!skb)
			return -ENOMEM;

		rxrpc_new_skb(skb);

		skb->mark = mark;

		sp = rxrpc_skb(skb);
		memset(sp, 0, sizeof(*sp));
		sp->error = error;
		sp->call = call;
		rxrpc_get_call(call);

		spin_lock_bh(&call->lock);
		ret = rxrpc_queue_rcv_skb(call, skb, true, fatal);
		spin_unlock_bh(&call->lock);
		BUG_ON(ret < 0);
	}

	return 0;
}

/*
 * handle background processing of incoming call packets and ACK / abort
 * generation
 */
void rxrpc_process_call(struct work_struct *work)
{
	struct rxrpc_call *call =
		container_of(work, struct rxrpc_call, processor);
	struct rxrpc_wire_header whdr;
	struct rxrpc_ackpacket ack;
	struct rxrpc_ackinfo ackinfo;
	struct msghdr msg;
	struct kvec iov[5];
	enum rxrpc_call_event genbit;
	unsigned long bits;
	__be32 data, pad;
	size_t len;
	int loop, nbit, ioc, ret, mtu;
	u32 serial, abort_code = RX_PROTOCOL_ERROR;
	u8 *acks = NULL;

	//printk("\n--------------------\n");
	_enter("{%d,%s,%lx} [%lu]",
	       call->debug_id, rxrpc_call_states[call->state], call->events,
	       (jiffies - call->creation_jif) / (HZ / 10));

	if (test_and_set_bit(RXRPC_CALL_PROC_BUSY, &call->flags)) {
		_debug("XXXXXXXXXXXXX RUNNING ON MULTIPLE CPUS XXXXXXXXXXXXX");
		return;
	}

	/* there's a good chance we're going to have to send a message, so set
	 * one up in advance */
	msg.msg_name = &call->conn->trans->peer->srx.transport;
	msg.msg_namelen = call->conn->trans->peer->srx.transport_len;
	msg.msg_control = NULL;
	msg.msg_controllen = 0;
	msg.msg_flags = 0;

	whdr.epoch = htonl(call->conn->proto.epoch);
	whdr.cid = htonl(call->cid);
	whdr.callNumber = htonl(call->call_id);
	whdr.seq = 0;
	whdr.type = RXRPC_PACKET_TYPE_ACK;
	whdr.flags = call->conn->out_clientflag;
	whdr.userStatus = 0;
	whdr.securityIndex = call->conn->security_ix;
	whdr._rsvd = 0;
	whdr.serviceId = htons(call->service_id);

	memset(iov, 0, sizeof(iov));
	iov[0].iov_base = &whdr;
	iov[0].iov_len = sizeof(whdr);

	/* deal with events of a final nature */
	if (test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
		rxrpc_release_call(call);
		clear_bit(RXRPC_CALL_EV_RELEASE, &call->events);
	}

	if (test_bit(RXRPC_CALL_EV_RCVD_ERROR, &call->events)) {
		enum rxrpc_skb_mark mark;
		int error;

		clear_bit(RXRPC_CALL_EV_CONN_ABORT, &call->events);
		clear_bit(RXRPC_CALL_EV_REJECT_BUSY, &call->events);
		clear_bit(RXRPC_CALL_EV_ABORT, &call->events);

		error = call->error_report;
		if (error < RXRPC_LOCAL_ERROR_OFFSET) {
			mark = RXRPC_SKB_MARK_NET_ERROR;
			_debug("post net error %d", error);
		} else {
			mark = RXRPC_SKB_MARK_LOCAL_ERROR;
			error -= RXRPC_LOCAL_ERROR_OFFSET;
			_debug("post net local error %d", error);
		}

		if (rxrpc_post_message(call, mark, error, true) < 0)
			goto no_mem;
		clear_bit(RXRPC_CALL_EV_RCVD_ERROR, &call->events);
		goto kill_ACKs;
	}

	if (test_bit(RXRPC_CALL_EV_CONN_ABORT, &call->events)) {
		ASSERTCMP(call->state, >, RXRPC_CALL_COMPLETE);

		clear_bit(RXRPC_CALL_EV_REJECT_BUSY, &call->events);
		clear_bit(RXRPC_CALL_EV_ABORT, &call->events);

		_debug("post conn abort");

		if (rxrpc_post_message(call, RXRPC_SKB_MARK_LOCAL_ERROR,
				       call->conn->error, true) < 0)
			goto no_mem;
		clear_bit(RXRPC_CALL_EV_CONN_ABORT, &call->events);
		goto kill_ACKs;
	}

	if (test_bit(RXRPC_CALL_EV_REJECT_BUSY, &call->events)) {
		whdr.type = RXRPC_PACKET_TYPE_BUSY;
		genbit = RXRPC_CALL_EV_REJECT_BUSY;
		goto send_message;
	}

	if (test_bit(RXRPC_CALL_EV_ABORT, &call->events)) {
		ASSERTCMP(call->state, >, RXRPC_CALL_COMPLETE);

		if (rxrpc_post_message(call, RXRPC_SKB_MARK_LOCAL_ERROR,
				       ECONNABORTED, true) < 0)
			goto no_mem;
		whdr.type = RXRPC_PACKET_TYPE_ABORT;
		data = htonl(call->local_abort);
		iov[1].iov_base = &data;
		iov[1].iov_len = sizeof(data);
		genbit = RXRPC_CALL_EV_ABORT;
		goto send_message;
	}

	if (test_bit(RXRPC_CALL_EV_ACK_FINAL, &call->events)) {
		genbit = RXRPC_CALL_EV_ACK_FINAL;

		ack.bufferSpace = htons(8);
		ack.maxSkew = 0;
		ack.serial = 0;
		ack.reason = RXRPC_ACK_IDLE;
		ack.nAcks = 0;
		call->ackr_reason = 0;

		spin_lock_bh(&call->lock);
		ack.serial = htonl(call->ackr_serial);
		ack.previousPacket = htonl(call->ackr_prev_seq);
		ack.firstPacket = htonl(call->rx_data_eaten + 1);
		spin_unlock_bh(&call->lock);

		pad = 0;

		iov[1].iov_base = &ack;
		iov[1].iov_len = sizeof(ack);
		iov[2].iov_base = &pad;
		iov[2].iov_len = 3;
		iov[3].iov_base = &ackinfo;
		iov[3].iov_len = sizeof(ackinfo);
		goto send_ACK;
	}

	if (call->events & ((1 << RXRPC_CALL_EV_RCVD_BUSY) |
			    (1 << RXRPC_CALL_EV_RCVD_ABORT))
	    ) {
		u32 mark;

		if (test_bit(RXRPC_CALL_EV_RCVD_ABORT, &call->events))
			mark = RXRPC_SKB_MARK_REMOTE_ABORT;
		else
			mark = RXRPC_SKB_MARK_BUSY;

		_debug("post abort/busy");
		rxrpc_clear_tx_window(call);
		if (rxrpc_post_message(call, mark, ECONNABORTED, true) < 0)
			goto no_mem;

		clear_bit(RXRPC_CALL_EV_RCVD_BUSY, &call->events);
		clear_bit(RXRPC_CALL_EV_RCVD_ABORT, &call->events);
		goto kill_ACKs;
	}

	if (test_and_clear_bit(RXRPC_CALL_EV_RCVD_ACKALL, &call->events)) {
		_debug("do implicit ackall");
		rxrpc_clear_tx_window(call);
	}

	if (test_bit(RXRPC_CALL_EV_LIFE_TIMER, &call->events)) {
		write_lock_bh(&call->state_lock);
		if (call->state <= RXRPC_CALL_COMPLETE) {
			call->state = RXRPC_CALL_LOCALLY_ABORTED;
			call->local_abort = RX_CALL_TIMEOUT;
			set_bit(RXRPC_CALL_EV_ABORT, &call->events);
		}
		write_unlock_bh(&call->state_lock);

		_debug("post timeout");
		if (rxrpc_post_message(call, RXRPC_SKB_MARK_LOCAL_ERROR,
				       ETIME, true) < 0)
			goto no_mem;

		clear_bit(RXRPC_CALL_EV_LIFE_TIMER, &call->events);
		goto kill_ACKs;
	}

	/* deal with assorted inbound messages */
	if (!skb_queue_empty(&call->rx_queue)) {
		switch (rxrpc_process_rx_queue(call, &abort_code)) {
		case 0:
		case -EAGAIN:
			break;
		case -ENOMEM:
			goto no_mem;
		case -EKEYEXPIRED:
		case -EKEYREJECTED:
		case -EPROTO:
			rxrpc_abort_call(call, abort_code);
			goto kill_ACKs;
		}
	}

	/* handle resending */
	if (test_and_clear_bit(RXRPC_CALL_EV_RESEND_TIMER, &call->events))
		rxrpc_resend_timer(call);
	if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events))
		rxrpc_resend(call);

	/* consider sending an ordinary ACK */
	if (test_bit(RXRPC_CALL_EV_ACK, &call->events)) {
		_debug("send ACK: window: %d - %d { %lx }",
		       call->rx_data_eaten, call->ackr_win_top,
		       call->ackr_window[0]);

		if (call->state > RXRPC_CALL_SERVER_ACK_REQUEST &&
		    call->ackr_reason != RXRPC_ACK_PING_RESPONSE) {
			/* ACK by sending reply DATA packet in this state */
			clear_bit(RXRPC_CALL_EV_ACK, &call->events);
			goto maybe_reschedule;
		}

		genbit = RXRPC_CALL_EV_ACK;

		acks = kzalloc(call->ackr_win_top - call->rx_data_eaten,
			       GFP_NOFS);
		if (!acks)
			goto no_mem;

		//hdr.flags = RXRPC_SLOW_START_OK;
		ack.bufferSpace = htons(8);
		ack.maxSkew = 0;

		spin_lock_bh(&call->lock);
		ack.reason = call->ackr_reason;
		ack.serial = htonl(call->ackr_serial);
		ack.previousPacket = htonl(call->ackr_prev_seq);
		ack.firstPacket = htonl(call->rx_data_eaten + 1);

		ack.nAcks = 0;
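		/* build the soft-ACK table from the receive window bitmap,
		 * marking each packet received but not yet consumed
		 */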
		for (loop = 0; loop < RXRPC_ACKR_WINDOW_ASZ; loop++) {
			nbit = loop * BITS_PER_LONG;
			for (bits = call->ackr_window[loop]; bits; bits >>= 1
			     ) {
				_debug("- l=%d n=%d b=%lx", loop, nbit, bits);
				if (bits & 1) {
					acks[nbit] = RXRPC_ACK_TYPE_ACK;
					ack.nAcks = nbit + 1;
				}
				nbit++;
			}
		}
		call->ackr_reason = 0;
		spin_unlock_bh(&call->lock);

		pad = 0;

		iov[1].iov_base = &ack;
		iov[1].iov_len = sizeof(ack);
		iov[2].iov_base = acks;
		iov[2].iov_len = ack.nAcks;
		iov[3].iov_base = &pad;
		iov[3].iov_len = 3;
		iov[4].iov_base = &ackinfo;
		iov[4].iov_len = sizeof(ackinfo);

		switch (ack.reason) {
		case RXRPC_ACK_REQUESTED:
		case RXRPC_ACK_DUPLICATE:
		case RXRPC_ACK_OUT_OF_SEQUENCE:
		case RXRPC_ACK_EXCEEDS_WINDOW:
		case RXRPC_ACK_NOSPACE:
		case RXRPC_ACK_PING:
		case RXRPC_ACK_PING_RESPONSE:
			goto send_ACK_with_skew;
		case RXRPC_ACK_DELAY:
		case RXRPC_ACK_IDLE:
			goto send_ACK;
		}
	}

	/* handle completion of security negotiations on an incoming
	 * connection */
	if (test_and_clear_bit(RXRPC_CALL_EV_SECURED, &call->events)) {
		_debug("secured");
		spin_lock_bh(&call->lock);

		if (call->state == RXRPC_CALL_SERVER_SECURING) {
			_debug("securing");
			write_lock(&call->conn->lock);
			if (!test_bit(RXRPC_CALL_RELEASED, &call->flags) &&
			    !test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
				_debug("not released");
				call->state = RXRPC_CALL_SERVER_ACCEPTING;
				list_move_tail(&call->accept_link,
					       &call->socket->acceptq);
			}
			write_unlock(&call->conn->lock);
			read_lock(&call->state_lock);
			if (call->state < RXRPC_CALL_COMPLETE)
				set_bit(RXRPC_CALL_EV_POST_ACCEPT, &call->events);
			read_unlock(&call->state_lock);
		}

		spin_unlock_bh(&call->lock);
		if (!test_bit(RXRPC_CALL_EV_POST_ACCEPT, &call->events))
			goto maybe_reschedule;
	}

	/* post a notification of an acceptable connection to the app */
	if (test_bit(RXRPC_CALL_EV_POST_ACCEPT, &call->events)) {
		_debug("post accept");
		if (rxrpc_post_message(call, RXRPC_SKB_MARK_NEW_CALL,
				       0, false) < 0)
			goto no_mem;
		clear_bit(RXRPC_CALL_EV_POST_ACCEPT, &call->events);
		goto maybe_reschedule;
	}

	/* handle incoming call acceptance */
	if (test_and_clear_bit(RXRPC_CALL_EV_ACCEPTED, &call->events)) {
		_debug("accepted");
		ASSERTCMP(call->rx_data_post, ==, 0);
		call->rx_data_post = 1;
		read_lock_bh(&call->state_lock);
		if (call->state < RXRPC_CALL_COMPLETE)
			set_bit(RXRPC_CALL_EV_DRAIN_RX_OOS, &call->events);
		read_unlock_bh(&call->state_lock);
	}

	/* drain the out of sequence received packet queue into the packet Rx
	 * queue */
	if (test_and_clear_bit(RXRPC_CALL_EV_DRAIN_RX_OOS, &call->events)) {
		while (call->rx_data_post == call->rx_first_oos)
			if (rxrpc_drain_rx_oos_queue(call) < 0)
				break;
		goto maybe_reschedule;
	}

	/* other events may have been raised since we started checking */
	goto maybe_reschedule;

send_ACK_with_skew:
	ack.maxSkew = htons(atomic_read(&call->conn->hi_serial) -
			    ntohl(ack.serial));
send_ACK:
	mtu = call->conn->trans->peer->if_mtu;
	mtu -= call->conn->trans->peer->hdrsize;
	ackinfo.maxMTU = htonl(mtu);
	ackinfo.rwind = htonl(rxrpc_rx_window_size);

	/* permit the peer to send us jumbo packets if it wants to */
	ackinfo.rxMTU = htonl(rxrpc_rx_mtu);
	ackinfo.jumbo_max = htonl(rxrpc_rx_jumbo_max);

	serial = atomic_inc_return(&call->conn->serial);
	whdr.serial = htonl(serial);
	_proto("Tx ACK %%%u { m=%hu f=#%u p=#%u s=%%%u r=%s n=%u }",
	       serial,
	       ntohs(ack.maxSkew),
	       ntohl(ack.firstPacket),
	       ntohl(ack.previousPacket),
	       ntohl(ack.serial),
	       rxrpc_acks(ack.reason),
	       ack.nAcks);

	del_timer_sync(&call->ack_timer);
	if (ack.nAcks > 0)
		set_bit(RXRPC_CALL_TX_SOFT_ACK, &call->flags);
	goto send_message_2;

send_message:
	_debug("send message");

	serial = atomic_inc_return(&call->conn->serial);
	whdr.serial = htonl(serial);
	_proto("Tx %s %%%u", rxrpc_pkts[whdr.type], serial);
send_message_2:

	len = iov[0].iov_len;
	ioc = 1;
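	/* only the trailing iovecs that were actually filled in get sent;
	 * count them and total up the length
	 */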
	if (iov[4].iov_len) {
		ioc = 5;
		len += iov[4].iov_len;
		len += iov[3].iov_len;
		len += iov[2].iov_len;
		len += iov[1].iov_len;
	} else if (iov[3].iov_len) {
		ioc = 4;
		len += iov[3].iov_len;
		len += iov[2].iov_len;
		len += iov[1].iov_len;
	} else if (iov[2].iov_len) {
		ioc = 3;
		len += iov[2].iov_len;
		len += iov[1].iov_len;
	} else if (iov[1].iov_len) {
		ioc = 2;
		len += iov[1].iov_len;
	}

	ret = kernel_sendmsg(call->conn->trans->local->socket,
			     &msg, iov, ioc, len);
	if (ret < 0) {
		_debug("sendmsg failed: %d", ret);
		read_lock_bh(&call->state_lock);
		if (call->state < RXRPC_CALL_DEAD)
[AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use
Add an interface to the AF_RXRPC module so that the AFS filesystem module can
more easily make use of the services available. AFS still opens a socket but
then uses the action functions in lieu of sendmsg() and registers an intercept
function to grab messages before they're queued on the socket Rx queue.
This permits AFS (or whatever) to:
(1) Avoid the overhead of using the recvmsg() call.
(2) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(3) Avoid calling request_key() at the point of issue of a call or opening of
a socket. This is done instead by AFS at the point of open(), unlink() or
other VFS operation and the key handed through.
(4) Request the use of something other than GFP_KERNEL to allocate memory.
Furthermore:
(*) The socket buffer markings used by RxRPC are made available for AFS so
that it can interpret the cooked RxRPC messages itself.
(*) rxgen (un)marshalling abort codes are made available.
The following documentation for the kernel interface is added to
Documentation/networking/rxrpc.txt:
=========================
AF_RXRPC KERNEL INTERFACE
=========================
The AF_RXRPC module also provides an interface for use by in-kernel utilities
such as the AFS filesystem. This permits such a utility to:
(1) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(2) Avoid having RxRPC call request_key() at the point of issue of a call or
opening of a socket. Instead the utility is responsible for requesting a
key at the appropriate point. AFS, for instance, would do this during VFS
operations such as open() or unlink(). The key is then handed through
when the call is initiated.
(3) Request the use of something other than GFP_KERNEL to allocate memory.
(4) Avoid the overhead of using the recvmsg() call. RxRPC messages can be
intercepted before they get put into the socket Rx queue and the socket
buffers manipulated directly.
To use the RxRPC facility, a kernel utility must still open an AF_RXRPC socket,
bind an address as appropriate and listen if it's to be a server socket, but
then it passes this to the kernel interface functions.
The kernel interface functions are as follows; a brief usage sketch follows at the end of this excerpt:
(*) Begin a new client call.
struct rxrpc_call *
rxrpc_kernel_begin_call(struct socket *sock,
struct sockaddr_rxrpc *srx,
struct key *key,
unsigned long user_call_ID,
gfp_t gfp);
This allocates the infrastructure to make a new RxRPC call and assigns
call and connection numbers. The call will be made on the UDP port that
the socket is bound to. The call will go to the destination address of a
connected client socket unless an alternative is supplied (srx is
non-NULL).
If a key is supplied then this will be used to secure the call instead of
the key bound to the socket with the RXRPC_SECURITY_KEY sockopt. Calls
secured in this way will still share connections if at all possible.
The user_call_ID is equivalent to that supplied to sendmsg() in the
control data buffer. It is entirely feasible to use this to point to a
kernel data structure.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) End a client call.
void rxrpc_kernel_end_call(struct rxrpc_call *call);
This is used to end a previously begun call. The user_call_ID is expunged
from AF_RXRPC's knowledge and will not be seen again in association with
the specified call.
(*) Send data through a call.
int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg,
size_t len);
This is used to supply either the request part of a client call or the
reply part of a server call. msg.msg_iovlen and msg.msg_iov specify the
data buffers to be used. msg_iov may not be NULL and must point
exclusively to in-kernel virtual addresses. msg.msg_flags may be given
MSG_MORE if there will be subsequent data sends for this call.
The msg must not specify a destination address, control data or any flags
other than MSG_MORE. len is the total amount of data to transmit.
(*) Abort a call.
void rxrpc_kernel_abort_call(struct rxrpc_call *call, u32 abort_code);
This is used to abort a call if it's still in an abortable state. The
abort code specified will be placed in the ABORT message sent.
(*) Intercept received RxRPC messages.
typedef void (*rxrpc_interceptor_t)(struct sock *sk,
unsigned long user_call_ID,
struct sk_buff *skb);
void
rxrpc_kernel_intercept_rx_messages(struct socket *sock,
rxrpc_interceptor_t interceptor);
This installs an interceptor function on the specified AF_RXRPC socket.
All messages that would otherwise wind up in the socket's Rx queue are
then diverted to this function. Note that care must be taken to process
the messages in the right order to maintain DATA message sequentiality.
The interceptor function itself is provided with the address of the socket
that is handling the incoming message, the ID assigned by the kernel utility
to the call and the socket buffer containing the message.
The skb->mark field indicates the type of message:
MARK MEANING
=============================== =======================================
RXRPC_SKB_MARK_DATA Data message
RXRPC_SKB_MARK_FINAL_ACK Final ACK received for an incoming call
RXRPC_SKB_MARK_BUSY Client call rejected as server busy
RXRPC_SKB_MARK_REMOTE_ABORT Call aborted by peer
RXRPC_SKB_MARK_NET_ERROR Network error detected
RXRPC_SKB_MARK_LOCAL_ERROR Local error encountered
RXRPC_SKB_MARK_NEW_CALL New incoming call awaiting acceptance
The remote abort message can be probed with rxrpc_kernel_get_abort_code().
The two error messages can be probed with rxrpc_kernel_get_error_number().
A new call can be accepted with rxrpc_kernel_accept_call().
Data messages can have their contents extracted with the usual bunch of
socket buffer manipulation functions. A data message can be determined to
be the last one in a sequence with rxrpc_kernel_is_data_last(). When a
data message has been used up, rxrpc_kernel_data_delivered() should be
called on it.
Non-data messages should be handed to rxrpc_kernel_free_skb() to dispose
of. It is possible to get extra refs on all types of message for later
freeing, but this may pin the state of a call until the message is finally
freed.
(*) Accept an incoming call.
struct rxrpc_call *
rxrpc_kernel_accept_call(struct socket *sock,
unsigned long user_call_ID);
This is used to accept an incoming call and to assign it a call ID. This
function is similar to rxrpc_kernel_begin_call() and calls accepted must
be ended in the same way.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) Reject an incoming call.
int rxrpc_kernel_reject_call(struct socket *sock);
This is used to reject the first incoming call on the socket's queue with
a BUSY message. -ENODATA is returned if there were no incoming calls.
Other errors may be returned if the call had been aborted (-ECONNABORTED)
or had timed out (-ETIME).
(*) Record the delivery of a data message and free it.
void rxrpc_kernel_data_delivered(struct sk_buff *skb);
This is used to record a data message as having been delivered and to
update the ACK state for the call. The socket buffer will be freed.
(*) Free a message.
void rxrpc_kernel_free_skb(struct sk_buff *skb);
This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
socket.
(*) Determine if a data message is the last one on a call.
bool rxrpc_kernel_is_data_last(struct sk_buff *skb);
This is used to determine if a socket buffer holds the last data message
to be received for a call (true will be returned if it does, false
if not).
The data message will be part of the reply on a client call and the
request on an incoming call. In the latter case there will be more
messages, but in the former case there will not.
(*) Get the abort code from an abort message.
u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);
This is used to extract the abort code from a remote abort message.
(*) Get the error number from a local or network error message.
int rxrpc_kernel_get_error_number(struct sk_buff *skb);
This is used to extract the error number from a message indicating either
a local error occurred or a network error occurred.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
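To make the interface documented above a little more concrete, here is a minimal, hypothetical usage sketch: an interceptor that sorts messages by their mark, and a helper that issues a single client call. Only the rxrpc_kernel_*() functions, the rxrpc_interceptor_t signature, the msg.msg_iov/msg_iovlen convention and the RXRPC_SKB_MARK_* values are taken from the text above; the example_*() names, the GFP_NOFS choice, the RX_USER_ABORT abort code and the assumption that rxrpc_kernel_begin_call() reports failure with an ERR_PTR are illustrative guesses, and the synchronisation needed to actually wait for the reply before ending the call is omitted.

/* Illustrative only: sort intercepted messages by their documented marks. */
static void example_rx_interceptor(struct sock *sk, unsigned long user_call_ID,
                                   struct sk_buff *skb)
{
        switch (skb->mark) {
        case RXRPC_SKB_MARK_DATA:
                /* ... copy the payload out of the skb here ... */
                if (rxrpc_kernel_is_data_last(skb)) {
                        /* the reply (or request) is now complete */
                }
                /* record delivery, update the ACK state and free the skb */
                rxrpc_kernel_data_delivered(skb);
                break;
        case RXRPC_SKB_MARK_REMOTE_ABORT:
                /* note rxrpc_kernel_get_abort_code(skb) for the caller */
                rxrpc_kernel_free_skb(skb);
                break;
        default:
                rxrpc_kernel_free_skb(skb);
                break;
        }
}

/* Illustrative only: issue a one-shot client call over an AF_RXRPC socket
 * that has already been opened, bound and connected by the caller. */
static int example_issue_call(struct socket *rxrpc_sock, struct key *key,
                              void *request, size_t reqlen)
{
        struct rxrpc_call *call;
        struct msghdr msg = {};
        struct iovec iov = { .iov_base = request, .iov_len = reqlen };
        int ret;

        /* divert incoming messages away from the socket Rx queue */
        rxrpc_kernel_intercept_rx_messages(rxrpc_sock, example_rx_interceptor);

        /* srx is NULL, so the call goes to the socket's connected peer;
         * the request pointer doubles as the user call ID */
        call = rxrpc_kernel_begin_call(rxrpc_sock, NULL, key,
                                       (unsigned long)request, GFP_NOFS);
        if (IS_ERR(call))
                return PTR_ERR(call);

        /* msg_iov must point exclusively at in-kernel addresses; no MSG_MORE
         * because this is the whole request */
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        ret = rxrpc_kernel_send_data(call, &msg, reqlen);
        if (ret < 0)
                rxrpc_kernel_abort_call(call, RX_USER_ABORT);

        /* the reply arrives via example_rx_interceptor(); once it has been
         * consumed (or on error) the call must be ended to drop our ref */
        rxrpc_kernel_end_call(call);
        return ret;
}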
2007-04-27 06:50:17 +08:00
                        rxrpc_queue_call(call);
2007-04-27 06:48:28 +08:00
                read_unlock_bh(&call->state_lock);
                goto error;
        }

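        /* genbit identifies the event that prompted the packet just sent;
         * clear it and do any event-specific follow-up. */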
        switch (genbit) {
2016-03-04 23:53:46 +08:00
        case RXRPC_CALL_EV_ABORT:
2007-04-27 06:48:28 +08:00
                clear_bit(genbit, &call->events);
2016-03-04 23:53:46 +08:00
                clear_bit(RXRPC_CALL_EV_RCVD_ABORT, &call->events);
2007-04-27 06:48:28 +08:00
                goto kill_ACKs;

2016-03-04 23:53:46 +08:00
        case RXRPC_CALL_EV_ACK_FINAL:
2007-04-27 06:48:28 +08:00
                write_lock_bh(&call->state_lock);
                if (call->state == RXRPC_CALL_CLIENT_FINAL_ACK)
                        call->state = RXRPC_CALL_COMPLETE;
                write_unlock_bh(&call->state_lock);
                goto kill_ACKs;

        default:
                clear_bit(genbit, &call->events);
                switch (call->state) {
                case RXRPC_CALL_CLIENT_AWAIT_REPLY:
                case RXRPC_CALL_CLIENT_RECV_REPLY:
                case RXRPC_CALL_SERVER_RECV_REQUEST:
                case RXRPC_CALL_SERVER_ACK_REQUEST:
                        _debug("start ACK timer");
                        rxrpc_propose_ACK(call, RXRPC_ACK_DELAY,
                                          call->ackr_serial, false);
                default:
                        break;
                }
                goto maybe_reschedule;
        }

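/* No further ACKs are wanted for this call: cancel the ACK timer, put the
 * call if a final-ACK event was still outstanding and discard any queued
 * ACK event. */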
kill_ACKs:
        del_timer_sync(&call->ack_timer);
2016-03-04 23:53:46 +08:00
        if (test_and_clear_bit(RXRPC_CALL_EV_ACK_FINAL, &call->events))
2007-04-27 06:48:28 +08:00
                rxrpc_put_call(call);
2016-03-04 23:53:46 +08:00
        clear_bit(RXRPC_CALL_EV_ACK, &call->events);
2007-04-27 06:48:28 +08:00

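/* Requeue the work item if further events were raised or packets arrived on
 * the Rx queue while we were working, unless the call is already dead. */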
maybe_reschedule:
        if (call->events || !skb_queue_empty(&call->rx_queue)) {
                read_lock_bh(&call->state_lock);
                if (call->state < RXRPC_CALL_DEAD)
2007-04-27 06:50:17 +08:00
                        rxrpc_queue_call(call);
2007-04-27 06:48:28 +08:00
                read_unlock_bh(&call->state_lock);
        }

        /* don't leave aborted connections on the accept queue */
        if (call->state >= RXRPC_CALL_COMPLETE &&
            !list_empty(&call->accept_link)) {
                _debug("X unlinking once-pending call %p { e=%lx f=%lx c=%x }",
2016-04-04 21:00:36 +08:00
                       call, call->events, call->flags, call->conn->proto.cid);
2007-04-27 06:48:28 +08:00

                read_lock_bh(&call->state_lock);
                if (!test_bit(RXRPC_CALL_RELEASED, &call->flags) &&
2016-03-04 23:53:46 +08:00
                    !test_and_set_bit(RXRPC_CALL_EV_RELEASE, &call->events))
2007-04-27 06:50:17 +08:00
                        rxrpc_queue_call(call);
2007-04-27 06:48:28 +08:00
                read_unlock_bh(&call->state_lock);
        }

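/* Common exit path: clear the busy flag so another CPU may process this call
 * again, and release any ACK buffer that was allocated earlier. */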
error:
        clear_bit(RXRPC_CALL_PROC_BUSY, &call->flags);
        kfree(acks);

        /* because we don't want two CPUs both processing the work item for one
         * call at the same time, we use a flag to note when it's busy; however
         * this means there's a race between clearing the flag and setting the
         * work pending bit and the work item being processed again */
        if (call->events && !work_pending(&call->processor)) {
2016-04-04 21:00:36 +08:00
                _debug("jumpstart %x", call->conn->proto.cid);
2007-04-27 06:50:17 +08:00
                rxrpc_queue_call(call);
2007-04-27 06:48:28 +08:00
        }

        _leave("");
        return;

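/* An allocation failed; just log it and fall back to rescheduling so the
 * outstanding work can be retried later. */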
no_mem:
        _debug("out of memory");
        goto maybe_reschedule;
}