Commit Graph

63 Commits

Author SHA1 Message Date
David Howells e34d4234b0 rxrpc: Trace rxrpc_call usage
Add a trace event for debugging rxrpc_call struct usage.
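
For illustration, a tracepoint of this kind might look roughly like the
following (the field and argument names are assumptions, not necessarily
the exact ones from the patch, and the usual trace header boilerplate is
omitted):

    #include <linux/tracepoint.h>

    TRACE_EVENT(rxrpc_call,
        TP_PROTO(struct rxrpc_call *call, int op, int usage,
                 const void *where),
        TP_ARGS(call, op, usage, where),
        TP_STRUCT__entry(
            __field(const void *, call)
            __field(int,          op)
            __field(int,          usage)
            __field(const void *, where)
        ),
        TP_fast_assign(
            __entry->call  = call;
            __entry->op    = op;
            __entry->usage = usage;
            __entry->where = where;
        ),
        TP_printk("c=%p op=%d u=%d sp=%pSR",
                  __entry->call, __entry->op, __entry->usage,
                  __entry->where)
    );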

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-30 16:02:36 +01:00
David Howells f5c17aaeb2 rxrpc: Calls should only have one terminal state
Condense the terminal states of a call state machine to a single state,
plus a separate completion type value.  The value is then set, along with
error and abort code values, only when the call is transitioned to the
completion state.

Helpers are provided to simplify this.
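
A hedged sketch of the shape this takes (the completion values and the
helper name here are illustrative):

    enum rxrpc_call_completion {
        RXRPC_CALL_SUCCEEDED,        /* call completed normally */
        RXRPC_CALL_REMOTELY_ABORTED, /* peer aborted the call */
        RXRPC_CALL_LOCALLY_ABORTED,  /* we aborted the call */
        RXRPC_CALL_LOCAL_ERROR,      /* a local error ended the call */
        RXRPC_CALL_NETWORK_ERROR,    /* a network error ended the call */
    };

    /* Move a call to the single terminal state, recording why it ended. */
    static void rxrpc_set_call_completion(struct rxrpc_call *call,
                                          enum rxrpc_call_completion compl,
                                          u32 abort_code, int error)
    {
        call->completion = compl;
        call->abort_code = abort_code;
        call->error      = error;
        call->state      = RXRPC_CALL_COMPLETE;
    }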

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-30 15:58:31 +01:00
David Howells 45025bceef rxrpc: Improve management and caching of client connection objects
Improve the management and caching of client rxrpc connection objects.
From this point, client connections will be managed separately from service
connections because AF_RXRPC controls the creation and re-use of client
connections but doesn't have that luxury with service connections.

Further, there will be limits on the numbers of client connections that may
be live on a machine.  No direct restriction will be placed on the number
of client calls, excepting that each client connection can support a
maximum of four concurrent calls.

Note that, for a number of reasons, we don't want to simply discard a
client connection as soon as the last call is apparently finished:

 (1) Security is negotiated per-connection and the context is then shared
     between all calls on that connection.  The context can be negotiated
     again if the connection lapses, but that involves holding up calls
     whilst at least two packets are exchanged and various crypto bits are
     performed - so we'd ideally like to cache it for a little while at
     least.

 (2) If a packet goes astray, we will need to retransmit a final ACK or
     ABORT packet.  To make this work, we need to keep around the
     connection details for a little while.

 (3) The locally held structures represent some amount of setup time, to be
     weighed against their occupation of memory when idle.


To this end, the client connection cache is managed by a state machine on
each connection.  There are five states:

 (1) INACTIVE - The connection is not held in any list and may not have
     been exposed to the world.  If it has been previously exposed, it was
     discarded from the idle list after expiring.

 (2) WAITING - The connection is waiting for the number of client conns to
     drop below the maximum capacity.  Calls may be in progress upon it
     from when it was active and got culled.

     The connection is on the rxrpc_waiting_client_conns list which is kept
     in to-be-granted order.  Culled conns with waiters go to the back of
     the queue just like new conns.

 (3) ACTIVE - The connection has at least one call in progress upon it, it
     may freely grant available channels to new calls and calls may be
     waiting on it for channels to become available.

     The connection is on the rxrpc_active_client_conns list which is kept
     in activation order for culling purposes.

 (4) CULLED - The connection got summarily culled to try and free up
     capacity.  Calls currently in progress on the connection are allowed
     to continue, but new calls will have to wait.  There can be no waiters
     in this state - the conn would have to go to the WAITING state
     instead.

 (5) IDLE - The connection has no calls in progress upon it and must have
     been exposed to the world (ie. the EXPOSED flag must be set).  When it
     expires, the EXPOSED flag is cleared and the connection transitions to
     the INACTIVE state.

     The connection is on the rxrpc_idle_client_conns list which is kept in
     order of how soon they'll expire.

A connection in the ACTIVE or CULLED state must have at least one active
call upon it; if in the WAITING state it may have active calls upon it;
other states may not have active calls.

As long as a connection remains active and doesn't get culled, it may
continue to process calls - even if there are connections on the wait
queue.  This simplifies things a bit and reduces the amount of checking we
need to do.
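
As a rough sketch, the five states above could be represented by an enum
such as this (names illustrative):

    enum rxrpc_conn_cache_state {
        RXRPC_CONN_CLIENT_INACTIVE, /* not on any list, maybe unexposed */
        RXRPC_CONN_CLIENT_WAITING,  /* waiting for capacity to be granted */
        RXRPC_CONN_CLIENT_ACTIVE,   /* on active list, granting channels */
        RXRPC_CONN_CLIENT_CULLED,   /* summarily culled; no waiters allowed */
        RXRPC_CONN_CLIENT_IDLE,     /* on idle list, awaiting expiry */
    };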


There are a couple of flags of relevance to the cache:

 (1) EXPOSED - The connection ID got exposed to the world.  If this flag is
     set, an extra ref is added to the connection preventing it from being
     reaped when it has no calls outstanding.  This flag is cleared and the
     ref dropped when a conn is discarded from the idle list.

 (2) DONT_REUSE - The connection should be discarded as soon as possible and
     should not be reused.
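
A sketch of how the flags might be consulted (the flag names and the
ref-get helper are assumptions):

    if (test_bit(RXRPC_CONN_DONT_REUSE, &conn->flags))
        goto dont_reuse;             /* discard ASAP, never hand out again */

    if (!test_and_set_bit(RXRPC_CONN_EXPOSED, &conn->flags))
        rxrpc_get_connection(conn);  /* extra ref while the ID is exposed */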


This commit also provides a number of new settings:

 (*) /proc/net/rxrpc/max_client_conns

     The maximum number of live client connections.  Above this number, new
     connections get added to the wait list and must wait for an active
     conn to be culled.  Culled connections can be reused, but they will go
     to the back of the wait list and have to wait.

 (*) /proc/net/rxrpc/reap_client_conns

     If the number of desired connections exceeds the maximum above, the
     active connection list will be culled until there are only this many
     left in it.

 (*) /proc/net/rxrpc/idle_conn_expiry

     The normal expiry time for a client connection, provided there are
     fewer than reap_client_conns of them around.

 (*) /proc/net/rxrpc/idle_conn_fast_expiry

     The expedited expiry time, used when there are more than
     reap_client_conns of them around.
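
If these knobs are wired up as ordinary sysctls, the registration might
look roughly like this (the variable names are assumptions; the two expiry
entries would follow the same pattern):

    static struct ctl_table rxrpc_sysctl_table[] = {
        {
            .procname     = "max_client_conns",
            .data         = &rxrpc_max_client_connections,
            .maxlen       = sizeof(unsigned int),
            .mode         = 0644,
            .proc_handler = proc_dointvec_minmax,
        },
        {
            .procname     = "reap_client_conns",
            .data         = &rxrpc_reap_client_connections,
            .maxlen       = sizeof(unsigned int),
            .mode         = 0644,
            .proc_handler = proc_dointvec_minmax,
        },
        { }
    };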


Note that I combined the Tx wait queue with the channel grant wait queue to
save space as only one of these should be in use at once.

Note also that, for the moment, the service connection cache still uses the
old connection management code.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-24 15:17:14 +01:00
David Howells df844fd46b rxrpc: Use a tracepoint for skb accounting debugging
Use a tracepoint to log various skb accounting points to help in debugging
refcounting errors.
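
Each accounting point then logs through a call along these lines (the
tracepoint arguments here are assumed):

    trace_rxrpc_skb(skb, rxrpc_skb_rx_seen, atomic_read(&skb->users));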

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-23 15:27:24 +01:00
David Howells dabe5a7906 rxrpc: Tidy up the rxrpc_call struct a bit
Do a little tidying of the rxrpc_call struct:

 (1) in_clientflag is no longer compared against the value that's in the
     packet, so keeping it in this form isn't necessary.  Use a flag in
     flags instead and provide a pair of wrapper functions (see the
     sketch after this list).

 (2) We don't read the epoch value, so that can go.

 (3) Move what remains of the data that were used for hashing up in the
     struct so that it sits with the channel number.

 (4) Get rid of the local pointer.  We can get at this via the socket
     struct and we only use this in the procfs viewer.
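
A hedged sketch of the flag-and-wrappers arrangement from (1) (names
assumed):

    static inline bool rxrpc_is_service_call(const struct rxrpc_call *call)
    {
        return test_bit(RXRPC_CALL_IS_SERVICE, &call->flags);
    }

    static inline bool rxrpc_is_client_call(const struct rxrpc_call *call)
    {
        return !rxrpc_is_service_call(call);
    }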

Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-23 15:27:24 +01:00
David Howells aa390bbe21 rxrpc: Kill off the rxrpc_transport struct
The rxrpc_transport struct is now redundant, given that the rxrpc_peer
struct is now per peer port rather than per peer host, so get rid of it.

Service connection lists are transferred to the rxrpc_peer struct, as is
the conn_lock.  Previous patches moved the client connection handling out
of the rxrpc_transport struct and discarded the connection bundling code.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 14:00:23 +01:00
David Howells 999b69f892 rxrpc: Kill the client connection bundle concept
Kill off the concept of maintaining a bundle of connections to a particular
target service in order to provide more than four call slots for that
service (there are four call slots per connection).

This will make cleaning up the connection handling code easier and
facilitate removal of the rxrpc_transport struct.  Bundling can be
reintroduced later if necessary.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 09:20:55 +01:00
David Howells 985a5c824a rxrpc: Make rxrpc_send_packet() take a connection not a transport
Make rxrpc_send_packet() take a connection not a transport as part of the
phasing out of the rxrpc_transport struct.

Whilst we're at it, rename the function to rxrpc_send_data_packet() to
differentiate it from the other packet sending functions.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 09:17:51 +01:00
David Howells cc8feb8edd rxrpc: Fix exclusive connection handling
"Exclusive connections" are meant to be used for a single client call and
then scrapped.  The idea is to limit the use of the negotiated security
context.  The current code, however, isn't doing this: it is instead
restricting the socket to a single virtual connection and doing all the
calls over that.

This is changed such that the socket no longer maintains a special virtual
connection over which it will do all the calls, but rather gets a new one
each time a new exclusive call is made.

Further, using a socket option for this is a poor choice.  It should be
done on sendmsg with a control message marker instead so that calls can be
marked exclusive individually.  To that end, add RXRPC_EXCLUSIVE_CALL
which, if passed to sendmsg() as a control message element, will cause the
call to be done on a single-use connection.

The socket option (RXRPC_EXCLUSIVE_CONNECTION) still exists and, if set,
will override any lack of RXRPC_EXCLUSIVE_CALL being specified so that
programs using the setsockopt() will appear to work the same.
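
A userspace sketch of marking a call exclusive via sendmsg(), assuming the
usual AF_RXRPC control-message conventions (socket setup, the data iov and
error handling are elided):

    unsigned char control[CMSG_SPACE(sizeof(unsigned long)) + CMSG_SPACE(0)];
    unsigned long call_id = 1;
    struct msghdr msg = { 0 };  /* name and iov filled in as usual */
    struct cmsghdr *cmsg;

    msg.msg_control = control;
    msg.msg_controllen = sizeof(control);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_RXRPC;
    cmsg->cmsg_type  = RXRPC_USER_CALL_ID;        /* caller's call ID */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(call_id));
    memcpy(CMSG_DATA(cmsg), &call_id, sizeof(call_id));

    cmsg = CMSG_NXTHDR(&msg, cmsg);
    cmsg->cmsg_level = SOL_RXRPC;
    cmsg->cmsg_type  = RXRPC_EXCLUSIVE_CALL;      /* takes no payload */
    cmsg->cmsg_len   = CMSG_LEN(0);

    sendmsg(fd, &msg, 0);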

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 09:10:00 +01:00
David Howells 85f32278bd rxrpc: Replace conn->trans->{local,peer} with conn->params.{local,peer}
Replace accesses of conn->trans->{local,peer} with
conn->params.{local,peer} thus making it easier for a future commit to
remove the rxrpc_transport struct.

This also reduces the number of memory accesses involved.
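
In effect, accesses change shape like this:

    /* before */
    peer = call->conn->trans->peer;

    /* after */
    peer = call->conn->params.peer;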

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 09:10:00 +01:00
David Howells 19ffa01c9c rxrpc: Use structs to hold connection params and protocol info
Define and use a structure to hold connection parameters.  This makes it
easier to pass multiple connection parameters around.

Define and use a structure to hold protocol information used to hash a
connection for lookup on an incoming packet.  Most of these fields will be
disposed of eventually, including the duplicate local pointer.

Whilst we're at it, rename "proto" to "family" when referring to a protocol
family.
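
A sketch of the two structures described above (the exact field sets are
illustrative):

    struct rxrpc_conn_parameters {
        struct rxrpc_local *local;          /* our local endpoint */
        struct rxrpc_peer  *peer;           /* the remote endpoint */
        struct key         *key;            /* security details */
        bool                exclusive;      /* single-use connection */
        u16                 service_id;
        u32                 security_level;
    };

    struct rxrpc_conn_proto {
        struct rxrpc_local *local;          /* duplicate; to be removed */
        u32                 epoch;          /* epoch of this connection */
        u32                 cid;            /* connection ID */
        u16                 in_clientflag;
        u16                 family;         /* protocol family */
    };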

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-22 09:09:59 +01:00
David Howells f66d749019 rxrpc: Use the peer record to distribute network errors
Use the peer record to distribute network errors rather than the transport
object (which I want to get rid of).  An error from a particular peer
terminates all calls on that peer.
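
A hedged sketch of the distribution step, assuming each call registers
itself on a per-peer error list (the names, including the termination
helper, are assumptions):

    /* Propagate a network error from a peer to every call using it. */
    static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error)
    {
        struct rxrpc_call *call;

        hlist_for_each_entry(call, &peer->error_targets, error_link)
            rxrpc_terminate_call(call, error);  /* hypothetical helper */
    }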

For future consideration:

 (1) For ICMP-induced errors it might be worth trying to extract the RxRPC
     header from the offending packet, if one is returned attached to the
     ICMP packet, to better direct the error.

     This may be overkill, though, since an ICMP packet would be expected
     to be relating to the destination port, machine or network.  RxRPC
     ABORT and BUSY packets give notice at RxRPC level.

 (2) To also abort connection-level communications (such as CHALLENGE
     packets) where indicated by an error - but that requires some revamping
     of the connection event handling first.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-15 10:15:16 +01:00
David Howells 8c3e34a4ff rxrpc: Rename files matching ar-*.c to get rid of the "ar-" prefix
Rename files matching net/rxrpc/ar-*.c to get rid of the "ar-" prefix.
This will aid splitting those files by making it easier to come up with new
names.

Note that not all files are simply renamed from ar-X.c to X.c.  The
following exceptions are made:

 (*) ar-call.c -> call_object.c
     ar-ack.c -> call_event.c

     call_object.c is going to contain the core of the call object
     handling.  Call event handling is all going to be in call_event.c.

 (*) ar-accept.c -> call_accept.c

     Incoming call handling is going to be here.

 (*) ar-connection.c -> conn_object.c
     ar-connevent.c -> conn_event.c

     The former file is going to have the basic connection object handling,
     but there will likely be some differentiation between client
     connections and service connections in additional files later.  The
     latter file will have all the connection-level event handling.

 (*) ar-local.c -> local_object.c

     This will have the local endpoint object handling code.  The local
     endpoint event handling code will later be split out into
     local_event.c.

 (*) ar-peer.c -> peer_object.c

     This will have the peer endpoint object handling code.  Peer event
     handling code will be placed in peer_event.c (for the moment, there is
     none).

 (*) ar-error.c -> peer_event.c

     This will become the peer event handling code, though for the moment
     it's actually driven from the local endpoint's perspective.

Note that I haven't renamed ar-transport.c to transport_object.c as the
intention is to delete it when the rxrpc_transport struct is excised.

The only file that actually has its contents changed is net/rxrpc/Makefile.

net/rxrpc/ar-internal.h will need its section marker comments updating, but
I'll do that in a separate patch to make it easier for git to follow the
history across the rename.  I may also want to rename ar-internal.h at some
point - but that would mean updating all the #includes and I'd rather do
that in a separate step.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-13 12:16:05 +01:00