/*
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system.  INET is implemented using the  BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		PF_INET protocol family socket handler.
 *
 * Authors:	Ross Biro
 *		Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
 *		Florian La Roche, <flla@stud.uni-sb.de>
 *		Alan Cox, <A.Cox@swansea.ac.uk>
 *
 * Changes (see also sock.c)
 *
 *		piggy,
 *		Karl Knutson	:	Socket protocol table
 *		A.N.Kuznetsov	:	Socket death error in accept().
 *		John Richardson :	Fix non blocking error in connect()
 *					so sockets that fail to connect
 *					don't return -EINPROGRESS.
 *		Alan Cox	:	Asynchronous I/O support
 *		Alan Cox	:	Keep correct socket pointer on sock
 *					structures
 *					when accept() ed
 *		Alan Cox	:	Semantics of SO_LINGER aren't state
 *					moved to close when you look carefully.
 *					With this fixed and the accept bug fixed
 *					some RPC stuff seems happier.
 *		Niibe Yutaka	:	4.4BSD style write async I/O
 *		Alan Cox,
 *		Tony Gale	:	Fixed reuse semantics.
 *		Alan Cox	:	bind() shouldn't abort existing but dead
 *					sockets. Stops FTP netin:.. I hope.
 *		Alan Cox	:	bind() works correctly for RAW sockets.
 *					Note that FreeBSD at least was broken
 *					in this respect so be careful with
 *					compatibility tests...
 *		Alan Cox	:	routing cache support
 *		Alan Cox	:	memzero the socket structure for
 *					compactness.
 *		Matt Day	:	nonblock connect error handler
 *		Alan Cox	:	Allow large numbers of pending sockets
 *					(eg for big web sites), but only if
 *					specifically application requested.
 *		Alan Cox	:	New buffering throughout IP. Used
 *					dumbly.
 *		Alan Cox	:	New buffering now used smartly.
 *		Alan Cox	:	BSD rather than common sense
 *					interpretation of listen.
 *		Germano Caronni	:	Assorted small races.
 *		Alan Cox	:	sendmsg/recvmsg basic support.
 *		Alan Cox	:	Only sendmsg/recvmsg now supported.
 *		Alan Cox	:	Locked down bind (see security list).
 *		Alan Cox	:	Loosened bind a little.
 *		Mike McLagan	:	ADD/DEL DLCI Ioctls
 *	Willy Konynenberg	:	Transparent proxying support.
 *		David S. Miller	:	New socket lookup architecture.
 *					Some other random speedups.
 *		Cyrus Durgin	:	Cleaned up file for kmod hacks.
 *		Andi Kleen	:	Fix inet_stream_connect TCP race.
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License
 *		as published by the Free Software Foundation; either version
 *		2 of the License, or (at your option) any later version.
 */

#define pr_fmt(fmt) "IPv4: " fmt

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/socket.h>
#include <linux/in.h>
#include <linux/kernel.h>
#include <linux/kmod.h>
#include <linux/sched.h>
#include <linux/timer.h>
#include <linux/string.h>
#include <linux/sockios.h>
#include <linux/net.h>
#include <linux/capability.h>
#include <linux/fcntl.h>
#include <linux/mm.h>
#include <linux/interrupt.h>
#include <linux/stat.h>
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/netfilter_ipv4.h>
#include <linux/random.h>
#include <linux/slab.h>

#include <linux/uaccess.h>

#include <linux/inet.h>
#include <linux/igmp.h>
#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <net/checksum.h>
#include <net/ip.h>
#include <net/protocol.h>
#include <net/arp.h>
#include <net/route.h>
#include <net/ip_fib.h>
#include <net/inet_connection_sock.h>
#include <net/tcp.h>
#include <net/udp.h>
#include <net/udplite.h>
#include <net/ping.h>
#include <linux/skbuff.h>
#include <net/sock.h>
#include <net/raw.h>
#include <net/icmp.h>
#include <net/inet_common.h>
#include <net/ip_tunnels.h>
#include <net/xfrm.h>
#include <net/net_namespace.h>
#include <net/secure_seq.h>
#ifdef CONFIG_IP_MROUTE
#include <linux/mroute.h>
#endif
#include <net/l3mdev.h>

#include <trace/events/sock.h>

/* The inetsw table contains everything that inet_create needs to
 * build a new socket.
 */
static struct list_head inetsw[SOCK_MAX];
static DEFINE_SPINLOCK(inetsw_lock);

/* New destruction routine */

void inet_sock_destruct(struct sock *sk)
{
	struct inet_sock *inet = inet_sk(sk);

	__skb_queue_purge(&sk->sk_receive_queue);
	if (sk->sk_rx_skb_cache) {
		__kfree_skb(sk->sk_rx_skb_cache);
		sk->sk_rx_skb_cache = NULL;
	}
	__skb_queue_purge(&sk->sk_error_queue);

	sk_mem_reclaim(sk);

	if (sk->sk_type == SOCK_STREAM && sk->sk_state != TCP_CLOSE) {
		pr_err("Attempt to release TCP socket in state %d %p\n",
		       sk->sk_state, sk);
		return;
	}
	if (!sock_flag(sk, SOCK_DEAD)) {
		pr_err("Attempt to release alive inet socket %p\n", sk);
		return;
	}

	WARN_ON(atomic_read(&sk->sk_rmem_alloc));
	WARN_ON(refcount_read(&sk->sk_wmem_alloc));
	WARN_ON(sk->sk_wmem_queued);
	WARN_ON(sk->sk_forward_alloc);

	kfree(rcu_dereference_protected(inet->inet_opt, 1));
	dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
	dst_release(sk->sk_rx_dst);
	sk_refcnt_debug_dec(sk);
}
EXPORT_SYMBOL(inet_sock_destruct);

/*
 *	The routines beyond this point handle the behaviour of an AF_INET
 *	socket object. Mostly it punts to the subprotocols of IP to do
 *	the work.
 */

/*
 *	Automatically bind an unbound socket.
 */

static int inet_autobind(struct sock *sk)
{
	struct inet_sock *inet;
	/* We may need to bind the socket. */
	lock_sock(sk);
	inet = inet_sk(sk);
	if (!inet->inet_num) {
		if (sk->sk_prot->get_port(sk, 0)) {
			release_sock(sk);
			return -EAGAIN;
		}
		inet->inet_sport = htons(inet->inet_num);
	}
	release_sock(sk);
	return 0;
}

/*
 *	Move a socket into listening state.
 */
int inet_listen(struct socket *sock, int backlog)
{
	struct sock *sk = sock->sk;
	unsigned char old_state;
	int err, tcp_fastopen;

	lock_sock(sk);

	err = -EINVAL;
	if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM)
		goto out;

	old_state = sk->sk_state;
	if (!((1 << old_state) & (TCPF_CLOSE | TCPF_LISTEN)))
		goto out;

	sk->sk_max_ack_backlog = backlog;
	/* Really, if the socket is already in listen state
	 * we can only allow the backlog to be adjusted.
	 */
	if (old_state != TCP_LISTEN) {
		/* Enable TFO w/o requiring TCP_FASTOPEN socket option.
		 * Note that only TCP sockets (SOCK_STREAM) will reach here.
		 * Also fastopen backlog may already been set via the option
		 * because the socket was in TCP_LISTEN state previously but
		 * was shutdown() rather than close().
		 */
		tcp_fastopen = sock_net(sk)->ipv4.sysctl_tcp_fastopen;
		if ((tcp_fastopen & TFO_SERVER_WO_SOCKOPT1) &&
		    (tcp_fastopen & TFO_SERVER_ENABLE) &&
		    !inet_csk(sk)->icsk_accept_queue.fastopenq.max_qlen) {
			fastopen_queue_tune(sk, backlog);
			tcp_fastopen_init_key_once(sock_net(sk));
		}

		err = inet_csk_listen_start(sk, backlog);
		if (err)
			goto out;
		tcp_call_bpf(sk, BPF_SOCK_OPS_TCP_LISTEN_CB, 0, NULL);
	}
	err = 0;

out:
	release_sock(sk);
	return err;
}
EXPORT_SYMBOL(inet_listen);
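
/*
 * Userspace view (illustrative sketch only): when the net.ipv4.tcp_fastopen
 * sysctl has both TFO_SERVER_ENABLE and TFO_SERVER_WO_SOCKOPT1 set, a plain
 *
 *	fd = socket(AF_INET, SOCK_STREAM, 0);
 *	bind(fd, ...);
 *	listen(fd, backlog);
 *
 * is enough to get a Fast Open server: the branch above sizes the fastopen
 * queue from the listen() backlog, so no TCP_FASTOPEN setsockopt() is needed.
 */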

/*
 *	Create an inet socket.
 */

static int inet_create(struct net *net, struct socket *sock, int protocol,
		       int kern)
{
	struct sock *sk;
	struct inet_protosw *answer;
	struct inet_sock *inet;
	struct proto *answer_prot;
	unsigned char answer_flags;
	int try_loading_module = 0;
	int err;

	if (protocol < 0 || protocol >= IPPROTO_MAX)
		return -EINVAL;

	sock->state = SS_UNCONNECTED;

	/* Look for the requested type/protocol pair. */
lookup_protocol:
	err = -ESOCKTNOSUPPORT;
	rcu_read_lock();
	list_for_each_entry_rcu(answer, &inetsw[sock->type], list) {

		err = 0;
		/* Check the non-wild match. */
		if (protocol == answer->protocol) {
			if (protocol != IPPROTO_IP)
				break;
		} else {
			/* Check for the two wild cases. */
			if (IPPROTO_IP == protocol) {
				protocol = answer->protocol;
				break;
			}
			if (IPPROTO_IP == answer->protocol)
				break;
		}
		err = -EPROTONOSUPPORT;
	}

	if (unlikely(err)) {
		if (try_loading_module < 2) {
			rcu_read_unlock();
			/*
			 * Be more specific, e.g. net-pf-2-proto-132-type-1
			 * (net-pf-PF_INET-proto-IPPROTO_SCTP-type-SOCK_STREAM)
			 */
			if (++try_loading_module == 1)
				request_module("net-pf-%d-proto-%d-type-%d",
					       PF_INET, protocol, sock->type);
			/*
			 * Fall back to generic, e.g. net-pf-2-proto-132
			 * (net-pf-PF_INET-proto-IPPROTO_SCTP)
			 */
			else
				request_module("net-pf-%d-proto-%d",
					       PF_INET, protocol);
			goto lookup_protocol;
		} else
			goto out_rcu_unlock;
	}

	err = -EPERM;
	if (sock->type == SOCK_RAW && !kern &&
	    !ns_capable(net->user_ns, CAP_NET_RAW))
		goto out_rcu_unlock;

	sock->ops = answer->ops;
	answer_prot = answer->prot;
	answer_flags = answer->flags;
	rcu_read_unlock();

	WARN_ON(!answer_prot->slab);

	err = -ENOBUFS;
	sk = sk_alloc(net, PF_INET, GFP_KERNEL, answer_prot, kern);
	if (!sk)
		goto out;

	err = 0;
	if (INET_PROTOSW_REUSE & answer_flags)
		sk->sk_reuse = SK_CAN_REUSE;

	inet = inet_sk(sk);
	inet->is_icsk = (INET_PROTOSW_ICSK & answer_flags) != 0;

	inet->nodefrag = 0;

	if (SOCK_RAW == sock->type) {
		inet->inet_num = protocol;
		if (IPPROTO_RAW == protocol)
			inet->hdrincl = 1;
	}

	if (net->ipv4.sysctl_ip_no_pmtu_disc)
		inet->pmtudisc = IP_PMTUDISC_DONT;
	else
		inet->pmtudisc = IP_PMTUDISC_WANT;

	inet->inet_id = 0;

	sock_init_data(sock, sk);

	sk->sk_destruct	   = inet_sock_destruct;
	sk->sk_protocol	   = protocol;
	sk->sk_backlog_rcv = sk->sk_prot->backlog_rcv;

	inet->uc_ttl	= -1;
	inet->mc_loop	= 1;
	inet->mc_ttl	= 1;
	inet->mc_all	= 1;
	inet->mc_index	= 0;
	inet->mc_list	= NULL;
	inet->rcv_tos	= 0;

	sk_refcnt_debug_inc(sk);

	if (inet->inet_num) {
		/* It assumes that any protocol which allows
		 * the user to assign a number at socket
		 * creation time automatically
		 * shares.
		 */
		inet->inet_sport = htons(inet->inet_num);
		/* Add to protocol hash chains. */
		err = sk->sk_prot->hash(sk);
		if (err) {
			sk_common_release(sk);
			goto out;
		}
	}

	if (sk->sk_prot->init) {
		err = sk->sk_prot->init(sk);
		if (err) {
			sk_common_release(sk);
			goto out;
		}
	}

	if (!kern) {
		err = BPF_CGROUP_RUN_PROG_INET_SOCK(sk);
		if (err) {
			sk_common_release(sk);
			goto out;
		}
	}
out:
	return err;
out_rcu_unlock:
	rcu_read_unlock();
	goto out;
}
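
/*
 * Note (sketch of the lookup above): socket(AF_INET, type, protocol) reaches
 * inet_create() with protocol possibly 0 (IPPROTO_IP). The wildcard cases in
 * the loop then resolve the protocol from the matching inetsw[] entry, which
 * is how socket(AF_INET, SOCK_STREAM, 0) ends up as TCP and
 * socket(AF_INET, SOCK_DGRAM, 0) as UDP.
 */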

/*
 *	The peer socket should always be NULL (or else). When we call this
 *	function we are destroying the object and from then on nobody
 *	should refer to it.
 */
int inet_release(struct socket *sock)
{
	struct sock *sk = sock->sk;

	if (sk) {
		long timeout;

		/* Applications forget to leave groups before exiting */
		ip_mc_drop_socket(sk);

		/* If linger is set, we don't return until the close
		 * is complete.  Otherwise we return immediately. The
		 * actually closing is done the same either way.
		 *
		 * If the close is due to the process exiting, we never
		 * linger..
		 */
		timeout = 0;
		if (sock_flag(sk, SOCK_LINGER) &&
		    !(current->flags & PF_EXITING))
			timeout = sk->sk_lingertime;
		sock->sk = NULL;
		sk->sk_prot->close(sk, timeout);
	}
	return 0;
}
EXPORT_SYMBOL(inet_release);
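
/*
 * Userspace view (sketch): with SO_LINGER enabled and a non-zero timeout,
 * close() on a TCP socket may block in the protocol close handler above for
 * up to sk_lingertime; a process that is exiting never lingers, as the
 * PF_EXITING test above shows.
 */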

int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
{
	struct sock *sk = sock->sk;
	int err;

	/* If the socket has its own bind function then use it. (RAW) */
	if (sk->sk_prot->bind) {
		return sk->sk_prot->bind(sk, uaddr, addr_len);
	}
	if (addr_len < sizeof(struct sockaddr_in))
		return -EINVAL;

	/* BPF prog is run before any checks are done so that if the prog
	 * changes context in a wrong way it will be caught.
	 */
	err = BPF_CGROUP_RUN_PROG_INET4_BIND(sk, uaddr);
	if (err)
		return err;

	return __inet_bind(sk, uaddr, addr_len, false, true);
}
EXPORT_SYMBOL(inet_bind);

int __inet_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
		bool force_bind_address_no_port, bool with_lock)
{
	struct sockaddr_in *addr = (struct sockaddr_in *)uaddr;
	struct inet_sock *inet = inet_sk(sk);
	struct net *net = sock_net(sk);
	unsigned short snum;
	int chk_addr_ret;
	u32 tb_id = RT_TABLE_LOCAL;
	int err;

	if (addr->sin_family != AF_INET) {
		/* Compatibility games : accept AF_UNSPEC (mapped to AF_INET)
		 * only if s_addr is INADDR_ANY.
		 */
		err = -EAFNOSUPPORT;
		if (addr->sin_family != AF_UNSPEC ||
		    addr->sin_addr.s_addr != htonl(INADDR_ANY))
			goto out;
	}

	tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ? : tb_id;
	chk_addr_ret = inet_addr_type_table(net, addr->sin_addr.s_addr, tb_id);

	/* Not specified by any standard per-se, however it breaks too
	 * many applications when removed. It is unfortunate since
	 * allowing applications to make a non-local bind solves
	 * several problems with systems using dynamic addressing.
	 * (ie. your servers still start up even if your ISDN link
	 *  is temporarily down)
	 */
	err = -EADDRNOTAVAIL;
	if (!inet_can_nonlocal_bind(net, inet) &&
	    addr->sin_addr.s_addr != htonl(INADDR_ANY) &&
	    chk_addr_ret != RTN_LOCAL &&
	    chk_addr_ret != RTN_MULTICAST &&
	    chk_addr_ret != RTN_BROADCAST)
		goto out;

	snum = ntohs(addr->sin_port);
	err = -EACCES;
	if (snum && snum < inet_prot_sock(net) &&
	    !ns_capable(net->user_ns, CAP_NET_BIND_SERVICE))
		goto out;

	/*	We keep a pair of addresses. rcv_saddr is the one
	 *	used by hash lookups, and saddr is used for transmit.
	 *
	 *	In the BSD API these are the same except where it
	 *	would be illegal to use them (multicast/broadcast) in
	 *	which case the sending device address is used.
	 */
	if (with_lock)
		lock_sock(sk);

	/* Check these errors (active socket, double bind). */
	err = -EINVAL;
	if (sk->sk_state != TCP_CLOSE || inet->inet_num)
		goto out_release_sock;

	inet->inet_rcv_saddr = inet->inet_saddr = addr->sin_addr.s_addr;
	if (chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST)
		inet->inet_saddr = 0;  /* Use device */

	/* Make sure we are allowed to bind here. */
	if (snum || !(inet->bind_address_no_port ||
		      force_bind_address_no_port)) {
		if (sk->sk_prot->get_port(sk, snum)) {
			inet->inet_saddr = inet->inet_rcv_saddr = 0;
			err = -EADDRINUSE;
			goto out_release_sock;
		}
		err = BPF_CGROUP_RUN_PROG_INET4_POST_BIND(sk);
		if (err) {
			inet->inet_saddr = inet->inet_rcv_saddr = 0;
			goto out_release_sock;
		}
	}

	if (inet->inet_rcv_saddr)
		sk->sk_userlocks |= SOCK_BINDADDR_LOCK;
	if (snum)
		sk->sk_userlocks |= SOCK_BINDPORT_LOCK;
	inet->inet_sport = htons(inet->inet_num);
	inet->inet_daddr = 0;
	inet->inet_dport = 0;
	sk_dst_reset(sk);
	err = 0;
out_release_sock:
	if (with_lock)
		release_sock(sk);
out:
	return err;
}
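
/*
 * Note (sketch): in the port-selection block above, binding with
 * sin_port == 0 while IP_BIND_ADDRESS_NO_PORT is set (or when the caller
 * passes force_bind_address_no_port) skips get_port() entirely, so a local
 * port is only chosen later, at connect() time.
 */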

int inet_dgram_connect(struct socket *sock, struct sockaddr *uaddr,
		       int addr_len, int flags)
{
	struct sock *sk = sock->sk;
	int err;

	if (addr_len < sizeof(uaddr->sa_family))
		return -EINVAL;
	if (uaddr->sa_family == AF_UNSPEC)
		return sk->sk_prot->disconnect(sk, flags);

	if (BPF_CGROUP_PRE_CONNECT_ENABLED(sk)) {
		err = sk->sk_prot->pre_connect(sk, uaddr, addr_len);
		if (err)
			return err;
	}

	if (!inet_sk(sk)->inet_num && inet_autobind(sk))
		return -EAGAIN;
	return sk->sk_prot->connect(sk, uaddr, addr_len);
}
EXPORT_SYMBOL(inet_dgram_connect);
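
/*
 * Userspace view (sketch): connect() on a datagram socket with an AF_UNSPEC
 * address dissolves the association via the disconnect path above, while any
 * other address autobinds the socket if needed and then sets the default
 * destination used by subsequent send() calls.
 */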

static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)
{
	DEFINE_WAIT_FUNC(wait, woken_wake_function);

	add_wait_queue(sk_sleep(sk), &wait);
	sk->sk_write_pending += writebias;

	/* Basic assumption: if someone sets sk->sk_err, he _must_
	 * change state of the socket from TCP_SYN_*.
	 * Connect() does not allow to get error notifications
	 * without closing the socket.
	 */
	while ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
		release_sock(sk);
		timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
		lock_sock(sk);
		if (signal_pending(current) || !timeo)
			break;
	}
	remove_wait_queue(sk_sleep(sk), &wait);
	sk->sk_write_pending -= writebias;
	return timeo;
}

/*
 *	Connect to a remote host. There is regrettably still a little
 *	TCP 'magic' in here.
 */
int __inet_stream_connect(struct socket *sock, struct sockaddr *uaddr,
			  int addr_len, int flags, int is_sendmsg)
{
	struct sock *sk = sock->sk;
	int err;
	long timeo;

	/*
	 * uaddr can be NULL and addr_len can be 0 if:
	 * sk is a TCP fastopen active socket and
	 * TCP_FASTOPEN_CONNECT sockopt is set and
	 * we already have a valid cookie for this socket.
	 * In this case, user can call write() after connect().
	 * write() will invoke tcp_sendmsg_fastopen() which calls
	 * __inet_stream_connect().
	 */
	if (uaddr) {
		if (addr_len < sizeof(uaddr->sa_family))
			return -EINVAL;

		if (uaddr->sa_family == AF_UNSPEC) {
			err = sk->sk_prot->disconnect(sk, flags);
			sock->state = err ? SS_DISCONNECTING : SS_UNCONNECTED;
			goto out;
		}
	}

	switch (sock->state) {
	default:
		err = -EINVAL;
		goto out;
	case SS_CONNECTED:
		err = -EISCONN;
		goto out;
	case SS_CONNECTING:
		if (inet_sk(sk)->defer_connect)
			err = is_sendmsg ? -EINPROGRESS : -EISCONN;
		else
			err = -EALREADY;
		/* Fall out of switch with err, set for this state */
		break;
	case SS_UNCONNECTED:
		err = -EISCONN;
		if (sk->sk_state != TCP_CLOSE)
			goto out;

		if (BPF_CGROUP_PRE_CONNECT_ENABLED(sk)) {
			err = sk->sk_prot->pre_connect(sk, uaddr, addr_len);
			if (err)
				goto out;
		}

		err = sk->sk_prot->connect(sk, uaddr, addr_len);
		if (err < 0)
			goto out;

		sock->state = SS_CONNECTING;

		if (!err && inet_sk(sk)->defer_connect)
			goto out;

		/* Just entered SS_CONNECTING state; the only
		 * difference is that return value in non-blocking
		 * case is EINPROGRESS, rather than EALREADY.
		 */
		err = -EINPROGRESS;
		break;
	}

	timeo = sock_sndtimeo(sk, flags & O_NONBLOCK);

	if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
		int writebias = (sk->sk_protocol == IPPROTO_TCP) &&
				tcp_sk(sk)->fastopen_req &&
				tcp_sk(sk)->fastopen_req->data ? 1 : 0;

		/* Error code is set above */
		if (!timeo || !inet_wait_for_connect(sk, timeo, writebias))
			goto out;

		err = sock_intr_errno(timeo);
		if (signal_pending(current))
			goto out;
	}

	/* Connection was closed by RST, timeout, ICMP error
	 * or another process disconnected us.
	 */
	if (sk->sk_state == TCP_CLOSE)
		goto sock_error;

	/* sk->sk_err may be not zero now, if RECVERR was ordered by user
	 * and error was received after socket entered established state.
	 * Hence, it is handled normally after connect() return successfully.
	 */

	sock->state = SS_CONNECTED;
	err = 0;
out:
	return err;

sock_error:
	err = sock_error(sk) ? : -ECONNABORTED;
	sock->state = SS_UNCONNECTED;
	if (sk->sk_prot->disconnect(sk, flags))
		sock->state = SS_DISCONNECTING;
	goto out;
}
EXPORT_SYMBOL(__inet_stream_connect);
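
/*
 * Userspace view (sketch): a non-blocking connect() on a TCP socket normally
 * returns -EINPROGRESS from the code above; a second connect() while the
 * handshake is still in flight yields -EALREADY (or -EISCONN/-EINPROGRESS
 * when TCP_FASTOPEN_CONNECT's defer_connect is in effect), and the final
 * outcome is usually collected by polling for writability and reading
 * SO_ERROR.
 */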

int inet_stream_connect(struct socket *sock, struct sockaddr *uaddr,
			int addr_len, int flags)
{
	int err;

	lock_sock(sock->sk);
	err = __inet_stream_connect(sock, uaddr, addr_len, flags, 0);
	release_sock(sock->sk);
	return err;
}
EXPORT_SYMBOL(inet_stream_connect);

/*
 *	Accept a pending connection. The TCP layer now gives BSD semantics.
 */

int inet_accept(struct socket *sock, struct socket *newsock, int flags,
		bool kern)
{
	struct sock *sk1 = sock->sk;
	int err = -EINVAL;
	struct sock *sk2 = sk1->sk_prot->accept(sk1, flags, &err, kern);

	if (!sk2)
		goto do_err;

	lock_sock(sk2);

	sock_rps_record_flow(sk2);
	WARN_ON(!((1 << sk2->sk_state) &
		  (TCPF_ESTABLISHED | TCPF_SYN_RECV |
		  TCPF_CLOSE_WAIT | TCPF_CLOSE)));

	sock_graft(sk2, newsock);

	newsock->state = SS_CONNECTED;
	err = 0;
	release_sock(sk2);
do_err:
	return err;
}
EXPORT_SYMBOL(inet_accept);

/*
 *	This does both peername and sockname.
 */
int inet_getname(struct socket *sock, struct sockaddr *uaddr,
		 int peer)
{
	struct sock *sk = sock->sk;
	struct inet_sock *inet = inet_sk(sk);
	DECLARE_SOCKADDR(struct sockaddr_in *, sin, uaddr);

	sin->sin_family = AF_INET;
	if (peer) {
		if (!inet->inet_dport ||
		    (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_SYN_SENT)) &&
		     peer == 1))
			return -ENOTCONN;
		sin->sin_port = inet->inet_dport;
		sin->sin_addr.s_addr = inet->inet_daddr;
	} else {
		__be32 addr = inet->inet_rcv_saddr;
		if (!addr)
			addr = inet->inet_saddr;
		sin->sin_port = inet->inet_sport;
		sin->sin_addr.s_addr = addr;
	}
	memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
	return sizeof(*sin);
}
EXPORT_SYMBOL(inet_getname);
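
/*
 * Illustrative mapping (assumed): getsockname() reaches inet_getname()
 * with peer == 0 and reports the local endpoint, getpeername() with
 * peer == 1 and reports the remote one, e.g.:
 *
 *	struct sockaddr_in sin;
 *	socklen_t len = sizeof(sin);
 *	getsockname(fd, (struct sockaddr *)&sin, &len);	// local side
 *	getpeername(fd, (struct sockaddr *)&sin, &len);	// remote side
 */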

int inet_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
{
	struct sock *sk = sock->sk;

	sock_rps_record_flow(sk);

	/* We may need to bind the socket. */
	if (!inet_sk(sk)->inet_num && !sk->sk_prot->no_autobind &&
	    inet_autobind(sk))
		return -EAGAIN;

	return sk->sk_prot->sendmsg(sk, msg, size);
}
EXPORT_SYMBOL(inet_sendmsg);
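
/*
 * Example of the autobind path above (a sketch, assuming an unbound UDP
 * socket): the first sendto() on a socket that was never bind()ed still
 * has inet_num == 0, so inet_autobind() picks an ephemeral local port
 * before the protocol's sendmsg runs:
 *
 *	int fd = socket(AF_INET, SOCK_DGRAM, 0);
 *	sendto(fd, buf, len, 0, (struct sockaddr *)&dst, sizeof(dst));
 */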

ssize_t inet_sendpage(struct socket *sock, struct page *page, int offset,
		      size_t size, int flags)
{
	struct sock *sk = sock->sk;

	sock_rps_record_flow(sk);

	/* We may need to bind the socket. */
	if (!inet_sk(sk)->inet_num && !sk->sk_prot->no_autobind &&
	    inet_autobind(sk))
		return -EAGAIN;

	if (sk->sk_prot->sendpage)
		return sk->sk_prot->sendpage(sk, page, offset, size, flags);
	return sock_no_sendpage(sock, page, offset, size, flags);
}
EXPORT_SYMBOL(inet_sendpage);

int inet_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
		 int flags)
{
	struct sock *sk = sock->sk;
	int addr_len = 0;
	int err;

	if (likely(!(flags & MSG_ERRQUEUE)))
		sock_rps_record_flow(sk);

	err = sk->sk_prot->recvmsg(sk, msg, size, flags & MSG_DONTWAIT,
				   flags & ~MSG_DONTWAIT, &addr_len);
	if (err >= 0)
		msg->msg_namelen = addr_len;
	return err;
}
EXPORT_SYMBOL(inet_recvmsg);

int inet_shutdown(struct socket *sock, int how)
{
	struct sock *sk = sock->sk;
	int err = 0;

	/* This should really check to make sure
	 * the socket is a TCP socket. (WHY AC...)
	 */
	how++; /* maps 0->1 has the advantage of making bit 1 rcvs and
		       1->2 bit 2 snds.
		       2->3 */
	if ((how & ~SHUTDOWN_MASK) || !how)	/* MAXINT->0 */
		return -EINVAL;

	lock_sock(sk);
	if (sock->state == SS_CONNECTING) {
		if ((1 << sk->sk_state) &
		    (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE))
			sock->state = SS_DISCONNECTING;
		else
			sock->state = SS_CONNECTED;
	}

	switch (sk->sk_state) {
	case TCP_CLOSE:
		err = -ENOTCONN;
		/* Hack to wake up other listeners, who can poll for
		   EPOLLHUP, even on eg. unconnected UDP sockets -- RR */
		/* fall through */
	default:
		sk->sk_shutdown |= how;
		if (sk->sk_prot->shutdown)
			sk->sk_prot->shutdown(sk, how);
		break;

	/* Remaining two branches are temporary solution for missing
	 * close() in multithreaded environment. It is _not_ a good idea,
	 * but we have no choice until close() is repaired at VFS level.
	 */
	case TCP_LISTEN:
		if (!(how & RCV_SHUTDOWN))
			break;
		/* fall through */
	case TCP_SYN_SENT:
		err = sk->sk_prot->disconnect(sk, O_NONBLOCK);
		sock->state = err ? SS_DISCONNECTING : SS_UNCONNECTED;
		break;
	}

	/* Wake up anyone sleeping in poll. */
	sk->sk_state_change(sk);
	release_sock(sk);
	return err;
}
EXPORT_SYMBOL(inet_shutdown);
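
/*
 * The how++ mapping above, spelled out (for illustration): user space
 * passes SHUT_RD (0), SHUT_WR (1) or SHUT_RDWR (2); adding one turns
 * these into the bit masks RCV_SHUTDOWN (1), SEND_SHUTDOWN (2) and
 * RCV_SHUTDOWN | SEND_SHUTDOWN (3), which is what sk_shutdown stores:
 *
 *	shutdown(fd, SHUT_WR);	// how becomes 2 == SEND_SHUTDOWN
 */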

/*
 *	ioctl() calls you can issue on an INET socket. Most of these are
 *	device configuration and stuff and very rarely used. Some ioctls
 *	pass on to the socket itself.
 *
 *	NOTE: I like the idea of a module for the config stuff. ie ifconfig
 *	loads the devconfigure module does its configuring and unloads it.
 *	There's a good 20K of config code hanging around the kernel.
 */

int inet_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
{
	struct sock *sk = sock->sk;
	int err = 0;
	struct net *net = sock_net(sk);
	void __user *p = (void __user *)arg;
	struct ifreq ifr;
	struct rtentry rt;

	switch (cmd) {
	case SIOCADDRT:
	case SIOCDELRT:
		if (copy_from_user(&rt, p, sizeof(struct rtentry)))
			return -EFAULT;
		err = ip_rt_ioctl(net, cmd, &rt);
		break;
	case SIOCRTMSG:
		err = -EINVAL;
		break;
	case SIOCDARP:
	case SIOCGARP:
	case SIOCSARP:
		err = arp_ioctl(net, cmd, (void __user *)arg);
		break;
	case SIOCGIFADDR:
	case SIOCGIFBRDADDR:
	case SIOCGIFNETMASK:
	case SIOCGIFDSTADDR:
	case SIOCGIFPFLAGS:
		if (copy_from_user(&ifr, p, sizeof(struct ifreq)))
			return -EFAULT;
		err = devinet_ioctl(net, cmd, &ifr);
		if (!err && copy_to_user(p, &ifr, sizeof(struct ifreq)))
			err = -EFAULT;
		break;

	case SIOCSIFADDR:
	case SIOCSIFBRDADDR:
	case SIOCSIFNETMASK:
	case SIOCSIFDSTADDR:
	case SIOCSIFPFLAGS:
	case SIOCSIFFLAGS:
		if (copy_from_user(&ifr, p, sizeof(struct ifreq)))
			return -EFAULT;
		err = devinet_ioctl(net, cmd, &ifr);
		break;
	default:
		if (sk->sk_prot->ioctl)
			err = sk->sk_prot->ioctl(sk, cmd, arg);
		else
			err = -ENOIOCTLCMD;
		break;
	}
	return err;
}
EXPORT_SYMBOL(inet_ioctl);
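
/*
 * Hypothetical user-space example of the SIOCGIFADDR branch above
 * (illustrative only): the ifreq is copied in, filled by
 * devinet_ioctl(), and copied back out:
 *
 *	struct ifreq ifr = { 0 };
 *	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
 *	ioctl(fd, SIOCGIFADDR, &ifr);
 *	// ifr.ifr_addr now holds the interface's IPv4 address
 */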

#ifdef CONFIG_COMPAT
static int inet_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
{
	struct sock *sk = sock->sk;
	int err = -ENOIOCTLCMD;

	if (sk->sk_prot->compat_ioctl)
		err = sk->sk_prot->compat_ioctl(sk, cmd, arg);

	return err;
}
#endif

const struct proto_ops inet_stream_ops = {
	.family		   = PF_INET,
	.owner		   = THIS_MODULE,
	.release	   = inet_release,
	.bind		   = inet_bind,
	.connect	   = inet_stream_connect,
	.socketpair	   = sock_no_socketpair,
	.accept		   = inet_accept,
	.getname	   = inet_getname,
	.poll		   = tcp_poll,
	.ioctl		   = inet_ioctl,
	.gettstamp	   = sock_gettstamp,
	.listen		   = inet_listen,
	.shutdown	   = inet_shutdown,
	.setsockopt	   = sock_common_setsockopt,
	.getsockopt	   = sock_common_getsockopt,
	.sendmsg	   = inet_sendmsg,
	.recvmsg	   = inet_recvmsg,
#ifdef CONFIG_MMU
	.mmap		   = tcp_mmap,
#endif
	.sendpage	   = inet_sendpage,
	.splice_read	   = tcp_splice_read,
	.read_sock	   = tcp_read_sock,
	.sendmsg_locked    = tcp_sendmsg_locked,
	.sendpage_locked   = tcp_sendpage_locked,
	.peek_len	   = tcp_peek_len,
#ifdef CONFIG_COMPAT
	.compat_setsockopt = compat_sock_common_setsockopt,
	.compat_getsockopt = compat_sock_common_getsockopt,
	.compat_ioctl	   = inet_compat_ioctl,
#endif
	.set_rcvlowat	   = tcp_set_rcvlowat,
};
EXPORT_SYMBOL(inet_stream_ops);

const struct proto_ops inet_dgram_ops = {
	.family		   = PF_INET,
	.owner		   = THIS_MODULE,
	.release	   = inet_release,
	.bind		   = inet_bind,
	.connect	   = inet_dgram_connect,
	.socketpair	   = sock_no_socketpair,
	.accept		   = sock_no_accept,
	.getname	   = inet_getname,
	.poll		   = udp_poll,
	.ioctl		   = inet_ioctl,
	.gettstamp	   = sock_gettstamp,
	.listen		   = sock_no_listen,
	.shutdown	   = inet_shutdown,
	.setsockopt	   = sock_common_setsockopt,
	.getsockopt	   = sock_common_getsockopt,
	.sendmsg	   = inet_sendmsg,
	.recvmsg	   = inet_recvmsg,
	.mmap		   = sock_no_mmap,
	.sendpage	   = inet_sendpage,
	.set_peek_off	   = sk_set_peek_off,
#ifdef CONFIG_COMPAT
	.compat_setsockopt = compat_sock_common_setsockopt,
	.compat_getsockopt = compat_sock_common_getsockopt,
	.compat_ioctl	   = inet_compat_ioctl,
#endif
};
EXPORT_SYMBOL(inet_dgram_ops);

/*
 * For SOCK_RAW sockets; should be the same as inet_dgram_ops but without
 * udp_poll
 */
static const struct proto_ops inet_sockraw_ops = {
	.family		   = PF_INET,
	.owner		   = THIS_MODULE,
	.release	   = inet_release,
	.bind		   = inet_bind,
	.connect	   = inet_dgram_connect,
	.socketpair	   = sock_no_socketpair,
	.accept		   = sock_no_accept,
	.getname	   = inet_getname,
	.poll		   = datagram_poll,
	.ioctl		   = inet_ioctl,
	.gettstamp	   = sock_gettstamp,
	.listen		   = sock_no_listen,
	.shutdown	   = inet_shutdown,
	.setsockopt	   = sock_common_setsockopt,
	.getsockopt	   = sock_common_getsockopt,
	.sendmsg	   = inet_sendmsg,
rfs: Receive Flow Steering
This patch implements receive flow steering (RFS). RFS steers
received packets for layer 3 and 4 processing to the CPU where
the application for the corresponding flow is running. RFS is an
extension of Receive Packet Steering (RPS).
The basic idea of RFS is that when an application calls recvmsg
(or sendmsg) the application's running CPU is stored in a hash
table that is indexed by the connection's rxhash which is stored in
the socket structure. The rxhash is passed in skb's received on
the connection from netif_receive_skb. For each received packet,
the associated rxhash is used to look up the CPU in the hash table,
if a valid CPU is set then the packet is steered to that CPU using
the RPS mechanisms.
The convolution of the simple approach is that it would potentially
allow OOO packets. If threads are thrashing around CPUs or multiple
threads are trying to read from the same sockets, a quickly changing
CPU value in the hash table could cause rampant OOO packets--
we consider this a non-starter.
To avoid OOO packets, this solution implements two types of hash
tables: rps_sock_flow_table and rps_dev_flow_table.
rps_sock_table is a global hash table. Each entry is just a CPU
number and it is populated in recvmsg and sendmsg as described above.
This table contains the "desired" CPUs for flows.
rps_dev_flow_table is specific to each device queue. Each entry
contains a CPU and a tail queue counter. The CPU is the "current"
CPU for a matching flow. The tail queue counter holds the value
of a tail queue counter for the associated CPU's backlog queue at
the time of last enqueue for a flow matching the entry.
Each backlog queue has a queue head counter which is incremented
on dequeue, and so a queue tail counter is computed as queue head
count + queue length. When a packet is enqueued on a backlog queue,
the current value of the queue tail counter is saved in the hash
entry of the rps_dev_flow_table.
And now the trick: when selecting the CPU for RPS (get_rps_cpu)
the rps_sock_flow table and the rps_dev_flow table for the RX queue
are consulted. When the desired CPU for the flow (found in the
rps_sock_flow table) does not match the current CPU (found in the
rps_dev_flow table), the current CPU is changed to the desired CPU
if one of the following is true:
- The current CPU is unset (equal to RPS_NO_CPU)
- Current CPU is offline
- The current CPU's queue head counter >= queue tail counter in the
rps_dev_flow table. This checks if the queue tail has advanced
beyond the last packet that was enqueued using this table entry.
This guarantees that all packets queued using this entry have been
dequeued, thus preserving in order delivery.
Making each queue have its own rps_dev_flow table has two advantages:
1) the tail queue counters will be written on each receive, so
keeping the table local to interrupting CPU s good for locality. 2)
this allows lockless access to the table-- the CPU number and queue
tail counter need to be accessed together under mutual exclusion
from netif_receive_skb, we assume that this is only called from
device napi_poll which is non-reentrant.
This patch implements RFS for TCP and connected UDP sockets.
It should be usable for other flow oriented protocols.
There are two configuration parameters for RFS. The
"rps_flow_entries" kernel init parameter sets the number of
entries in the rps_sock_flow_table, the per rxqueue sysfs entry
"rps_flow_cnt" contains the number of entries in the rps_dev_flow
table for the rxqueue. Both are rounded to power of two.
The obvious benefit of RFS (over just RPS) is that it achieves
CPU locality between the receive processing for a flow and the
applications processing; this can result in increased performance
(higher pps, lower latency).
The benefits of RFS are dependent on cache hierarchy, application
load, and other factors. On simple benchmarks, we don't necessarily
see improvement and sometimes see degradation. However, for more
complex benchmarks and for applications where cache pressure is
much higher this technique seems to perform very well.
Below are some benchmark results which show the potential benefit of
this patch. The netperf test has 500 instances of netperf TCP_RR
test with 1 byte req. and resp. The RPC test is a request/response
test similar in structure to the netperf RR test, with 100 threads on
each host, but does more work in userspace than netperf.
e1000e on 8 core Intel
No RFS or RPS:             104K tps at 30% CPU
No RFS (best RPS config):  290K tps at 63% CPU
RFS:                       303K tps at 61% CPU
RPC test      tps    CPU%   50/90/99% usec latency   Latency StdDev
No RFS/RPS    103K   48%    757/900/3185             4472.35
RPS only:     174K   73%    415/993/2468             491.66
RFS           223K   73%    379/651/1382             315.61
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
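To illustrate the switching rule above, here is a minimal stand-alone sketch
(not the kernel implementation; pick_rps_cpu() and cpu_online_stub() are
invented names, and the counters are plain unsigned ints) of the decision
made before moving a flow to its desired CPU:

#include <stdbool.h>
#include <stdio.h>

#define RPS_NO_CPU 0xffff

/* Illustrative stand-in for "is this CPU online?". */
static bool cpu_online_stub(unsigned int cpu)
{
	return cpu != RPS_NO_CPU;
}

/*
 * desired_cpu comes from the rps_sock_flow table (where the application
 * last ran); current_cpu and last_qtail come from the per-queue
 * rps_dev_flow table; head_cnt is the current head counter of
 * current_cpu's backlog queue.  Switch only when it cannot reorder packets.
 */
static unsigned int pick_rps_cpu(unsigned int desired_cpu,
				 unsigned int current_cpu,
				 unsigned int last_qtail,
				 unsigned int head_cnt)
{
	if (desired_cpu == current_cpu)
		return current_cpu;

	if (current_cpu == RPS_NO_CPU ||		/* current CPU unset */
	    !cpu_online_stub(current_cpu) ||		/* current CPU offline */
	    (int)(head_cnt - last_qtail) >= 0)		/* old backlog drained */
		return desired_cpu;

	return current_cpu;	/* packets still queued: keep ordering */
}

int main(void)
{
	/* Backlog drained (head 10 >= recorded tail 7): switch to CPU 2. */
	printf("%u\n", pick_rps_cpu(2, 5, 7, 10));
	/* Packets still queued (head 5 < recorded tail 7): stay on CPU 5. */
	printf("%u\n", pick_rps_cpu(2, 5, 7, 5));
	return 0;
}

The signed subtraction mirrors the wrap-safe comparison of the backlog queue
head and tail counters described above.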
2010-04-17 07:01:27 +08:00
|
|
|
.recvmsg = inet_recvmsg,
|
2006-03-21 14:48:35 +08:00
|
|
|
.mmap = sock_no_mmap,
|
|
|
|
.sendpage = inet_sendpage,
|
2006-03-21 14:45:21 +08:00
|
|
|
#ifdef CONFIG_COMPAT
|
2006-03-21 14:48:35 +08:00
|
|
|
.compat_setsockopt = compat_sock_common_setsockopt,
|
|
|
|
.compat_getsockopt = compat_sock_common_getsockopt,
|
2011-01-30 00:15:56 +08:00
|
|
|
.compat_ioctl = inet_compat_ioctl,
|
2006-03-21 14:45:21 +08:00
|
|
|
#endif
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
2009-10-05 13:58:39 +08:00
|
|
|
static const struct net_proto_family inet_family_ops = {
|
2005-04-17 06:20:36 +08:00
|
|
|
.family = PF_INET,
|
|
|
|
.create = inet_create,
|
|
|
|
.owner = THIS_MODULE,
|
|
|
|
};
|
|
|
|
|
|
|
|
/* Upon startup we insert all the elements in inetsw_array[] into
|
|
|
|
* the linked list inetsw.
|
|
|
|
*/
|
|
|
|
static struct inet_protosw inetsw_array[] =
|
|
|
|
{
|
2007-02-09 22:24:47 +08:00
|
|
|
{
|
|
|
|
.type = SOCK_STREAM,
|
|
|
|
.protocol = IPPROTO_TCP,
|
|
|
|
.prot = &tcp_prot,
|
|
|
|
.ops = &inet_stream_ops,
|
|
|
|
.flags = INET_PROTOSW_PERMANENT |
|
2005-12-14 15:26:10 +08:00
|
|
|
INET_PROTOSW_ICSK,
|
2007-02-09 22:24:47 +08:00
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
.type = SOCK_DGRAM,
|
|
|
|
.protocol = IPPROTO_UDP,
|
|
|
|
.prot = &udp_prot,
|
|
|
|
.ops = &inet_dgram_ops,
|
|
|
|
.flags = INET_PROTOSW_PERMANENT,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
2007-02-09 22:24:47 +08:00
|
|
|
|
net: ipv4: add IPPROTO_ICMP socket kind
This patch adds IPPROTO_ICMP socket kind. It makes it possible to send
ICMP_ECHO messages and receive the corresponding ICMP_ECHOREPLY messages
without any special privileges. In other words, the patch makes it
possible to implement setuid-less and CAP_NET_RAW-less /bin/ping. In
order not to increase the kernel's attack surface, the new functionality
is disabled by default, but is enabled at bootup by supporting Linux
distributions, optionally with restriction to a group or a group range
(see below).
Similar functionality is implemented in Mac OS X:
http://www.manpagez.com/man/4/icmp/
A new ping socket is created with
socket(PF_INET, SOCK_DGRAM, IPPROTO_ICMP)
Message identifiers (octets 4-5 of ICMP header) are interpreted as local
ports. Addresses are stored in struct sockaddr_in. No port numbers are
reserved for privileged processes, port 0 is reserved for API ("let the
kernel pick a free number"). There is no notion of remote ports, remote
port numbers provided by the user (e.g. in connect()) are ignored.
Data sent and received include ICMP headers. This is deliberate to:
1) Avoid the need to transport header values like sequence numbers by
other means.
2) Make it easier to port existing programs using raw sockets.
ICMP headers given to send() are checked and sanitized. The type must be
ICMP_ECHO and the code must be zero (future extensions might relax this,
see below). The id is set to the number (local port) of the socket, the
checksum is always recomputed.
ICMP reply packets received from the network are demultiplexed according
to their id's, and are returned by recv() without any modifications.
IP header information and ICMP errors of those packets may be obtained
via ancillary data (IP_RECVTTL, IP_RETOPTS, and IP_RECVERR). ICMP source
quenches and redirects are reported as fake errors via the error queue
(IP_RECVERR); the next hop address for redirects is saved to ee_info (in
network order).
socket(2) is restricted to the group range specified in
"/proc/sys/net/ipv4/ping_group_range". It is "1 0" by default, meaning
that nobody (not even root) may create ping sockets. Setting it to "100
100" would grant permissions to the single group (to either make
/sbin/ping g+s and owned by this group or to grant permissions to the
"netadmins" group), "0 4294967295" would enable it for the world, "100
4294967295" would enable it for the users, but not daemons.
The existing code might be (in the unlikely case anyone needs it)
extended rather easily to handle other similar pairs of ICMP messages
(Timestamp/Reply, Information Request/Reply, Address Mask Request/Reply
etc.).
Userspace ping util & patch for it:
http://openwall.info/wiki/people/segoon/ping
For Openwall GNU/*/Linux it was the last step on the road to the
setuid-less distro. A revision of this patch (for RHEL5/OpenVZ kernels)
is in use in Owl-current, such as in the 2011/03/12 LiveCD ISOs:
http://mirrors.kernel.org/openwall/Owl/current/iso/
Initially this functionality was written by Pavel Kankovsky for
Linux 2.4.32, but unfortunately it was never made public.
All ping options (-b, -p, -Q, -R, -s, -t, -T, -M, -I), are tested with
the patch.
PATCH v3:
- switched to flowi4.
- minor changes to be consistent with raw sockets code.
PATCH v2:
- changed ping_debug() to pr_debug().
- removed CONFIG_IP_PING.
- removed ping_seq_fops.owner field (unused for procfs).
- switched to proc_net_fops_create().
- switched to %pK in seq_printf().
PATCH v1:
- fixed checksumming bug.
- CAP_NET_RAW may not create icmp sockets anymore.
RFC v2:
- minor cleanups.
- introduced sysctl'able group range to restrict socket(2).
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
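A minimal user-space sketch of the API described above (assuming a kernel
with this support and a ping_group_range that includes the caller's group;
the loopback destination and the absence of timeouts are purely
illustrative):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip_icmp.h>
#include <arpa/inet.h>

int main(void)
{
	struct sockaddr_in dst = { .sin_family = AF_INET };
	struct icmphdr icmp = { .type = ICMP_ECHO };	/* code 0; id/checksum left to the kernel */
	char buf[192];
	int fd;

	inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

	/* Unprivileged "ping" socket: no CAP_NET_RAW, no setuid. */
	fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
	if (fd < 0) {
		perror("socket");	/* e.g. EACCES if ping_group_range excludes us */
		return 1;
	}

	icmp.un.echo.sequence = htons(1);
	if (sendto(fd, &icmp, sizeof(icmp), 0,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0) {
		perror("sendto");
		return 1;
	}

	/* The reply is returned with its ICMP header still attached. */
	if (recv(fd, buf, sizeof(buf), 0) < 0) {
		perror("recv");
		return 1;
	}
	printf("got ICMP_ECHOREPLY\n");
	close(fd);
	return 0;
}

As noted above, the kernel overwrites the ICMP id with the socket's local
port and always recomputes the checksum, so both can be left zero when
sending.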
2011-05-13 18:01:00 +08:00
|
|
|
{
|
|
|
|
.type = SOCK_DGRAM,
|
|
|
|
.protocol = IPPROTO_ICMP,
|
|
|
|
.prot = &ping_prot,
|
2017-06-04 00:29:25 +08:00
|
|
|
.ops = &inet_sockraw_ops,
|
2011-05-13 18:01:00 +08:00
|
|
|
.flags = INET_PROTOSW_REUSE,
|
|
|
|
},
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
{
|
2007-02-09 22:24:47 +08:00
|
|
|
.type = SOCK_RAW,
|
|
|
|
.protocol = IPPROTO_IP, /* wild card */
|
|
|
|
.prot = &raw_prot,
|
|
|
|
.ops = &inet_sockraw_ops,
|
|
|
|
.flags = INET_PROTOSW_REUSE,
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
};
|
|
|
|
|
2007-09-17 07:39:25 +08:00
|
|
|
#define INETSW_ARRAY_LEN ARRAY_SIZE(inetsw_array)
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
void inet_register_protosw(struct inet_protosw *p)
|
|
|
|
{
|
|
|
|
struct list_head *lh;
|
|
|
|
struct inet_protosw *answer;
|
|
|
|
int protocol = p->protocol;
|
|
|
|
struct list_head *last_perm;
|
|
|
|
|
|
|
|
spin_lock_bh(&inetsw_lock);
|
|
|
|
|
|
|
|
if (p->type >= SOCK_MAX)
|
|
|
|
goto out_illegal;
|
|
|
|
|
|
|
|
/* If we are trying to override a permanent protocol, bail. */
|
|
|
|
last_perm = &inetsw[p->type];
|
|
|
|
list_for_each(lh, &inetsw[p->type]) {
|
|
|
|
answer = list_entry(lh, struct inet_protosw, list);
|
|
|
|
/* Check only the non-wild match. */
|
2015-09-18 12:00:05 +08:00
|
|
|
if ((INET_PROTOSW_PERMANENT & answer->flags) == 0)
|
|
|
|
break;
|
|
|
|
if (protocol == answer->protocol)
|
|
|
|
goto out_permanent;
|
|
|
|
last_perm = lh;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Add the new entry after the last permanent entry if any, so that
|
|
|
|
* the new entry does not override a permanent entry when matched with
|
|
|
|
* a wild-card protocol. But it is allowed to override any existing
|
2007-02-09 22:24:47 +08:00
|
|
|
* non-permanent entry. This means that when we remove this entry, the
|
2005-04-17 06:20:36 +08:00
|
|
|
* system automatically returns to the old behavior.
|
|
|
|
*/
|
|
|
|
list_add_rcu(&p->list, last_perm);
|
|
|
|
out:
|
|
|
|
spin_unlock_bh(&inetsw_lock);
|
|
|
|
|
|
|
|
return;
|
|
|
|
|
|
|
|
out_permanent:
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_err("Attempt to override permanent protocol %d\n", protocol);
|
2005-04-17 06:20:36 +08:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
out_illegal:
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_err("Ignoring attempt to register invalid socket type %d\n",
|
2005-04-17 06:20:36 +08:00
|
|
|
p->type);
|
|
|
|
goto out;
|
|
|
|
}
|
2009-08-29 14:45:21 +08:00
|
|
|
EXPORT_SYMBOL(inet_register_protosw);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
void inet_unregister_protosw(struct inet_protosw *p)
|
|
|
|
{
|
|
|
|
if (INET_PROTOSW_PERMANENT & p->flags) {
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_err("Attempt to unregister permanent protocol %d\n",
|
2005-04-17 06:20:36 +08:00
|
|
|
p->protocol);
|
|
|
|
} else {
|
|
|
|
spin_lock_bh(&inetsw_lock);
|
|
|
|
list_del_rcu(&p->list);
|
|
|
|
spin_unlock_bh(&inetsw_lock);
|
|
|
|
|
|
|
|
synchronize_net();
|
|
|
|
}
|
|
|
|
}
|
2009-08-29 14:45:21 +08:00
|
|
|
EXPORT_SYMBOL(inet_unregister_protosw);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2005-08-10 10:50:02 +08:00
|
|
|
static int inet_sk_reselect_saddr(struct sock *sk)
|
|
|
|
{
|
|
|
|
struct inet_sock *inet = inet_sk(sk);
|
2009-10-15 14:30:45 +08:00
|
|
|
__be32 old_saddr = inet->inet_saddr;
|
|
|
|
__be32 daddr = inet->inet_daddr;
|
2011-05-07 07:18:04 +08:00
|
|
|
struct flowi4 *fl4;
|
2011-03-03 06:31:35 +08:00
|
|
|
struct rtable *rt;
|
|
|
|
__be32 new_saddr;
|
2011-04-21 17:45:37 +08:00
|
|
|
struct ip_options_rcu *inet_opt;
|
2005-08-10 10:50:02 +08:00
|
|
|
|
2011-04-21 17:45:37 +08:00
|
|
|
inet_opt = rcu_dereference_protected(inet->inet_opt,
|
2016-04-05 23:10:15 +08:00
|
|
|
lockdep_sock_is_held(sk));
|
2011-04-21 17:45:37 +08:00
|
|
|
if (inet_opt && inet_opt->opt.srr)
|
|
|
|
daddr = inet_opt->opt.faddr;
|
2005-08-10 10:50:02 +08:00
|
|
|
|
|
|
|
/* Query new route. */
|
2011-05-07 07:18:04 +08:00
|
|
|
fl4 = &inet->cork.fl.u.ip4;
|
|
|
|
rt = ip_route_connect(fl4, daddr, 0, RT_CONN_FLAGS(sk),
|
2011-03-03 06:31:35 +08:00
|
|
|
sk->sk_bound_dev_if, sk->sk_protocol,
|
2013-08-28 14:04:14 +08:00
|
|
|
inet->inet_sport, inet->inet_dport, sk);
|
2011-03-03 06:31:35 +08:00
|
|
|
if (IS_ERR(rt))
|
|
|
|
return PTR_ERR(rt);
|
2005-08-10 10:50:02 +08:00
|
|
|
|
2010-06-11 14:31:35 +08:00
|
|
|
sk_setup_caps(sk, &rt->dst);
|
2005-08-10 10:50:02 +08:00
|
|
|
|
2011-05-07 07:18:04 +08:00
|
|
|
new_saddr = fl4->saddr;
|
2005-08-10 10:50:02 +08:00
|
|
|
|
|
|
|
if (new_saddr == old_saddr)
|
|
|
|
return 0;
|
|
|
|
|
2016-02-15 18:11:29 +08:00
|
|
|
if (sock_net(sk)->ipv4.sysctl_ip_dynaddr > 1) {
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_info("%s(): shifting inet->saddr from %pI4 to %pI4\n",
|
|
|
|
__func__, &old_saddr, &new_saddr);
|
2005-08-10 10:50:02 +08:00
|
|
|
}
|
|
|
|
|
2009-10-15 14:30:45 +08:00
|
|
|
inet->inet_saddr = inet->inet_rcv_saddr = new_saddr;
|
2005-08-10 10:50:02 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* XXX The only one ugly spot where we need to
|
|
|
|
* XXX really change the sockets identity after
|
|
|
|
* XXX it has entered the hashes. -DaveM
|
|
|
|
*
|
|
|
|
* Besides that, it does not check for connection
|
|
|
|
* uniqueness. Wait for troubles.
|
|
|
|
*/
|
2016-02-11 00:50:35 +08:00
|
|
|
return __sk_prot_rehash(sk);
|
2005-08-10 10:50:02 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
int inet_sk_rebuild_header(struct sock *sk)
|
|
|
|
{
|
|
|
|
struct inet_sock *inet = inet_sk(sk);
|
|
|
|
struct rtable *rt = (struct rtable *)__sk_dst_check(sk, 0);
|
2006-09-28 09:28:07 +08:00
|
|
|
__be32 daddr;
|
2011-04-21 17:45:37 +08:00
|
|
|
struct ip_options_rcu *inet_opt;
|
2011-05-07 07:18:04 +08:00
|
|
|
struct flowi4 *fl4;
|
2005-08-10 10:50:02 +08:00
|
|
|
int err;
|
|
|
|
|
|
|
|
/* Route is OK, nothing to do. */
|
|
|
|
if (rt)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
/* Reroute. */
|
2011-04-21 17:45:37 +08:00
|
|
|
rcu_read_lock();
|
|
|
|
inet_opt = rcu_dereference(inet->inet_opt);
|
2009-10-15 14:30:45 +08:00
|
|
|
daddr = inet->inet_daddr;
|
2011-04-21 17:45:37 +08:00
|
|
|
if (inet_opt && inet_opt->opt.srr)
|
|
|
|
daddr = inet_opt->opt.faddr;
|
|
|
|
rcu_read_unlock();
|
2011-05-07 07:18:04 +08:00
|
|
|
fl4 = &inet->cork.fl.u.ip4;
|
|
|
|
rt = ip_route_output_ports(sock_net(sk), fl4, sk, daddr, inet->inet_saddr,
|
2011-03-12 13:00:52 +08:00
|
|
|
inet->inet_dport, inet->inet_sport,
|
|
|
|
sk->sk_protocol, RT_CONN_FLAGS(sk),
|
|
|
|
sk->sk_bound_dev_if);
|
2011-03-03 06:31:35 +08:00
|
|
|
if (!IS_ERR(rt)) {
|
|
|
|
err = 0;
|
2010-06-11 14:31:35 +08:00
|
|
|
sk_setup_caps(sk, &rt->dst);
|
2011-03-03 06:31:35 +08:00
|
|
|
} else {
|
|
|
|
err = PTR_ERR(rt);
|
|
|
|
|
2005-08-10 10:50:02 +08:00
|
|
|
/* Routing failed... */
|
|
|
|
sk->sk_route_caps = 0;
|
|
|
|
/*
|
|
|
|
* Other protocols have to map its equivalent state to TCP_SYN_SENT.
|
|
|
|
* DCCP maps its DCCP_REQUESTING state to TCP_SYN_SENT. -acme
|
|
|
|
*/
|
2016-02-15 18:11:29 +08:00
|
|
|
if (!sock_net(sk)->ipv4.sysctl_ip_dynaddr ||
|
2005-08-10 10:50:02 +08:00
|
|
|
sk->sk_state != TCP_SYN_SENT ||
|
|
|
|
(sk->sk_userlocks & SOCK_BINDADDR_LOCK) ||
|
|
|
|
(err = inet_sk_reselect_saddr(sk)) != 0)
|
|
|
|
sk->sk_err_soft = -err;
|
|
|
|
}
|
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(inet_sk_rebuild_header);
|
|
|
|
|
2017-12-20 11:12:51 +08:00
|
|
|
void inet_sk_set_state(struct sock *sk, int state)
|
|
|
|
{
|
|
|
|
trace_inet_sock_set_state(sk, sk->sk_state, state);
|
|
|
|
sk->sk_state = state;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(inet_sk_set_state);
|
|
|
|
|
|
|
|
void inet_sk_state_store(struct sock *sk, int newstate)
|
|
|
|
{
|
|
|
|
trace_inet_sock_set_state(sk, sk->sk_state, newstate);
|
|
|
|
smp_store_release(&sk->sk_state, newstate);
|
|
|
|
}
|
|
|
|
|
2016-05-19 00:06:23 +08:00
|
|
|
struct sk_buff *inet_gso_segment(struct sk_buff *skb,
|
|
|
|
netdev_features_t features)
|
2006-06-22 18:02:40 +08:00
|
|
|
{
|
net: accept UFO datagrams from tuntap and packet
Tuntap and similar devices can inject GSO packets. Accept type
VIRTIO_NET_HDR_GSO_UDP, even though not generating UFO natively.
Processes are expected to use feature negotiation such as TUNSETOFFLOAD
to detect supported offload types and refrain from injecting other
packets. This process breaks down with live migration: guest kernels
do not renegotiate flags, so destination hosts need to expose all
features that the source host does.
Partially revert the UFO removal from 182e0b6b5846~1..d9d30adf5677.
This patch introduces nearly(*) no new code to simplify verification.
It brings back verbatim tuntap UFO negotiation, VIRTIO_NET_HDR_GSO_UDP
insertion and software UFO segmentation.
It does not reinstate protocol stack support, hardware offload
(NETIF_F_UFO), SKB_GSO_UDP tunneling in SKB_GSO_SOFTWARE or reception
of VIRTIO_NET_HDR_GSO_UDP packets in tuntap.
To support SKB_GSO_UDP reappearing in the stack, also reinstate
logic in act_csum and openvswitch. Achieve equivalence with v4.13 HEAD
by squashing in commit 939912216fa8 ("net: skb_needs_check() removes
CHECKSUM_UNNECESSARY check for tx.") and reverting commit 8d63bee643f1
("net: avoid skb_warn_bad_offload false positives on UFO").
(*) To avoid having to bring back skb_shinfo(skb)->ip6_frag_id,
ipv6_proxy_select_ident is changed to return a __be32 and this is
assigned directly to the frag_hdr. Also, SKB_GSO_UDP is inserted
at the end of the enum to minimize code churn.
Tested
Booted a v4.13 guest kernel with QEMU. On a host kernel before this
patch `ethtool -k eth0` shows UFO disabled. After the patch, it is
enabled, same as on a v4.13 host kernel.
A UFO packet sent from the guest appears on the tap device:
host:
nc -l -p -u 8000 &
tcpdump -n -i tap0
guest:
dd if=/dev/zero of=payload.txt bs=1 count=2000
nc -u 192.16.1.1 8000 < payload.txt
Direct tap to tap transmission of VIRTIO_NET_HDR_GSO_UDP succeeds,
packets arriving fragmented:
./with_tap_pair.sh ./tap_send_ufo tap0 tap1
(from https://github.com/wdebruij/kerneltools/tree/master/tests)
Changes
v1 -> v2
- simplified set_offload change (review comment)
- documented test procedure
Link: http://lkml.kernel.org/r/<CAF=yD-LuUeDuL9YWPJD9ykOZ0QCjNeznPDr6whqZ9NGMNF12Mw@mail.gmail.com>
Fixes: fb652fdfe837 ("macvlan/macvtap: Remove NETIF_F_UFO advertisement.")
Reported-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-11-21 23:22:25 +08:00
|
|
|
bool udpfrag = false, fixedid = false, gso_partial, encap;
|
2006-06-22 18:02:40 +08:00
|
|
|
struct sk_buff *segs = ERR_PTR(-EINVAL);
|
2012-11-15 16:49:14 +08:00
|
|
|
const struct net_offload *ops;
|
2017-11-21 23:22:25 +08:00
|
|
|
unsigned int offset = 0;
|
2012-06-20 09:56:21 +08:00
|
|
|
struct iphdr *iph;
|
2016-04-11 09:45:03 +08:00
|
|
|
int proto, tot_len;
|
2013-10-20 02:42:56 +08:00
|
|
|
int nhoff;
|
2006-06-22 18:02:40 +08:00
|
|
|
int ihl;
|
|
|
|
int id;
|
|
|
|
|
2013-10-20 02:42:56 +08:00
|
|
|
skb_reset_network_header(skb);
|
|
|
|
nhoff = skb_network_header(skb) - skb_mac_header(skb);
|
2006-07-04 10:38:35 +08:00
|
|
|
if (unlikely(!pskb_may_pull(skb, sizeof(*iph))))
|
2006-06-22 18:02:40 +08:00
|
|
|
goto out;
|
|
|
|
|
2007-04-21 13:47:35 +08:00
|
|
|
iph = ip_hdr(skb);
|
2006-06-22 18:02:40 +08:00
|
|
|
ihl = iph->ihl * 4;
|
|
|
|
if (ihl < sizeof(*iph))
|
|
|
|
goto out;
|
|
|
|
|
2013-10-19 04:13:27 +08:00
|
|
|
id = ntohs(iph->id);
|
|
|
|
proto = iph->protocol;
|
|
|
|
|
|
|
|
/* Warning: after this point, iph might be no longer valid */
|
2006-07-04 10:38:35 +08:00
|
|
|
if (unlikely(!pskb_may_pull(skb, ihl)))
|
2006-06-22 18:02:40 +08:00
|
|
|
goto out;
|
2013-10-19 04:13:27 +08:00
|
|
|
__skb_pull(skb, ihl);
|
2006-06-22 18:02:40 +08:00
|
|
|
|
2013-10-28 09:18:16 +08:00
|
|
|
encap = SKB_GSO_CB(skb)->encap_level > 0;
|
|
|
|
if (encap)
|
2014-10-20 19:49:16 +08:00
|
|
|
features &= skb->dev->hw_enc_features;
|
2013-10-20 02:42:56 +08:00
|
|
|
SKB_GSO_CB(skb)->encap_level += ihl;
|
2013-03-07 21:21:51 +08:00
|
|
|
|
2007-03-14 00:06:52 +08:00
|
|
|
skb_reset_transport_header(skb);
|
2013-10-19 04:13:27 +08:00
|
|
|
|
2006-06-22 18:02:40 +08:00
|
|
|
segs = ERR_PTR(-EPROTONOSUPPORT);
|
|
|
|
|
2016-04-11 09:44:51 +08:00
|
|
|
if (!skb->encapsulation || encap) {
|
2017-11-21 23:22:25 +08:00
|
|
|
udpfrag = !!(skb_shinfo(skb)->gso_type & SKB_GSO_UDP);
|
2016-04-11 09:44:51 +08:00
|
|
|
fixedid = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TCP_FIXEDID);
|
|
|
|
|
|
|
|
/* fixed ID is invalid if DF bit is not set */
|
2016-11-28 23:36:58 +08:00
|
|
|
if (fixedid && !(ip_hdr(skb)->frag_off & htons(IP_DF)))
|
2016-04-11 09:44:51 +08:00
|
|
|
goto out;
|
|
|
|
}
|
2013-11-08 10:32:06 +08:00
|
|
|
|
2012-11-15 16:49:14 +08:00
|
|
|
ops = rcu_dereference(inet_offloads[proto]);
|
2012-11-15 16:49:23 +08:00
|
|
|
if (likely(ops && ops->callbacks.gso_segment))
|
|
|
|
segs = ops->callbacks.gso_segment(skb, features);
|
2006-06-22 18:02:40 +08:00
|
|
|
|
2013-01-22 14:32:49 +08:00
|
|
|
if (IS_ERR_OR_NULL(segs))
|
2006-06-22 18:02:40 +08:00
|
|
|
goto out;
|
|
|
|
|
2016-09-19 18:58:47 +08:00
|
|
|
gso_partial = !!(skb_shinfo(segs)->gso_type & SKB_GSO_PARTIAL);
|
|
|
|
|
2006-06-22 18:02:40 +08:00
|
|
|
skb = segs;
|
|
|
|
do {
|
2013-10-20 02:42:56 +08:00
|
|
|
iph = (struct iphdr *)(skb_mac_header(skb) + nhoff);
|
2017-11-21 23:22:25 +08:00
|
|
|
if (udpfrag) {
|
|
|
|
iph->frag_off = htons(offset >> 3);
|
|
|
|
if (skb->next)
|
|
|
|
iph->frag_off |= htons(IP_MF);
|
|
|
|
offset += skb->len - nhoff - ihl;
|
|
|
|
tot_len = skb->len - nhoff;
|
|
|
|
} else if (skb_is_gso(skb)) {
|
2016-04-11 09:45:03 +08:00
|
|
|
if (!fixedid) {
|
|
|
|
iph->id = htons(id);
|
|
|
|
id += skb_shinfo(skb)->gso_segs;
|
|
|
|
}
|
2016-09-19 18:58:47 +08:00
|
|
|
|
|
|
|
if (gso_partial)
|
|
|
|
tot_len = skb_shinfo(skb)->gso_size +
|
|
|
|
SKB_GSO_CB(skb)->data_offset +
|
|
|
|
skb->head - (unsigned char *)iph;
|
|
|
|
else
|
|
|
|
tot_len = skb->len - nhoff;
|
2016-04-11 09:45:03 +08:00
|
|
|
} else {
|
|
|
|
if (!fixedid)
|
|
|
|
iph->id = htons(id++);
|
|
|
|
tot_len = skb->len - nhoff;
|
2013-02-22 15:30:30 +08:00
|
|
|
}
|
2016-04-11 09:45:03 +08:00
|
|
|
iph->tot_len = htons(tot_len);
|
2013-10-19 04:13:27 +08:00
|
|
|
ip_send_check(iph);
|
2013-10-28 09:18:16 +08:00
|
|
|
if (encap)
|
2013-10-20 02:42:56 +08:00
|
|
|
skb_reset_inner_headers(skb);
|
|
|
|
skb->network_header = (u8 *)iph - skb->head;
|
2018-09-13 22:43:07 +08:00
|
|
|
skb_reset_mac_len(skb);
|
2006-06-22 18:02:40 +08:00
|
|
|
} while ((skb = skb->next));
|
|
|
|
|
|
|
|
out:
|
|
|
|
return segs;
|
|
|
|
}
|
2016-05-19 00:06:23 +08:00
|
|
|
EXPORT_SYMBOL(inet_gso_segment);
|
2006-06-22 18:02:40 +08:00
|
|
|
|
2019-02-20 23:52:12 +08:00
|
|
|
static struct sk_buff *ipip_gso_segment(struct sk_buff *skb,
|
|
|
|
netdev_features_t features)
|
|
|
|
{
|
|
|
|
if (!(skb_shinfo(skb)->gso_type & SKB_GSO_IPXIP4))
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
|
|
|
|
return inet_gso_segment(skb, features);
|
|
|
|
}
|
|
|
|
|
2018-12-14 18:51:59 +08:00
|
|
|
INDIRECT_CALLABLE_DECLARE(struct sk_buff *tcp4_gro_receive(struct list_head *,
|
|
|
|
struct sk_buff *));
|
|
|
|
INDIRECT_CALLABLE_DECLARE(struct sk_buff *udp4_gro_receive(struct list_head *,
|
|
|
|
struct sk_buff *));
|
2018-06-24 13:13:49 +08:00
|
|
|
struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb)
|
2008-12-16 15:41:09 +08:00
|
|
|
{
|
2012-11-15 16:49:14 +08:00
|
|
|
const struct net_offload *ops;
|
2018-06-24 13:13:49 +08:00
|
|
|
struct sk_buff *pp = NULL;
|
2011-04-22 12:53:02 +08:00
|
|
|
const struct iphdr *iph;
|
2018-06-24 13:13:49 +08:00
|
|
|
struct sk_buff *p;
|
2009-05-27 02:50:28 +08:00
|
|
|
unsigned int hlen;
|
|
|
|
unsigned int off;
|
2009-05-27 02:50:29 +08:00
|
|
|
unsigned int id;
|
2008-12-16 15:41:09 +08:00
|
|
|
int flush = 1;
|
|
|
|
int proto;
|
|
|
|
|
2009-05-27 02:50:28 +08:00
|
|
|
off = skb_gro_offset(skb);
|
|
|
|
hlen = off + sizeof(*iph);
|
|
|
|
iph = skb_gro_header_fast(skb, off);
|
|
|
|
if (skb_gro_header_hard(skb, hlen)) {
|
|
|
|
iph = skb_gro_header_slow(skb, hlen, off);
|
|
|
|
if (unlikely(!iph))
|
|
|
|
goto out;
|
|
|
|
}
|
2008-12-16 15:41:09 +08:00
|
|
|
|
2012-06-20 09:56:21 +08:00
|
|
|
proto = iph->protocol;
|
2008-12-16 15:41:09 +08:00
|
|
|
|
|
|
|
rcu_read_lock();
|
2012-11-15 16:49:14 +08:00
|
|
|
ops = rcu_dereference(inet_offloads[proto]);
|
2012-11-15 16:49:23 +08:00
|
|
|
if (!ops || !ops->callbacks.gro_receive)
|
2008-12-16 15:41:09 +08:00
|
|
|
goto out_unlock;
|
|
|
|
|
2009-02-09 02:00:39 +08:00
|
|
|
if (*(u8 *)iph != 0x45)
|
2008-12-16 15:41:09 +08:00
|
|
|
goto out_unlock;
|
|
|
|
|
2017-04-28 16:54:32 +08:00
|
|
|
if (ip_is_fragment(iph))
|
|
|
|
goto out_unlock;
|
|
|
|
|
net: tcp: GRO should be ECN friendly
While doing TCP ECN tests, I discovered GRO was reordering packets if it
receives one packet with CE set while previous packets in the same NAPI
run have ECT(0) for the same flow:
09:25:25.857620 IP (tos 0x2,ECT(0), ttl 64, id 27893, offset 0, flags
[DF], proto TCP (6), length 4396)
172.30.42.19.54550 > 172.30.42.13.44139: Flags [.], seq
233801:238145, ack 1, win 115, options [nop,nop,TS val 3397779 ecr
1990627], length 4344
09:25:25.857626 IP (tos 0x3,CE, ttl 64, id 27892, offset 0, flags [DF],
proto TCP (6), length 1500)
172.30.42.19.54550 > 172.30.42.13.44139: Flags [.], seq
232353:233801, ack 1, win 115, options [nop,nop,TS val 3397779 ecr
1990627], length 1448
09:25:25.857638 IP (tos 0x0, ttl 64, id 34581, offset 0, flags [DF],
proto TCP (6), length 64)
172.30.42.13.44139 > 172.30.42.19.54550: Flags [.], cksum 0xac8f
(incorrect -> 0xca69), ack 232353, win 1271, options [nop,nop,TS val
1990627 ecr 3397779,nop,nop,sack 1 {233801:238145}], length 0
We have two problems here :
1) GRO reorders packets
If the NIC gave packet1, then packet2, which happen to be from "different
flows", GRO feeds the stack packet2, then packet1. I have yet to
understand how to solve this problem.
2) GRO is not ECN friendly
Delivering packets out of order makes the TCP stack not as fast as it
could be.
In this patch I suggest we make the tos test not part of the 'same_flow'
determination, but part of the 'should flush' logic
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-08-06 06:34:50 +08:00
|
|
|
if (unlikely(ip_fast_csum((u8 *)iph, 5)))
|
2008-12-16 15:41:09 +08:00
|
|
|
goto out_unlock;
|
|
|
|
|
2010-04-21 10:06:52 +08:00
|
|
|
id = ntohl(*(__be32 *)&iph->id);
|
2013-05-31 19:18:10 +08:00
|
|
|
flush = (u16)((ntohl(*(__be32 *)iph) ^ skb_gro_len(skb)) | (id & ~IP_DF));
|
2009-05-27 02:50:29 +08:00
|
|
|
id >>= 16;
|
2008-12-16 15:41:09 +08:00
|
|
|
|
2018-06-24 13:13:49 +08:00
|
|
|
list_for_each_entry(p, head, list) {
|
2008-12-16 15:41:09 +08:00
|
|
|
struct iphdr *iph2;
|
2016-04-11 09:44:57 +08:00
|
|
|
u16 flush_id;
|
2008-12-16 15:41:09 +08:00
|
|
|
|
|
|
|
if (!NAPI_GRO_CB(p)->same_flow)
|
|
|
|
continue;
|
|
|
|
|
net-gro: Prepare GRO stack for the upcoming tunneling support
This patch modifies the GRO stack to avoid the use of "network_header"
and associated macros like ip_hdr() and ipv6_hdr() in order to allow
an arbitrary number of IP hdrs (v4 or v6) to be used in the
encapsulation chain. This lays the foundation for various IP
tunneling support (IP-in-IP, GRE, VXLAN, SIT,...) to be added later.
With this patch, GRO stack traversal is now mostly based on
skb_gro_offset rather than special hdr offsets saved in skb (e.g.,
skb->network_header). As a result all but the top layer (i.e.,
the transport layer) must have hdrs of the same length in order for
a pkt to be considered for aggregation. Therefore when adding a new
encap layer (e.g., for tunneling), one must check and skip flows
(e.g., by setting NAPI_GRO_CB(p)->same_flow to 0) that have a
different hdr length.
Note that unlike the network header, the transport header can and
will continue to be set by the GRO code since there will be at
most one "transport layer" in the encap chain.
Signed-off-by: H.K. Jerry Chu <hkchu@google.com>
Suggested-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-12-12 12:53:45 +08:00
|
|
|
iph2 = (struct iphdr *)(p->data + off);
|
|
|
|
/* The above works because, with the exception of the top
|
|
|
|
* (inner most) layer, we only aggregate pkts with the same
|
|
|
|
* hdr length so all the hdrs we'll need to verify will start
|
|
|
|
* at the same offset.
|
|
|
|
*/
|
2009-02-09 02:00:39 +08:00
|
|
|
if ((iph->protocol ^ iph2->protocol) |
|
2010-04-21 10:06:52 +08:00
|
|
|
((__force u32)iph->saddr ^ (__force u32)iph2->saddr) |
|
|
|
|
((__force u32)iph->daddr ^ (__force u32)iph2->daddr)) {
|
2008-12-16 15:41:09 +08:00
|
|
|
NAPI_GRO_CB(p)->same_flow = 0;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* All fields must match except length and checksum. */
|
|
|
|
NAPI_GRO_CB(p)->flush |=
|
2009-02-09 02:00:39 +08:00
|
|
|
(iph->ttl ^ iph2->ttl) |
|
2012-08-06 06:34:50 +08:00
|
|
|
(iph->tos ^ iph2->tos) |
|
net-gre-gro: Add GRE support to the GRO stack
This patch built on top of Commit 299603e8370a93dd5d8e8d800f0dff1ce2c53d36
("net-gro: Prepare GRO stack for the upcoming tunneling support") to add
the support of the standard GRE (RFC1701/RFC2784/RFC2890) to the GRO
stack. It also serves as an example for supporting other encapsulation
protocols in the GRO stack in the future.
The patch supports version 0 and all the flags (key, csum, seq#) but
will flush any pkt with the S (seq#) flag. This is because the S flag
is not supported by GSO, and a GRO pkt may end up in the forwarding path,
thus requiring GSO support to break it up correctly.
Currently the "packet_offload" structure only contains L3 (ETH_P_IP/
ETH_P_IPV6) GRO offload support so the encapped pkts are limited to
IP pkts (i.e., w/o L2 hdr). But support for other protocol types can
be easily added, as can support for GRE variations like NVGRE.
The patch also support csum offload. Specifically if the csum flag is on
and the h/w is capable of checksumming the payload (CHECKSUM_COMPLETE),
the code will take advantage of the csum computed by the h/w when
validating the GRE csum.
Note that commit 60769a5dcd8755715c7143b4571d5c44f01796f1 "ipv4: gre:
add GRO capability" already introduces GRO capability to IPv4 GRE
tunnels, using the gro_cells infrastructure. But GRO is done after
GRE hdr has been removed (i.e., decapped). The following patch applies
GRO when pkts first come in (before hitting the GRE tunnel code). There
is some performance advantage for applying GRO as early as possible.
Also this approach is transparent to other subsystem like Open vSwitch
where GRE decap is handled outside of the IP stack hence making it
harder for the gro_cells stuff to apply. On the other hand, some NICs
are still not capable of hashing on the inner hdr of a GRE pkt (RSS).
In that case the GRO processing of pkts from the same remote host will
all happen on the same CPU and the performance may be suboptimal.
I'm including some rough preliminary performance numbers below. Note
that the performance will be highly dependent on traffic load and mix,
as usual. Moreover it also depends on NIC offload features, hence the
following is by no means a comprehensive study. Local testing and tuning
will be needed to decide the best setting.
All tests spawned 50 copies of netperf TCP_STREAM and ran for 30 secs.
(super_netperf 50 -H 192.168.1.18 -l 30)
An IP GRE tunnel with only the key flag on (e.g., ip tunnel add gre1
mode gre local 10.246.17.18 remote 10.246.17.17 ttl 255 key 123)
is configured.
The GRO support for pkts AFTER decap are controlled through the device
feature of the GRE device (e.g., ethtool -K gre1 gro on/off).
1.1 ethtool -K gre1 gro off; ethtool -K eth0 gro off
thruput: 9.16Gbps
CPU utilization: 19%
1.2 ethtool -K gre1 gro on; ethtool -K eth0 gro off
thruput: 5.9Gbps
CPU utilization: 15%
1.3 ethtool -K gre1 gro off; ethtool -K eth0 gro on
thruput: 9.26Gbps
CPU utilization: 12-13%
1.4 ethtool -K gre1 gro on; ethtool -K eth0 gro on
thruput: 9.26Gbps
CPU utilization: 10%
The following tests were performed on a different NIC that is capable of
csum offload. I.e., the h/w is capable of computing IP payload csum
(CHECKSUM_COMPLETE).
2.1 ethtool -K gre1 gro on (hence will use gro_cells)
2.1.1 ethtool -K eth0 gro off; csum offload disabled
thruput: 8.53Gbps
CPU utilization: 9%
2.1.2 ethtool -K eth0 gro off; csum offload enabled
thruput: 8.97Gbps
CPU utilization: 7-8%
2.1.3 ethtool -K eth0 gro on; csum offload disabled
thruput: 8.83Gbps
CPU utilization: 5-6%
2.1.4 ethtool -K eth0 gro on; csum offload enabled
thruput: 8.98Gbps
CPU utilization: 5%
2.2 ethtool -K gre1 gro off
2.2.1 ethtool -K eth0 gro off; csum offload disabled
thruput: 5.93Gbps
CPU utilization: 9%
2.2.2 ethtool -K eth0 gro off; csum offload enabled
thruput: 5.62Gbps
CPU utilization: 8%
2.2.3 ethtool -K eth0 gro on; csum offload disabled
thruput: 7.69Gbps
CPU utilization: 8%
2.2.4 ethtool -K eth0 gro on; csum offload enabled
thruput: 8.96Gbps
CPU utilization: 5-6%
Signed-off-by: H.K. Jerry Chu <hkchu@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-08 02:23:19 +08:00
|
|
|
((iph->frag_off ^ iph2->frag_off) & htons(IP_DF));
|
2008-12-16 15:41:09 +08:00
|
|
|
|
|
|
|
NAPI_GRO_CB(p)->flush |= flush;
|
2016-04-11 09:44:57 +08:00
|
|
|
|
|
|
|
/* We need to store the IP ID check to be included later
|
|
|
|
* when we can verify that this packet does in fact belong
|
|
|
|
* to a given flow.
|
|
|
|
*/
|
|
|
|
flush_id = (u16)(id - ntohs(iph2->id));
|
|
|
|
|
|
|
|
/* This bit of code makes it much easier for us to identify
|
|
|
|
* the cases where we are doing atomic vs non-atomic IP ID
|
|
|
|
* checks. Specifically an atomic check can return IP ID
|
|
|
|
* values 0 - 0xFFFF, while a non-atomic check can only
|
|
|
|
* return 0 or 0xFFFF.
|
|
|
|
*/
|
|
|
|
if (!NAPI_GRO_CB(p)->is_atomic ||
|
|
|
|
!(iph->frag_off & htons(IP_DF))) {
|
|
|
|
flush_id ^= NAPI_GRO_CB(p)->count;
|
|
|
|
flush_id = flush_id ? 0xFFFF : 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If the previous IP ID value was based on an atomic
|
|
|
|
* datagram we can overwrite the value and ignore it.
|
|
|
|
*/
|
|
|
|
if (NAPI_GRO_CB(skb)->is_atomic)
|
|
|
|
NAPI_GRO_CB(p)->flush_id = flush_id;
|
|
|
|
else
|
|
|
|
NAPI_GRO_CB(p)->flush_id |= flush_id;
|
2008-12-16 15:41:09 +08:00
|
|
|
}
|
|
|
|
|
2016-04-11 09:44:57 +08:00
|
|
|
NAPI_GRO_CB(skb)->is_atomic = !!(iph->frag_off & htons(IP_DF));
|
2008-12-16 15:41:09 +08:00
|
|
|
NAPI_GRO_CB(skb)->flush |= flush;
|
2013-12-12 12:53:45 +08:00
|
|
|
skb_set_network_header(skb, off);
|
|
|
|
/* The above will be needed by the transport layer if there is one
|
|
|
|
* immediately following this IP hdr.
|
|
|
|
*/
|
|
|
|
|
2014-10-01 13:12:05 +08:00
|
|
|
/* Note : No need to call skb_gro_postpull_rcsum() here,
|
|
|
|
* as we already checked checksum over ipv4 header was 0
|
|
|
|
*/
|
2009-01-29 22:19:50 +08:00
|
|
|
skb_gro_pull(skb, sizeof(*iph));
|
|
|
|
skb_set_transport_header(skb, skb_gro_offset(skb));
|
2008-12-16 15:41:09 +08:00
|
|
|
|
2018-12-14 18:51:59 +08:00
|
|
|
pp = indirect_call_gro_receive(tcp4_gro_receive, udp4_gro_receive,
|
|
|
|
ops->callbacks.gro_receive, head, skb);
|
2008-12-16 15:41:09 +08:00
|
|
|
|
|
|
|
out_unlock:
|
|
|
|
rcu_read_unlock();
|
|
|
|
|
|
|
|
out:
|
2017-02-15 16:39:39 +08:00
|
|
|
skb_gro_flush_final(skb, pp, flush);
|
2008-12-16 15:41:09 +08:00
|
|
|
|
|
|
|
return pp;
|
|
|
|
}
|
2016-05-19 00:06:23 +08:00
|
|
|
EXPORT_SYMBOL(inet_gro_receive);
|
2008-12-16 15:41:09 +08:00
|
|
|
|
2018-06-24 13:13:49 +08:00
|
|
|
static struct sk_buff *ipip_gro_receive(struct list_head *head,
|
|
|
|
struct sk_buff *skb)
|
2016-03-20 00:32:01 +08:00
|
|
|
{
|
|
|
|
if (NAPI_GRO_CB(skb)->encap_mark) {
|
|
|
|
NAPI_GRO_CB(skb)->flush = 1;
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
NAPI_GRO_CB(skb)->encap_mark = 1;
|
|
|
|
|
|
|
|
return inet_gro_receive(head, skb);
|
|
|
|
}
|
|
|
|
|
2016-02-27 16:32:15 +08:00
|
|
|
#define SECONDS_PER_DAY 86400
|
|
|
|
|
|
|
|
/* inet_current_timestamp - Return IP network timestamp
|
|
|
|
*
|
|
|
|
* Return milliseconds since midnight in network byte order.
|
|
|
|
*/
|
|
|
|
__be32 inet_current_timestamp(void)
|
|
|
|
{
|
|
|
|
u32 secs;
|
|
|
|
u32 msecs;
|
|
|
|
struct timespec64 ts;
|
|
|
|
|
|
|
|
ktime_get_real_ts64(&ts);
|
|
|
|
|
|
|
|
/* Get secs since midnight. */
|
|
|
|
(void)div_u64_rem(ts.tv_sec, SECONDS_PER_DAY, &secs);
|
|
|
|
/* Convert to msecs. */
|
|
|
|
msecs = secs * MSEC_PER_SEC;
|
|
|
|
/* Convert nsec to msec. */
|
|
|
|
msecs += (u32)ts.tv_nsec / NSEC_PER_MSEC;
|
|
|
|
|
|
|
|
/* Convert to network byte order. */
|
2016-03-22 09:21:26 +08:00
|
|
|
return htonl(msecs);
|
2016-02-27 16:32:15 +08:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(inet_current_timestamp);
|
|
|
|
|
2014-11-27 03:53:02 +08:00
|
|
|
int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
|
|
|
|
{
|
|
|
|
if (sk->sk_family == AF_INET)
|
|
|
|
return ip_recv_error(sk, msg, len, addr_len);
|
|
|
|
#if IS_ENABLED(CONFIG_IPV6)
|
|
|
|
if (sk->sk_family == AF_INET6)
|
|
|
|
return pingv6_ops.ipv6_recv_error(sk, msg, len, addr_len);
|
|
|
|
#endif
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2018-12-14 18:51:59 +08:00
|
|
|
INDIRECT_CALLABLE_DECLARE(int tcp4_gro_complete(struct sk_buff *, int));
|
|
|
|
INDIRECT_CALLABLE_DECLARE(int udp4_gro_complete(struct sk_buff *, int));
|
2016-05-19 00:06:23 +08:00
|
|
|
int inet_gro_complete(struct sk_buff *skb, int nhoff)
|
2008-12-16 15:41:09 +08:00
|
|
|
{
|
2013-12-12 12:53:45 +08:00
|
|
|
__be16 newlen = htons(skb->len - nhoff);
|
|
|
|
struct iphdr *iph = (struct iphdr *)(skb->data + nhoff);
|
2012-11-15 16:49:14 +08:00
|
|
|
const struct net_offload *ops;
|
2012-06-20 09:56:21 +08:00
|
|
|
int proto = iph->protocol;
|
2008-12-16 15:41:09 +08:00
|
|
|
int err = -ENOSYS;
|
|
|
|
|
2017-03-08 01:33:31 +08:00
|
|
|
if (skb->encapsulation) {
|
|
|
|
skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IP));
|
2014-07-15 06:54:46 +08:00
|
|
|
skb_set_inner_network_header(skb, nhoff);
|
2017-03-08 01:33:31 +08:00
|
|
|
}
|
2014-07-15 06:54:46 +08:00
|
|
|
|
2008-12-16 15:41:09 +08:00
|
|
|
csum_replace2(&iph->check, iph->tot_len, newlen);
|
|
|
|
iph->tot_len = newlen;
|
|
|
|
|
|
|
|
rcu_read_lock();
|
2012-11-15 16:49:14 +08:00
|
|
|
ops = rcu_dereference(inet_offloads[proto]);
|
2012-11-15 16:49:23 +08:00
|
|
|
if (WARN_ON(!ops || !ops->callbacks.gro_complete))
|
2008-12-16 15:41:09 +08:00
|
|
|
goto out_unlock;
|
|
|
|
|
2013-12-12 12:53:45 +08:00
|
|
|
/* Only need to add sizeof(*iph) to get to the next hdr below
|
|
|
|
* because any hdr with options will have been flushed in
|
|
|
|
* inet_gro_receive().
|
|
|
|
*/
|
2018-12-14 18:51:59 +08:00
|
|
|
err = INDIRECT_CALL_2(ops->callbacks.gro_complete,
|
|
|
|
tcp4_gro_complete, udp4_gro_complete,
|
|
|
|
skb, nhoff + sizeof(*iph));
|
2008-12-16 15:41:09 +08:00
|
|
|
|
|
|
|
out_unlock:
|
|
|
|
rcu_read_unlock();
|
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
2016-05-19 00:06:23 +08:00
|
|
|
EXPORT_SYMBOL(inet_gro_complete);
|
2008-12-16 15:41:09 +08:00
|
|
|
|
2016-03-20 00:32:00 +08:00
|
|
|
static int ipip_gro_complete(struct sk_buff *skb, int nhoff)
|
|
|
|
{
|
|
|
|
skb->encapsulation = 1;
|
2016-05-19 00:06:10 +08:00
|
|
|
skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP4;
|
2016-03-20 00:32:00 +08:00
|
|
|
return inet_gro_complete(skb, nhoff);
|
|
|
|
}
|
|
|
|
|
2008-04-04 05:27:58 +08:00
|
|
|
int inet_ctl_sock_create(struct sock **sk, unsigned short family,
|
2008-04-04 05:28:30 +08:00
|
|
|
unsigned short type, unsigned char protocol,
|
|
|
|
struct net *net)
|
2008-04-04 05:22:32 +08:00
|
|
|
{
|
2008-04-04 05:27:58 +08:00
|
|
|
struct socket *sock;
|
2015-05-09 10:10:31 +08:00
|
|
|
int rc = sock_create_kern(net, family, type, protocol, &sock);
|
2008-04-04 05:22:32 +08:00
|
|
|
|
|
|
|
if (rc == 0) {
|
2008-04-04 05:27:58 +08:00
|
|
|
*sk = sock->sk;
|
|
|
|
(*sk)->sk_allocation = GFP_ATOMIC;
|
2008-04-04 05:22:32 +08:00
|
|
|
/*
|
|
|
|
* Unhash it so that IP input processing does not even see it,
|
|
|
|
* we do not wish this socket to see incoming packets.
|
|
|
|
*/
|
2008-04-04 05:27:58 +08:00
|
|
|
(*sk)->sk_prot->unhash(*sk);
|
2008-04-04 05:22:32 +08:00
|
|
|
}
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(inet_ctl_sock_create);
|
|
|
|
|
2015-08-30 13:59:41 +08:00
|
|
|
u64 snmp_get_cpu_field(void __percpu *mib, int cpu, int offt)
|
|
|
|
{
|
|
|
|
return *(((unsigned long *)per_cpu_ptr(mib, cpu)) + offt);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(snmp_get_cpu_field);
|
|
|
|
|
2014-05-06 06:55:55 +08:00
|
|
|
unsigned long snmp_fold_field(void __percpu *mib, int offt)
|
2007-04-25 12:53:35 +08:00
|
|
|
{
|
|
|
|
unsigned long res = 0;
|
2014-05-06 06:55:55 +08:00
|
|
|
int i;
|
2007-04-25 12:53:35 +08:00
|
|
|
|
2014-05-06 06:55:55 +08:00
|
|
|
for_each_possible_cpu(i)
|
2015-08-30 13:59:41 +08:00
|
|
|
res += snmp_get_cpu_field(mib, i, offt);
|
2007-04-25 12:53:35 +08:00
|
|
|
return res;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(snmp_fold_field);
|
|
|
|
|
2010-07-01 04:31:19 +08:00
|
|
|
#if BITS_PER_LONG==32
|
|
|
|
|
2015-08-31 13:40:44 +08:00
|
|
|
u64 snmp_get_cpu_field64(void __percpu *mib, int cpu, int offt,
|
2015-08-30 13:59:41 +08:00
|
|
|
size_t syncp_offset)
|
|
|
|
{
|
|
|
|
void *bhptr;
|
|
|
|
struct u64_stats_sync *syncp;
|
|
|
|
u64 v;
|
|
|
|
unsigned int start;
|
|
|
|
|
|
|
|
bhptr = per_cpu_ptr(mib, cpu);
|
|
|
|
syncp = (struct u64_stats_sync *)(bhptr + syncp_offset);
|
|
|
|
do {
|
|
|
|
start = u64_stats_fetch_begin_irq(syncp);
|
|
|
|
v = *(((u64 *)bhptr) + offt);
|
|
|
|
} while (u64_stats_fetch_retry_irq(syncp, start));
|
|
|
|
|
|
|
|
return v;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(snmp_get_cpu_field64);
|
|
|
|
|
2014-05-06 06:55:55 +08:00
|
|
|
u64 snmp_fold_field64(void __percpu *mib, int offt, size_t syncp_offset)
|
2010-07-01 04:31:19 +08:00
|
|
|
{
|
|
|
|
u64 res = 0;
|
|
|
|
int cpu;
|
|
|
|
|
|
|
|
for_each_possible_cpu(cpu) {
|
2015-08-31 20:46:07 +08:00
|
|
|
res += snmp_get_cpu_field64(mib, cpu, offt, syncp_offset);
|
2010-07-01 04:31:19 +08:00
|
|
|
}
|
|
|
|
return res;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(snmp_fold_field64);
|
|
|
|
#endif
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
#ifdef CONFIG_IP_MULTICAST
|
2009-09-14 20:21:47 +08:00
|
|
|
static const struct net_protocol igmp_protocol = {
|
2005-04-17 06:20:36 +08:00
|
|
|
.handler = igmp_rcv,
|
2008-12-26 08:42:23 +08:00
|
|
|
.netns_ok = 1,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
#endif
|
|
|
|
|
2017-08-29 06:14:20 +08:00
|
|
|
/* thinking of making this const? Don't.
|
|
|
|
* early_demux can change based on sysctl.
|
|
|
|
*/
|
2017-08-29 04:23:09 +08:00
|
|
|
static struct net_protocol tcp_protocol = {
|
2012-06-20 12:22:05 +08:00
|
|
|
.early_demux = tcp_v4_early_demux,
|
2017-03-24 03:34:16 +08:00
|
|
|
.early_demux_handler = tcp_v4_early_demux,
|
2012-06-20 12:22:05 +08:00
|
|
|
.handler = tcp_v4_rcv,
|
|
|
|
.err_handler = tcp_v4_err,
|
|
|
|
.no_policy = 1,
|
|
|
|
.netns_ok = 1,
|
2014-01-09 17:01:17 +08:00
|
|
|
.icmp_strict_tag_validation = 1,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
2017-08-29 06:14:20 +08:00
|
|
|
/* thinking of making this const? Don't.
|
|
|
|
* early_demux can change based on sysctl.
|
|
|
|
*/
|
2017-08-29 04:23:09 +08:00
|
|
|
static struct net_protocol udp_protocol = {
|
2013-10-08 00:01:39 +08:00
|
|
|
.early_demux = udp_v4_early_demux,
|
2017-03-24 03:34:16 +08:00
|
|
|
.early_demux_handler = udp_v4_early_demux,
|
2005-04-17 06:20:36 +08:00
|
|
|
.handler = udp_rcv,
|
|
|
|
.err_handler = udp_err,
|
|
|
|
.no_policy = 1,
|
2008-03-25 06:34:06 +08:00
|
|
|
.netns_ok = 1,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
2009-09-14 20:21:47 +08:00
|
|
|
static const struct net_protocol icmp_protocol = {
|
2005-04-17 06:20:36 +08:00
|
|
|
.handler = icmp_rcv,
|
2013-02-22 06:18:44 +08:00
|
|
|
.err_handler = icmp_err,
|
2007-12-13 02:44:43 +08:00
|
|
|
.no_policy = 1,
|
2008-03-25 06:34:06 +08:00
|
|
|
.netns_ok = 1,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
2008-07-18 19:01:44 +08:00
|
|
|
static __net_init int ipv4_mib_init_net(struct net *net)
|
|
|
|
{
|
2013-10-08 06:51:58 +08:00
|
|
|
int i;
|
|
|
|
|
2014-05-06 06:55:55 +08:00
|
|
|
net->mib.tcp_statistics = alloc_percpu(struct tcp_mib);
|
|
|
|
if (!net->mib.tcp_statistics)
|
2008-07-18 19:02:08 +08:00
|
|
|
goto err_tcp_mib;
|
2014-05-06 06:55:55 +08:00
|
|
|
net->mib.ip_statistics = alloc_percpu(struct ipstats_mib);
|
|
|
|
if (!net->mib.ip_statistics)
|
2008-07-18 19:02:42 +08:00
|
|
|
goto err_ip_mib;
|
2013-10-08 06:51:58 +08:00
|
|
|
|
|
|
|
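/* Each per-cpu u64_stats_sync must be initialized explicitly; on 32-bit it
 * is a real seqcount protecting the 64-bit counters.
 */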
for_each_possible_cpu(i) {
|
|
|
|
struct ipstats_mib *af_inet_stats;
|
2014-05-06 06:55:55 +08:00
|
|
|
af_inet_stats = per_cpu_ptr(net->mib.ip_statistics, i);
|
2013-10-08 06:51:58 +08:00
|
|
|
u64_stats_init(&af_inet_stats->syncp);
|
|
|
|
}
|
|
|
|
|
2014-05-06 06:55:55 +08:00
|
|
|
net->mib.net_statistics = alloc_percpu(struct linux_mib);
|
|
|
|
if (!net->mib.net_statistics)
|
2008-07-18 19:03:08 +08:00
|
|
|
goto err_net_mib;
|
2014-05-06 06:55:55 +08:00
|
|
|
net->mib.udp_statistics = alloc_percpu(struct udp_mib);
|
|
|
|
if (!net->mib.udp_statistics)
|
2008-07-18 19:03:27 +08:00
|
|
|
goto err_udp_mib;
|
2014-05-06 06:55:55 +08:00
|
|
|
net->mib.udplite_statistics = alloc_percpu(struct udp_mib);
|
|
|
|
if (!net->mib.udplite_statistics)
|
2008-07-18 19:03:45 +08:00
|
|
|
goto err_udplite_mib;
|
2014-05-06 06:55:55 +08:00
|
|
|
net->mib.icmp_statistics = alloc_percpu(struct icmp_mib);
|
|
|
|
if (!net->mib.icmp_statistics)
|
2008-07-18 19:04:02 +08:00
|
|
|
goto err_icmp_mib;
|
2011-11-08 21:04:43 +08:00
|
|
|
net->mib.icmpmsg_statistics = kzalloc(sizeof(struct icmpmsg_mib),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!net->mib.icmpmsg_statistics)
|
2008-07-18 19:04:22 +08:00
|
|
|
goto err_icmpmsg_mib;
|
2008-07-18 19:02:08 +08:00
|
|
|
|
|
|
|
tcp_mib_init(net);
|
2008-07-18 19:01:44 +08:00
|
|
|
return 0;
|
2008-07-18 19:02:08 +08:00
|
|
|
|
2008-07-18 19:04:22 +08:00
|
|
|
err_icmpmsg_mib:
|
2014-05-06 06:55:55 +08:00
|
|
|
free_percpu(net->mib.icmp_statistics);
|
2008-07-18 19:04:02 +08:00
|
|
|
err_icmp_mib:
|
2014-05-06 06:55:55 +08:00
|
|
|
free_percpu(net->mib.udplite_statistics);
|
2008-07-18 19:03:45 +08:00
|
|
|
err_udplite_mib:
|
2014-05-06 06:55:55 +08:00
|
|
|
free_percpu(net->mib.udp_statistics);
|
2008-07-18 19:03:27 +08:00
|
|
|
err_udp_mib:
|
2014-05-06 06:55:55 +08:00
|
|
|
free_percpu(net->mib.net_statistics);
|
2008-07-18 19:03:08 +08:00
|
|
|
err_net_mib:
|
2014-05-06 06:55:55 +08:00
|
|
|
free_percpu(net->mib.ip_statistics);
|
2008-07-18 19:02:42 +08:00
|
|
|
err_ip_mib:
|
2014-05-06 06:55:55 +08:00
|
|
|
free_percpu(net->mib.tcp_statistics);
|
2008-07-18 19:02:08 +08:00
|
|
|
err_tcp_mib:
|
|
|
|
return -ENOMEM;
|
2008-07-18 19:01:44 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static __net_exit void ipv4_mib_exit_net(struct net *net)
|
|
|
|
{
|
2011-11-08 21:04:43 +08:00
|
|
|
kfree(net->mib.icmpmsg_statistics);
|
2014-05-06 06:55:55 +08:00
|
|
|
free_percpu(net->mib.icmp_statistics);
|
|
|
|
free_percpu(net->mib.udplite_statistics);
|
|
|
|
free_percpu(net->mib.udp_statistics);
|
|
|
|
free_percpu(net->mib.net_statistics);
|
|
|
|
free_percpu(net->mib.ip_statistics);
|
|
|
|
free_percpu(net->mib.tcp_statistics);
|
2008-07-18 19:01:44 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static __net_initdata struct pernet_operations ipv4_mib_ops = {
|
|
|
|
.init = ipv4_mib_init_net,
|
|
|
|
.exit = ipv4_mib_exit_net,
|
|
|
|
};
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
static int __init init_ipv4_mibs(void)
|
|
|
|
{
|
2008-07-18 19:04:51 +08:00
|
|
|
return register_pernet_subsys(&ipv4_mib_ops);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2014-05-07 02:02:49 +08:00
|
|
|
static __net_init int inet_init_net(struct net *net)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Set defaults for local port range
|
|
|
|
*/
|
|
|
|
seqlock_init(&net->ipv4.ip_local_ports.lock);
|
|
|
|
net->ipv4.ip_local_ports.range[0] = 32768;
|
tcp/dccp: try to not exhaust ip_local_port_range in connect()
A long standing problem on busy servers is the tiny available TCP port
range (/proc/sys/net/ipv4/ip_local_port_range) and the default
sequential allocation of source ports in connect() system call.
If a host is having a lot of active TCP sessions, chances are
very high that all ports are in use by at least one flow,
and subsequent bind(0) attempts fail, or have to scan a big portion of
space to find a slot.
In this patch, I changed the starting point in __inet_hash_connect()
so that we try to favor even [1] ports, leaving odd ports for bind()
users.
We still perform a sequential search, so there is no guarantee, but
if connect() targets are very different, end result is we leave
more ports available to bind(), and we spread them all over the range,
lowering time for both connect() and bind() to find a slot.
This strategy only works well if /proc/sys/net/ipv4/ip_local_port_range
is even, ie if start/end values have different parity.
Therefore, default /proc/sys/net/ipv4/ip_local_port_range was changed to
32768 - 60999 (instead of 32768 - 61000)
There is no change on security aspects here, only some poor hashing
schemes could be eventually impacted by this change.
[1] : The odd/even property depends on ip_local_port_range values parity
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-25 05:49:35 +08:00
|
|
|
net->ipv4.ip_local_ports.range[1] = 60999;
|
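/* Userspace sketch (not part of this file): read back the defaults set above
 * and check the parity property the changelog relies on.  The procfs path is
 * the standard knob; nothing below is kernel code.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
	int lo, hi;

	if (!f || fscanf(f, "%d %d", &lo, &hi) != 2) {
		perror("ip_local_port_range");
		return 1;
	}
	fclose(f);
	/* Default 32768 60999: start and end have different parity, so the
	 * range holds as many even ports (favored by connect()) as odd ones
	 * (left for bind()).
	 */
	printf("%d-%d (%s number of ports)\n", lo, hi,
	       ((hi - lo + 1) % 2 == 0) ? "even" : "odd");
	return 0;
}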
2014-05-07 02:02:50 +08:00
|
|
|
|
|
|
|
seqlock_init(&net->ipv4.ping_group_range.lock);
|
|
|
|
/*
|
|
|
|
* Sane defaults - nobody may create ping sockets.
|
|
|
|
* Boot scripts should set this to a distro-specific group.
|
|
|
|
*/
|
|
|
|
net->ipv4.ping_group_range.range[0] = make_kgid(&init_user_ns, 1);
|
|
|
|
net->ipv4.ping_group_range.range[1] = make_kgid(&init_user_ns, 0);
|
2016-05-21 00:21:10 +08:00
|
|
|
|
|
|
|
/* Default values for sysctl-controlled parameters.
|
|
|
|
* We set them here, in case sysctl support is not compiled in.
|
|
|
|
*/
|
|
|
|
net->ipv4.sysctl_ip_default_ttl = IPDEFTTL;
|
2018-08-01 06:36:03 +08:00
|
|
|
net->ipv4.sysctl_ip_fwd_update_priority = 1;
|
2016-05-21 00:21:10 +08:00
|
|
|
net->ipv4.sysctl_ip_dynaddr = 0;
|
|
|
|
net->ipv4.sysctl_ip_early_demux = 1;
|
2017-03-24 03:34:16 +08:00
|
|
|
net->ipv4.sysctl_udp_early_demux = 1;
|
|
|
|
net->ipv4.sysctl_tcp_early_demux = 1;
|
2017-01-21 09:49:11 +08:00
|
|
|
#ifdef CONFIG_SYSCTL
|
|
|
|
net->ipv4.sysctl_ip_prot_sock = PROT_SOCK;
|
|
|
|
#endif
|
2016-05-21 00:21:10 +08:00
|
|
|
|
2017-08-09 19:38:04 +08:00
|
|
|
/* Some IGMP sysctls whose values are always used */
|
|
|
|
net->ipv4.sysctl_igmp_max_memberships = 20;
|
|
|
|
net->ipv4.sysctl_igmp_max_msf = 10;
|
|
|
|
/* IGMP reports for link-local multicast groups are enabled by default */
|
|
|
|
net->ipv4.sysctl_igmp_llm_reports = 1;
|
|
|
|
net->ipv4.sysctl_igmp_qrv = 2;
|
|
|
|
|
2014-05-07 02:02:49 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static __net_exit void inet_exit_net(struct net *net)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static __net_initdata struct pernet_operations af_inet_ops = {
|
|
|
|
.init = inet_init_net,
|
|
|
|
.exit = inet_exit_net,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int __init init_inet_pernet_ops(void)
|
|
|
|
{
|
|
|
|
return register_pernet_subsys(&af_inet_ops);
|
|
|
|
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
static int ipv4_proc_init(void);
|
|
|
|
|
2005-07-06 05:40:10 +08:00
|
|
|
/*
|
|
|
|
* IP protocol layer initialiser
|
|
|
|
*/
|
|
|
|
|
2012-11-15 16:49:11 +08:00
|
|
|
static struct packet_offload ip_packet_offload __read_mostly = {
|
|
|
|
.type = cpu_to_be16(ETH_P_IP),
|
2012-11-15 16:49:23 +08:00
|
|
|
.callbacks = {
|
|
|
|
.gso_segment = inet_gso_segment,
|
|
|
|
.gro_receive = inet_gro_receive,
|
|
|
|
.gro_complete = inet_gro_complete,
|
|
|
|
},
|
2005-07-06 05:40:10 +08:00
|
|
|
};
|
|
|
|
|
2013-10-20 02:42:57 +08:00
|
|
|
static const struct net_offload ipip_offload = {
|
|
|
|
.callbacks = {
|
2019-02-20 23:52:12 +08:00
|
|
|
.gso_segment = ipip_gso_segment,
|
2016-03-20 00:32:01 +08:00
|
|
|
.gro_receive = ipip_gro_receive,
|
2016-03-20 00:32:00 +08:00
|
|
|
.gro_complete = ipip_gro_complete,
|
2013-10-20 02:42:57 +08:00
|
|
|
},
|
|
|
|
};
|
|
|
|
|
2017-08-03 00:34:15 +08:00
|
|
|
static int __init ipip_offload_init(void)
|
|
|
|
{
|
|
|
|
return inet_add_offload(&ipip_offload, IPPROTO_IPIP);
|
|
|
|
}
|
|
|
|
|
2012-11-15 16:49:21 +08:00
|
|
|
static int __init ipv4_offload_init(void)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Add offloads
|
|
|
|
*/
|
2013-06-08 18:56:03 +08:00
|
|
|
if (udpv4_offload_init() < 0)
|
2012-11-15 16:49:21 +08:00
|
|
|
pr_crit("%s: Cannot add UDP protocol offload\n", __func__);
|
2013-06-07 13:11:46 +08:00
|
|
|
if (tcpv4_offload_init() < 0)
|
|
|
|
pr_crit("%s: Cannot add TCP protocol offload\n", __func__);
|
2017-08-03 00:34:15 +08:00
|
|
|
if (ipip_offload_init() < 0)
|
|
|
|
pr_crit("%s: Cannot add IPIP protocol offload\n", __func__);
|
2012-11-15 16:49:21 +08:00
|
|
|
|
|
|
|
dev_add_offload(&ip_packet_offload);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
fs_initcall(ipv4_offload_init);
|
|
|
|
|
|
|
|
static struct packet_type ip_packet_type __read_mostly = {
|
|
|
|
.type = cpu_to_be16(ETH_P_IP),
|
|
|
|
.func = ip_rcv,
|
2018-07-02 23:14:12 +08:00
|
|
|
.list_func = ip_list_rcv,
|
2012-11-15 16:49:21 +08:00
|
|
|
};
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
static int __init inet_init(void)
|
|
|
|
{
|
|
|
|
struct inet_protosw *q;
|
|
|
|
struct list_head *r;
|
|
|
|
int rc = -EINVAL;
|
|
|
|
|
2015-03-01 20:58:29 +08:00
|
|
|
sock_skb_cb_check_size(sizeof(struct inet_skb_parm));
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
rc = proto_register(&tcp_prot, 1);
|
|
|
|
if (rc)
|
2014-05-13 07:04:53 +08:00
|
|
|
goto out;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
rc = proto_register(&udp_prot, 1);
|
|
|
|
if (rc)
|
|
|
|
goto out_unregister_tcp_proto;
|
|
|
|
|
|
|
|
rc = proto_register(&raw_prot, 1);
|
|
|
|
if (rc)
|
|
|
|
goto out_unregister_udp_proto;
|
|
|
|
|
net: ipv4: add IPPROTO_ICMP socket kind
This patch adds IPPROTO_ICMP socket kind. It makes it possible to send
ICMP_ECHO messages and receive the corresponding ICMP_ECHOREPLY messages
without any special privileges. In other words, the patch makes it
possible to implement setuid-less and CAP_NET_RAW-less /bin/ping. In
order not to increase the kernel's attack surface, the new functionality
is disabled by default, but is enabled at bootup by supporting Linux
distributions, optionally with restriction to a group or a group range
(see below).
Similar functionality is implemented in Mac OS X:
http://www.manpagez.com/man/4/icmp/
A new ping socket is created with
socket(PF_INET, SOCK_DGRAM, IPPROTO_ICMP)
Message identifiers (octets 4-5 of ICMP header) are interpreted as local
ports. Addresses are stored in struct sockaddr_in. No port numbers are
reserved for privileged processes, port 0 is reserved for API ("let the
kernel pick a free number"). There is no notion of remote ports, remote
port numbers provided by the user (e.g. in connect()) are ignored.
Data sent and received include ICMP headers. This is deliberate to:
1) Avoid the need to transport headers values like sequence numbers by
other means.
2) Make it easier to port existing programs using raw sockets.
ICMP headers given to send() are checked and sanitized. The type must be
ICMP_ECHO and the code must be zero (future extensions might relax this,
see below). The id is set to the number (local port) of the socket, the
checksum is always recomputed.
ICMP reply packets received from the network are demultiplexed according
to their id's, and are returned by recv() without any modifications.
IP header information and ICMP errors of those packets may be obtained
via ancillary data (IP_RECVTTL, IP_RETOPTS, and IP_RECVERR). ICMP source
quenches and redirects are reported as fake errors via the error queue
(IP_RECVERR); the next hop address for redirects is saved to ee_info (in
network order).
socket(2) is restricted to the group range specified in
"/proc/sys/net/ipv4/ping_group_range". It is "1 0" by default, meaning
that nobody (not even root) may create ping sockets. Setting it to "100
100" would grant permissions to the single group (to either make
/sbin/ping g+s and owned by this group or to grant permissions to the
"netadmins" group), "0 4294967295" would enable it for the world, "100
4294967295" would enable it for the users, but not daemons.
The existing code might be (in the unlikely case anyone needs it)
extended rather easily to handle other similar pairs of ICMP messages
(Timestamp/Reply, Information Request/Reply, Address Mask Request/Reply
etc.).
Userspace ping util & patch for it:
http://openwall.info/wiki/people/segoon/ping
For Openwall GNU/*/Linux it was the last step on the road to the
setuid-less distro. A revision of this patch (for RHEL5/OpenVZ kernels)
is in use in Owl-current, such as in the 2011/03/12 LiveCD ISOs:
http://mirrors.kernel.org/openwall/Owl/current/iso/
Initially this functionality was written by Pavel Kankovsky for
Linux 2.4.32, but unfortunately it was never made public.
All ping options (-b, -p, -Q, -R, -s, -t, -T, -M, -I), are tested with
the patch.
PATCH v3:
- switched to flowi4.
- minor changes to be consistent with raw sockets code.
PATCH v2:
- changed ping_debug() to pr_debug().
- removed CONFIG_IP_PING.
- removed ping_seq_fops.owner field (unused for procfs).
- switched to proc_net_fops_create().
- switched to %pK in seq_printf().
PATCH v1:
- fixed checksumming bug.
- CAP_NET_RAW may not create icmp sockets anymore.
RFC v2:
- minor cleanups.
- introduced sysctl'able group range to restrict socket(2).
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-13 18:01:00 +08:00
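/* Userspace sketch of the unprivileged ping socket described above; it
 * assumes /proc/sys/net/ipv4/ping_group_range covers the caller's group and
 * is not part of this file.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/ip_icmp.h>

int main(void)
{
	struct icmphdr icmp = {
		.type = ICMP_ECHO,		/* only ECHO requests are accepted */
		.un.echo.sequence = htons(1),	/* id and checksum are filled in by the kernel */
	};
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	char reply[192];
	int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);

	if (fd < 0) {
		perror("socket");	/* ping_group_range probably excludes us */
		return 1;
	}
	if (sendto(fd, &icmp, sizeof(icmp), 0,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0)
		perror("sendto");
	else if (recv(fd, reply, sizeof(reply), 0) < 0)
		perror("recv");
	else
		puts("got ICMP_ECHOREPLY");
	close(fd);
	return 0;
}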
|
|
|
rc = proto_register(&ping_prot, 1);
|
|
|
|
if (rc)
|
|
|
|
goto out_unregister_raw_proto;
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2007-02-09 22:24:47 +08:00
|
|
|
* Tell SOCKET that we are alive...
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
|
2007-02-09 22:24:47 +08:00
|
|
|
(void)sock_register(&inet_family_ops);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-16 04:00:59 +08:00
|
|
|
#ifdef CONFIG_SYSCTL
|
|
|
|
ip_static_sysctl_init();
|
|
|
|
#endif
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Add all the base protocols.
|
|
|
|
*/
|
|
|
|
|
|
|
|
if (inet_add_protocol(&icmp_protocol, IPPROTO_ICMP) < 0)
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_crit("%s: Cannot add ICMP protocol\n", __func__);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (inet_add_protocol(&udp_protocol, IPPROTO_UDP) < 0)
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_crit("%s: Cannot add UDP protocol\n", __func__);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (inet_add_protocol(&tcp_protocol, IPPROTO_TCP) < 0)
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_crit("%s: Cannot add TCP protocol\n", __func__);
|
2005-04-17 06:20:36 +08:00
|
|
|
#ifdef CONFIG_IP_MULTICAST
|
|
|
|
if (inet_add_protocol(&igmp_protocol, IPPROTO_IGMP) < 0)
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_crit("%s: Cannot add IGMP protocol\n", __func__);
|
2005-04-17 06:20:36 +08:00
|
|
|
#endif
|
|
|
|
|
|
|
|
/* Register the socket-side information for inet_create. */
|
|
|
|
for (r = &inetsw[0]; r < &inetsw[SOCK_MAX]; ++r)
|
|
|
|
INIT_LIST_HEAD(r);
|
|
|
|
|
|
|
|
for (q = inetsw_array; q < &inetsw_array[INETSW_ARRAY_LEN]; ++q)
|
|
|
|
inet_register_protosw(q);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Set the ARP module up
|
|
|
|
*/
|
|
|
|
|
|
|
|
arp_init();
|
|
|
|
|
2007-02-09 22:24:47 +08:00
|
|
|
/*
|
|
|
|
* Set the IP module up
|
|
|
|
*/
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
ip_init();
|
|
|
|
|
|
|
|
/* Setup TCP slab cache for open requests. */
|
|
|
|
tcp_init();
|
|
|
|
|
2007-12-31 16:29:24 +08:00
|
|
|
/* Setup UDP memory threshold */
|
|
|
|
udp_init();
|
|
|
|
|
2006-11-28 03:10:57 +08:00
|
|
|
/* Add UDP-Lite (RFC 3828) */
|
|
|
|
udplite4_register();
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2018-11-07 23:36:05 +08:00
|
|
|
raw_init();
|
|
|
|
|
2011-05-13 18:01:00 +08:00
|
|
|
ping_init();
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Set the ICMP layer up
|
|
|
|
*/
|
|
|
|
|
2008-03-01 03:14:50 +08:00
|
|
|
if (icmp_init() < 0)
|
|
|
|
panic("Failed to create the ICMP control socket.\n");
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialise the multicast router
|
|
|
|
*/
|
|
|
|
#if defined(CONFIG_IP_MROUTE)
|
2008-07-03 12:13:36 +08:00
|
|
|
if (ip_mr_init())
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_crit("%s: Cannot init ipv4 mroute\n", __func__);
|
2005-04-17 06:20:36 +08:00
|
|
|
#endif
|
2014-05-07 02:02:49 +08:00
|
|
|
|
|
|
|
if (init_inet_pernet_ops())
|
|
|
|
pr_crit("%s: Cannot init ipv4 inet pernet ops\n", __func__);
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Initialise per-cpu ipv4 mibs
|
2007-02-09 22:24:47 +08:00
|
|
|
*/
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2007-03-09 12:44:43 +08:00
|
|
|
if (init_ipv4_mibs())
|
2012-03-12 02:36:11 +08:00
|
|
|
pr_crit("%s: Cannot init ipv4 mibs\n", __func__);
|
2007-02-09 22:24:47 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
ipv4_proc_init();
|
|
|
|
|
|
|
|
ipfrag_init();
|
|
|
|
|
2005-07-06 05:40:10 +08:00
|
|
|
dev_add_pack(&ip_packet_type);
|
|
|
|
|
2015-07-23 16:08:44 +08:00
|
|
|
ip_tunnel_core_init();
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
rc = 0;
|
|
|
|
out:
|
|
|
|
return rc;
|
2011-05-13 18:01:00 +08:00
|
|
|
out_unregister_raw_proto:
|
|
|
|
proto_unregister(&raw_prot);
|
2005-04-17 06:20:36 +08:00
|
|
|
out_unregister_udp_proto:
|
|
|
|
proto_unregister(&udp_prot);
|
2006-09-28 07:33:45 +08:00
|
|
|
out_unregister_tcp_proto:
|
|
|
|
proto_unregister(&tcp_prot);
|
2005-04-17 06:20:36 +08:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2006-04-29 06:19:17 +08:00
|
|
|
fs_initcall(inet_init);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/* ------------------------------------------------------------------------ */
|
|
|
|
|
|
|
|
#ifdef CONFIG_PROC_FS
|
|
|
|
static int __init ipv4_proc_init(void)
|
|
|
|
{
|
|
|
|
int rc = 0;
|
|
|
|
|
|
|
|
if (raw_proc_init())
|
|
|
|
goto out_raw;
|
|
|
|
if (tcp4_proc_init())
|
|
|
|
goto out_tcp;
|
|
|
|
if (udp4_proc_init())
|
|
|
|
goto out_udp;
|
2011-05-13 18:01:00 +08:00
|
|
|
if (ping_proc_init())
|
|
|
|
goto out_ping;
|
2005-04-17 06:20:36 +08:00
|
|
|
if (ip_misc_proc_init())
|
|
|
|
goto out_misc;
|
|
|
|
out:
|
|
|
|
return rc;
|
|
|
|
out_misc:
|
2011-05-13 18:01:00 +08:00
|
|
|
ping_proc_exit();
|
|
|
|
out_ping:
|
2005-04-17 06:20:36 +08:00
|
|
|
udp4_proc_exit();
|
|
|
|
out_udp:
|
|
|
|
tcp4_proc_exit();
|
|
|
|
out_tcp:
|
|
|
|
raw_proc_exit();
|
|
|
|
out_raw:
|
|
|
|
rc = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
#else /* CONFIG_PROC_FS */
|
|
|
|
static int __init ipv4_proc_init(void)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif /* CONFIG_PROC_FS */
|