// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system.  INET is implemented using the BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		IPv4 Forwarding Information Base: semantics.
 *
 * Authors:	Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
 */

#include <linux/uaccess.h>
#include <linux/bitops.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/socket.h>
#include <linux/sockios.h>
#include <linux/errno.h>
#include <linux/in.h>
#include <linux/inet.h>
#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <linux/if_arp.h>
#include <linux/proc_fs.h>
#include <linux/skbuff.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/netlink.h>

#include <net/arp.h>
#include <net/ip.h>
#include <net/protocol.h>
#include <net/route.h>
#include <net/tcp.h>
#include <net/sock.h>
#include <net/ip_fib.h>
#include <net/ip6_fib.h>
#include <net/nexthop.h>
#include <net/netlink.h>
#include <net/rtnh.h>
#include <net/lwtunnel.h>
#include <net/fib_notifier.h>
#include <net/addrconf.h>

#include "fib_lookup.h"

static DEFINE_SPINLOCK(fib_info_lock);
static struct hlist_head *fib_info_hash;
static struct hlist_head *fib_info_laddrhash;
static unsigned int fib_info_hash_size;
static unsigned int fib_info_cnt;

#define DEVINDEX_HASHBITS 8
#define DEVINDEX_HASHSIZE (1U << DEVINDEX_HASHBITS)
static struct hlist_head fib_info_devhash[DEVINDEX_HASHSIZE];

/* for_nexthops and change_nexthops only used when nexthop object
 * is not set in a fib_info. The logic within can reference fib_nh.
 */
#ifdef CONFIG_IP_ROUTE_MULTIPATH

#define for_nexthops(fi) {						\
	int nhsel; const struct fib_nh *nh;				\
	for (nhsel = 0, nh = (fi)->fib_nh;				\
	     nhsel < fib_info_num_path((fi));				\
	     nh++, nhsel++)

#define change_nexthops(fi) {						\
	int nhsel; struct fib_nh *nexthop_nh;				\
	for (nhsel = 0, nexthop_nh = (struct fib_nh *)((fi)->fib_nh);	\
	     nhsel < fib_info_num_path((fi));				\
	     nexthop_nh++, nhsel++)

#else /* CONFIG_IP_ROUTE_MULTIPATH */

/* Hope, that gcc will optimize it to get rid of dummy loop */

#define for_nexthops(fi) {						\
	int nhsel; const struct fib_nh *nh = (fi)->fib_nh;		\
	for (nhsel = 0; nhsel < 1; nhsel++)

#define change_nexthops(fi) {						\
	int nhsel;							\
	struct fib_nh *nexthop_nh = (struct fib_nh *)((fi)->fib_nh);	\
	for (nhsel = 0; nhsel < 1; nhsel++)

#endif /* CONFIG_IP_ROUTE_MULTIPATH */

#define endfor_nexthops(fi) }

const struct fib_prop fib_props[RTN_MAX + 1] = {
	[RTN_UNSPEC] = {
		.error	= 0,
		.scope	= RT_SCOPE_NOWHERE,
	},
	[RTN_UNICAST] = {
		.error	= 0,
		.scope	= RT_SCOPE_UNIVERSE,
	},
	[RTN_LOCAL] = {
		.error	= 0,
		.scope	= RT_SCOPE_HOST,
	},
	[RTN_BROADCAST] = {
		.error	= 0,
		.scope	= RT_SCOPE_LINK,
	},
	[RTN_ANYCAST] = {
		.error	= 0,
		.scope	= RT_SCOPE_LINK,
	},
	[RTN_MULTICAST] = {
		.error	= 0,
		.scope	= RT_SCOPE_UNIVERSE,
	},
	[RTN_BLACKHOLE] = {
		.error	= -EINVAL,
		.scope	= RT_SCOPE_UNIVERSE,
	},
	[RTN_UNREACHABLE] = {
		.error	= -EHOSTUNREACH,
		.scope	= RT_SCOPE_UNIVERSE,
	},
	[RTN_PROHIBIT] = {
		.error	= -EACCES,
		.scope	= RT_SCOPE_UNIVERSE,
	},
	[RTN_THROW] = {
		.error	= -EAGAIN,
		.scope	= RT_SCOPE_UNIVERSE,
	},
	[RTN_NAT] = {
		.error	= -EINVAL,
		.scope	= RT_SCOPE_NOWHERE,
	},
	[RTN_XRESOLVE] = {
		.error	= -EINVAL,
		.scope	= RT_SCOPE_NOWHERE,
	},
};

static void rt_fibinfo_free(struct rtable __rcu **rtp)
{
	struct rtable *rt = rcu_dereference_protected(*rtp, 1);

	if (!rt)
		return;

	/* Not even needed : RCU_INIT_POINTER(*rtp, NULL);
	 * because we waited an RCU grace period before calling
	 * free_fib_info_rcu()
	 */

	dst_dev_put(&rt->dst);
	dst_release_immediate(&rt->dst);
}

static void free_nh_exceptions(struct fib_nh_common *nhc)
{
	struct fnhe_hash_bucket *hash;
	int i;

	hash = rcu_dereference_protected(nhc->nhc_exceptions, 1);
	if (!hash)
		return;
	for (i = 0; i < FNHE_HASH_SIZE; i++) {
		struct fib_nh_exception *fnhe;

		fnhe = rcu_dereference_protected(hash[i].chain, 1);
		while (fnhe) {
			struct fib_nh_exception *next;

			next = rcu_dereference_protected(fnhe->fnhe_next, 1);

			rt_fibinfo_free(&fnhe->fnhe_rth_input);
			rt_fibinfo_free(&fnhe->fnhe_rth_output);

			kfree(fnhe);

			fnhe = next;
		}
	}
	kfree(hash);
}

static void rt_fibinfo_free_cpus(struct rtable __rcu * __percpu *rtp)
{
	int cpu;

	if (!rtp)
		return;

	for_each_possible_cpu(cpu) {
		struct rtable *rt;

		rt = rcu_dereference_protected(*per_cpu_ptr(rtp, cpu), 1);
		if (rt) {
			dst_dev_put(&rt->dst);
			dst_release_immediate(&rt->dst);
		}
	}
	free_percpu(rtp);
}

void fib_nh_common_release(struct fib_nh_common *nhc)
{
	dev_put(nhc->nhc_dev);
	lwtstate_put(nhc->nhc_lwtstate);
	rt_fibinfo_free_cpus(nhc->nhc_pcpu_rth_output);
	rt_fibinfo_free(&nhc->nhc_rth_input);
	free_nh_exceptions(nhc);
}
EXPORT_SYMBOL_GPL(fib_nh_common_release);

void fib_nh_release(struct net *net, struct fib_nh *fib_nh)
{
#ifdef CONFIG_IP_ROUTE_CLASSID
	if (fib_nh->nh_tclassid)
		atomic_dec(&net->ipv4.fib_num_tclassid_users);
#endif
	fib_nh_common_release(&fib_nh->nh_common);
}

/* Release a nexthop info record */
static void free_fib_info_rcu(struct rcu_head *head)
{
	struct fib_info *fi = container_of(head, struct fib_info, rcu);

	if (fi->nh) {
		nexthop_put(fi->nh);
	} else {
		change_nexthops(fi) {
			fib_nh_release(fi->fib_net, nexthop_nh);
		} endfor_nexthops(fi);
	}

	ip_fib_metrics_put(fi->fib_metrics);

	kfree(fi);
}

void free_fib_info(struct fib_info *fi)
{
	if (fi->fib_dead == 0) {
		pr_warn("Freeing alive fib_info %p\n", fi);
		return;
	}
	fib_info_cnt--;

	call_rcu(&fi->rcu, free_fib_info_rcu);
}
EXPORT_SYMBOL_GPL(free_fib_info);

void fib_release_info(struct fib_info *fi)
{
	spin_lock_bh(&fib_info_lock);
	if (fi && refcount_dec_and_test(&fi->fib_treeref)) {
		hlist_del(&fi->fib_hash);
		if (fi->fib_prefsrc)
			hlist_del(&fi->fib_lhash);
		if (fi->nh) {
			list_del(&fi->nh_list);
		} else {
			change_nexthops(fi) {
				if (!nexthop_nh->fib_nh_dev)
					continue;
				hlist_del(&nexthop_nh->nh_hash);
			} endfor_nexthops(fi)
		}
		fi->fib_dead = 1;
		fib_info_put(fi);
	}
	spin_unlock_bh(&fib_info_lock);
}

static inline int nh_comp(struct fib_info *fi, struct fib_info *ofi)
{
	const struct fib_nh *onh;

	if (fi->nh || ofi->nh)
		return nexthop_cmp(fi->nh, ofi->nh) ? 0 : -1;

	if (ofi->fib_nhs == 0)
		return 0;

	for_nexthops(fi) {
		onh = fib_info_nh(ofi, nhsel);

		if (nh->fib_nh_oif != onh->fib_nh_oif ||
		    nh->fib_nh_gw_family != onh->fib_nh_gw_family ||
		    nh->fib_nh_scope != onh->fib_nh_scope ||
#ifdef CONFIG_IP_ROUTE_MULTIPATH
		    nh->fib_nh_weight != onh->fib_nh_weight ||
#endif
#ifdef CONFIG_IP_ROUTE_CLASSID
		    nh->nh_tclassid != onh->nh_tclassid ||
#endif
		    lwtunnel_cmp_encap(nh->fib_nh_lws, onh->fib_nh_lws) ||
		    ((nh->fib_nh_flags ^ onh->fib_nh_flags) & ~RTNH_COMPARE_MASK))
			return -1;

		if (nh->fib_nh_gw_family == AF_INET &&
		    nh->fib_nh_gw4 != onh->fib_nh_gw4)
			return -1;

		if (nh->fib_nh_gw_family == AF_INET6 &&
		    ipv6_addr_cmp(&nh->fib_nh_gw6, &onh->fib_nh_gw6))
			return -1;
	} endfor_nexthops(fi);
	return 0;
}

static inline unsigned int fib_devindex_hashfn(unsigned int val)
{
	unsigned int mask = DEVINDEX_HASHSIZE - 1;

	return (val ^
		(val >> DEVINDEX_HASHBITS) ^
		(val >> (DEVINDEX_HASHBITS * 2))) & mask;
}

static unsigned int fib_info_hashfn_1(int init_val, u8 protocol, u8 scope,
				      u32 prefsrc, u32 priority)
{
	unsigned int val = init_val;

	val ^= (protocol << 8) | scope;
	val ^= prefsrc;
	val ^= priority;

	return val;
}

static unsigned int fib_info_hashfn_result(unsigned int val)
{
	unsigned int mask = (fib_info_hash_size - 1);

	return (val ^ (val >> 7) ^ (val >> 12)) & mask;
}

static inline unsigned int fib_info_hashfn(struct fib_info *fi)
{
	unsigned int val;

	val = fib_info_hashfn_1(fi->fib_nhs, fi->fib_protocol,
				fi->fib_scope, (__force u32)fi->fib_prefsrc,
				fi->fib_priority);

	if (fi->nh) {
		val ^= fib_devindex_hashfn(fi->nh->id);
	} else {
		for_nexthops(fi) {
			val ^= fib_devindex_hashfn(nh->fib_nh_oif);
		} endfor_nexthops(fi)
	}

	return fib_info_hashfn_result(val);
}

/* no metrics, only nexthop id */
static struct fib_info *fib_find_info_nh(struct net *net,
					 const struct fib_config *cfg)
{
	struct hlist_head *head;
	struct fib_info *fi;
	unsigned int hash;

	hash = fib_info_hashfn_1(fib_devindex_hashfn(cfg->fc_nh_id),
				 cfg->fc_protocol, cfg->fc_scope,
				 (__force u32)cfg->fc_prefsrc,
				 cfg->fc_priority);
	hash = fib_info_hashfn_result(hash);
	head = &fib_info_hash[hash];

	hlist_for_each_entry(fi, head, fib_hash) {
		if (!net_eq(fi->fib_net, net))
			continue;
		if (!fi->nh || fi->nh->id != cfg->fc_nh_id)
			continue;
		if (cfg->fc_protocol == fi->fib_protocol &&
		    cfg->fc_scope == fi->fib_scope &&
		    cfg->fc_prefsrc == fi->fib_prefsrc &&
		    cfg->fc_priority == fi->fib_priority &&
		    cfg->fc_type == fi->fib_type &&
		    cfg->fc_table == fi->fib_tb_id &&
		    !((cfg->fc_flags ^ fi->fib_flags) & ~RTNH_COMPARE_MASK))
			return fi;
	}

	return NULL;
}

static struct fib_info *fib_find_info(struct fib_info *nfi)
{
	struct hlist_head *head;
	struct fib_info *fi;
	unsigned int hash;

	hash = fib_info_hashfn(nfi);
	head = &fib_info_hash[hash];

	hlist_for_each_entry(fi, head, fib_hash) {
		if (!net_eq(fi->fib_net, nfi->fib_net))
			continue;
		if (fi->fib_nhs != nfi->fib_nhs)
			continue;
		if (nfi->fib_protocol == fi->fib_protocol &&
		    nfi->fib_scope == fi->fib_scope &&
		    nfi->fib_prefsrc == fi->fib_prefsrc &&
		    nfi->fib_priority == fi->fib_priority &&
		    nfi->fib_type == fi->fib_type &&
		    memcmp(nfi->fib_metrics, fi->fib_metrics,
			   sizeof(u32) * RTAX_MAX) == 0 &&
		    !((nfi->fib_flags ^ fi->fib_flags) & ~RTNH_COMPARE_MASK) &&
		    nh_comp(fi, nfi) == 0)
			return fi;
	}

	return NULL;
}

/* Check, that the gateway is already configured.
 * Used only by redirect accept routine.
 */
int ip_fib_check_default(__be32 gw, struct net_device *dev)
{
	struct hlist_head *head;
	struct fib_nh *nh;
	unsigned int hash;

	spin_lock(&fib_info_lock);

	hash = fib_devindex_hashfn(dev->ifindex);
	head = &fib_info_devhash[hash];
	hlist_for_each_entry(nh, head, nh_hash) {
		if (nh->fib_nh_dev == dev &&
		    nh->fib_nh_gw4 == gw &&
		    !(nh->fib_nh_flags & RTNH_F_DEAD)) {
			spin_unlock(&fib_info_lock);
			return 0;
		}
	}

	spin_unlock(&fib_info_lock);

	return -1;
}

size_t fib_nlmsg_size(struct fib_info *fi)
{
	size_t payload = NLMSG_ALIGN(sizeof(struct rtmsg))
			 + nla_total_size(4) /* RTA_TABLE */
			 + nla_total_size(4) /* RTA_DST */
			 + nla_total_size(4) /* RTA_PRIORITY */
			 + nla_total_size(4) /* RTA_PREFSRC */
			 + nla_total_size(TCP_CA_NAME_MAX); /* RTAX_CC_ALGO */
	unsigned int nhs = fib_info_num_path(fi);

	/* space for nested metrics */
	payload += nla_total_size((RTAX_MAX * nla_total_size(4)));

	if (fi->nh)
		payload += nla_total_size(4); /* RTA_NH_ID */

	if (nhs) {
		size_t nh_encapsize = 0;
		/* Also handles the special case nhs == 1 */

		/* each nexthop is packed in an attribute */
		size_t nhsize = nla_total_size(sizeof(struct rtnexthop));
		unsigned int i;

		/* may contain flow and gateway attribute */
		nhsize += 2 * nla_total_size(4);

		/* grab encap info */
		for (i = 0; i < fib_info_num_path(fi); i++) {
			struct fib_nh_common *nhc = fib_info_nhc(fi, i);

			if (nhc->nhc_lwtstate) {
				/* RTA_ENCAP_TYPE */
				nh_encapsize += lwtunnel_get_encap_size(
						nhc->nhc_lwtstate);
				/* RTA_ENCAP */
				nh_encapsize += nla_total_size(2);
			}
		}

		/* all nexthops are packed in a nested attribute */
		payload += nla_total_size((nhs * nhsize) + nh_encapsize);
	}

	return payload;
}

void rtmsg_fib(int event, __be32 key, struct fib_alias *fa,
	       int dst_len, u32 tb_id, const struct nl_info *info,
	       unsigned int nlm_flags)
{
	struct fib_rt_info fri;
	struct sk_buff *skb;
	u32 seq = info->nlh ? info->nlh->nlmsg_seq : 0;
	int err = -ENOBUFS;

	skb = nlmsg_new(fib_nlmsg_size(fa->fa_info), GFP_KERNEL);
	if (!skb)
		goto errout;

	fri.fi = fa->fa_info;
	fri.tb_id = tb_id;
	fri.dst = key;
	fri.dst_len = dst_len;
	fri.tos = fa->fa_tos;
	fri.type = fa->fa_type;
	fri.offload = fa->offload;
	fri.trap = fa->trap;
	fri.offload_failed = fa->offload_failed;
	err = fib_dump_info(skb, info->portid, seq, event, &fri, nlm_flags);
	if (err < 0) {
		/* -EMSGSIZE implies BUG in fib_nlmsg_size() */
		WARN_ON(err == -EMSGSIZE);
		kfree_skb(skb);
		goto errout;
	}

	rtnl_notify(skb, info->nl_net, info->portid, RTNLGRP_IPV4_ROUTE,
		    info->nlh, GFP_KERNEL);
	return;
errout:
	if (err < 0)
		rtnl_set_sk_err(info->nl_net, RTNLGRP_IPV4_ROUTE, err);
}

static int fib_detect_death(struct fib_info *fi, int order,
			    struct fib_info **last_resort, int *last_idx,
			    int dflt)
{
	const struct fib_nh_common *nhc = fib_info_nhc(fi, 0);
	struct neighbour *n;
	int state = NUD_NONE;

	if (likely(nhc->nhc_gw_family == AF_INET))
		n = neigh_lookup(&arp_tbl, &nhc->nhc_gw.ipv4, nhc->nhc_dev);
	else if (nhc->nhc_gw_family == AF_INET6)
		n = neigh_lookup(ipv6_stub->nd_tbl, &nhc->nhc_gw.ipv6,
				 nhc->nhc_dev);
	else
		n = NULL;

	if (n) {
		state = n->nud_state;
		neigh_release(n);
	} else {
		return 0;
	}

	if (state == NUD_REACHABLE)
		return 0;
	if ((state & NUD_VALID) && order != dflt)
		return 0;
	if ((state & NUD_VALID) ||
	    (*last_idx < 0 && order > dflt && state != NUD_INCOMPLETE)) {
		*last_resort = fi;
		*last_idx = order;
	}
	return 1;
}

int fib_nh_common_init(struct net *net, struct fib_nh_common *nhc,
		       struct nlattr *encap, u16 encap_type,
		       void *cfg, gfp_t gfp_flags,
		       struct netlink_ext_ack *extack)
{
	int err;

	nhc->nhc_pcpu_rth_output = alloc_percpu_gfp(struct rtable __rcu *,
						    gfp_flags);
	if (!nhc->nhc_pcpu_rth_output)
		return -ENOMEM;

	if (encap) {
		struct lwtunnel_state *lwtstate;

		if (encap_type == LWTUNNEL_ENCAP_NONE) {
			NL_SET_ERR_MSG(extack, "LWT encap type not specified");
			err = -EINVAL;
			goto lwt_failure;
		}
		err = lwtunnel_build_state(net, encap_type, encap,
					   nhc->nhc_family, cfg, &lwtstate,
					   extack);
		if (err)
			goto lwt_failure;

		nhc->nhc_lwtstate = lwtstate_get(lwtstate);
	}

	return 0;

lwt_failure:
	rt_fibinfo_free_cpus(nhc->nhc_pcpu_rth_output);
	nhc->nhc_pcpu_rth_output = NULL;
	return err;
}
EXPORT_SYMBOL_GPL(fib_nh_common_init);

int fib_nh_init(struct net *net, struct fib_nh *nh,
		struct fib_config *cfg, int nh_weight,
		struct netlink_ext_ack *extack)
{
	int err;

	nh->fib_nh_family = AF_INET;

	err = fib_nh_common_init(net, &nh->nh_common, cfg->fc_encap,
				 cfg->fc_encap_type, cfg, GFP_KERNEL, extack);
	if (err)
		return err;

	nh->fib_nh_oif = cfg->fc_oif;
	nh->fib_nh_gw_family = cfg->fc_gw_family;
	if (cfg->fc_gw_family == AF_INET)
		nh->fib_nh_gw4 = cfg->fc_gw4;
	else if (cfg->fc_gw_family == AF_INET6)
		nh->fib_nh_gw6 = cfg->fc_gw6;

	nh->fib_nh_flags = cfg->fc_flags;

#ifdef CONFIG_IP_ROUTE_CLASSID
	nh->nh_tclassid = cfg->fc_flow;
	if (nh->nh_tclassid)
		atomic_inc(&net->ipv4.fib_num_tclassid_users);
#endif
#ifdef CONFIG_IP_ROUTE_MULTIPATH
	nh->fib_nh_weight = nh_weight;
#endif
	return 0;
}

#ifdef CONFIG_IP_ROUTE_MULTIPATH

static int fib_count_nexthops(struct rtnexthop *rtnh, int remaining,
			      struct netlink_ext_ack *extack)
{
	int nhs = 0;

	while (rtnh_ok(rtnh, remaining)) {
		nhs++;
		rtnh = rtnh_next(rtnh, &remaining);
	}

	/* leftover implies invalid nexthop configuration, discard it */
	if (remaining > 0) {
		NL_SET_ERR_MSG(extack,
			       "Invalid nexthop configuration - extra data after nexthops");
		nhs = 0;
	}

	return nhs;
}

static int fib_gw_from_attr(__be32 *gw, struct nlattr *nla,
			    struct netlink_ext_ack *extack)
{
	if (nla_len(nla) < sizeof(*gw)) {
		NL_SET_ERR_MSG(extack, "Invalid IPv4 address in RTA_GATEWAY");
		return -EINVAL;
	}

	*gw = nla_get_in_addr(nla);

	return 0;
}

/* only called when fib_nh is integrated into fib_info */
static int fib_get_nhs(struct fib_info *fi, struct rtnexthop *rtnh,
		       int remaining, struct fib_config *cfg,
		       struct netlink_ext_ack *extack)
{
	struct net *net = fi->fib_net;
	struct fib_config fib_cfg;
	struct fib_nh *nh;
	int ret;

	change_nexthops(fi) {
		int attrlen;

		memset(&fib_cfg, 0, sizeof(fib_cfg));

		if (!rtnh_ok(rtnh, remaining)) {
			NL_SET_ERR_MSG(extack,
				       "Invalid nexthop configuration - extra data after nexthop");
			return -EINVAL;
		}

		if (rtnh->rtnh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN)) {
			NL_SET_ERR_MSG(extack,
				       "Invalid flags for nexthop - can not contain DEAD or LINKDOWN");
			return -EINVAL;
		}

		fib_cfg.fc_flags = (cfg->fc_flags & ~0xFF) | rtnh->rtnh_flags;
		fib_cfg.fc_oif = rtnh->rtnh_ifindex;

		attrlen = rtnh_attrlen(rtnh);
		if (attrlen > 0) {
			struct nlattr *nla, *nlav, *attrs = rtnh_attrs(rtnh);

			nla = nla_find(attrs, attrlen, RTA_GATEWAY);
			nlav = nla_find(attrs, attrlen, RTA_VIA);
			if (nla && nlav) {
				NL_SET_ERR_MSG(extack,
					       "Nexthop configuration can not contain both GATEWAY and VIA");
				return -EINVAL;
			}
			if (nla) {
				ret = fib_gw_from_attr(&fib_cfg.fc_gw4, nla,
						       extack);
				if (ret)
					goto errout;

				if (fib_cfg.fc_gw4)
					fib_cfg.fc_gw_family = AF_INET;
			} else if (nlav) {
				ret = fib_gw_from_via(&fib_cfg, nlav, extack);
				if (ret)
					goto errout;
			}

			nla = nla_find(attrs, attrlen, RTA_FLOW);
			if (nla) {
				if (nla_len(nla) < sizeof(u32)) {
					NL_SET_ERR_MSG(extack, "Invalid RTA_FLOW");
					return -EINVAL;
				}
				fib_cfg.fc_flow = nla_get_u32(nla);
			}

			fib_cfg.fc_encap = nla_find(attrs, attrlen, RTA_ENCAP);
			nla = nla_find(attrs, attrlen, RTA_ENCAP_TYPE);
			if (nla)
				fib_cfg.fc_encap_type = nla_get_u16(nla);
		}

		ret = fib_nh_init(net, nexthop_nh, &fib_cfg,
				  rtnh->rtnh_hops + 1, extack);
		if (ret)
			goto errout;

		rtnh = rtnh_next(rtnh, &remaining);
	} endfor_nexthops(fi);

	ret = -EINVAL;
	nh = fib_info_nh(fi, 0);
	if (cfg->fc_oif && nh->fib_nh_oif != cfg->fc_oif) {
		NL_SET_ERR_MSG(extack,
			       "Nexthop device index does not match RTA_OIF");
		goto errout;
	}
	if (cfg->fc_gw_family) {
		if (cfg->fc_gw_family != nh->fib_nh_gw_family ||
		    (cfg->fc_gw_family == AF_INET &&
		     nh->fib_nh_gw4 != cfg->fc_gw4) ||
		    (cfg->fc_gw_family == AF_INET6 &&
		     ipv6_addr_cmp(&nh->fib_nh_gw6, &cfg->fc_gw6))) {
			NL_SET_ERR_MSG(extack,
				       "Nexthop gateway does not match RTA_GATEWAY or RTA_VIA");
			goto errout;
		}
	}
#ifdef CONFIG_IP_ROUTE_CLASSID
	if (cfg->fc_flow && nh->nh_tclassid != cfg->fc_flow) {
		NL_SET_ERR_MSG(extack,
			       "Nexthop class id does not match RTA_FLOW");
		goto errout;
	}
#endif
	ret = 0;
errout:
	return ret;
}

/* only called when fib_nh is integrated into fib_info */
static void fib_rebalance(struct fib_info *fi)
{
	int total;
	int w;

	if (fib_info_num_path(fi) < 2)
		return;

	total = 0;
	for_nexthops(fi) {
		if (nh->fib_nh_flags & RTNH_F_DEAD)
			continue;

		if (ip_ignore_linkdown(nh->fib_nh_dev) &&
		    nh->fib_nh_flags & RTNH_F_LINKDOWN)
			continue;

		total += nh->fib_nh_weight;
	} endfor_nexthops(fi);

	w = 0;
	change_nexthops(fi) {
		int upper_bound;

		if (nexthop_nh->fib_nh_flags & RTNH_F_DEAD) {
			upper_bound = -1;
		} else if (ip_ignore_linkdown(nexthop_nh->fib_nh_dev) &&
			   nexthop_nh->fib_nh_flags & RTNH_F_LINKDOWN) {
			upper_bound = -1;
		} else {
			w += nexthop_nh->fib_nh_weight;
			upper_bound = DIV_ROUND_CLOSEST_ULL((u64)w << 31,
							    total) - 1;
		}

		atomic_set(&nexthop_nh->fib_nh_upper_bound, upper_bound);
	} endfor_nexthops(fi);
}
#else /* CONFIG_IP_ROUTE_MULTIPATH */

static int fib_get_nhs(struct fib_info *fi, struct rtnexthop *rtnh,
		       int remaining, struct fib_config *cfg,
		       struct netlink_ext_ack *extack)
{
	NL_SET_ERR_MSG(extack, "Multipath support not enabled in kernel");

	return -EINVAL;
}

#define fib_rebalance(fi) do { } while (0)

#endif /* CONFIG_IP_ROUTE_MULTIPATH */

static int fib_encap_match(struct net *net, u16 encap_type,
			   struct nlattr *encap,
			   const struct fib_nh *nh,
			   const struct fib_config *cfg,
			   struct netlink_ext_ack *extack)
{
	struct lwtunnel_state *lwtstate;
	int ret, result = 0;

	if (encap_type == LWTUNNEL_ENCAP_NONE)
		return 0;

	ret = lwtunnel_build_state(net, encap_type, encap, AF_INET,
				   cfg, &lwtstate, extack);
	if (!ret) {
		result = lwtunnel_cmp_encap(lwtstate, nh->fib_nh_lws);
		lwtstate_free(lwtstate);
	}

	return result;
}

int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
		 struct netlink_ext_ack *extack)
{
#ifdef CONFIG_IP_ROUTE_MULTIPATH
	struct rtnexthop *rtnh;
	int remaining;
#endif

	if (cfg->fc_priority && cfg->fc_priority != fi->fib_priority)
		return 1;

	if (cfg->fc_nh_id) {
		if (fi->nh && cfg->fc_nh_id == fi->nh->id)
			return 0;
		return 1;
	}

	if (cfg->fc_oif || cfg->fc_gw_family) {
		struct fib_nh *nh = fib_info_nh(fi, 0);

		if (cfg->fc_encap) {
			if (fib_encap_match(net, cfg->fc_encap_type,
					    cfg->fc_encap, nh, cfg, extack))
				return 1;
		}
#ifdef CONFIG_IP_ROUTE_CLASSID
		if (cfg->fc_flow &&
		    cfg->fc_flow != nh->nh_tclassid)
			return 1;
#endif
		if ((cfg->fc_oif && cfg->fc_oif != nh->fib_nh_oif) ||
		    (cfg->fc_gw_family &&
		     cfg->fc_gw_family != nh->fib_nh_gw_family))
			return 1;

		if (cfg->fc_gw_family == AF_INET &&
		    cfg->fc_gw4 != nh->fib_nh_gw4)
			return 1;

		if (cfg->fc_gw_family == AF_INET6 &&
		    ipv6_addr_cmp(&cfg->fc_gw6, &nh->fib_nh_gw6))
			return 1;

		return 0;
	}

#ifdef CONFIG_IP_ROUTE_MULTIPATH
	if (!cfg->fc_mp)
		return 0;

	rtnh = cfg->fc_mp;
	remaining = cfg->fc_mp_len;

	for_nexthops(fi) {
		int attrlen;

		if (!rtnh_ok(rtnh, remaining))
			return -EINVAL;

		if (rtnh->rtnh_ifindex && rtnh->rtnh_ifindex != nh->fib_nh_oif)
			return 1;

		attrlen = rtnh_attrlen(rtnh);
		if (attrlen > 0) {
			struct nlattr *nla, *nlav, *attrs = rtnh_attrs(rtnh);
			int err;

			nla = nla_find(attrs, attrlen, RTA_GATEWAY);
			nlav = nla_find(attrs, attrlen, RTA_VIA);
			if (nla && nlav) {
				NL_SET_ERR_MSG(extack,
					       "Nexthop configuration can not contain both GATEWAY and VIA");
				return -EINVAL;
			}

			if (nla) {
				__be32 gw;

				err = fib_gw_from_attr(&gw, nla, extack);
				if (err)
					return err;

				if (nh->fib_nh_gw_family != AF_INET ||
				    gw != nh->fib_nh_gw4)
					return 1;
			} else if (nlav) {
				struct fib_config cfg2;

				err = fib_gw_from_via(&cfg2, nlav, extack);
				if (err)
					return err;

				switch (nh->fib_nh_gw_family) {
				case AF_INET:
					if (cfg2.fc_gw_family != AF_INET ||
					    cfg2.fc_gw4 != nh->fib_nh_gw4)
						return 1;
					break;
				case AF_INET6:
					if (cfg2.fc_gw_family != AF_INET6 ||
					    ipv6_addr_cmp(&cfg2.fc_gw6,
							  &nh->fib_nh_gw6))
						return 1;
					break;
				}
			}

#ifdef CONFIG_IP_ROUTE_CLASSID
			nla = nla_find(attrs, attrlen, RTA_FLOW);
			if (nla) {
				if (nla_len(nla) < sizeof(u32)) {
					NL_SET_ERR_MSG(extack, "Invalid RTA_FLOW");
					return -EINVAL;
				}
				if (nla_get_u32(nla) != nh->nh_tclassid)
					return 1;
			}
#endif
		}

		rtnh = rtnh_next(rtnh, &remaining);
	} endfor_nexthops(fi);
#endif
	return 0;
}

bool fib_metrics_match(struct fib_config *cfg, struct fib_info *fi)
{
	struct nlattr *nla;
	int remaining;

	if (!cfg->fc_mx)
		return true;

	nla_for_each_attr(nla, cfg->fc_mx, cfg->fc_mx_len, remaining) {
		int type = nla_type(nla);
		u32 fi_val, val;

		if (!type)
			continue;
		if (type > RTAX_MAX)
			return false;

		if (type == RTAX_CC_ALGO) {
			char tmp[TCP_CA_NAME_MAX];
			bool ecn_ca = false;

			nla_strscpy(tmp, nla, sizeof(tmp));
			val = tcp_ca_get_key_by_name(fi->fib_net, tmp, &ecn_ca);
		} else {
			if (nla_len(nla) != sizeof(u32))
				return false;
			val = nla_get_u32(nla);
		}

		fi_val = fi->fib_metrics->metrics[type - 1];
		if (type == RTAX_FEATURES)
			fi_val &= ~DST_FEATURE_ECN_CA;

		if (fi_val != val)
			return false;
	}

	return true;
}

static int fib_check_nh_v6_gw(struct net *net, struct fib_nh *nh,
			      u32 table, struct netlink_ext_ack *extack)
{
	struct fib6_config cfg = {
		.fc_table = table,
		.fc_flags = nh->fib_nh_flags | RTF_GATEWAY,
		.fc_ifindex = nh->fib_nh_oif,
		.fc_gateway = nh->fib_nh_gw6,
	};
	struct fib6_nh fib6_nh = {};
	int err;

	err = ipv6_stub->fib6_nh_init(net, &fib6_nh, &cfg, GFP_KERNEL, extack);
	if (!err) {
		nh->fib_nh_dev = fib6_nh.fib_nh_dev;
		dev_hold(nh->fib_nh_dev);
		nh->fib_nh_oif = nh->fib_nh_dev->ifindex;
		nh->fib_nh_scope = RT_SCOPE_LINK;

		ipv6_stub->fib6_nh_release(&fib6_nh);
	}

	return err;
}

/*
 * Picture
 * -------
 *
 * Semantics of nexthop is very messy by historical reasons.
 * We have to take into account, that:
 * a) gateway can be actually local interface address,
 *    so that gatewayed route is direct.
 * b) gateway must be on-link address, possibly
 *    described not by an ifaddr, but also by a direct route.
 * c) If both gateway and interface are specified, they should not
 *    contradict.
 * d) If we use tunnel routes, gateway could be not on-link.
 *
 * Attempt to reconcile all of these (alas, self-contradictory) conditions
 * results in pretty ugly and hairy code with obscure logic.
 *
 * I chose to generalize it instead, so that the size
 * of code does not increase practically, but it becomes
 * much more general.
 * Every prefix is assigned a "scope" value: "host" is local address,
 * "link" is direct route,
 * [ ... "site" ... "interior" ... ]
 * and "universe" is true gateway route with global meaning.
 *
 * Every prefix refers to a set of "nexthop"s (gw, oif),
 * where gw must have narrower scope. This recursion stops
 * when gw has LOCAL scope or if "nexthop" is declared ONLINK,
 * which means that gw is forced to be on link.
 *
 * Code is still hairy, but now it is apparently logically
 * consistent and very flexible. F.e. as a by-product it allows
 * independent exterior and interior routing processes
 * to co-exist in peace.
 *
 * Normally it looks as follows.
 *
 * {universe prefix} -> (gw, oif) [scope link]
 *		  |
 *		  |-> {link prefix} -> (gw, oif) [scope local]
 *					|
 *					|-> {local prefix} (terminal node)
 */
static int fib_check_nh_v4_gw(struct net *net, struct fib_nh *nh, u32 table,
			      u8 scope, struct netlink_ext_ack *extack)
{
	struct net_device *dev;
	struct fib_result res;
	int err = 0;

	if (nh->fib_nh_flags & RTNH_F_ONLINK) {
		unsigned int addr_type;

		if (scope >= RT_SCOPE_LINK) {
			NL_SET_ERR_MSG(extack, "Nexthop has invalid scope");
			return -EINVAL;
		}
		dev = __dev_get_by_index(net, nh->fib_nh_oif);
		if (!dev) {
			NL_SET_ERR_MSG(extack, "Nexthop device required for onlink");
			return -ENODEV;
		}
		if (!(dev->flags & IFF_UP)) {
			NL_SET_ERR_MSG(extack, "Nexthop device is not up");
			return -ENETDOWN;
		}
		addr_type = inet_addr_type_dev_table(net, dev, nh->fib_nh_gw4);
		if (addr_type != RTN_UNICAST) {
			NL_SET_ERR_MSG(extack, "Nexthop has invalid gateway");
			return -EINVAL;
		}
		if (!netif_carrier_ok(dev))
			nh->fib_nh_flags |= RTNH_F_LINKDOWN;
		nh->fib_nh_dev = dev;
		dev_hold(dev);
		nh->fib_nh_scope = RT_SCOPE_LINK;
		return 0;
	}
	rcu_read_lock();
	{
		struct fib_table *tbl = NULL;
		struct flowi4 fl4 = {
			.daddr = nh->fib_nh_gw4,
			.flowi4_scope = scope + 1,
			.flowi4_oif = nh->fib_nh_oif,
			.flowi4_iif = LOOPBACK_IFINDEX,
		};

		/* It is not necessary, but requires a bit of thinking */
		if (fl4.flowi4_scope < RT_SCOPE_LINK)
			fl4.flowi4_scope = RT_SCOPE_LINK;

		if (table && table != RT_TABLE_MAIN)
			tbl = fib_get_table(net, table);

		if (tbl)
			err = fib_table_lookup(tbl, &fl4, &res,
					       FIB_LOOKUP_IGNORE_LINKSTATE |
					       FIB_LOOKUP_NOREF);

		/* on error or if no table given do full lookup. This
		 * is needed for example when nexthops are in the local
		 * table rather than the given table
		 */
		if (!tbl || err) {
			err = fib_lookup(net, &fl4, &res,
					 FIB_LOOKUP_IGNORE_LINKSTATE);
		}

		if (err) {
			NL_SET_ERR_MSG(extack, "Nexthop has invalid gateway");
			goto out;
		}
	}

	err = -EINVAL;
	if (res.type != RTN_UNICAST && res.type != RTN_LOCAL) {
		NL_SET_ERR_MSG(extack, "Nexthop has invalid gateway");
		goto out;
	}
	nh->fib_nh_scope = res.scope;
	nh->fib_nh_oif = FIB_RES_OIF(res);
	nh->fib_nh_dev = dev = FIB_RES_DEV(res);
	if (!dev) {
		NL_SET_ERR_MSG(extack,
			       "No egress device for nexthop gateway");
		goto out;
	}
	dev_hold(dev);
	if (!netif_carrier_ok(dev))
		nh->fib_nh_flags |= RTNH_F_LINKDOWN;
	err = (dev->flags & IFF_UP) ? 0 : -ENETDOWN;
out:
	rcu_read_unlock();
	return err;
}
2019-04-06 07:30:31 +08:00
|
|
|
|
|
|
|
static int fib_check_nh_nongw(struct net *net, struct fib_nh *nh,
|
|
|
|
struct netlink_ext_ack *extack)
|
|
|
|
{
|
|
|
|
struct in_device *in_dev;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
if (nh->fib_nh_flags & (RTNH_F_PERVASIVE | RTNH_F_ONLINK)) {
|
|
|
|
NL_SET_ERR_MSG(extack,
|
|
|
|
"Invalid flags for nexthop - PERVASIVE and ONLINK can not be set");
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
|
|
|
|
err = -ENODEV;
|
|
|
|
in_dev = inetdev_by_index(net, nh->fib_nh_oif);
|
|
|
|
if (!in_dev)
|
|
|
|
goto out;
|
|
|
|
err = -ENETDOWN;
|
|
|
|
if (!(in_dev->dev->flags & IFF_UP)) {
|
|
|
|
NL_SET_ERR_MSG(extack, "Device for nexthop is not up");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
nh->fib_nh_dev = in_dev->dev;
|
|
|
|
dev_hold(nh->fib_nh_dev);
|
|
|
|
nh->fib_nh_scope = RT_SCOPE_HOST;
|
|
|
|
if (!netif_carrier_ok(nh->fib_nh_dev))
|
|
|
|
nh->fib_nh_flags |= RTNH_F_LINKDOWN;
|
|
|
|
err = 0;
|
|
|
|
out:
|
|
|
|
rcu_read_unlock();
|
|
|
|
return err;
|
|
|
|
}

int fib_check_nh(struct net *net, struct fib_nh *nh, u32 table, u8 scope,
		 struct netlink_ext_ack *extack)
{
	int err;

	if (nh->fib_nh_gw_family == AF_INET)
		err = fib_check_nh_v4_gw(net, nh, table, scope, extack);
	else if (nh->fib_nh_gw_family == AF_INET6)
		err = fib_check_nh_v6_gw(net, nh, table, extack);
	else
		err = fib_check_nh_nongw(net, nh, extack);

	return err;
}

static inline unsigned int fib_laddr_hashfn(__be32 val)
{
	unsigned int mask = (fib_info_hash_size - 1);

	return ((__force u32)val ^
		((__force u32)val >> 7) ^
		((__force u32)val >> 14)) & mask;
}

static struct hlist_head *fib_info_hash_alloc(int bytes)
{
	if (bytes <= PAGE_SIZE)
		return kzalloc(bytes, GFP_KERNEL);
	else
		return (struct hlist_head *)
			__get_free_pages(GFP_KERNEL | __GFP_ZERO,
					 get_order(bytes));
}

static void fib_info_hash_free(struct hlist_head *hash, int bytes)
{
	if (!hash)
		return;

	if (bytes <= PAGE_SIZE)
		kfree(hash);
	else
		free_pages((unsigned long) hash, get_order(bytes));
}

static void fib_info_hash_move(struct hlist_head *new_info_hash,
			       struct hlist_head *new_laddrhash,
			       unsigned int new_size)
{
	struct hlist_head *old_info_hash, *old_laddrhash;
	unsigned int old_size = fib_info_hash_size;
	unsigned int i, bytes;

	spin_lock_bh(&fib_info_lock);
	old_info_hash = fib_info_hash;
	old_laddrhash = fib_info_laddrhash;
	fib_info_hash_size = new_size;

	for (i = 0; i < old_size; i++) {
		struct hlist_head *head = &fib_info_hash[i];
		struct hlist_node *n;
		struct fib_info *fi;

		hlist_for_each_entry_safe(fi, n, head, fib_hash) {
			struct hlist_head *dest;
			unsigned int new_hash;

			new_hash = fib_info_hashfn(fi);
			dest = &new_info_hash[new_hash];
			hlist_add_head(&fi->fib_hash, dest);
		}
	}
	fib_info_hash = new_info_hash;

	for (i = 0; i < old_size; i++) {
		struct hlist_head *lhead = &fib_info_laddrhash[i];
		struct hlist_node *n;
		struct fib_info *fi;

		hlist_for_each_entry_safe(fi, n, lhead, fib_lhash) {
			struct hlist_head *ldest;
			unsigned int new_hash;

			new_hash = fib_laddr_hashfn(fi->fib_prefsrc);
			ldest = &new_laddrhash[new_hash];
			hlist_add_head(&fi->fib_lhash, ldest);
		}
	}
	fib_info_laddrhash = new_laddrhash;

	spin_unlock_bh(&fib_info_lock);

	bytes = old_size * sizeof(struct hlist_head *);
	fib_info_hash_free(old_info_hash, bytes);
	fib_info_hash_free(old_laddrhash, bytes);
}

__be32 fib_info_update_nhc_saddr(struct net *net, struct fib_nh_common *nhc,
				 unsigned char scope)
{
	struct fib_nh *nh;

	if (nhc->nhc_family != AF_INET)
		return inet_select_addr(nhc->nhc_dev, 0, scope);

	nh = container_of(nhc, struct fib_nh, nh_common);
	nh->nh_saddr = inet_select_addr(nh->fib_nh_dev, nh->fib_nh_gw4, scope);
	nh->nh_saddr_genid = atomic_read(&net->ipv4.dev_addr_genid);

	return nh->nh_saddr;
}

__be32 fib_result_prefsrc(struct net *net, struct fib_result *res)
{
	struct fib_nh_common *nhc = res->nhc;

	if (res->fi->fib_prefsrc)
		return res->fi->fib_prefsrc;

	if (nhc->nhc_family == AF_INET) {
		struct fib_nh *nh;

		nh = container_of(nhc, struct fib_nh, nh_common);
		if (nh->nh_saddr_genid == atomic_read(&net->ipv4.dev_addr_genid))
			return nh->nh_saddr;
	}

	return fib_info_update_nhc_saddr(net, nhc, res->fi->fib_scope);
}

static bool fib_valid_prefsrc(struct fib_config *cfg, __be32 fib_prefsrc)
{
	if (cfg->fc_type != RTN_LOCAL || !cfg->fc_dst ||
	    fib_prefsrc != cfg->fc_dst) {
		u32 tb_id = cfg->fc_table;
		int rc;

		if (tb_id == RT_TABLE_MAIN)
			tb_id = RT_TABLE_LOCAL;

		rc = inet_addr_type_table(cfg->fc_nlinfo.nl_net,
					  fib_prefsrc, tb_id);

		if (rc != RTN_LOCAL && tb_id != RT_TABLE_LOCAL) {
			rc = inet_addr_type_table(cfg->fc_nlinfo.nl_net,
						  fib_prefsrc, RT_TABLE_LOCAL);
		}

		if (rc != RTN_LOCAL)
			return false;
	}
	return true;
}

struct fib_info *fib_create_info(struct fib_config *cfg,
				 struct netlink_ext_ack *extack)
{
	int err;
	struct fib_info *fi = NULL;
	struct nexthop *nh = NULL;
	struct fib_info *ofi;
	int nhs = 1;
	struct net *net = cfg->fc_nlinfo.nl_net;

	if (cfg->fc_type > RTN_MAX)
		goto err_inval;

	/* Fast check to catch the most weird cases */
	if (fib_props[cfg->fc_type].scope > cfg->fc_scope) {
		NL_SET_ERR_MSG(extack, "Invalid scope");
		goto err_inval;
	}

	if (cfg->fc_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN)) {
		NL_SET_ERR_MSG(extack,
			       "Invalid rtm_flags - can not contain DEAD or LINKDOWN");
		goto err_inval;
	}

	if (cfg->fc_nh_id) {
		if (!cfg->fc_mx) {
			fi = fib_find_info_nh(net, cfg);
			if (fi) {
				refcount_inc(&fi->fib_treeref);
				return fi;
			}
		}

		nh = nexthop_find_by_id(net, cfg->fc_nh_id);
		if (!nh) {
			NL_SET_ERR_MSG(extack, "Nexthop id does not exist");
			goto err_inval;
		}
		nhs = 0;
	}

#ifdef CONFIG_IP_ROUTE_MULTIPATH
	if (cfg->fc_mp) {
		nhs = fib_count_nexthops(cfg->fc_mp, cfg->fc_mp_len, extack);
		if (nhs == 0)
			goto err_inval;
	}
#endif

	err = -ENOBUFS;
	if (fib_info_cnt >= fib_info_hash_size) {
		unsigned int new_size = fib_info_hash_size << 1;
		struct hlist_head *new_info_hash;
		struct hlist_head *new_laddrhash;
		unsigned int bytes;

		if (!new_size)
			new_size = 16;
		bytes = new_size * sizeof(struct hlist_head *);
		new_info_hash = fib_info_hash_alloc(bytes);
		new_laddrhash = fib_info_hash_alloc(bytes);
		if (!new_info_hash || !new_laddrhash) {
			fib_info_hash_free(new_info_hash, bytes);
			fib_info_hash_free(new_laddrhash, bytes);
		} else
			fib_info_hash_move(new_info_hash, new_laddrhash, new_size);

		if (!fib_info_hash_size)
			goto failure;
	}

	fi = kzalloc(struct_size(fi, fib_nh, nhs), GFP_KERNEL);
	if (!fi)
		goto failure;
	fi->fib_metrics = ip_fib_metrics_init(fi->fib_net, cfg->fc_mx,
					      cfg->fc_mx_len, extack);
	if (IS_ERR(fi->fib_metrics)) {
		err = PTR_ERR(fi->fib_metrics);
		kfree(fi);
		return ERR_PTR(err);
	}

	fib_info_cnt++;
	fi->fib_net = net;
	fi->fib_protocol = cfg->fc_protocol;
	fi->fib_scope = cfg->fc_scope;
	fi->fib_flags = cfg->fc_flags;
	fi->fib_priority = cfg->fc_priority;
	fi->fib_prefsrc = cfg->fc_prefsrc;
	fi->fib_type = cfg->fc_type;
	fi->fib_tb_id = cfg->fc_table;

	fi->fib_nhs = nhs;
	if (nh) {
		if (!nexthop_get(nh)) {
			NL_SET_ERR_MSG(extack, "Nexthop has been deleted");
			err = -EINVAL;
		} else {
			err = 0;
			fi->nh = nh;
		}
	} else {
		change_nexthops(fi) {
			nexthop_nh->nh_parent = fi;
		} endfor_nexthops(fi)

		if (cfg->fc_mp)
			err = fib_get_nhs(fi, cfg->fc_mp, cfg->fc_mp_len, cfg,
					  extack);
		else
			err = fib_nh_init(net, fi->fib_nh, cfg, 1, extack);
	}

	if (err != 0)
		goto failure;

	if (fib_props[cfg->fc_type].error) {
		if (cfg->fc_gw_family || cfg->fc_oif || cfg->fc_mp) {
			NL_SET_ERR_MSG(extack,
				       "Gateway, device and multipath can not be specified for this route type");
			goto err_inval;
		}
		goto link_it;
	} else {
		switch (cfg->fc_type) {
		case RTN_UNICAST:
		case RTN_LOCAL:
		case RTN_BROADCAST:
		case RTN_ANYCAST:
		case RTN_MULTICAST:
			break;
		default:
			NL_SET_ERR_MSG(extack, "Invalid route type");
			goto err_inval;
		}
	}

	if (cfg->fc_scope > RT_SCOPE_HOST) {
		NL_SET_ERR_MSG(extack, "Invalid scope");
		goto err_inval;
	}

	if (fi->nh) {
		err = fib_check_nexthop(fi->nh, cfg->fc_scope, extack);
		if (err)
			goto failure;
	} else if (cfg->fc_scope == RT_SCOPE_HOST) {
		struct fib_nh *nh = fi->fib_nh;

		/* Local address is added. */
		if (nhs != 1) {
			NL_SET_ERR_MSG(extack,
				       "Route with host scope can not have multiple nexthops");
			goto err_inval;
		}
		if (nh->fib_nh_gw_family) {
			NL_SET_ERR_MSG(extack,
				       "Route with host scope can not have a gateway");
			goto err_inval;
		}
		nh->fib_nh_scope = RT_SCOPE_NOWHERE;
		nh->fib_nh_dev = dev_get_by_index(net, nh->fib_nh_oif);
		err = -ENODEV;
		if (!nh->fib_nh_dev)
			goto failure;
	} else {
		int linkdown = 0;

		change_nexthops(fi) {
			err = fib_check_nh(cfg->fc_nlinfo.nl_net, nexthop_nh,
					   cfg->fc_table, cfg->fc_scope,
					   extack);
			if (err != 0)
				goto failure;
			if (nexthop_nh->fib_nh_flags & RTNH_F_LINKDOWN)
				linkdown++;
		} endfor_nexthops(fi)
		if (linkdown == fi->fib_nhs)
			fi->fib_flags |= RTNH_F_LINKDOWN;
	}

	if (fi->fib_prefsrc && !fib_valid_prefsrc(cfg, fi->fib_prefsrc)) {
		NL_SET_ERR_MSG(extack, "Invalid prefsrc address");
		goto err_inval;
	}

	if (!fi->nh) {
		change_nexthops(fi) {
			fib_info_update_nhc_saddr(net, &nexthop_nh->nh_common,
						  fi->fib_scope);
			if (nexthop_nh->fib_nh_gw_family == AF_INET6)
				fi->fib_nh_is_v6 = true;
		} endfor_nexthops(fi)

		fib_rebalance(fi);
	}

link_it:
	ofi = fib_find_info(fi);
	if (ofi) {
		fi->fib_dead = 1;
		free_fib_info(fi);
		refcount_inc(&ofi->fib_treeref);
		return ofi;
	}

	refcount_set(&fi->fib_treeref, 1);
	refcount_set(&fi->fib_clntref, 1);
	spin_lock_bh(&fib_info_lock);
	hlist_add_head(&fi->fib_hash,
		       &fib_info_hash[fib_info_hashfn(fi)]);
	if (fi->fib_prefsrc) {
		struct hlist_head *head;

		head = &fib_info_laddrhash[fib_laddr_hashfn(fi->fib_prefsrc)];
		hlist_add_head(&fi->fib_lhash, head);
	}
	if (fi->nh) {
		list_add(&fi->nh_list, &nh->fi_list);
	} else {
		change_nexthops(fi) {
			struct hlist_head *head;
			unsigned int hash;

			if (!nexthop_nh->fib_nh_dev)
				continue;
			hash = fib_devindex_hashfn(nexthop_nh->fib_nh_dev->ifindex);
			head = &fib_info_devhash[hash];
			hlist_add_head(&nexthop_nh->nh_hash, head);
		} endfor_nexthops(fi)
	}
	spin_unlock_bh(&fib_info_lock);
	return fi;

err_inval:
	err = -EINVAL;

failure:
	if (fi) {
		fi->fib_dead = 1;
		free_fib_info(fi);
	}

	return ERR_PTR(err);
}
|
|
|
|
|

int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nhc,
		     u8 rt_family, unsigned char *flags, bool skip_oif)
{
	if (nhc->nhc_flags & RTNH_F_DEAD)
		*flags |= RTNH_F_DEAD;

	if (nhc->nhc_flags & RTNH_F_LINKDOWN) {
		*flags |= RTNH_F_LINKDOWN;

		rcu_read_lock();
		switch (nhc->nhc_family) {
		case AF_INET:
			if (ip_ignore_linkdown(nhc->nhc_dev))
				*flags |= RTNH_F_DEAD;
			break;
		case AF_INET6:
			if (ip6_ignore_linkdown(nhc->nhc_dev))
				*flags |= RTNH_F_DEAD;
			break;
		}
		rcu_read_unlock();
	}

	switch (nhc->nhc_gw_family) {
	case AF_INET:
		if (nla_put_in_addr(skb, RTA_GATEWAY, nhc->nhc_gw.ipv4))
			goto nla_put_failure;
		break;
	case AF_INET6:
		/* if gateway family does not match nexthop family
		 * gateway is encoded as RTA_VIA
		 */
		if (rt_family != nhc->nhc_gw_family) {
			int alen = sizeof(struct in6_addr);
			struct nlattr *nla;
			struct rtvia *via;

			nla = nla_reserve(skb, RTA_VIA, alen + 2);
			if (!nla)
				goto nla_put_failure;

			via = nla_data(nla);
			via->rtvia_family = AF_INET6;
			memcpy(via->rtvia_addr, &nhc->nhc_gw.ipv6, alen);
		} else if (nla_put_in6_addr(skb, RTA_GATEWAY,
					    &nhc->nhc_gw.ipv6) < 0) {
			goto nla_put_failure;
		}
		break;
	}

	*flags |= (nhc->nhc_flags &
		   (RTNH_F_ONLINK | RTNH_F_OFFLOAD | RTNH_F_TRAP));

	if (!skip_oif && nhc->nhc_dev &&
	    nla_put_u32(skb, RTA_OIF, nhc->nhc_dev->ifindex))
		goto nla_put_failure;

	if (nhc->nhc_lwtstate &&
	    lwtunnel_fill_encap(skb, nhc->nhc_lwtstate,
				RTA_ENCAP, RTA_ENCAP_TYPE) < 0)
		goto nla_put_failure;

	return 0;

nla_put_failure:
	return -EMSGSIZE;
}
EXPORT_SYMBOL_GPL(fib_nexthop_info);

#if IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) || IS_ENABLED(CONFIG_IPV6)
int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc,
		    int nh_weight, u8 rt_family, u32 nh_tclassid)
{
	const struct net_device *dev = nhc->nhc_dev;
	struct rtnexthop *rtnh;
	unsigned char flags = 0;

	rtnh = nla_reserve_nohdr(skb, sizeof(*rtnh));
	if (!rtnh)
		goto nla_put_failure;

	rtnh->rtnh_hops = nh_weight - 1;
	rtnh->rtnh_ifindex = dev ? dev->ifindex : 0;

	if (fib_nexthop_info(skb, nhc, rt_family, &flags, true) < 0)
		goto nla_put_failure;

	rtnh->rtnh_flags = flags;

	if (nh_tclassid && nla_put_u32(skb, RTA_FLOW, nh_tclassid))
		goto nla_put_failure;

	/* length of rtnetlink header + attributes */
	rtnh->rtnh_len = nlmsg_get_pos(skb) - (void *)rtnh;

	return 0;

nla_put_failure:
	return -EMSGSIZE;
}
EXPORT_SYMBOL_GPL(fib_add_nexthop);
#endif

#ifdef CONFIG_IP_ROUTE_MULTIPATH
static int fib_add_multipath(struct sk_buff *skb, struct fib_info *fi)
{
	struct nlattr *mp;

	mp = nla_nest_start_noflag(skb, RTA_MULTIPATH);
	if (!mp)
		goto nla_put_failure;

	if (unlikely(fi->nh)) {
		if (nexthop_mpath_fill_node(skb, fi->nh, AF_INET) < 0)
			goto nla_put_failure;
		goto mp_end;
	}

	for_nexthops(fi) {
		u32 nh_tclassid = 0;
#ifdef CONFIG_IP_ROUTE_CLASSID
		nh_tclassid = nh->nh_tclassid;
#endif
		if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight,
				    AF_INET, nh_tclassid) < 0)
			goto nla_put_failure;
	} endfor_nexthops(fi);

mp_end:
	nla_nest_end(skb, mp);

	return 0;

nla_put_failure:
	return -EMSGSIZE;
}
#else
static int fib_add_multipath(struct sk_buff *skb, struct fib_info *fi)
{
	return 0;
}
#endif

int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event,
		  const struct fib_rt_info *fri, unsigned int flags)
{
	unsigned int nhs = fib_info_num_path(fri->fi);
	struct fib_info *fi = fri->fi;
	u32 tb_id = fri->tb_id;
	struct nlmsghdr *nlh;
	struct rtmsg *rtm;

	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*rtm), flags);
	if (!nlh)
		return -EMSGSIZE;

	rtm = nlmsg_data(nlh);
	rtm->rtm_family = AF_INET;
	rtm->rtm_dst_len = fri->dst_len;
	rtm->rtm_src_len = 0;
	rtm->rtm_tos = fri->tos;
	if (tb_id < 256)
		rtm->rtm_table = tb_id;
	else
		rtm->rtm_table = RT_TABLE_COMPAT;
	if (nla_put_u32(skb, RTA_TABLE, tb_id))
		goto nla_put_failure;
	rtm->rtm_type = fri->type;
	rtm->rtm_flags = fi->fib_flags;
	rtm->rtm_scope = fi->fib_scope;
	rtm->rtm_protocol = fi->fib_protocol;

	if (rtm->rtm_dst_len &&
	    nla_put_in_addr(skb, RTA_DST, fri->dst))
		goto nla_put_failure;
	if (fi->fib_priority &&
	    nla_put_u32(skb, RTA_PRIORITY, fi->fib_priority))
		goto nla_put_failure;
	if (rtnetlink_put_metrics(skb, fi->fib_metrics->metrics) < 0)
		goto nla_put_failure;

	if (fi->fib_prefsrc &&
	    nla_put_in_addr(skb, RTA_PREFSRC, fi->fib_prefsrc))
		goto nla_put_failure;

	if (fi->nh) {
		if (nla_put_u32(skb, RTA_NH_ID, fi->nh->id))
			goto nla_put_failure;
		if (nexthop_is_blackhole(fi->nh))
			rtm->rtm_type = RTN_BLACKHOLE;
		if (!fi->fib_net->ipv4.sysctl_nexthop_compat_mode)
			goto offload;
	}

	if (nhs == 1) {
		const struct fib_nh_common *nhc = fib_info_nhc(fi, 0);
		unsigned char flags = 0;

		if (fib_nexthop_info(skb, nhc, AF_INET, &flags, false) < 0)
			goto nla_put_failure;

		rtm->rtm_flags = flags;
#ifdef CONFIG_IP_ROUTE_CLASSID
		if (nhc->nhc_family == AF_INET) {
			struct fib_nh *nh;

			nh = container_of(nhc, struct fib_nh, nh_common);
			if (nh->nh_tclassid &&
			    nla_put_u32(skb, RTA_FLOW, nh->nh_tclassid))
				goto nla_put_failure;
		}
#endif
	} else {
		if (fib_add_multipath(skb, fi) < 0)
			goto nla_put_failure;
	}

offload:
	if (fri->offload)
		rtm->rtm_flags |= RTM_F_OFFLOAD;
	if (fri->trap)
		rtm->rtm_flags |= RTM_F_TRAP;
	if (fri->offload_failed)
		rtm->rtm_flags |= RTM_F_OFFLOAD_FAILED;

	nlmsg_end(skb, nlh);
	return 0;

nla_put_failure:
	nlmsg_cancel(skb, nlh);
	return -EMSGSIZE;
}

/*
 * Update FIB if:
 * - local address disappeared -> we must delete all the entries
 *   referring to it.
 * - device went down -> we must shutdown all nexthops going via it.
 */
int fib_sync_down_addr(struct net_device *dev, __be32 local)
{
	int ret = 0;
	unsigned int hash = fib_laddr_hashfn(local);
	struct hlist_head *head = &fib_info_laddrhash[hash];
	int tb_id = l3mdev_fib_table(dev) ? : RT_TABLE_MAIN;
	struct net *net = dev_net(dev);
	struct fib_info *fi;

	if (!fib_info_laddrhash || local == 0)
		return 0;

	hlist_for_each_entry(fi, head, fib_lhash) {
		if (!net_eq(fi->fib_net, net) ||
		    fi->fib_tb_id != tb_id)
			continue;
		if (fi->fib_prefsrc == local) {
			fi->fib_flags |= RTNH_F_DEAD;
			ret++;
		}
	}
	return ret;
}

static int call_fib_nh_notifiers(struct fib_nh *nh,
				 enum fib_event_type event_type)
{
	bool ignore_link_down = ip_ignore_linkdown(nh->fib_nh_dev);
	struct fib_nh_notifier_info info = {
		.fib_nh = nh,
	};

	switch (event_type) {
	case FIB_EVENT_NH_ADD:
		if (nh->fib_nh_flags & RTNH_F_DEAD)
			break;
		if (ignore_link_down && nh->fib_nh_flags & RTNH_F_LINKDOWN)
			break;
		return call_fib4_notifiers(dev_net(nh->fib_nh_dev), event_type,
					   &info.info);
	case FIB_EVENT_NH_DEL:
		if ((ignore_link_down && nh->fib_nh_flags & RTNH_F_LINKDOWN) ||
		    (nh->fib_nh_flags & RTNH_F_DEAD))
			return call_fib4_notifiers(dev_net(nh->fib_nh_dev),
						   event_type, &info.info);
		break;
	default:
		break;
	}

	return NOTIFY_DONE;
}

/* Update the PMTU of exceptions when:
 * - the new MTU of the first hop becomes smaller than the PMTU
 * - the old MTU was the same as the PMTU, and it limited discovery of
 *   larger MTUs on the path. With that limit raised, we can now
 *   discover larger MTUs
 * A special case is locked exceptions, for which the PMTU is smaller
 * than the minimal accepted PMTU:
 * - if the new MTU is greater than the PMTU, don't make any change
 * - otherwise, unlock and set PMTU
 */
void fib_nhc_update_mtu(struct fib_nh_common *nhc, u32 new, u32 orig)
{
	struct fnhe_hash_bucket *bucket;
	int i;

	bucket = rcu_dereference_protected(nhc->nhc_exceptions, 1);
	if (!bucket)
		return;

	for (i = 0; i < FNHE_HASH_SIZE; i++) {
		struct fib_nh_exception *fnhe;

		for (fnhe = rcu_dereference_protected(bucket[i].chain, 1);
		     fnhe;
		     fnhe = rcu_dereference_protected(fnhe->fnhe_next, 1)) {
			if (fnhe->fnhe_mtu_locked) {
				if (new <= fnhe->fnhe_pmtu) {
					fnhe->fnhe_pmtu = new;
					fnhe->fnhe_mtu_locked = false;
				}
			} else if (new < fnhe->fnhe_pmtu ||
				   orig == fnhe->fnhe_pmtu) {
				fnhe->fnhe_pmtu = new;
			}
		}
	}
}

void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
{
	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
	struct hlist_head *head = &fib_info_devhash[hash];
	struct fib_nh *nh;

	hlist_for_each_entry(nh, head, nh_hash) {
		if (nh->fib_nh_dev == dev)
			fib_nhc_update_mtu(&nh->nh_common, dev->mtu, orig_mtu);
	}
}
2015-10-30 16:23:33 +08:00
|
|
|
/* Event force Flags Description
|
|
|
|
* NETDEV_CHANGE 0 LINKDOWN Carrier OFF, not for scope host
|
|
|
|
* NETDEV_DOWN 0 LINKDOWN|DEAD Link down, not for scope host
|
|
|
|
* NETDEV_DOWN 1 LINKDOWN|DEAD Last address removed
|
|
|
|
* NETDEV_UNREGISTER 1 LINKDOWN|DEAD Device removed
|
2019-06-04 11:19:51 +08:00
|
|
|
*
|
|
|
|
* only used when fib_nh is built into fib_info
|
2015-10-30 16:23:33 +08:00
|
|
|
*/
|
|
|
|
int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force)
|
2008-02-01 10:48:47 +08:00
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
int scope = RT_SCOPE_NOWHERE;
|
|
|
|
struct fib_info *prev_fi = NULL;
|
|
|
|
unsigned int hash = fib_devindex_hashfn(dev->ifindex);
|
|
|
|
struct hlist_head *head = &fib_info_devhash[hash];
|
|
|
|
struct fib_nh *nh;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2015-10-30 16:23:33 +08:00
|
|
|
if (force)
|
2008-02-01 10:48:47 +08:00
|
|
|
scope = -1;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
	hlist_for_each_entry(nh, head, nh_hash) {
		struct fib_info *fi = nh->nh_parent;
		int dead;

		BUG_ON(!fi->fib_nhs);
		if (nh->fib_nh_dev != dev || fi == prev_fi)
			continue;
		prev_fi = fi;
		dead = 0;
		change_nexthops(fi) {
			if (nexthop_nh->fib_nh_flags & RTNH_F_DEAD)
				dead++;
			else if (nexthop_nh->fib_nh_dev == dev &&
				 nexthop_nh->fib_nh_scope != scope) {
				switch (event) {
				case NETDEV_DOWN:
				case NETDEV_UNREGISTER:
					nexthop_nh->fib_nh_flags |= RTNH_F_DEAD;
					fallthrough;
				case NETDEV_CHANGE:
					nexthop_nh->fib_nh_flags |= RTNH_F_LINKDOWN;
					break;
				}
				call_fib_nh_notifiers(nexthop_nh,
						      FIB_EVENT_NH_DEL);
				dead++;
			}
#ifdef CONFIG_IP_ROUTE_MULTIPATH
			if (event == NETDEV_UNREGISTER &&
			    nexthop_nh->fib_nh_dev == dev) {
				dead = fi->fib_nhs;
				break;
			}
#endif
		} endfor_nexthops(fi)
		if (dead == fi->fib_nhs) {
			switch (event) {
			case NETDEV_DOWN:
			case NETDEV_UNREGISTER:
				fi->fib_flags |= RTNH_F_DEAD;
				fallthrough;
			case NETDEV_CHANGE:
				fi->fib_flags |= RTNH_F_LINKDOWN;
				break;
			}
			ret++;
		}

		fib_rebalance(fi);
	}

	return ret;
}

/* Must be invoked inside of an RCU protected region. */
static void fib_select_default(const struct flowi4 *flp, struct fib_result *res)
{
	struct fib_info *fi = NULL, *last_resort = NULL;
	struct hlist_head *fa_head = res->fa_head;
	struct fib_table *tb = res->table;
	u8 slen = 32 - res->prefixlen;
	int order = -1, last_idx = -1;
	struct fib_alias *fa, *fa1 = NULL;
	u32 last_prio = res->fi->fib_priority;
	u8 last_tos = 0;

	hlist_for_each_entry_rcu(fa, fa_head, fa_list) {
		struct fib_info *next_fi = fa->fa_info;
		struct fib_nh_common *nhc;

		if (fa->fa_slen != slen)
			continue;
		if (fa->fa_tos && fa->fa_tos != flp->flowi4_tos)
			continue;
		if (fa->tb_id != tb->tb_id)
			continue;
		if (next_fi->fib_priority > last_prio &&
		    fa->fa_tos == last_tos) {
			if (last_tos)
				continue;
			break;
		}
		if (next_fi->fib_flags & RTNH_F_DEAD)
			continue;
		last_tos = fa->fa_tos;
		last_prio = next_fi->fib_priority;

		if (next_fi->fib_scope != res->scope ||
		    fa->fa_type != RTN_UNICAST)
			continue;

		nhc = fib_info_nhc(next_fi, 0);
		if (!nhc->nhc_gw_family || nhc->nhc_scope != RT_SCOPE_LINK)
			continue;

		fib_alias_accessed(fa);

		if (!fi) {
			if (next_fi != res->fi)
				break;
			fa1 = fa;
		} else if (!fib_detect_death(fi, order, &last_resort,
					     &last_idx, fa1->fa_default)) {
			fib_result_assign(res, fi);
			fa1->fa_default = order;
			goto out;
		}
		fi = next_fi;
		order++;
	}

	if (order <= 0 || !fi) {
		if (fa1)
			fa1->fa_default = -1;
		goto out;
	}

	if (!fib_detect_death(fi, order, &last_resort, &last_idx,
			      fa1->fa_default)) {
		fib_result_assign(res, fi);
		fa1->fa_default = order;
		goto out;
	}

	if (last_idx >= 0)
		fib_result_assign(res, last_resort);
	fa1->fa_default = last_idx;
out:
	return;
}

/*
 * Dead device goes up. We wake up dead nexthops.
 * It makes sense only on multipath routes.
 *
 * only used when fib_nh is built into fib_info
 */
int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
{
	struct fib_info *prev_fi;
	unsigned int hash;
	struct hlist_head *head;
	struct fib_nh *nh;
	int ret;

	if (!(dev->flags & IFF_UP))
		return 0;

	if (nh_flags & RTNH_F_DEAD) {
		unsigned int flags = dev_get_flags(dev);

		if (flags & (IFF_RUNNING | IFF_LOWER_UP))
			nh_flags |= RTNH_F_LINKDOWN;
	}

	prev_fi = NULL;
	hash = fib_devindex_hashfn(dev->ifindex);
	head = &fib_info_devhash[hash];
	ret = 0;

	hlist_for_each_entry(nh, head, nh_hash) {
		struct fib_info *fi = nh->nh_parent;
		int alive;

		BUG_ON(!fi->fib_nhs);
		if (nh->fib_nh_dev != dev || fi == prev_fi)
			continue;

		prev_fi = fi;
		alive = 0;
		change_nexthops(fi) {
			if (!(nexthop_nh->fib_nh_flags & nh_flags)) {
				alive++;
				continue;
			}
			if (!nexthop_nh->fib_nh_dev ||
			    !(nexthop_nh->fib_nh_dev->flags & IFF_UP))
				continue;
			if (nexthop_nh->fib_nh_dev != dev ||
			    !__in_dev_get_rtnl(dev))
				continue;
			alive++;
			nexthop_nh->fib_nh_flags &= ~nh_flags;
			call_fib_nh_notifiers(nexthop_nh, FIB_EVENT_NH_ADD);
		} endfor_nexthops(fi)

		if (alive > 0) {
			fi->fib_flags &= ~nh_flags;
			ret++;
		}

		fib_rebalance(fi);
	}

	return ret;
}

#ifdef CONFIG_IP_ROUTE_MULTIPATH
static bool fib_good_nh(const struct fib_nh *nh)
{
	int state = NUD_REACHABLE;

	if (nh->fib_nh_scope == RT_SCOPE_LINK) {
		struct neighbour *n;

		rcu_read_lock_bh();

		if (likely(nh->fib_nh_gw_family == AF_INET))
			n = __ipv4_neigh_lookup_noref(nh->fib_nh_dev,
						      (__force u32)nh->fib_nh_gw4);
		else if (nh->fib_nh_gw_family == AF_INET6)
			n = __ipv6_neigh_lookup_noref_stub(nh->fib_nh_dev,
							   &nh->fib_nh_gw6);
		else
			n = NULL;
		if (n)
			state = n->nud_state;

		rcu_read_unlock_bh();
	}

	return !!(state & NUD_VALID);
}

void fib_select_multipath(struct fib_result *res, int hash)
{
	struct fib_info *fi = res->fi;
	struct net *net = fi->fib_net;
	bool first = false;

	if (unlikely(res->fi->nh)) {
		nexthop_path_fib_result(res, hash);
		return;
	}

	change_nexthops(fi) {
		if (net->ipv4.sysctl_fib_multipath_use_neigh) {
			if (!fib_good_nh(nexthop_nh))
				continue;
			if (!first) {
				res->nh_sel = nhsel;
				res->nhc = &nexthop_nh->nh_common;
				first = true;
			}
		}

		if (hash > atomic_read(&nexthop_nh->fib_nh_upper_bound))
			continue;

		res->nh_sel = nhsel;
		res->nhc = &nexthop_nh->nh_common;
		return;
	} endfor_nexthops(fi);
}
#endif

void fib_select_path(struct net *net, struct fib_result *res,
		     struct flowi4 *fl4, const struct sk_buff *skb)
{
	if (fl4->flowi4_oif && !(fl4->flowi4_flags & FLOWI_FLAG_SKIP_NH_OIF))
		goto check_saddr;

#ifdef CONFIG_IP_ROUTE_MULTIPATH
	if (fib_info_num_path(res->fi) > 1) {
		int h = fib_multipath_hash(net, fl4, skb, NULL);

		fib_select_multipath(res, h);
	}
	else
#endif
	if (!res->prefixlen &&
	    res->table->tb_num_default > 1 &&
	    res->type == RTN_UNICAST)
		fib_select_default(fl4, res);

check_saddr:
	if (!fl4->saddr)
		fl4->saddr = fib_result_prefsrc(net, res);
}