// SPDX-License-Identifier: GPL-2.0-or-later
/* -*- linux-c -*-
 * INET		802.1Q VLAN
 *		Ethernet-type device handling.
 *
 * Authors:	Ben Greear <greearb@candelatech.com>
 *		Please send support related email to: netdev@vger.kernel.org
 *		VLAN Home Page: http://www.candelatech.com/~greear/vlan.html
 *
 * Fixes:       Mar 22 2001: Martin Bokaemper <mbokaemper@unispherenetworks.com>
 *                - reset skb->pkt_type on incoming packets when MAC was changed
 *                - see that changed MAC is saddr for outgoing packets
 *              Oct 20, 2001:  Ard van Breeman:
 *                - Fix MC-list, finally.
 *                - Flush MC-list on VLAN destroy.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/net_tstamp.h>
#include <linux/etherdevice.h>
#include <linux/ethtool.h>
#include <linux/phy.h>
#include <net/arp.h>
#include <net/macsec.h>

#include "vlan.h"
#include "vlanproc.h"
#include <linux/if_vlan.h>
#include <linux/netpoll.h>

/*
 *	Create the VLAN header for an arbitrary protocol layer
 *
 *	saddr=NULL	means use device source address
 *	daddr=NULL	means leave destination address (eg unresolved arp)
 *
 *	This is called when the SKB is moving down the stack towards the
 *	physical devices.
 */
static int vlan_dev_hard_header(struct sk_buff *skb, struct net_device *dev,
				unsigned short type,
				const void *daddr, const void *saddr,
				unsigned int len)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct vlan_hdr *vhdr;
	unsigned int vhdrlen = 0;
	u16 vlan_tci = 0;
	int rc;

	if (!(vlan->flags & VLAN_FLAG_REORDER_HDR)) {
		vhdr = skb_push(skb, VLAN_HLEN);

		vlan_tci = vlan->vlan_id;
		vlan_tci |= vlan_dev_get_egress_qos_mask(dev, skb->priority);
		vhdr->h_vlan_TCI = htons(vlan_tci);

		/*
		 * Set the protocol type. For a packet of type ETH_P_802_3/2 we
		 * put the length in here instead.
		 */
		if (type != ETH_P_802_3 && type != ETH_P_802_2)
			vhdr->h_vlan_encapsulated_proto = htons(type);
		else
			vhdr->h_vlan_encapsulated_proto = htons(len);

		skb->protocol = vlan->vlan_proto;
		type = ntohs(vlan->vlan_proto);
		vhdrlen = VLAN_HLEN;
	}

	/* Before delegating work to the lower layer, enter our MAC-address */
	if (saddr == NULL)
		saddr = dev->dev_addr;

	/* Now make the underlying real hard header */
	dev = vlan->real_dev;
	rc = dev_hard_header(skb, dev, type, daddr, saddr, len + vhdrlen);
	if (rc > 0)
		rc += vhdrlen;
	return rc;
}

static inline netdev_tx_t vlan_netpoll_send_skb(struct vlan_dev_priv *vlan, struct sk_buff *skb)
{
#ifdef CONFIG_NET_POLL_CONTROLLER
	return netpoll_send_skb(vlan->netpoll, skb);
#else
	BUG();
	return NETDEV_TX_OK;
#endif
}

static netdev_tx_t vlan_dev_hard_start_xmit(struct sk_buff *skb,
					    struct net_device *dev)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct vlan_ethhdr *veth = (struct vlan_ethhdr *)(skb->data);
	unsigned int len;
	int ret;

	/* Handle non-VLAN frames if they are sent to us, for example by DHCP.
	 *
	 * NOTE: THIS ASSUMES DIX ETHERNET, SPECIFICALLY NOT SUPPORTING
	 * OTHER THINGS LIKE FDDI/TokenRing/802.3 SNAPs...
	 */
	if (vlan->flags & VLAN_FLAG_REORDER_HDR ||
	    veth->h_vlan_proto != vlan->vlan_proto) {
		u16 vlan_tci;
		vlan_tci = vlan->vlan_id;
		vlan_tci |= vlan_dev_get_egress_qos_mask(dev, skb->priority);
		__vlan_hwaccel_put_tag(skb, vlan->vlan_proto, vlan_tci);
	}

	skb->dev = vlan->real_dev;
	len = skb->len;
	if (unlikely(netpoll_tx_running(dev)))
		return vlan_netpoll_send_skb(vlan, skb);

	ret = dev_queue_xmit(skb);

	if (likely(ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN)) {
		struct vlan_pcpu_stats *stats;

		stats = this_cpu_ptr(vlan->vlan_pcpu_stats);
		u64_stats_update_begin(&stats->syncp);
		u64_stats_inc(&stats->tx_packets);
		u64_stats_add(&stats->tx_bytes, len);
		u64_stats_update_end(&stats->syncp);
	} else {
		this_cpu_inc(vlan->vlan_pcpu_stats->tx_dropped);
	}

	return ret;
}

static int vlan_dev_change_mtu(struct net_device *dev, int new_mtu)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	unsigned int max_mtu = real_dev->mtu;

	if (netif_reduces_vlan_mtu(real_dev))
		max_mtu -= VLAN_HLEN;
	if (max_mtu < new_mtu)
		return -ERANGE;

	dev->mtu = new_mtu;

	return 0;
}

void vlan_dev_set_ingress_priority(const struct net_device *dev,
				   u32 skb_prio, u16 vlan_prio)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);

	if (vlan->ingress_priority_map[vlan_prio & 0x7] && !skb_prio)
		vlan->nr_ingress_mappings--;
	else if (!vlan->ingress_priority_map[vlan_prio & 0x7] && skb_prio)
		vlan->nr_ingress_mappings++;

	vlan->ingress_priority_map[vlan_prio & 0x7] = skb_prio;
}

int vlan_dev_set_egress_priority(const struct net_device *dev,
				 u32 skb_prio, u16 vlan_prio)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct vlan_priority_tci_mapping *mp = NULL;
	struct vlan_priority_tci_mapping *np;
	u32 vlan_qos = (vlan_prio << VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK;

	/* See if a priority mapping exists.. */
	mp = vlan->egress_priority_map[skb_prio & 0xF];
	while (mp) {
		if (mp->priority == skb_prio) {
			if (mp->vlan_qos && !vlan_qos)
				vlan->nr_egress_mappings--;
			else if (!mp->vlan_qos && vlan_qos)
				vlan->nr_egress_mappings++;
			mp->vlan_qos = vlan_qos;
			return 0;
		}
		mp = mp->next;
	}

	/* Create a new mapping then. */
	mp = vlan->egress_priority_map[skb_prio & 0xF];
	np = kmalloc(sizeof(struct vlan_priority_tci_mapping), GFP_KERNEL);
	if (!np)
		return -ENOBUFS;

	np->next = mp;
	np->priority = skb_prio;
	np->vlan_qos = vlan_qos;
	/* Before inserting this element in hash table, make sure all its fields
	 * are committed to memory.
	 * coupled with smp_rmb() in vlan_dev_get_egress_qos_mask()
	 */
	smp_wmb();
	vlan->egress_priority_map[skb_prio & 0xF] = np;
	if (vlan_qos)
		vlan->nr_egress_mappings++;
	return 0;
}

/* Flags are defined in the vlan_flags enum in
 * include/uapi/linux/if_vlan.h file.
 */
int vlan_dev_change_flags(const struct net_device *dev, u32 flags, u32 mask)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	u32 old_flags = vlan->flags;

	if (mask & ~(VLAN_FLAG_REORDER_HDR | VLAN_FLAG_GVRP |
		     VLAN_FLAG_LOOSE_BINDING | VLAN_FLAG_MVRP |
		     VLAN_FLAG_BRIDGE_BINDING))
		return -EINVAL;

	vlan->flags = (old_flags & ~mask) | (flags & mask);

	if (netif_running(dev) && (vlan->flags ^ old_flags) & VLAN_FLAG_GVRP) {
		if (vlan->flags & VLAN_FLAG_GVRP)
			vlan_gvrp_request_join(dev);
		else
			vlan_gvrp_request_leave(dev);
	}

	if (netif_running(dev) && (vlan->flags ^ old_flags) & VLAN_FLAG_MVRP) {
		if (vlan->flags & VLAN_FLAG_MVRP)
			vlan_mvrp_request_join(dev);
		else
			vlan_mvrp_request_leave(dev);
	}
	return 0;
}

void vlan_dev_get_realdev_name(const struct net_device *dev, char *result, size_t size)
{
	strscpy_pad(result, vlan_dev_priv(dev)->real_dev->name, size);
}

bool vlan_dev_inherit_address(struct net_device *dev,
			      struct net_device *real_dev)
{
	if (dev->addr_assign_type != NET_ADDR_STOLEN)
		return false;

	eth_hw_addr_set(dev, real_dev->dev_addr);
	call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
	return true;
}

static int vlan_dev_open(struct net_device *dev)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct net_device *real_dev = vlan->real_dev;
	int err;

	if (!(real_dev->flags & IFF_UP) &&
	    !(vlan->flags & VLAN_FLAG_LOOSE_BINDING))
		return -ENETDOWN;

	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr) &&
	    !vlan_dev_inherit_address(dev, real_dev)) {
		err = dev_uc_add(real_dev, dev->dev_addr);
		if (err < 0)
			goto out;
	}

	if (dev->flags & IFF_ALLMULTI) {
		err = dev_set_allmulti(real_dev, 1);
		if (err < 0)
			goto del_unicast;
	}
	if (dev->flags & IFF_PROMISC) {
		err = dev_set_promiscuity(real_dev, 1);
		if (err < 0)
			goto clear_allmulti;
	}

	ether_addr_copy(vlan->real_dev_addr, real_dev->dev_addr);

	if (vlan->flags & VLAN_FLAG_GVRP)
		vlan_gvrp_request_join(dev);

	if (vlan->flags & VLAN_FLAG_MVRP)
		vlan_mvrp_request_join(dev);

	if (netif_carrier_ok(real_dev) &&
	    !(vlan->flags & VLAN_FLAG_BRIDGE_BINDING))
		netif_carrier_on(dev);
	return 0;

clear_allmulti:
	if (dev->flags & IFF_ALLMULTI)
		dev_set_allmulti(real_dev, -1);
del_unicast:
	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
		dev_uc_del(real_dev, dev->dev_addr);
out:
	netif_carrier_off(dev);
	return err;
}

static int vlan_dev_stop(struct net_device *dev)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct net_device *real_dev = vlan->real_dev;

	dev_mc_unsync(real_dev, dev);
	dev_uc_unsync(real_dev, dev);
	if (dev->flags & IFF_ALLMULTI)
		dev_set_allmulti(real_dev, -1);
	if (dev->flags & IFF_PROMISC)
		dev_set_promiscuity(real_dev, -1);

	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
		dev_uc_del(real_dev, dev->dev_addr);

	if (!(vlan->flags & VLAN_FLAG_BRIDGE_BINDING))
		netif_carrier_off(dev);
	return 0;
}

static int vlan_dev_set_mac_address(struct net_device *dev, void *p)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	struct sockaddr *addr = p;
	int err;

	if (!is_valid_ether_addr(addr->sa_data))
		return -EADDRNOTAVAIL;

	if (!(dev->flags & IFF_UP))
		goto out;

	if (!ether_addr_equal(addr->sa_data, real_dev->dev_addr)) {
		err = dev_uc_add(real_dev, addr->sa_data);
		if (err < 0)
			return err;
	}

	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
		dev_uc_del(real_dev, dev->dev_addr);

out:
	eth_hw_addr_set(dev, addr->sa_data);
	return 0;
}

static int vlan_hwtstamp_get(struct net_device *dev,
			     struct kernel_hwtstamp_config *cfg)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;

	return generic_hwtstamp_get_lower(real_dev, cfg);
}

static int vlan_hwtstamp_set(struct net_device *dev,
			     struct kernel_hwtstamp_config *cfg,
			     struct netlink_ext_ack *extack)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;

	if (!net_eq(dev_net(dev), dev_net(real_dev)))
		return -EOPNOTSUPP;

	return generic_hwtstamp_set_lower(real_dev, cfg, extack);
}

static int vlan_dev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	struct ifreq ifrr;
	int err = -EOPNOTSUPP;

	strscpy_pad(ifrr.ifr_name, real_dev->name, IFNAMSIZ);
	ifrr.ifr_ifru = ifr->ifr_ifru;

	switch (cmd) {
	case SIOCGMIIPHY:
	case SIOCGMIIREG:
	case SIOCSMIIREG:
		if (netif_device_present(real_dev) && ops->ndo_eth_ioctl)
			err = ops->ndo_eth_ioctl(real_dev, &ifrr, cmd);
		break;
	}

	if (!err)
		ifr->ifr_ifru = ifrr.ifr_ifru;

	return err;
}

static int vlan_dev_neigh_setup(struct net_device *dev, struct neigh_parms *pa)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	int err = 0;

	if (netif_device_present(real_dev) && ops->ndo_neigh_setup)
		err = ops->ndo_neigh_setup(real_dev, pa);

	return err;
}

#if IS_ENABLED(CONFIG_FCOE)
static int vlan_dev_fcoe_ddp_setup(struct net_device *dev, u16 xid,
				   struct scatterlist *sgl, unsigned int sgc)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	int rc = 0;

	if (ops->ndo_fcoe_ddp_setup)
		rc = ops->ndo_fcoe_ddp_setup(real_dev, xid, sgl, sgc);

	return rc;
}

static int vlan_dev_fcoe_ddp_done(struct net_device *dev, u16 xid)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	int len = 0;

	if (ops->ndo_fcoe_ddp_done)
		len = ops->ndo_fcoe_ddp_done(real_dev, xid);

	return len;
}

static int vlan_dev_fcoe_enable(struct net_device *dev)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	int rc = -EINVAL;

	if (ops->ndo_fcoe_enable)
		rc = ops->ndo_fcoe_enable(real_dev);
	return rc;
}

static int vlan_dev_fcoe_disable(struct net_device *dev)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	int rc = -EINVAL;

	if (ops->ndo_fcoe_disable)
		rc = ops->ndo_fcoe_disable(real_dev);
	return rc;
}

static int vlan_dev_fcoe_ddp_target(struct net_device *dev, u16 xid,
				    struct scatterlist *sgl, unsigned int sgc)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	int rc = 0;

	if (ops->ndo_fcoe_ddp_target)
		rc = ops->ndo_fcoe_ddp_target(real_dev, xid, sgl, sgc);

	return rc;
}
#endif

#ifdef NETDEV_FCOE_WWNN
static int vlan_dev_fcoe_get_wwn(struct net_device *dev, u64 *wwn, int type)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	const struct net_device_ops *ops = real_dev->netdev_ops;
	int rc = -EINVAL;

	if (ops->ndo_fcoe_get_wwn)
		rc = ops->ndo_fcoe_get_wwn(real_dev, wwn, type);
	return rc;
}
#endif

static void vlan_dev_change_rx_flags(struct net_device *dev, int change)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;

	if (dev->flags & IFF_UP) {
		if (change & IFF_ALLMULTI)
			dev_set_allmulti(real_dev, dev->flags & IFF_ALLMULTI ? 1 : -1);
		if (change & IFF_PROMISC)
			dev_set_promiscuity(real_dev, dev->flags & IFF_PROMISC ? 1 : -1);
	}
}

static void vlan_dev_set_rx_mode(struct net_device *vlan_dev)
{
	dev_mc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev);
	dev_uc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev);
}
|
2008-01-21 16:22:11 +08:00
|
|
|
|
2020-05-03 13:22:19 +08:00
|
|
|
/*
 * vlan network devices have devices nesting below it, and are a special
 * "super class" of normal network devices; split their locks off into a
 * separate class since they always nest.
 */
static struct lock_class_key vlan_netdev_xmit_lock_key;
static struct lock_class_key vlan_netdev_addr_lock_key;

static void vlan_dev_set_lockdep_one(struct net_device *dev,
				     struct netdev_queue *txq,
				     void *unused)
{
	lockdep_set_class(&txq->_xmit_lock, &vlan_netdev_xmit_lock_key);
}

static void vlan_dev_set_lockdep_class(struct net_device *dev)
{
	lockdep_set_class(&dev->addr_list_lock,
			  &vlan_netdev_addr_lock_key);
	netdev_for_each_tx_queue(dev, vlan_dev_set_lockdep_one, NULL);
}

static __be16 vlan_parse_protocol(const struct sk_buff *skb)
{
	struct vlan_ethhdr *veth = (struct vlan_ethhdr *)(skb->data);

	return __vlan_get_protocol(skb, veth->h_vlan_proto, NULL);
}

static const struct header_ops vlan_header_ops = {
	.create		 = vlan_dev_hard_header,
	.parse		 = eth_header_parse,
	.parse_protocol	 = vlan_parse_protocol,
};

static int vlan_passthru_hard_header(struct sk_buff *skb, struct net_device *dev,
				     unsigned short type,
				     const void *daddr, const void *saddr,
				     unsigned int len)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct net_device *real_dev = vlan->real_dev;

	if (saddr == NULL)
		saddr = dev->dev_addr;

	return dev_hard_header(skb, real_dev, type, daddr, saddr, len);
}

static const struct header_ops vlan_passthru_header_ops = {
	.create		 = vlan_passthru_hard_header,
	.parse		 = eth_header_parse,
	.parse_protocol	 = vlan_parse_protocol,
};

static struct device_type vlan_type = {
	.name	= "vlan",
};

static const struct net_device_ops vlan_netdev_ops;

static int vlan_dev_init(struct net_device *dev)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct net_device *real_dev = vlan->real_dev;

	netif_carrier_off(dev);

	/* IFF_BROADCAST|IFF_MULTICAST; ??? */
	dev->flags  = real_dev->flags & ~(IFF_UP | IFF_PROMISC | IFF_ALLMULTI |
					  IFF_MASTER | IFF_SLAVE);
	dev->state  = (real_dev->state & ((1<<__LINK_STATE_NOCARRIER) |
					  (1<<__LINK_STATE_DORMANT))) |
		      (1<<__LINK_STATE_PRESENT);

	if (vlan->flags & VLAN_FLAG_BRIDGE_BINDING)
		dev->state |= (1 << __LINK_STATE_NOCARRIER);

	dev->hw_features = NETIF_F_HW_CSUM | NETIF_F_SG |
			   NETIF_F_FRAGLIST | NETIF_F_GSO_SOFTWARE |
			   NETIF_F_GSO_ENCAP_ALL |
			   NETIF_F_HIGHDMA | NETIF_F_SCTP_CRC |
			   NETIF_F_ALL_FCOE;

	if (real_dev->vlan_features & NETIF_F_HW_MACSEC)
		dev->hw_features |= NETIF_F_HW_MACSEC;

	dev->features |= dev->hw_features | NETIF_F_LLTX;
	netif_inherit_tso_max(dev, real_dev);
	if (dev->features & NETIF_F_VLAN_FEATURES)
		netdev_warn(real_dev, "VLAN features are set incorrectly.  Q-in-Q configurations may not work correctly.\n");

	dev->vlan_features = real_dev->vlan_features & ~NETIF_F_ALL_FCOE;
	dev->hw_enc_features = vlan_tnl_features(real_dev);
	dev->mpls_features = real_dev->mpls_features;

	/* ipv6 shared card related stuff */
	dev->dev_id = real_dev->dev_id;

	if (is_zero_ether_addr(dev->dev_addr)) {
		eth_hw_addr_set(dev, real_dev->dev_addr);
		dev->addr_assign_type = NET_ADDR_STOLEN;
	}
	if (is_zero_ether_addr(dev->broadcast))
		memcpy(dev->broadcast, real_dev->broadcast, dev->addr_len);

#if IS_ENABLED(CONFIG_FCOE)
	dev->fcoe_ddp_xid = real_dev->fcoe_ddp_xid;
#endif

	dev->needed_headroom = real_dev->needed_headroom;
	if (vlan_hw_offload_capable(real_dev->features, vlan->vlan_proto)) {
		dev->header_ops      = &vlan_passthru_header_ops;
		dev->hard_header_len = real_dev->hard_header_len;
	} else {
		dev->header_ops      = &vlan_header_ops;
		dev->hard_header_len = real_dev->hard_header_len + VLAN_HLEN;
	}

	dev->netdev_ops = &vlan_netdev_ops;

	SET_NETDEV_DEVTYPE(dev, &vlan_type);

	vlan_dev_set_lockdep_class(dev);

	vlan->vlan_pcpu_stats = netdev_alloc_pcpu_stats(struct vlan_pcpu_stats);
	if (!vlan->vlan_pcpu_stats)
		return -ENOMEM;

	/* Get vlan's reference to real_dev */
	netdev_hold(real_dev, &vlan->dev_tracker, GFP_KERNEL);

	return 0;
}

/* Note: this function might be called multiple times for the same device. */
void vlan_dev_free_egress_priority(const struct net_device *dev)
{
	struct vlan_priority_tci_mapping *pm;
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	int i;

	for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) {
		while ((pm = vlan->egress_priority_map[i]) != NULL) {
			vlan->egress_priority_map[i] = pm->next;
			kfree(pm);
		}
	}
}

static void vlan_dev_uninit(struct net_device *dev)
{
	vlan_dev_free_egress_priority(dev);
}

static netdev_features_t vlan_dev_fix_features(struct net_device *dev,
	netdev_features_t features)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
	netdev_features_t old_features = features;
	netdev_features_t lower_features;

	lower_features = netdev_intersect_features((real_dev->vlan_features |
						    NETIF_F_RXCSUM),
						   real_dev->features);

	/* Add HW_CSUM setting to preserve user ability to control
	 * checksum offload on the vlan device.
	 */
	if (lower_features & (NETIF_F_IP_CSUM|NETIF_F_IPV6_CSUM))
		lower_features |= NETIF_F_HW_CSUM;
	features = netdev_intersect_features(features, lower_features);
	features |= old_features & (NETIF_F_SOFT_FEATURES | NETIF_F_GSO_SOFTWARE);
	features |= NETIF_F_LLTX;

	return features;
}

static int vlan_ethtool_get_link_ksettings(struct net_device *dev,
					   struct ethtool_link_ksettings *cmd)
{
	const struct vlan_dev_priv *vlan = vlan_dev_priv(dev);

	return __ethtool_get_link_ksettings(vlan->real_dev, cmd);
}

static void vlan_ethtool_get_drvinfo(struct net_device *dev,
				     struct ethtool_drvinfo *info)
{
	strscpy(info->driver, vlan_fullname, sizeof(info->driver));
	strscpy(info->version, vlan_version, sizeof(info->version));
	strscpy(info->fw_version, "N/A", sizeof(info->fw_version));
}

static int vlan_ethtool_get_ts_info(struct net_device *dev,
				    struct ethtool_ts_info *info)
{
	const struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	const struct ethtool_ops *ops = vlan->real_dev->ethtool_ops;
	struct phy_device *phydev = vlan->real_dev->phydev;

	if (phy_has_tsinfo(phydev)) {
		return phy_ts_info(phydev, info);
	} else if (ops->get_ts_info) {
		return ops->get_ts_info(vlan->real_dev, info);
	} else {
		info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE |
			SOF_TIMESTAMPING_SOFTWARE;
		info->phc_index = -1;
	}

	return 0;
}

static void vlan_dev_get_stats64(struct net_device *dev,
				 struct rtnl_link_stats64 *stats)
{
	struct vlan_pcpu_stats *p;
	u32 rx_errors = 0, tx_dropped = 0;
	int i;

	for_each_possible_cpu(i) {
		u64 rxpackets, rxbytes, rxmulticast, txpackets, txbytes;
		unsigned int start;

		p = per_cpu_ptr(vlan_dev_priv(dev)->vlan_pcpu_stats, i);
		do {
			start = u64_stats_fetch_begin(&p->syncp);
			rxpackets	= u64_stats_read(&p->rx_packets);
			rxbytes		= u64_stats_read(&p->rx_bytes);
			rxmulticast	= u64_stats_read(&p->rx_multicast);
			txpackets	= u64_stats_read(&p->tx_packets);
			txbytes		= u64_stats_read(&p->tx_bytes);
		} while (u64_stats_fetch_retry(&p->syncp, start));

		stats->rx_packets	+= rxpackets;
		stats->rx_bytes		+= rxbytes;
		stats->multicast	+= rxmulticast;
		stats->tx_packets	+= txpackets;
		stats->tx_bytes		+= txbytes;
		/* rx_errors & tx_dropped are u32 */
		rx_errors	+= READ_ONCE(p->rx_errors);
		tx_dropped	+= READ_ONCE(p->tx_dropped);
	}
	stats->rx_errors  = rx_errors;
	stats->tx_dropped = tx_dropped;
}

#ifdef CONFIG_NET_POLL_CONTROLLER
static void vlan_dev_poll_controller(struct net_device *dev)
{
	return;
}

static int vlan_dev_netpoll_setup(struct net_device *dev, struct netpoll_info *npinfo)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct net_device *real_dev = vlan->real_dev;
	struct netpoll *netpoll;
	int err = 0;

	netpoll = kzalloc(sizeof(*netpoll), GFP_KERNEL);
	err = -ENOMEM;
	if (!netpoll)
		goto out;

	err = __netpoll_setup(netpoll, real_dev);
	if (err) {
		kfree(netpoll);
		goto out;
	}

	vlan->netpoll = netpoll;

out:
	return err;
}

static void vlan_dev_netpoll_cleanup(struct net_device *dev)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
	struct netpoll *netpoll = vlan->netpoll;

	if (!netpoll)
		return;

	vlan->netpoll = NULL;
	__netpoll_free(netpoll);
}
#endif /* CONFIG_NET_POLL_CONTROLLER */

static int vlan_dev_get_iflink(const struct net_device *dev)
{
	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;

	return real_dev->ifindex;
}

static int vlan_dev_fill_forward_path(struct net_device_path_ctx *ctx,
				      struct net_device_path *path)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(ctx->dev);

	path->type = DEV_PATH_VLAN;
	path->encap.id = vlan->vlan_id;
	path->encap.proto = vlan->vlan_proto;
	path->dev = ctx->dev;
	ctx->dev = vlan->real_dev;
	if (ctx->num_vlans >= ARRAY_SIZE(ctx->vlan))
		return -ENOSPC;

	ctx->vlan[ctx->num_vlans].id = vlan->vlan_id;
	ctx->vlan[ctx->num_vlans].proto = vlan->vlan_proto;
	ctx->num_vlans++;

	return 0;
}

#if IS_ENABLED(CONFIG_MACSEC)

static const struct macsec_ops *vlan_get_macsec_ops(const struct macsec_context *ctx)
{
	return vlan_dev_priv(ctx->netdev)->real_dev->macsec_ops;
}

static int vlan_macsec_offload(int (* const func)(struct macsec_context *),
			       struct macsec_context *ctx)
{
	if (unlikely(!func))
		return 0;

	return (*func)(ctx);
}

static int vlan_macsec_dev_open(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_dev_open, ctx);
}

static int vlan_macsec_dev_stop(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_dev_stop, ctx);
}

static int vlan_macsec_add_secy(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_add_secy, ctx);
}

static int vlan_macsec_upd_secy(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_upd_secy, ctx);
}

static int vlan_macsec_del_secy(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_del_secy, ctx);
}

static int vlan_macsec_add_rxsc(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_add_rxsc, ctx);
}

static int vlan_macsec_upd_rxsc(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_upd_rxsc, ctx);
}

static int vlan_macsec_del_rxsc(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_del_rxsc, ctx);
}

static int vlan_macsec_add_rxsa(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_add_rxsa, ctx);
}

static int vlan_macsec_upd_rxsa(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_upd_rxsa, ctx);
}

static int vlan_macsec_del_rxsa(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_del_rxsa, ctx);
}

static int vlan_macsec_add_txsa(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_add_txsa, ctx);
}

static int vlan_macsec_upd_txsa(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_upd_txsa, ctx);
}

static int vlan_macsec_del_txsa(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_del_txsa, ctx);
}

static int vlan_macsec_get_dev_stats(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_get_dev_stats, ctx);
}

static int vlan_macsec_get_tx_sc_stats(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_get_tx_sc_stats, ctx);
}

static int vlan_macsec_get_tx_sa_stats(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_get_tx_sa_stats, ctx);
}

static int vlan_macsec_get_rx_sc_stats(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_get_rx_sc_stats, ctx);
}

static int vlan_macsec_get_rx_sa_stats(struct macsec_context *ctx)
{
	const struct macsec_ops *ops = vlan_get_macsec_ops(ctx);

	if (!ops)
		return -EOPNOTSUPP;

	return vlan_macsec_offload(ops->mdo_get_rx_sa_stats, ctx);
}

static const struct macsec_ops macsec_offload_ops = {
	/* Device wide */
	.mdo_dev_open = vlan_macsec_dev_open,
	.mdo_dev_stop = vlan_macsec_dev_stop,
	/* SecY */
	.mdo_add_secy = vlan_macsec_add_secy,
	.mdo_upd_secy = vlan_macsec_upd_secy,
	.mdo_del_secy = vlan_macsec_del_secy,
	/* Security channels */
	.mdo_add_rxsc = vlan_macsec_add_rxsc,
	.mdo_upd_rxsc = vlan_macsec_upd_rxsc,
	.mdo_del_rxsc = vlan_macsec_del_rxsc,
	/* Security associations */
	.mdo_add_rxsa = vlan_macsec_add_rxsa,
	.mdo_upd_rxsa = vlan_macsec_upd_rxsa,
	.mdo_del_rxsa = vlan_macsec_del_rxsa,
	.mdo_add_txsa = vlan_macsec_add_txsa,
	.mdo_upd_txsa = vlan_macsec_upd_txsa,
	.mdo_del_txsa = vlan_macsec_del_txsa,
	/* Statistics */
	.mdo_get_dev_stats = vlan_macsec_get_dev_stats,
	.mdo_get_tx_sc_stats = vlan_macsec_get_tx_sc_stats,
	.mdo_get_tx_sa_stats = vlan_macsec_get_tx_sa_stats,
	.mdo_get_rx_sc_stats = vlan_macsec_get_rx_sc_stats,
	.mdo_get_rx_sa_stats = vlan_macsec_get_rx_sa_stats,
};

#endif

static const struct ethtool_ops vlan_ethtool_ops = {
	.get_link_ksettings	= vlan_ethtool_get_link_ksettings,
	.get_drvinfo		= vlan_ethtool_get_drvinfo,
	.get_link		= ethtool_op_get_link,
	.get_ts_info		= vlan_ethtool_get_ts_info,
};

static const struct net_device_ops vlan_netdev_ops = {
	.ndo_change_mtu		= vlan_dev_change_mtu,
	.ndo_init		= vlan_dev_init,
	.ndo_uninit		= vlan_dev_uninit,
	.ndo_open		= vlan_dev_open,
	.ndo_stop		= vlan_dev_stop,
	.ndo_start_xmit		= vlan_dev_hard_start_xmit,
	.ndo_validate_addr	= eth_validate_addr,
	.ndo_set_mac_address	= vlan_dev_set_mac_address,
	.ndo_set_rx_mode	= vlan_dev_set_rx_mode,
	.ndo_change_rx_flags	= vlan_dev_change_rx_flags,
	.ndo_eth_ioctl		= vlan_dev_ioctl,
	.ndo_neigh_setup	= vlan_dev_neigh_setup,
	.ndo_get_stats64	= vlan_dev_get_stats64,
#if IS_ENABLED(CONFIG_FCOE)
	.ndo_fcoe_ddp_setup	= vlan_dev_fcoe_ddp_setup,
	.ndo_fcoe_ddp_done	= vlan_dev_fcoe_ddp_done,
	.ndo_fcoe_enable	= vlan_dev_fcoe_enable,
	.ndo_fcoe_disable	= vlan_dev_fcoe_disable,
	.ndo_fcoe_ddp_target	= vlan_dev_fcoe_ddp_target,
#endif
#ifdef NETDEV_FCOE_WWNN
	.ndo_fcoe_get_wwn	= vlan_dev_fcoe_get_wwn,
#endif
#ifdef CONFIG_NET_POLL_CONTROLLER
	.ndo_poll_controller	= vlan_dev_poll_controller,
	.ndo_netpoll_setup	= vlan_dev_netpoll_setup,
	.ndo_netpoll_cleanup	= vlan_dev_netpoll_cleanup,
#endif
	.ndo_fix_features	= vlan_dev_fix_features,
	.ndo_get_iflink		= vlan_dev_get_iflink,
	.ndo_fill_forward_path	= vlan_dev_fill_forward_path,
	.ndo_hwtstamp_get	= vlan_hwtstamp_get,
	.ndo_hwtstamp_set	= vlan_hwtstamp_set,
};

static void vlan_dev_free(struct net_device *dev)
{
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);

	free_percpu(vlan->vlan_pcpu_stats);
	vlan->vlan_pcpu_stats = NULL;

	/* Get rid of the vlan's reference to real_dev */
	netdev_put(vlan->real_dev, &vlan->dev_tracker);
}

void vlan_setup(struct net_device *dev)
{
	ether_setup(dev);

	dev->priv_flags		|= IFF_802_1Q_VLAN | IFF_NO_QUEUE;
	dev->priv_flags		|= IFF_UNICAST_FLT;
	dev->priv_flags		&= ~IFF_TX_SKB_SHARING;
	netif_keep_dst(dev);

	dev->netdev_ops		= &vlan_netdev_ops;
	dev->needs_free_netdev	= true;
	dev->priv_destructor	= vlan_dev_free;
	dev->ethtool_ops	= &vlan_ethtool_ops;

#if IS_ENABLED(CONFIG_MACSEC)
	dev->macsec_ops		= &macsec_offload_ops;
#endif
	dev->min_mtu		= 0;
	dev->max_mtu		= ETH_MAX_MTU;

	eth_zero_addr(dev->broadcast);
}