Merge branch 'Traffic-support-for-SJA1105-DSA-driver'

Vladimir Oltean says:

====================
Traffic support for SJA1105 DSA driver

This patch set is a continuation of the "NXP SJA1105 DSA driver" v3
series, which was split into multiple pieces for easier review.

Supporting a fully-featured (traffic-capable) driver for this switch
requires some rework in DSA and also leaves behind a more generic
infrastructure for other dumb switches that rely on 802.1Q pseudo-switch
tagging for port separation. Among the DSA changes required are:

* Generic xmit and rcv functions for pushing/popping 802.1Q tags on
  skb's. These are modeled as a tagging protocol in their own right, but
  one which must be customized by drivers to fit their hardware's
  capabilities (see the sketch after this list).

* Permitting the .setup callback to invoke switchdev operations that
  will loop back into the driver through the switchdev notifier chain.
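
To illustrate, here is a minimal xmit sketch built on top of the generic
helpers (condensed from the tag_sja1105.c hunk below; the function name
example_8021q_xmit and the plain ETH_P_8021Q TPID are illustrative only,
since each driver picks its own TPID, and the usual tagger includes
<linux/if_vlan.h>, <linux/dsa/8021q.h> and net/dsa/dsa_priv.h are assumed):

  /* A driver-specific tagger reuses the generic 802.1Q helpers and only
   * decides which TPID/TCI to push on xmit.
   */
  static struct sk_buff *example_8021q_xmit(struct sk_buff *skb,
                                            struct net_device *netdev)
  {
          struct dsa_port *dp = dsa_slave_to_port(netdev);
          u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
          u8 pcp = skb->priority;

          /* The TX VID encodes the switch ID and port ID for steering */
          return dsa_8021q_xmit(skb, netdev, ETH_P_8021Q,
                                (pcp << VLAN_PRIO_SHIFT) | tx_vid);
  }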

The SJA1105 driver then proceeds to extend this 8021q switch tagging
protocol while adding its own (tag_sja1105). This is done because
the driver actually implements a "dual tagger":

* For normal traffic it uses 802.1Q tags

* For management (multicast DMAC) frames the switch has native support
  for recognizing and annotating these with source port and switch id
  information.

Because this is a "dual tagger", decoding of management frames should
still function even when regular traffic decoding can't (i.e. under a
bridge with VLAN filtering).
This required intervention in the DSA receive hotpath, where a new
filtering function, called from eth_type_trans(), is needed. This is
useful in the general sense for switches that have only some limited
means of source port decoding, such as for management traffic only, but
not for everything.
In order for the 802.1Q tagging protocol (which, unlike the management
traffic decoding, cannot be enabled under all conditions) to not be an
all-or-nothing choice, the filtering function matches everything that
can be decoded, and everything else is left to pass through to the
master netdevice.
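
For reference, this is the shape of the new hook, condensed from the
include/net/dsa.h and net/ethernet/eth.c hunks below (the CONFIG_NET_DSA
guard around dsa_can_decode() is omitted here for brevity):

  /* A tagger may provide an optional .filter operation; when it is absent,
   * everything keeps being diverted to DSA as before.
   */
  static inline bool dsa_can_decode(const struct sk_buff *skb,
                                    struct net_device *dev)
  {
          return !dev->dsa_ptr->filter || dev->dsa_ptr->filter(skb, dev);
  }

  /* Excerpt from eth_type_trans(): only frames that the tagger can decode
   * are diverted; the rest is processed by the master netdevice as usual.
   */
  if (unlikely(netdev_uses_dsa(dev)) && dsa_can_decode(skb, dev))
          return htons(ETH_P_XDSA);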

Lastly, DSA core support was added for drivers to request skb deferral.
SJA1105 needs this for SPI intervention during the transmission of
link-local traffic, which cannot be done from within the tagger itself
(the xmit path runs in atomic context, while SPI transfers may sleep).
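
Condensed from the net/dsa/slave.c and tag_sja1105.c hunks below (error
handling omitted), the flow is: the tagger hands the skb over and returns
NULL, and the DSA core later calls the driver's .port_deferred_xmit from a
per-port work item, i.e. from process context where SPI transfers can sleep:

  /* Tagger side, excerpt from sja1105_xmit(): request deferral instead of
   * returning a tagged skb.
   */
  if (unlikely(sja1105_is_link_local(skb)))
          return dsa_defer_xmit(skb, netdev);

  /* DSA core side: drain the per-port queue from process context */
  static void dsa_port_xmit_work(struct work_struct *work)
  {
          struct dsa_port *dp = container_of(work, struct dsa_port, xmit_work);
          struct sk_buff *skb;

          while ((skb = skb_dequeue(&dp->xmit_queue)) != NULL)
                  dp->ds->ops->port_deferred_xmit(dp->ds, dp->index, skb);
  }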

Some patches were carried over unchanged from the previous patchset
(01/09). Others were slightly reworked while adapting to the recent
changes in "Make DSA tag drivers kernel modules" (02/09).

The introduction of some structures (DSA_SKB_CB, dp->priv) may seem a
little premature at this point and the new structures under-utilized.
The reason is that traffic support has been rewritten with PTP
timestamping in mind, and then I removed the timestamping code from the
current submission (1. it is a different topic, 2. it does not work very
well yet). On demand I can provide the timestamping patchset as an RFC,
though.
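
For context, this is the skb->cb layout being introduced (excerpt from the
include/net/dsa.h hunk below); taggers carve their private area out of the
trailing priv bytes via DSA_SKB_CB_PRIV(), which is how tag_sja1105.c
defines SJA1105_SKB_CB():

  struct dsa_skb_cb {
          struct sk_buff *clone;
          bool deferred_xmit;
  };

  struct __dsa_skb_cb {
          struct dsa_skb_cb cb;
          u8 priv[48 - sizeof(struct dsa_skb_cb)]; /* skb->cb is 48 bytes */
  };

  #define DSA_SKB_CB_PRIV(skb) \
          ((void *)(skb)->cb + offsetof(struct __dsa_skb_cb, priv))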

"NXP SJA1105 DSA driver" v3 patchset can be found at:
https://lkml.org/lkml/2019/4/12/978

v1 patchset can be found at:
https://lkml.org/lkml/2019/5/3/877

Changes in v2:

* Made the deferred xmit workqueue also be drained on the netdev suspend
  callback, not just on ndo_stop.

* Added clarification about how other netdevices may be bridged with the
  switch ports.

v2 patchset can be found at:
https://www.spinics.net/lists/netdev/msg568818.html

Changes in v3:

* Exported the dsa_port_vid_add and dsa_port_vid_del symbols to fix an
  error reported by the kbuild test robot

* Fixed the following checkpatch warnings in 05/10:
  Macro argument reuse 'skb' - possible side-effects?
  Macro argument reuse 'clone' - possible side-effects?

* Added a commit description to the documentation patch (10/10)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
commit 0e5ef5a22a (David S. Miller, 2019-05-05 21:52:42 -07:00)
16 changed files with 903 additions and 41 deletions

Documentation/networking/dsa/sja1105.rst

@ -63,6 +63,38 @@ If that changed setting can be transmitted to the switch through the dynamic
reconfiguration interface, it is; otherwise the switch is reset and
reprogrammed with the updated static configuration.
Traffic support
===============
The switches do not support switch tagging in hardware. But they do support
customizing the TPID by which VLAN traffic is identified as such. The switch
driver is leveraging ``CONFIG_NET_DSA_TAG_8021Q`` by requesting that special
VLANs (with a custom TPID of ``ETH_P_EDSA`` instead of ``ETH_P_8021Q``) are
installed on its ports when not in ``vlan_filtering`` mode. This does not
interfere with the reception and transmission of real 802.1Q-tagged traffic,
because the switch no longer parses those packets as VLAN after the TPID
change.
The TPID is restored when ``vlan_filtering`` is requested by the user through
the bridge layer, and general IP termination is no longer possible through
the switch netdevices in this mode.
The switches have two programmable filters for link-local destination MACs.
These are used to trap BPDUs and PTP traffic to the master netdevice, and are
further used to support STP and 1588 ordinary clock/boundary clock
functionality.
The following traffic modes are supported over the switch netdevices:
+--------------------+------------+------------------+------------------+
| | Standalone | Bridged with | Bridged with |
| | ports | vlan_filtering 0 | vlan_filtering 1 |
+====================+============+==================+==================+
| Regular traffic | Yes | Yes | No (use master) |
+--------------------+------------+------------------+------------------+
| Management traffic | Yes | Yes | Yes |
| (BPDU, PTP) | | | |
+--------------------+------------+------------------+------------------+
Switching features
==================
@ -92,6 +124,28 @@ that VLAN awareness is global at the switch level is that once a bridge with
``vlan_filtering`` enslaves at least one switch port, the other un-bridged
ports are no longer available for standalone traffic termination.
Topology and loop detection through STP is supported.
L2 FDB manipulation (add/delete/dump) is currently possible for the first
generation devices. Aging time of FDB entries, as well as enabling fully static
management (no address learning and no flooding of unknown traffic) is not yet
configurable in the driver.
A special comment about bridging with other netdevices (illustrated with an
example):
A board has eth0, eth1, swp0@eth1, swp1@eth1, swp2@eth1, swp3@eth1.
The switch ports (swp0-3) are under br0.
It is desired that eth0 is turned into another switched port that communicates
with swp0-3.
If br0 has vlan_filtering 0, then eth0 can simply be added to br0 with the
intended results.
If br0 has vlan_filtering 1, then a new br1 interface needs to be created that
enslaves eth0 and eth1 (the DSA master of the switch ports). This is because in
this mode, the switch ports beneath br0 are not capable of regular traffic, and
are only used as a conduit for switchdev operations.
Device Tree bindings and board design
=====================================

drivers/net/dsa/sja1105/Kconfig

@ -1,6 +1,7 @@
config NET_DSA_SJA1105
tristate "NXP SJA1105 Ethernet switch family support"
depends on NET_DSA && SPI
select NET_DSA_TAG_SJA1105
select PACKING
select CRC32
help

drivers/net/dsa/sja1105/sja1105.h

@ -7,6 +7,7 @@
#include <linux/dsa/sja1105.h>
#include <net/dsa.h>
#include <linux/mutex.h>
#include "sja1105_static_config.h"
#define SJA1105_NUM_PORTS 5
@ -65,6 +66,11 @@ struct sja1105_private {
struct gpio_desc *reset_gpio;
struct spi_device *spidev;
struct dsa_switch *ds;
struct sja1105_port ports[SJA1105_NUM_PORTS];
/* Serializes transmission of management frames so that
* the switch doesn't confuse them with one another.
*/
struct mutex mgmt_lock;
};
#include "sja1105_dynamic_config.h"

drivers/net/dsa/sja1105/sja1105_main.c

@ -20,6 +20,7 @@
#include <linux/netdevice.h>
#include <linux/if_bridge.h>
#include <linux/if_ether.h>
#include <linux/dsa/8021q.h>
#include "sja1105.h"
static void sja1105_hw_reset(struct gpio_desc *gpio, unsigned int pulse_len,
@ -91,8 +92,10 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
.drpuntag = false,
/* Don't retag 802.1p (VID 0) traffic with the pvid */
.retag = false,
/* Enable learning and I/O on user ports by default. */
.dyn_learn = true,
/* Disable learning and I/O on user ports by default -
* STP will enable it.
*/
.dyn_learn = false,
.egress = false,
.ingress = false,
};
@ -118,8 +121,17 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
mac = table->entries;
for (i = 0; i < SJA1105_NUM_PORTS; i++)
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
mac[i] = default_mac;
if (i == dsa_upstream_port(priv->ds, i)) {
/* STP doesn't get called for CPU port, so we need to
* set the I/O parameters statically.
*/
mac[i].dyn_learn = true;
mac[i].ingress = true;
mac[i].egress = true;
}
}
return 0;
}
@ -406,11 +418,14 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
.tpid2 = ETH_P_SJA1105,
};
struct sja1105_table *table;
int i;
int i, k = 0;
for (i = 0; i < SJA1105_NUM_PORTS; i++)
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
if (dsa_is_dsa_port(priv->ds, i))
default_general_params.casc_port = i;
else if (dsa_is_user_port(priv->ds, i))
priv->ports[i].mgmt_slot = k++;
}
table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
@ -651,12 +666,14 @@ static sja1105_speed_t sja1105_get_speed_cfg(unsigned int speed_mbps)
* for a specific port.
*
* @speed_mbps: If 0, leave the speed unchanged, else adapt MAC to PHY speed.
* @enabled: Manage Rx and Tx settings for this port. Overrides the static
* configuration settings.
* @enabled: Manage Rx and Tx settings for this port. If false, overrides the
* settings from the STP state, but not persistently (does not
* overwrite the static MAC info for this port).
*/
static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
int speed_mbps, bool enabled)
{
struct sja1105_mac_config_entry dyn_mac;
struct sja1105_xmii_params_entry *mii;
struct sja1105_mac_config_entry *mac;
struct device *dev = priv->ds->dev;
@ -689,12 +706,13 @@ static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
* the code common, we'll use the static configuration tables as a
* reasonable approximation for both E/T and P/Q/R/S.
*/
mac[port].ingress = enabled;
mac[port].egress = enabled;
dyn_mac = mac[port];
dyn_mac.ingress = enabled && mac[port].ingress;
dyn_mac.egress = enabled && mac[port].egress;
/* Write to the dynamic reconfiguration tables */
rc = sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG,
port, &mac[port], true);
port, &dyn_mac, true);
if (rc < 0) {
dev_err(dev, "Failed to write MAC config: %d\n", rc);
return rc;
@ -982,6 +1000,50 @@ static int sja1105_bridge_member(struct dsa_switch *ds, int port,
port, &l2_fwd[port], true);
}
static void sja1105_bridge_stp_state_set(struct dsa_switch *ds, int port,
u8 state)
{
struct sja1105_private *priv = ds->priv;
struct sja1105_mac_config_entry *mac;
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
switch (state) {
case BR_STATE_DISABLED:
case BR_STATE_BLOCKING:
/* From UM10944 description of DRPDTAG (why put this there?):
* "Management traffic flows to the port regardless of the state
* of the INGRESS flag". So BPDUs are still allowed to pass.
* At the moment no difference between DISABLED and BLOCKING.
*/
mac[port].ingress = false;
mac[port].egress = false;
mac[port].dyn_learn = false;
break;
case BR_STATE_LISTENING:
mac[port].ingress = true;
mac[port].egress = false;
mac[port].dyn_learn = false;
break;
case BR_STATE_LEARNING:
mac[port].ingress = true;
mac[port].egress = false;
mac[port].dyn_learn = true;
break;
case BR_STATE_FORWARDING:
mac[port].ingress = true;
mac[port].egress = true;
mac[port].dyn_learn = true;
break;
default:
dev_err(ds->dev, "invalid STP state: %d\n", state);
return;
}
sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG, port,
&mac[port], true);
}
static int sja1105_bridge_join(struct dsa_switch *ds, int port,
struct net_device *br)
{
@ -994,6 +1056,23 @@ static void sja1105_bridge_leave(struct dsa_switch *ds, int port,
sja1105_bridge_member(ds, port, br, false);
}
static u8 sja1105_stp_state_get(struct sja1105_private *priv, int port)
{
struct sja1105_mac_config_entry *mac;
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
if (!mac[port].ingress && !mac[port].egress && !mac[port].dyn_learn)
return BR_STATE_BLOCKING;
if (mac[port].ingress && !mac[port].egress && !mac[port].dyn_learn)
return BR_STATE_LISTENING;
if (mac[port].ingress && !mac[port].egress && mac[port].dyn_learn)
return BR_STATE_LEARNING;
if (mac[port].ingress && mac[port].egress && mac[port].dyn_learn)
return BR_STATE_FORWARDING;
return -EINVAL;
}
/* For situations where we need to change a setting at runtime that is only
* available through the static configuration, resetting the switch in order
* to upload the new static config is unavoidable. Back up the settings we
@ -1004,16 +1083,27 @@ static int sja1105_static_config_reload(struct sja1105_private *priv)
{
struct sja1105_mac_config_entry *mac;
int speed_mbps[SJA1105_NUM_PORTS];
u8 stp_state[SJA1105_NUM_PORTS];
int rc, i;
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
/* Back up settings changed by sja1105_adjust_port_config and
* and restore their defaults.
* sja1105_bridge_stp_state_set and restore their defaults.
*/
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
speed_mbps[i] = sja1105_speed[mac[i].speed];
mac[i].speed = SJA1105_SPEED_AUTO;
if (i == dsa_upstream_port(priv->ds, i)) {
mac[i].ingress = true;
mac[i].egress = true;
mac[i].dyn_learn = true;
} else {
stp_state[i] = sja1105_stp_state_get(priv, i);
mac[i].ingress = false;
mac[i].egress = false;
mac[i].dyn_learn = false;
}
}
/* Reset switch and send updated static configuration */
@ -1032,6 +1122,9 @@ static int sja1105_static_config_reload(struct sja1105_private *priv)
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
bool enabled = (speed_mbps[i] != 0);
if (i != dsa_upstream_port(priv->ds, i))
sja1105_bridge_stp_state_set(priv->ds, i, stp_state[i]);
rc = sja1105_adjust_port_config(priv, i, speed_mbps[i],
enabled);
if (rc < 0)
@ -1146,10 +1239,27 @@ static int sja1105_vlan_apply(struct sja1105_private *priv, int port, u16 vid,
return 0;
}
static int sja1105_setup_8021q_tagging(struct dsa_switch *ds, bool enabled)
{
int rc, i;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
rc = dsa_port_setup_8021q_tagging(ds, i, enabled);
if (rc < 0) {
dev_err(ds->dev, "Failed to setup VLAN tagging for port %d: %d\n",
i, rc);
return rc;
}
}
dev_info(ds->dev, "%s switch tagging\n",
enabled ? "Enabled" : "Disabled");
return 0;
}
static enum dsa_tag_protocol
sja1105_get_tag_protocol(struct dsa_switch *ds, int port)
{
return DSA_TAG_PROTO_NONE;
return DSA_TAG_PROTO_SJA1105;
}
/* This callback needs to be present */
@ -1173,7 +1283,11 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
if (rc)
dev_err(ds->dev, "Failed to change VLAN Ethertype\n");
return rc;
/* Switch port identification based on 802.1Q is only passable
* if we are not under a vlan_filtering bridge. So make sure
* the two configurations are mutually exclusive.
*/
return sja1105_setup_8021q_tagging(ds, !enabled);
}
static void sja1105_vlan_add(struct dsa_switch *ds, int port,
@ -1276,7 +1390,98 @@ static int sja1105_setup(struct dsa_switch *ds)
*/
ds->vlan_filtering_is_global = true;
return 0;
/* The DSA/switchdev model brings up switch ports in standalone mode by
* default, and that means vlan_filtering is 0 since they're not under
* a bridge, so it's safe to set up switch tagging at this time.
*/
return sja1105_setup_8021q_tagging(ds, true);
}
static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot,
struct sk_buff *skb)
{
struct sja1105_mgmt_entry mgmt_route = {0};
struct sja1105_private *priv = ds->priv;
struct ethhdr *hdr;
int timeout = 10;
int rc;
hdr = eth_hdr(skb);
mgmt_route.macaddr = ether_addr_to_u64(hdr->h_dest);
mgmt_route.destports = BIT(port);
mgmt_route.enfport = 1;
rc = sja1105_dynamic_config_write(priv, BLK_IDX_MGMT_ROUTE,
slot, &mgmt_route, true);
if (rc < 0) {
kfree_skb(skb);
return rc;
}
/* Transfer skb to the host port. */
dsa_enqueue_skb(skb, ds->ports[port].slave);
/* Wait until the switch has processed the frame */
do {
rc = sja1105_dynamic_config_read(priv, BLK_IDX_MGMT_ROUTE,
slot, &mgmt_route);
if (rc < 0) {
dev_err_ratelimited(priv->ds->dev,
"failed to poll for mgmt route\n");
continue;
}
/* UM10944: The ENFPORT flag of the respective entry is
* cleared when a match is found. The host can use this
* flag as an acknowledgment.
*/
cpu_relax();
} while (mgmt_route.enfport && --timeout);
if (!timeout) {
/* Clean up the management route so that a follow-up
* frame may not match on it by mistake.
*/
sja1105_dynamic_config_write(priv, BLK_IDX_MGMT_ROUTE,
slot, &mgmt_route, false);
dev_err_ratelimited(priv->ds->dev, "xmit timed out\n");
}
return NETDEV_TX_OK;
}
/* Deferred work is unfortunately necessary because setting up the management
* route cannot be done from atomic context (SPI transfer takes a sleepable
* lock on the bus)
*/
static netdev_tx_t sja1105_port_deferred_xmit(struct dsa_switch *ds, int port,
struct sk_buff *skb)
{
struct sja1105_private *priv = ds->priv;
struct sja1105_port *sp = &priv->ports[port];
int slot = sp->mgmt_slot;
/* The tragic fact about the switch having 4x2 slots for installing
* management routes is that all of them except one are actually
* useless.
* If 2 slots are simultaneously configured for two BPDUs sent to the
* same (multicast) DMAC but on different egress ports, the switch
* would confuse them and redirect the first frame it receives on the CPU
* port towards the port configured in the numerically first slot
* (therefore the wrong port), then the second received frame towards the
* port configured in the second slot (also the wrong port).
* So for all practical purposes, there needs to be a lock that
* prevents that from happening. The slot used here is utterly useless
* (it could just as well have been 0), but we are keeping it
* nonetheless, in case a smarter idea ever comes up in the future.
*/
mutex_lock(&priv->mgmt_lock);
sja1105_mgmt_xmit(ds, port, slot, skb);
mutex_unlock(&priv->mgmt_lock);
return NETDEV_TX_OK;
}
/* The MAXAGE setting belongs to the L2 Forwarding Parameters table,
@ -1317,6 +1522,7 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
.port_fdb_del = sja1105_fdb_del,
.port_bridge_join = sja1105_bridge_join,
.port_bridge_leave = sja1105_bridge_leave,
.port_stp_state_set = sja1105_bridge_stp_state_set,
.port_vlan_prepare = sja1105_vlan_prepare,
.port_vlan_filtering = sja1105_vlan_filtering,
.port_vlan_add = sja1105_vlan_add,
@ -1324,6 +1530,7 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
.port_mdb_prepare = sja1105_mdb_prepare,
.port_mdb_add = sja1105_mdb_add,
.port_mdb_del = sja1105_mdb_del,
.port_deferred_xmit = sja1105_port_deferred_xmit,
};
static int sja1105_check_device_id(struct sja1105_private *priv)
@ -1367,7 +1574,7 @@ static int sja1105_probe(struct spi_device *spi)
struct device *dev = &spi->dev;
struct sja1105_private *priv;
struct dsa_switch *ds;
int rc;
int rc, i;
if (!dev->of_node) {
dev_err(dev, "No DTS bindings for SJA1105 driver\n");
@ -1418,6 +1625,15 @@ static int sja1105_probe(struct spi_device *spi)
ds->priv = priv;
priv->ds = ds;
/* Connections between dsa_port and sja1105_port */
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
struct sja1105_port *sp = &priv->ports[i];
ds->ports[i].priv = sp;
sp->dp = &ds->ports[i];
}
mutex_init(&priv->mgmt_lock);
return dsa_register_switch(priv->ds);
}

include/linux/dsa/8021q.h (new file, 76 lines)

@ -0,0 +1,76 @@
/* SPDX-License-Identifier: GPL-2.0
* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
*/
#ifndef _NET_DSA_8021Q_H
#define _NET_DSA_8021Q_H
#include <linux/types.h>
struct dsa_switch;
struct sk_buff;
struct net_device;
struct packet_type;
#if IS_ENABLED(CONFIG_NET_DSA_TAG_8021Q)
int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
bool enabled);
struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
u16 tpid, u16 tci);
struct sk_buff *dsa_8021q_rcv(struct sk_buff *skb, struct net_device *netdev,
struct packet_type *pt, u16 *tpid, u16 *tci);
u16 dsa_8021q_tx_vid(struct dsa_switch *ds, int port);
u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port);
int dsa_8021q_rx_switch_id(u16 vid);
int dsa_8021q_rx_source_port(u16 vid);
#else
static inline int dsa_port_setup_8021q_tagging(struct dsa_switch *ds,
int index, bool enabled)
{
return 0;
}
static inline struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb,
struct net_device *netdev,
u16 tpid, u16 tci)
{
return NULL;
}
static inline struct sk_buff *dsa_8021q_rcv(struct sk_buff *skb,
struct net_device *netdev,
struct packet_type *pt,
u16 *tpid, u16 *tci)
{
return NULL;
}
static inline u16 dsa_8021q_tx_vid(struct dsa_switch *ds, int port)
{
return 0;
}
static inline u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port)
{
return 0;
}
static inline int dsa_8021q_rx_switch_id(u16 vid)
{
return 0;
}
static inline int dsa_8021q_rx_source_port(u16 vid)
{
return 0;
}
#endif /* IS_ENABLED(CONFIG_NET_DSA_TAG_8021Q) */
#endif /* _NET_DSA_8021Q_H */

include/linux/dsa/sja1105.h

@ -2,22 +2,39 @@
* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
*/
/* Included by drivers/net/dsa/sja1105/sja1105.h */
/* Included by drivers/net/dsa/sja1105/sja1105.h and net/dsa/tag_sja1105.c */
#ifndef _NET_DSA_SJA1105_H
#define _NET_DSA_SJA1105_H
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <net/dsa.h>
#define ETH_P_SJA1105 ETH_P_DSA_8021Q
/* The switch can only be convinced to stay in unmanaged mode and not trap any
* link-local traffic by actually telling it to filter frames sent at the
* 00:00:00:00:00:00 destination MAC.
*/
#define SJA1105_LINKLOCAL_FILTER_A 0x000000000000ull
#define SJA1105_LINKLOCAL_FILTER_A_MASK 0xFFFFFFFFFFFFull
#define SJA1105_LINKLOCAL_FILTER_B 0x000000000000ull
#define SJA1105_LINKLOCAL_FILTER_B_MASK 0xFFFFFFFFFFFFull
/* IEEE 802.3 Annex 57A: Slow Protocols PDUs (01:80:C2:xx:xx:xx) */
#define SJA1105_LINKLOCAL_FILTER_A 0x0180C2000000ull
#define SJA1105_LINKLOCAL_FILTER_A_MASK 0xFFFFFF000000ull
/* IEEE 1588 Annex F: Transport of PTP over Ethernet (01:1B:19:xx:xx:xx) */
#define SJA1105_LINKLOCAL_FILTER_B 0x011B19000000ull
#define SJA1105_LINKLOCAL_FILTER_B_MASK 0xFFFFFF000000ull
enum sja1105_frame_type {
SJA1105_FRAME_TYPE_NORMAL = 0,
SJA1105_FRAME_TYPE_LINK_LOCAL,
};
struct sja1105_skb_cb {
enum sja1105_frame_type type;
};
#define SJA1105_SKB_CB(skb) \
((struct sja1105_skb_cb *)DSA_SKB_CB_PRIV(skb))
struct sja1105_port {
struct dsa_port *dp;
int mgmt_slot;
};
#endif /* _NET_DSA_SJA1105_H */

include/net/dsa.h

@ -42,6 +42,8 @@ struct phylink_link_state;
#define DSA_TAG_PROTO_MTK_VALUE 9
#define DSA_TAG_PROTO_QCA_VALUE 10
#define DSA_TAG_PROTO_TRAILER_VALUE 11
#define DSA_TAG_PROTO_8021Q_VALUE 12
#define DSA_TAG_PROTO_SJA1105_VALUE 13
enum dsa_tag_protocol {
DSA_TAG_PROTO_NONE = DSA_TAG_PROTO_NONE_VALUE,
@ -56,6 +58,8 @@ enum dsa_tag_protocol {
DSA_TAG_PROTO_MTK = DSA_TAG_PROTO_MTK_VALUE,
DSA_TAG_PROTO_QCA = DSA_TAG_PROTO_QCA_VALUE,
DSA_TAG_PROTO_TRAILER = DSA_TAG_PROTO_TRAILER_VALUE,
DSA_TAG_PROTO_8021Q = DSA_TAG_PROTO_8021Q_VALUE,
DSA_TAG_PROTO_SJA1105 = DSA_TAG_PROTO_SJA1105_VALUE,
};
struct packet_type;
@ -67,6 +71,11 @@ struct dsa_device_ops {
struct packet_type *pt);
int (*flow_dissect)(const struct sk_buff *skb, __be16 *proto,
int *offset);
/* Used to determine which traffic should match the DSA filter in
* eth_type_trans, and which, if any, should bypass it and be processed
* as regular on the master net device.
*/
bool (*filter)(const struct sk_buff *skb, struct net_device *dev);
unsigned int overhead;
const char *name;
enum dsa_tag_protocol proto;
@ -76,6 +85,38 @@ struct dsa_device_ops {
#define MODULE_ALIAS_DSA_TAG_DRIVER(__proto) \
MODULE_ALIAS(DSA_TAG_DRIVER_ALIAS __stringify(__proto##_VALUE))
struct dsa_skb_cb {
struct sk_buff *clone;
bool deferred_xmit;
};
struct __dsa_skb_cb {
struct dsa_skb_cb cb;
u8 priv[48 - sizeof(struct dsa_skb_cb)];
};
#define __DSA_SKB_CB(skb) ((struct __dsa_skb_cb *)((skb)->cb))
#define DSA_SKB_CB(skb) ((struct dsa_skb_cb *)((skb)->cb))
#define DSA_SKB_CB_COPY(nskb, skb) \
{ *__DSA_SKB_CB(nskb) = *__DSA_SKB_CB(skb); }
#define DSA_SKB_CB_ZERO(skb) \
{ *__DSA_SKB_CB(skb) = (struct __dsa_skb_cb) {0}; }
#define DSA_SKB_CB_PRIV(skb) \
((void *)(skb)->cb + offsetof(struct __dsa_skb_cb, priv))
#define DSA_SKB_CB_CLONE(_clone, _skb) \
{ \
struct sk_buff *clone = _clone; \
struct sk_buff *skb = _skb; \
\
DSA_SKB_CB_COPY(clone, skb); \
DSA_SKB_CB(skb)->clone = clone; \
}
struct dsa_switch_tree {
struct list_head list;
@ -146,6 +187,7 @@ struct dsa_port {
struct dsa_switch_tree *dst;
struct sk_buff *(*rcv)(struct sk_buff *skb, struct net_device *dev,
struct packet_type *pt);
bool (*filter)(const struct sk_buff *skb, struct net_device *dev);
enum {
DSA_PORT_TYPE_UNUSED = 0,
@ -166,6 +208,16 @@ struct dsa_port {
struct net_device *bridge_dev;
struct devlink_port devlink_port;
struct phylink *pl;
struct work_struct xmit_work;
struct sk_buff_head xmit_queue;
/*
* Give the switch driver somewhere to hang its per-port private data
* structures (accessible from the tagger).
*/
void *priv;
/*
* Original copy of the master netdev ethtool_ops
*/
@ -500,6 +552,12 @@ struct dsa_switch_ops {
struct sk_buff *clone, unsigned int type);
bool (*port_rxtstamp)(struct dsa_switch *ds, int port,
struct sk_buff *skb, unsigned int type);
/*
* Deferred frame Tx
*/
netdev_tx_t (*port_deferred_xmit)(struct dsa_switch *ds, int port,
struct sk_buff *skb);
};
struct dsa_switch_driver {
@ -518,6 +576,15 @@ static inline bool netdev_uses_dsa(struct net_device *dev)
return false;
}
static inline bool dsa_can_decode(const struct sk_buff *skb,
struct net_device *dev)
{
#if IS_ENABLED(CONFIG_NET_DSA)
return !dev->dsa_ptr->filter || dev->dsa_ptr->filter(skb, dev);
#endif
return false;
}
struct dsa_switch *dsa_switch_alloc(struct device *dev, size_t n);
void dsa_unregister_switch(struct dsa_switch *ds);
int dsa_register_switch(struct dsa_switch *ds);
@ -586,6 +653,7 @@ static inline int call_dsa_notifiers(unsigned long val, struct net_device *dev,
#define BRCM_TAG_GET_QUEUE(v) ((v) & 0xff)
netdev_tx_t dsa_enqueue_skb(struct sk_buff *skb, struct net_device *dev);
int dsa_port_get_phy_strings(struct dsa_port *dp, uint8_t *data);
int dsa_port_get_ethtool_phy_stats(struct dsa_port *dp, uint64_t *data);
int dsa_port_get_phy_sset_count(struct dsa_port *dp);

net/dsa/Kconfig

@ -17,6 +17,17 @@ menuconfig NET_DSA
if NET_DSA
# tagging formats
config NET_DSA_TAG_8021Q
tristate "Tag driver for switches using custom 802.1Q VLAN headers"
select VLAN_8021Q
help
Unlike the other tagging protocols, the 802.1Q config option simply
provides helpers for other tagging implementations that might rely on
VLAN in one way or another. It is not a complete solution.
Drivers which use these helpers should select this as a dependency.
config NET_DSA_TAG_BRCM_COMMON
tristate
default n
@ -91,6 +102,15 @@ config NET_DSA_TAG_LAN9303
Say Y or M if you want to enable support for tagging frames for the
SMSC/Microchip LAN9303 family of switches.
config NET_DSA_TAG_SJA1105
tristate "Tag driver for NXP SJA1105 switches"
select NET_DSA_TAG_8021Q
help
Say Y or M if you want to enable support for tagging frames with the
NXP SJA1105 switch family. Both the native tagging protocol (which
is only for link-local traffic) as well as non-native tagging (based
on a custom 802.1Q VLAN header) are available.
config NET_DSA_TAG_TRAILER
tristate "Tag driver for switches using a trailer tag"
help

net/dsa/Makefile

@ -4,6 +4,7 @@ obj-$(CONFIG_NET_DSA) += dsa_core.o
dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o
# tagging formats
obj-$(CONFIG_NET_DSA_TAG_8021Q) += tag_8021q.o
obj-$(CONFIG_NET_DSA_TAG_BRCM_COMMON) += tag_brcm.o
obj-$(CONFIG_NET_DSA_TAG_DSA) += tag_dsa.o
obj-$(CONFIG_NET_DSA_TAG_EDSA) += tag_edsa.o
@ -12,4 +13,5 @@ obj-$(CONFIG_NET_DSA_TAG_KSZ_COMMON) += tag_ksz.o
obj-$(CONFIG_NET_DSA_TAG_LAN9303) += tag_lan9303.o
obj-$(CONFIG_NET_DSA_TAG_MTK) += tag_mtk.o
obj-$(CONFIG_NET_DSA_TAG_QCA) += tag_qca.o
obj-$(CONFIG_NET_DSA_TAG_SJA1105) += tag_sja1105.o
obj-$(CONFIG_NET_DSA_TAG_TRAILER) += tag_trailer.o

net/dsa/dsa2.c

@ -371,14 +371,14 @@ static int dsa_switch_setup(struct dsa_switch *ds)
if (err)
return err;
err = ds->ops->setup(ds);
if (err < 0)
return err;
err = dsa_switch_register_notifier(ds);
if (err)
return err;
err = ds->ops->setup(ds);
if (err < 0)
return err;
if (!ds->slave_mii_bus && ds->ops->phy_read) {
ds->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
if (!ds->slave_mii_bus)
@ -586,6 +586,7 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master)
}
dp->type = DSA_PORT_TYPE_CPU;
dp->filter = tag_ops->filter;
dp->rcv = tag_ops->rcv;
dp->tag_ops = tag_ops;
dp->master = master;

net/dsa/dsa_priv.h

@ -174,6 +174,8 @@ int dsa_slave_resume(struct net_device *slave_dev);
int dsa_slave_register_notifier(void);
void dsa_slave_unregister_notifier(void);
void *dsa_defer_xmit(struct sk_buff *skb, struct net_device *dev);
static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
{
struct dsa_slave_priv *p = netdev_priv(dev);

net/dsa/port.c

@ -389,6 +389,7 @@ int dsa_port_vid_add(struct dsa_port *dp, u16 vid, u16 flags)
trans.ph_prepare = false;
return dsa_port_vlan_add(dp, &vlan, &trans);
}
EXPORT_SYMBOL(dsa_port_vid_add);
int dsa_port_vid_del(struct dsa_port *dp, u16 vid)
{
@ -400,6 +401,7 @@ int dsa_port_vid_del(struct dsa_port *dp, u16 vid)
return dsa_port_vlan_del(dp, &vlan);
}
EXPORT_SYMBOL(dsa_port_vid_del);
static struct phy_device *dsa_port_get_phy_device(struct dsa_port *dp)
{

net/dsa/slave.c

@ -120,6 +120,9 @@ static int dsa_slave_close(struct net_device *dev)
struct net_device *master = dsa_slave_to_master(dev);
struct dsa_port *dp = dsa_slave_to_port(dev);
cancel_work_sync(&dp->xmit_work);
skb_queue_purge(&dp->xmit_queue);
phylink_stop(dp->pl);
dsa_port_disable(dp);
@ -430,6 +433,24 @@ static void dsa_skb_tx_timestamp(struct dsa_slave_priv *p,
kfree_skb(clone);
}
netdev_tx_t dsa_enqueue_skb(struct sk_buff *skb, struct net_device *dev)
{
/* SKBs for netpoll still need to be mangled with the protocol-specific
* tag to be successfully transmitted
*/
if (unlikely(netpoll_tx_running(dev)))
return dsa_slave_netpoll_send_skb(dev, skb);
/* Queue the SKB for transmission on the parent interface, but
* do not modify its EtherType
*/
skb->dev = dsa_slave_to_master(dev);
dev_queue_xmit(skb);
return NETDEV_TX_OK;
}
EXPORT_SYMBOL_GPL(dsa_enqueue_skb);
static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct dsa_slave_priv *p = netdev_priv(dev);
@ -452,23 +473,37 @@ static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
*/
nskb = p->xmit(skb, dev);
if (!nskb) {
kfree_skb(skb);
if (!DSA_SKB_CB(skb)->deferred_xmit)
kfree_skb(skb);
return NETDEV_TX_OK;
}
/* SKB for netpoll still need to be mangled with the protocol-specific
* tag to be successfully transmitted
*/
if (unlikely(netpoll_tx_running(dev)))
return dsa_slave_netpoll_send_skb(dev, nskb);
return dsa_enqueue_skb(nskb, dev);
}
/* Queue the SKB for transmission on the parent interface, but
* do not modify its EtherType
*/
nskb->dev = dsa_slave_to_master(dev);
dev_queue_xmit(nskb);
void *dsa_defer_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct dsa_port *dp = dsa_slave_to_port(dev);
return NETDEV_TX_OK;
DSA_SKB_CB(skb)->deferred_xmit = true;
skb_queue_tail(&dp->xmit_queue, skb);
schedule_work(&dp->xmit_work);
return NULL;
}
EXPORT_SYMBOL_GPL(dsa_defer_xmit);
static void dsa_port_xmit_work(struct work_struct *work)
{
struct dsa_port *dp = container_of(work, struct dsa_port, xmit_work);
struct dsa_switch *ds = dp->ds;
struct sk_buff *skb;
if (unlikely(!ds->ops->port_deferred_xmit))
return;
while ((skb = skb_dequeue(&dp->xmit_queue)) != NULL)
ds->ops->port_deferred_xmit(ds, dp->index, skb);
}
/* ethtool operations *******************************************************/
@ -1318,6 +1353,9 @@ int dsa_slave_suspend(struct net_device *slave_dev)
if (!netif_running(slave_dev))
return 0;
cancel_work_sync(&dp->xmit_work);
skb_queue_purge(&dp->xmit_queue);
netif_device_detach(slave_dev);
rtnl_lock();
@ -1405,6 +1443,8 @@ int dsa_slave_create(struct dsa_port *port)
}
p->dp = port;
INIT_LIST_HEAD(&p->mall_tc_list);
INIT_WORK(&port->xmit_work, dsa_port_xmit_work);
skb_queue_head_init(&port->xmit_queue);
p->xmit = cpu_dp->tag_ops->xmit;
port->slave = slave_dev;

net/dsa/tag_8021q.c (new file, 222 lines)

@ -0,0 +1,222 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
*
* This module is not a complete tagger implementation. It only provides
* primitives for taggers that rely on 802.1Q VLAN tags to use. The
* dsa_8021q_netdev_ops is registered for API compliance and not used
* directly by callers.
*/
#include <linux/if_bridge.h>
#include <linux/if_vlan.h>
#include "dsa_priv.h"
/* Allocating two VLAN tags per port - one for the RX VID and
* the other for the TX VID - see below
*/
#define DSA_8021Q_VID_RANGE (DSA_MAX_SWITCHES * DSA_MAX_PORTS)
#define DSA_8021Q_VID_BASE (VLAN_N_VID - 2 * DSA_8021Q_VID_RANGE - 1)
#define DSA_8021Q_RX_VID_BASE (DSA_8021Q_VID_BASE)
#define DSA_8021Q_TX_VID_BASE (DSA_8021Q_VID_BASE + DSA_8021Q_VID_RANGE)
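/* Worked example, assuming the current core limits of DSA_MAX_SWITCHES = 4
* and DSA_MAX_PORTS = 12 (defined in include/net/dsa.h, not here) and
* VLAN_N_VID = 4096:
*   DSA_8021Q_VID_RANGE   = 4 * 12          = 48
*   DSA_8021Q_VID_BASE    = 4096 - 2*48 - 1 = 3999
*   DSA_8021Q_RX_VID_BASE = 3999
*   DSA_8021Q_TX_VID_BASE = 3999 + 48       = 4047
* so switch 0, port 2 gets RX VID 4001 and TX VID 4049, while the largest
* TX VID (4094) stays below the reserved VID 4095.
*/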
/* Returns the VID to be inserted into the frame from xmit for switch steering
* instructions on egress. Encodes switch ID and port ID.
*/
u16 dsa_8021q_tx_vid(struct dsa_switch *ds, int port)
{
return DSA_8021Q_TX_VID_BASE + (DSA_MAX_PORTS * ds->index) + port;
}
EXPORT_SYMBOL_GPL(dsa_8021q_tx_vid);
/* Returns the VID that will be installed as pvid for this switch port, sent as
* tagged egress towards the CPU port and decoded by the rcv function.
*/
u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port)
{
return DSA_8021Q_RX_VID_BASE + (DSA_MAX_PORTS * ds->index) + port;
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_vid);
/* Returns the decoded switch ID from the RX VID. */
int dsa_8021q_rx_switch_id(u16 vid)
{
return ((vid - DSA_8021Q_RX_VID_BASE) / DSA_MAX_PORTS);
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_switch_id);
/* Returns the decoded port ID from the RX VID. */
int dsa_8021q_rx_source_port(u16 vid)
{
return ((vid - DSA_8021Q_RX_VID_BASE) % DSA_MAX_PORTS);
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_source_port);
/* RX VLAN tagging (left) and TX VLAN tagging (right) setup shown for a single
* front-panel switch port (here swp0).
*
* Port identification through VLAN (802.1Q) tags has different requirements
* for it to work effectively:
* - On RX (ingress from network): each front-panel port must have a pvid
* that uniquely identifies it, and the egress of this pvid must be tagged
* towards the CPU port, so that software can recover the source port based
* on the VID in the frame. But this would only work for standalone ports;
* if bridged, this VLAN setup would break autonomous forwarding and would
* force all switched traffic to pass through the CPU. So we must also make
* the other front-panel ports members of this VID we're adding, albeit
* we're not making it their PVID (they'll still have their own).
* By the way - just because we're installing the same VID in multiple
* switch ports doesn't mean that they'll start to talk to one another, even
* while not bridged: the final forwarding decision is still an AND between
* the L2 forwarding information (which is limiting forwarding in this case)
* and the VLAN-based restrictions (of which there are none in this case,
* since all ports are members).
* - On TX (ingress from CPU and towards network) we are faced with a problem.
* If we were to tag traffic (from within DSA) with the port's pvid, all
* would be well, assuming the switch ports were standalone. Frames would
* have no choice but to be directed towards the correct front-panel port.
* But because we also want the RX VLAN to not break bridging, then
* inevitably that means that we have to give them a choice (of what
* front-panel port to go out on), and therefore we cannot steer traffic
* based on the RX VID. So what we do is simply install one more VID on the
* front-panel and CPU ports, and profit off of the fact that steering will
* work just by virtue of the fact that there is only one other port that's
* a member of the VID we're tagging the traffic with - the desired one.
*
* So at the end, each front-panel port will have one RX VID (also the PVID),
* the RX VID of all other front-panel ports, and one TX VID. Whereas the CPU
* port will have the RX and TX VIDs of all front-panel ports, and on top of
* that, is also tagged-input and tagged-output (VLAN trunk).
*
* CPU port CPU port
* +-------------+-----+-------------+ +-------------+-----+-------------+
* | RX VID | | | | TX VID | | |
* | of swp0 | | | | of swp0 | | |
* | +-----+ | | +-----+ |
* | ^ T | | | Tagged |
* | | | | | ingress |
* | +-------+---+---+-------+ | | +-----------+ |
* | | | | | | | | Untagged |
* | | U v U v U v | | v egress |
* | +-----+ +-----+ +-----+ +-----+ | | +-----+ +-----+ +-----+ +-----+ |
* | | | | | | | | | | | | | | | | | | | |
* | |PVID | | | | | | | | | | | | | | | | | |
* +-+-----+-+-----+-+-----+-+-----+-+ +-+-----+-+-----+-+-----+-+-----+-+
* swp0 swp1 swp2 swp3 swp0 swp1 swp2 swp3
*/
int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int port, bool enabled)
{
int upstream = dsa_upstream_port(ds, port);
struct dsa_port *dp = &ds->ports[port];
struct dsa_port *upstream_dp = &ds->ports[upstream];
u16 rx_vid = dsa_8021q_rx_vid(ds, port);
u16 tx_vid = dsa_8021q_tx_vid(ds, port);
int i, err;
/* The CPU port is implicitly configured by
* configuring the front-panel ports
*/
if (!dsa_is_user_port(ds, port))
return 0;
/* Add this user port's RX VID to the membership list of all others
* (including itself). This is so that bridging will not be hindered.
* L2 forwarding rules still take precedence when there are no VLAN
* restrictions, so there are no concerns about leaking traffic.
*/
for (i = 0; i < ds->num_ports; i++) {
struct dsa_port *other_dp = &ds->ports[i];
u16 flags;
if (i == upstream)
/* CPU port needs to see this port's RX VID
* as tagged egress.
*/
flags = 0;
else if (i == port)
/* The RX VID is pvid on this port */
flags = BRIDGE_VLAN_INFO_UNTAGGED |
BRIDGE_VLAN_INFO_PVID;
else
/* The RX VID is a regular VLAN on all others */
flags = BRIDGE_VLAN_INFO_UNTAGGED;
if (enabled)
err = dsa_port_vid_add(other_dp, rx_vid, flags);
else
err = dsa_port_vid_del(other_dp, rx_vid);
if (err) {
dev_err(ds->dev, "Failed to apply RX VID %d to port %d: %d\n",
rx_vid, port, err);
return err;
}
}
/* Finally apply the TX VID on this port and on the CPU port */
if (enabled)
err = dsa_port_vid_add(dp, tx_vid, BRIDGE_VLAN_INFO_UNTAGGED);
else
err = dsa_port_vid_del(dp, tx_vid);
if (err) {
dev_err(ds->dev, "Failed to apply TX VID %d on port %d: %d\n",
tx_vid, port, err);
return err;
}
if (enabled)
err = dsa_port_vid_add(upstream_dp, tx_vid, 0);
else
err = dsa_port_vid_del(upstream_dp, tx_vid);
if (err) {
dev_err(ds->dev, "Failed to apply TX VID %d on port %d: %d\n",
tx_vid, upstream, err);
return err;
}
return 0;
}
EXPORT_SYMBOL_GPL(dsa_port_setup_8021q_tagging);
struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
u16 tpid, u16 tci)
{
/* skb->data points at skb_mac_header, which
* is fine for vlan_insert_tag.
*/
return vlan_insert_tag(skb, htons(tpid), tci);
}
EXPORT_SYMBOL_GPL(dsa_8021q_xmit);
struct sk_buff *dsa_8021q_rcv(struct sk_buff *skb, struct net_device *netdev,
struct packet_type *pt, u16 *tpid, u16 *tci)
{
struct vlan_ethhdr *tag;
if (unlikely(!pskb_may_pull(skb, VLAN_HLEN)))
return NULL;
tag = vlan_eth_hdr(skb);
*tpid = ntohs(tag->h_vlan_proto);
*tci = ntohs(tag->h_vlan_TCI);
/* skb->data points in the middle of the VLAN tag,
* after tpid and before tci. This is because so far,
* ETH_HLEN (DMAC, SMAC, EtherType) bytes were pulled.
* There are 2 bytes of VLAN tag left in skb->data, and upper
* layers expect the 'real' EtherType to be consumed as well.
* Coincidentally, a VLAN header is also of the same size as
* the number of bytes that need to be pulled.
*/
skb_pull_rcsum(skb, VLAN_HLEN);
return skb;
}
EXPORT_SYMBOL_GPL(dsa_8021q_rcv);
static const struct dsa_device_ops dsa_8021q_netdev_ops = {
.name = "8021q",
.proto = DSA_TAG_PROTO_8021Q,
.overhead = VLAN_HLEN,
};
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_8021Q);
module_dsa_tag_driver(dsa_8021q_netdev_ops);

net/dsa/tag_sja1105.c (new file, 131 lines)

@ -0,0 +1,131 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include <linux/if_vlan.h>
#include <linux/dsa/sja1105.h>
#include <linux/dsa/8021q.h>
#include <linux/packing.h>
#include "dsa_priv.h"
/* Similar to is_link_local_ether_addr(hdr->h_dest) but also covers PTP */
static inline bool sja1105_is_link_local(const struct sk_buff *skb)
{
const struct ethhdr *hdr = eth_hdr(skb);
u64 dmac = ether_addr_to_u64(hdr->h_dest);
if ((dmac & SJA1105_LINKLOCAL_FILTER_A_MASK) ==
SJA1105_LINKLOCAL_FILTER_A)
return true;
if ((dmac & SJA1105_LINKLOCAL_FILTER_B_MASK) ==
SJA1105_LINKLOCAL_FILTER_B)
return true;
return false;
}
/* This is the first time the tagger sees the frame on RX.
* Figure out if we can decode it, and if we can, annotate skb->cb with how we
* plan to do that, so we don't need to check again in the rcv function.
*/
static bool sja1105_filter(const struct sk_buff *skb, struct net_device *dev)
{
if (sja1105_is_link_local(skb)) {
SJA1105_SKB_CB(skb)->type = SJA1105_FRAME_TYPE_LINK_LOCAL;
return true;
}
if (!dsa_port_is_vlan_filtering(dev->dsa_ptr)) {
SJA1105_SKB_CB(skb)->type = SJA1105_FRAME_TYPE_NORMAL;
return true;
}
return false;
}
static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
struct dsa_port *dp = dsa_slave_to_port(netdev);
struct dsa_switch *ds = dp->ds;
u16 tx_vid = dsa_8021q_tx_vid(ds, dp->index);
u8 pcp = skb->priority;
/* Transmitting management traffic does not rely upon switch tagging,
* but instead on SPI-installed management routes. Part 2 of this
* is the .port_deferred_xmit driver callback.
*/
if (unlikely(sja1105_is_link_local(skb)))
return dsa_defer_xmit(skb, netdev);
/* If we are under a vlan_filtering bridge, IP termination on
* switch ports based on 802.1Q tags is simply too brittle to
* be passable. So just defer to the dsa_slave_notag_xmit
* implementation.
*/
if (dsa_port_is_vlan_filtering(dp))
return skb;
return dsa_8021q_xmit(skb, netdev, ETH_P_SJA1105,
((pcp << VLAN_PRIO_SHIFT) | tx_vid));
}
static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
struct net_device *netdev,
struct packet_type *pt)
{
struct ethhdr *hdr = eth_hdr(skb);
u64 source_port, switch_id;
struct sk_buff *nskb;
u16 tpid, vid, tci;
bool is_tagged;
nskb = dsa_8021q_rcv(skb, netdev, pt, &tpid, &tci);
is_tagged = (nskb && tpid == ETH_P_SJA1105);
skb->priority = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
vid = tci & VLAN_VID_MASK;
skb->offload_fwd_mark = 1;
if (SJA1105_SKB_CB(skb)->type == SJA1105_FRAME_TYPE_LINK_LOCAL) {
/* Management traffic path. Switch embeds the switch ID and
* port ID into bytes of the destination MAC, courtesy of
* the incl_srcpt options.
*/
source_port = hdr->h_dest[3];
switch_id = hdr->h_dest[4];
/* Clear the DMAC bytes that were mangled by the switch */
hdr->h_dest[3] = 0;
hdr->h_dest[4] = 0;
} else {
/* Normal traffic path. */
source_port = dsa_8021q_rx_source_port(vid);
switch_id = dsa_8021q_rx_switch_id(vid);
}
skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
if (!skb->dev) {
netdev_warn(netdev, "Couldn't decode source port\n");
return NULL;
}
/* Delete/overwrite fake VLAN header, DSA expects to not find
* it there, see dsa_switch_rcv: skb_push(skb, ETH_HLEN).
*/
if (is_tagged)
memmove(skb->data - ETH_HLEN, skb->data - ETH_HLEN - VLAN_HLEN,
ETH_HLEN - VLAN_HLEN);
return skb;
}
static struct dsa_device_ops sja1105_netdev_ops = {
.name = "sja1105",
.proto = DSA_TAG_PROTO_SJA1105,
.xmit = sja1105_xmit,
.rcv = sja1105_rcv,
.filter = sja1105_filter,
.overhead = VLAN_HLEN,
};
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_SJA1105);
module_dsa_tag_driver(sja1105_netdev_ops);

net/ethernet/eth.c

@ -185,8 +185,12 @@ __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev)
* at all, so we check here whether one of those tagging
* variants has been configured on the receiving interface,
* and if so, set skb->protocol without looking at the packet.
* The DSA tagging protocol may be able to decode some but not all
* traffic (for example only for management). In that case give it the
* option to filter the packets from which it can decode source port
* information.
*/
if (unlikely(netdev_uses_dsa(dev)))
if (unlikely(netdev_uses_dsa(dev)) && dsa_can_decode(skb, dev))
return htons(ETH_P_XDSA);
if (likely(eth_proto_is_802_3(eth->h_proto)))