Networking fixes for 5.11-rc3 (part 2), including fixes from bpf and can trees.

Merge tag 'net-5.11-rc3-2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull more networking fixes from Jakub Kicinski:
 "Slightly lighter pull request to get back into the Thursday cadence.

  Current release - always broken:

   - can: mcp251xfd: fix Tx/Rx ring buffer driver race conditions

   - dsa: hellcreek: fix led_classdev build errors

  Previous releases - regressions:

   - ipv6: fib: flush exceptions when purging route to avoid netdev
     reference leak

   - ip_tunnels: fix pmtu check in nopmtudisc mode

   - ip: always refragment ip defragmented packets to avoid MTU issues
     when forwarding through tunnels, correct "packet too big" message
     is prohibitively tricky to generate

   - s390/qeth: fix locking for discipline setup / removal and during
     recovery to prevent both deadlocks and races

   - mlx5: Use port_num 1 instead of 0 when delete a RoCE address

  Previous releases - always broken:

   - cdc_ncm: correct overhead calculation in delayed_ndp_size to
     prevent out of bound accesses with Huawei 909s-120 LTE module

   - stmmac: dwmac-sun8i: fix suspend/resume:
       - PHY being left powered off
       - MAC syscon configuration being reset
       - reference to the reset controller being improperly dropped

   - qrtr: fix null-ptr-deref in qrtr_ns_remove

   - can: tcan4x5x: fix bittiming const, use common bittiming from
     m_can driver

   - mlx5e: CT: Use per flow counter when CT flow accounting is enabled

   - mlx5e: Fix SWP offsets when vlan inserted by driver

  Misc:

   - bpf: Fix a task_iter bug caused by a bpf -> net merge conflict
     resolution

  And the usual many fixes to various error paths"

* tag 'net-5.11-rc3-2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (69 commits)
  net: dsa: lantiq_gswip: Exclude RMII from modes that report 1 GbE
  s390/qeth: fix L2 header access in qeth_l3_osa_features_check()
  s390/qeth: fix locking for discipline setup / removal
  s390/qeth: fix deadlock during recovery
  selftests: fib_nexthops: Fix wrong mausezahn invocation
  nexthop: Bounce NHA_GATEWAY in FDB nexthop groups
  nexthop: Unlink nexthop group entry in error path
  nexthop: Fix off-by-one error in error path
  octeontx2-af: fix memory leak of lmac and lmac->name
  chtls: Fix chtls resources release sequence
  chtls: Added a check to avoid NULL pointer dereference
  chtls: Replace skb_dequeue with skb_peek
  chtls: Avoid unnecessary freeing of oreq pointer
  chtls: Fix panic when route to peer not configured
  chtls: Remove invalid set_tcb call
  chtls: Fix hardware tid leak
  net: ip: always refragment ip defragmented packets
  net: fix pmtu check in nopmtudisc mode
  selftests: netfilter: add selftest for ipip pmtu discovery with enabled connection tracking
  docs: octeontx2: tune rst markup
  ...
commit 6279d812ea
@@ -164,46 +164,56 @@ Devlink health reporters

 NPA Reporters
 -------------
-The NPA reporters are responsible for reporting and recovering the following group of errors
+The NPA reporters are responsible for reporting and recovering the following group of errors:
+
 1. GENERAL events
+
 - Error due to operation of unmapped PF.
 - Error due to disabled alloc/free for other HW blocks (NIX, SSO, TIM, DPI and AURA).

 2. ERROR events
+
 - Fault due to NPA_AQ_INST_S read or NPA_AQ_RES_S write.
 - AQ Doorbell Error.

 3. RAS events
+
 - RAS Error Reporting for NPA_AQ_INST_S/NPA_AQ_RES_S.

 4. RVU events
+
 - Error due to unmapped slot.

-Sample Output
--------------
-~# devlink health
-pci/0002:01:00.0:
-reporter hw_npa_intr
-state healthy error 2872 recover 2872 last_dump_date 2020-12-10 last_dump_time 09:39:09 grace_period 0 auto_recover true auto_dump true
-reporter hw_npa_gen
-state healthy error 2872 recover 2872 last_dump_date 2020-12-11 last_dump_time 04:43:04 grace_period 0 auto_recover true auto_dump true
-reporter hw_npa_err
-state healthy error 2871 recover 2871 last_dump_date 2020-12-10 last_dump_time 09:39:17 grace_period 0 auto_recover true auto_dump true
-reporter hw_npa_ras
-state healthy error 0 recover 0 last_dump_date 2020-12-10 last_dump_time 09:32:40 grace_period 0 auto_recover true auto_dump true
+Sample Output::
+
+~# devlink health
+pci/0002:01:00.0:
+reporter hw_npa_intr
+state healthy error 2872 recover 2872 last_dump_date 2020-12-10 last_dump_time 09:39:09 grace_period 0 auto_recover true auto_dump true
+reporter hw_npa_gen
+state healthy error 2872 recover 2872 last_dump_date 2020-12-11 last_dump_time 04:43:04 grace_period 0 auto_recover true auto_dump true
+reporter hw_npa_err
+state healthy error 2871 recover 2871 last_dump_date 2020-12-10 last_dump_time 09:39:17 grace_period 0 auto_recover true auto_dump true
+reporter hw_npa_ras
+state healthy error 0 recover 0 last_dump_date 2020-12-10 last_dump_time 09:32:40 grace_period 0 auto_recover true auto_dump true

 Each reporter dumps the
+
 - Error Type
 - Error Register value
 - Reason in words

-For eg:
-~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_gen
-NPA_AF_GENERAL:
-NPA General Interrupt Reg : 1
-NIX0: free disabled RX
-~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_intr
-NPA_AF_RVU:
-NPA RVU Interrupt Reg : 1
-Unmap Slot Error
-~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_err
-NPA_AF_ERR:
-NPA Error Interrupt Reg : 4096
-AQ Doorbell Error
+For example::
+
+~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_gen
+NPA_AF_GENERAL:
+NPA General Interrupt Reg : 1
+NIX0: free disabled RX
+~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_intr
+NPA_AF_RVU:
+NPA RVU Interrupt Reg : 1
+Unmap Slot Error
+~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_err
+NPA_AF_ERR:
+NPA Error Interrupt Reg : 4096
+AQ Doorbell Error
@@ -64,8 +64,8 @@ ndo_do_ioctl:
 Context: process

 ndo_get_stats:
-Synchronization: dev_base_lock rwlock.
-Context: nominally process, but don't sleep inside an rwlock
+Synchronization: rtnl_lock() semaphore, dev_base_lock rwlock, or RCU.
+Context: atomic (can't sleep under rwlock or RCU)

 ndo_start_xmit:
 Synchronization: __netif_tx_lock spinlock.
@@ -10847,7 +10847,7 @@ F: drivers/media/radio/radio-maxiradio*

 MCAN MMIO DEVICE DRIVER
-M: Dan Murphy <dmurphy@ti.com>
 M: Sriram Dash <sriram.dash@samsung.com>
+M: Pankaj Sharma <pankj.sharma@samsung.com>
 L: linux-can@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/net/can/bosch,m_can.yaml
@@ -13,6 +13,7 @@ if MISDN != n
 config MISDN_DSP
 tristate "Digital Audio Processing of transparent data"
 depends on MISDN
+select BITREVERSE
 help
 Enable support for digital audio processing capability.

|
@ -645,11 +645,20 @@ static int bareudp_link_config(struct net_device *dev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void bareudp_dellink(struct net_device *dev, struct list_head *head)
|
||||
{
|
||||
struct bareudp_dev *bareudp = netdev_priv(dev);
|
||||
|
||||
list_del(&bareudp->next);
|
||||
unregister_netdevice_queue(dev, head);
|
||||
}
|
||||
|
||||
static int bareudp_newlink(struct net *net, struct net_device *dev,
|
||||
struct nlattr *tb[], struct nlattr *data[],
|
||||
struct netlink_ext_ack *extack)
|
||||
{
|
||||
struct bareudp_conf conf;
|
||||
LIST_HEAD(list_kill);
|
||||
int err;
|
||||
|
||||
err = bareudp2info(data, &conf, extack);
|
||||
|
@ -662,17 +671,14 @@ static int bareudp_newlink(struct net *net, struct net_device *dev,
|
|||
|
||||
err = bareudp_link_config(dev, tb);
|
||||
if (err)
|
||||
return err;
|
||||
goto err_unconfig;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void bareudp_dellink(struct net_device *dev, struct list_head *head)
|
||||
{
|
||||
struct bareudp_dev *bareudp = netdev_priv(dev);
|
||||
|
||||
list_del(&bareudp->next);
|
||||
unregister_netdevice_queue(dev, head);
|
||||
err_unconfig:
|
||||
bareudp_dellink(dev, &list_kill);
|
||||
unregister_netdevice_many(&list_kill);
|
||||
return err;
|
||||
}
|
||||
|
||||
static size_t bareudp_get_size(const struct net_device *dev)
|
||||
|
|
|
@@ -123,6 +123,7 @@ config CAN_JANZ_ICAN3
 config CAN_KVASER_PCIEFD
 depends on PCI
 tristate "Kvaser PCIe FD cards"
+select CRC32
 help
 This is a driver for the Kvaser PCI Express CAN FD family.

@ -1852,8 +1852,6 @@ EXPORT_SYMBOL_GPL(m_can_class_register);
|
|||
void m_can_class_unregister(struct m_can_classdev *cdev)
|
||||
{
|
||||
unregister_candev(cdev->net);
|
||||
|
||||
m_can_clk_stop(cdev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(m_can_class_unregister);
|
||||
|
||||
|
|
|
@ -131,30 +131,6 @@ static inline struct tcan4x5x_priv *cdev_to_priv(struct m_can_classdev *cdev)
|
|||
|
||||
}
|
||||
|
||||
static struct can_bittiming_const tcan4x5x_bittiming_const = {
|
||||
.name = DEVICE_NAME,
|
||||
.tseg1_min = 2,
|
||||
.tseg1_max = 31,
|
||||
.tseg2_min = 2,
|
||||
.tseg2_max = 16,
|
||||
.sjw_max = 16,
|
||||
.brp_min = 1,
|
||||
.brp_max = 32,
|
||||
.brp_inc = 1,
|
||||
};
|
||||
|
||||
static struct can_bittiming_const tcan4x5x_data_bittiming_const = {
|
||||
.name = DEVICE_NAME,
|
||||
.tseg1_min = 1,
|
||||
.tseg1_max = 32,
|
||||
.tseg2_min = 1,
|
||||
.tseg2_max = 16,
|
||||
.sjw_max = 16,
|
||||
.brp_min = 1,
|
||||
.brp_max = 32,
|
||||
.brp_inc = 1,
|
||||
};
|
||||
|
||||
static void tcan4x5x_check_wake(struct tcan4x5x_priv *priv)
|
||||
{
|
||||
int wake_state = 0;
|
||||
|
@ -469,8 +445,6 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
|
|||
mcan_class->dev = &spi->dev;
|
||||
mcan_class->ops = &tcan4x5x_ops;
|
||||
mcan_class->is_peripheral = true;
|
||||
mcan_class->bit_timing = &tcan4x5x_bittiming_const;
|
||||
mcan_class->data_timing = &tcan4x5x_data_bittiming_const;
|
||||
mcan_class->net->irq = spi->irq;
|
||||
|
||||
spi_set_drvdata(spi, priv);
|
||||
|
|
|
@@ -1,10 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
 config CAN_RCAR
-tristate "Renesas R-Car CAN controller"
+tristate "Renesas R-Car and RZ/G CAN controller"
 depends on ARCH_RENESAS || ARM
 help
 Say Y here if you want to use CAN controller found on Renesas R-Car
-SoCs.
+or RZ/G SoCs.

 To compile this driver as a module, choose M here: the module will
 be called rcar_can.
@ -1368,13 +1368,10 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
|
|||
struct mcp251xfd_tx_ring *tx_ring = priv->tx;
|
||||
struct spi_transfer *last_xfer;
|
||||
|
||||
tx_ring->tail += len;
|
||||
|
||||
/* Increment the TEF FIFO tail pointer 'len' times in
|
||||
* a single SPI message.
|
||||
*/
|
||||
|
||||
/* Note:
|
||||
*
|
||||
* Note:
|
||||
*
|
||||
* "cs_change == 1" on the last transfer results in an
|
||||
* active chip select after the complete SPI
|
||||
|
@ -1391,6 +1388,8 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
tx_ring->tail += len;
|
||||
|
||||
err = mcp251xfd_check_tef_tail(priv);
|
||||
if (err)
|
||||
return err;
|
||||
|
@ -1553,10 +1552,8 @@ mcp251xfd_handle_rxif_ring(struct mcp251xfd_priv *priv,
|
|||
|
||||
/* Increment the RX FIFO tail pointer 'len' times in a
|
||||
* single SPI message.
|
||||
*/
|
||||
ring->tail += len;
|
||||
|
||||
/* Note:
|
||||
*
|
||||
* Note:
|
||||
*
|
||||
* "cs_change == 1" on the last transfer results in an
|
||||
* active chip select after the complete SPI
|
||||
|
@ -1572,6 +1569,8 @@ mcp251xfd_handle_rxif_ring(struct mcp251xfd_priv *priv,
|
|||
last_xfer->cs_change = 1;
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
ring->tail += len;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
|
|
@ -4,6 +4,7 @@ config NET_DSA_HIRSCHMANN_HELLCREEK
|
|||
depends on HAS_IOMEM
|
||||
depends on NET_DSA
|
||||
depends on PTP_1588_CLOCK
|
||||
depends on LEDS_CLASS
|
||||
select NET_DSA_TAG_HELLCREEK
|
||||
help
|
||||
This driver adds support for Hirschmann Hellcreek TSN switches.
|
||||
|
|
|
@ -1436,11 +1436,12 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port,
|
|||
phylink_set(mask, Pause);
|
||||
phylink_set(mask, Asym_Pause);
|
||||
|
||||
/* With the exclusion of MII and Reverse MII, we support Gigabit,
|
||||
* including Half duplex
|
||||
/* With the exclusion of MII, Reverse MII and Reduced MII, we
|
||||
* support Gigabit, including Half duplex
|
||||
*/
|
||||
if (state->interface != PHY_INTERFACE_MODE_MII &&
|
||||
state->interface != PHY_INTERFACE_MODE_REVMII) {
|
||||
state->interface != PHY_INTERFACE_MODE_REVMII &&
|
||||
state->interface != PHY_INTERFACE_MODE_RMII) {
|
||||
phylink_set(mask, 1000baseT_Full);
|
||||
phylink_set(mask, 1000baseT_Half);
|
||||
}
|
||||
|
|
|
@ -621,7 +621,7 @@ static void chtls_reset_synq(struct listen_ctx *listen_ctx)
|
|||
|
||||
while (!skb_queue_empty(&listen_ctx->synq)) {
|
||||
struct chtls_sock *csk =
|
||||
container_of((struct synq *)__skb_dequeue
|
||||
container_of((struct synq *)skb_peek
|
||||
(&listen_ctx->synq), struct chtls_sock, synq);
|
||||
struct sock *child = csk->sk;
|
||||
|
||||
|
@ -1109,6 +1109,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
|
|||
const struct cpl_pass_accept_req *req,
|
||||
struct chtls_dev *cdev)
|
||||
{
|
||||
struct adapter *adap = pci_get_drvdata(cdev->pdev);
|
||||
struct neighbour *n = NULL;
|
||||
struct inet_sock *newinet;
|
||||
const struct iphdr *iph;
|
||||
|
@ -1118,9 +1119,10 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
|
|||
struct dst_entry *dst;
|
||||
struct tcp_sock *tp;
|
||||
struct sock *newsk;
|
||||
bool found = false;
|
||||
u16 port_id;
|
||||
int rxq_idx;
|
||||
int step;
|
||||
int step, i;
|
||||
|
||||
iph = (const struct iphdr *)network_hdr;
|
||||
newsk = tcp_create_openreq_child(lsk, oreq, cdev->askb);
|
||||
|
@ -1152,7 +1154,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
|
|||
n = dst_neigh_lookup(dst, &ip6h->saddr);
|
||||
#endif
|
||||
}
|
||||
if (!n)
|
||||
if (!n || !n->dev)
|
||||
goto free_sk;
|
||||
|
||||
ndev = n->dev;
|
||||
|
@ -1161,6 +1163,13 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
|
|||
if (is_vlan_dev(ndev))
|
||||
ndev = vlan_dev_real_dev(ndev);
|
||||
|
||||
for_each_port(adap, i)
|
||||
if (cdev->ports[i] == ndev)
|
||||
found = true;
|
||||
|
||||
if (!found)
|
||||
goto free_dst;
|
||||
|
||||
port_id = cxgb4_port_idx(ndev);
|
||||
|
||||
csk = chtls_sock_create(cdev);
|
||||
|
@ -1238,6 +1247,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
|
|||
free_csk:
|
||||
chtls_sock_release(&csk->kref);
|
||||
free_dst:
|
||||
neigh_release(n);
|
||||
dst_release(dst);
|
||||
free_sk:
|
||||
inet_csk_prepare_forced_close(newsk);
|
||||
|
@ -1387,7 +1397,7 @@ static void chtls_pass_accept_request(struct sock *sk,
|
|||
|
||||
newsk = chtls_recv_sock(sk, oreq, network_hdr, req, cdev);
|
||||
if (!newsk)
|
||||
goto free_oreq;
|
||||
goto reject;
|
||||
|
||||
if (chtls_get_module(newsk))
|
||||
goto reject;
|
||||
|
@ -1403,8 +1413,6 @@ static void chtls_pass_accept_request(struct sock *sk,
|
|||
kfree_skb(skb);
|
||||
return;
|
||||
|
||||
free_oreq:
|
||||
chtls_reqsk_free(oreq);
|
||||
reject:
|
||||
mk_tid_release(reply_skb, 0, tid);
|
||||
cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb);
|
||||
|
@ -1589,6 +1597,11 @@ static int chtls_pass_establish(struct chtls_dev *cdev, struct sk_buff *skb)
|
|||
sk_wake_async(sk, 0, POLL_OUT);
|
||||
|
||||
data = lookup_stid(cdev->tids, stid);
|
||||
if (!data) {
|
||||
/* listening server close */
|
||||
kfree_skb(skb);
|
||||
goto unlock;
|
||||
}
|
||||
lsk = ((struct listen_ctx *)data)->lsk;
|
||||
|
||||
bh_lock_sock(lsk);
|
||||
|
@ -1997,39 +2010,6 @@ static void t4_defer_reply(struct sk_buff *skb, struct chtls_dev *cdev,
|
|||
spin_unlock_bh(&cdev->deferq.lock);
|
||||
}
|
||||
|
||||
static void send_abort_rpl(struct sock *sk, struct sk_buff *skb,
|
||||
struct chtls_dev *cdev, int status, int queue)
|
||||
{
|
||||
struct cpl_abort_req_rss *req = cplhdr(skb);
|
||||
struct sk_buff *reply_skb;
|
||||
struct chtls_sock *csk;
|
||||
|
||||
csk = rcu_dereference_sk_user_data(sk);
|
||||
|
||||
reply_skb = alloc_skb(sizeof(struct cpl_abort_rpl),
|
||||
GFP_KERNEL);
|
||||
|
||||
if (!reply_skb) {
|
||||
req->status = (queue << 1);
|
||||
t4_defer_reply(skb, cdev, send_defer_abort_rpl);
|
||||
return;
|
||||
}
|
||||
|
||||
set_abort_rpl_wr(reply_skb, GET_TID(req), status);
|
||||
kfree_skb(skb);
|
||||
|
||||
set_wr_txq(reply_skb, CPL_PRIORITY_DATA, queue);
|
||||
if (csk_conn_inline(csk)) {
|
||||
struct l2t_entry *e = csk->l2t_entry;
|
||||
|
||||
if (e && sk->sk_state != TCP_SYN_RECV) {
|
||||
cxgb4_l2t_send(csk->egress_dev, reply_skb, e);
|
||||
return;
|
||||
}
|
||||
}
|
||||
cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb);
|
||||
}
|
||||
|
||||
static void chtls_send_abort_rpl(struct sock *sk, struct sk_buff *skb,
|
||||
struct chtls_dev *cdev,
|
||||
int status, int queue)
|
||||
|
@ -2078,9 +2058,9 @@ static void bl_abort_syn_rcv(struct sock *lsk, struct sk_buff *skb)
|
|||
queue = csk->txq_idx;
|
||||
|
||||
skb->sk = NULL;
|
||||
chtls_send_abort_rpl(child, skb, BLOG_SKB_CB(skb)->cdev,
|
||||
CPL_ABORT_NO_RST, queue);
|
||||
do_abort_syn_rcv(child, lsk);
|
||||
send_abort_rpl(child, skb, BLOG_SKB_CB(skb)->cdev,
|
||||
CPL_ABORT_NO_RST, queue);
|
||||
}
|
||||
|
||||
static int abort_syn_rcv(struct sock *sk, struct sk_buff *skb)
|
||||
|
@ -2110,8 +2090,8 @@ static int abort_syn_rcv(struct sock *sk, struct sk_buff *skb)
|
|||
if (!sock_owned_by_user(psk)) {
|
||||
int queue = csk->txq_idx;
|
||||
|
||||
chtls_send_abort_rpl(sk, skb, cdev, CPL_ABORT_NO_RST, queue);
|
||||
do_abort_syn_rcv(sk, psk);
|
||||
send_abort_rpl(sk, skb, cdev, CPL_ABORT_NO_RST, queue);
|
||||
} else {
|
||||
skb->sk = sk;
|
||||
BLOG_SKB_CB(skb)->backlog_rcv = bl_abort_syn_rcv;
|
||||
|
@ -2129,9 +2109,6 @@ static void chtls_abort_req_rss(struct sock *sk, struct sk_buff *skb)
|
|||
int queue = csk->txq_idx;
|
||||
|
||||
if (is_neg_adv(req->status)) {
|
||||
if (sk->sk_state == TCP_SYN_RECV)
|
||||
chtls_set_tcb_tflag(sk, 0, 0);
|
||||
|
||||
kfree_skb(skb);
|
||||
return;
|
||||
}
|
||||
|
@ -2158,12 +2135,12 @@ static void chtls_abort_req_rss(struct sock *sk, struct sk_buff *skb)
|
|||
if (sk->sk_state == TCP_SYN_RECV && !abort_syn_rcv(sk, skb))
|
||||
return;
|
||||
|
||||
chtls_release_resources(sk);
|
||||
chtls_conn_done(sk);
|
||||
}
|
||||
|
||||
chtls_send_abort_rpl(sk, skb, BLOG_SKB_CB(skb)->cdev,
|
||||
rst_status, queue);
|
||||
chtls_release_resources(sk);
|
||||
chtls_conn_done(sk);
|
||||
}
|
||||
|
||||
static void chtls_abort_rpl_rss(struct sock *sk, struct sk_buff *skb)
|
||||
|
|
|
@ -223,3 +223,4 @@ static struct platform_driver fs_enet_bb_mdio_driver = {
|
|||
};
|
||||
|
||||
module_platform_driver(fs_enet_bb_mdio_driver);
|
||||
MODULE_LICENSE("GPL");
|
||||
|
|
|
@ -224,3 +224,4 @@ static struct platform_driver fs_enet_fec_mdio_driver = {
|
|||
};
|
||||
|
||||
module_platform_driver(fs_enet_fec_mdio_driver);
|
||||
MODULE_LICENSE("GPL");
|
||||
|
|
|
@ -169,7 +169,7 @@ struct hclgevf_mbx_arq_ring {
|
|||
#define hclge_mbx_ring_ptr_move_crq(crq) \
|
||||
(crq->next_to_use = (crq->next_to_use + 1) % crq->desc_num)
|
||||
#define hclge_mbx_tail_ptr_move_arq(arq) \
|
||||
(arq.tail = (arq.tail + 1) % HCLGE_MBX_MAX_ARQ_MSG_SIZE)
|
||||
(arq.tail = (arq.tail + 1) % HCLGE_MBX_MAX_ARQ_MSG_NUM)
|
||||
#define hclge_mbx_head_ptr_move_arq(arq) \
|
||||
(arq.head = (arq.head + 1) % HCLGE_MBX_MAX_ARQ_MSG_SIZE)
|
||||
(arq.head = (arq.head + 1) % HCLGE_MBX_MAX_ARQ_MSG_NUM)
|
||||
#endif
|
||||
|
|
|
@ -752,7 +752,8 @@ static int hclge_get_sset_count(struct hnae3_handle *handle, int stringset)
|
|||
handle->flags |= HNAE3_SUPPORT_SERDES_SERIAL_LOOPBACK;
|
||||
handle->flags |= HNAE3_SUPPORT_SERDES_PARALLEL_LOOPBACK;
|
||||
|
||||
if (hdev->hw.mac.phydev) {
|
||||
if (hdev->hw.mac.phydev && hdev->hw.mac.phydev->drv &&
|
||||
hdev->hw.mac.phydev->drv->set_loopback) {
|
||||
count += 1;
|
||||
handle->flags |= HNAE3_SUPPORT_PHY_LOOPBACK;
|
||||
}
|
||||
|
@ -4537,8 +4538,8 @@ static int hclge_set_rss_tuple(struct hnae3_handle *handle,
|
|||
req->ipv4_sctp_en = tuple_sets;
|
||||
break;
|
||||
case SCTP_V6_FLOW:
|
||||
if ((nfc->data & RXH_L4_B_0_1) ||
|
||||
(nfc->data & RXH_L4_B_2_3))
|
||||
if (hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 &&
|
||||
(nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)))
|
||||
return -EINVAL;
|
||||
|
||||
req->ipv6_sctp_en = tuple_sets;
|
||||
|
@ -4730,6 +4731,8 @@ static void hclge_rss_init_cfg(struct hclge_dev *hdev)
|
|||
vport[i].rss_tuple_sets.ipv6_udp_en =
|
||||
HCLGE_RSS_INPUT_TUPLE_OTHER;
|
||||
vport[i].rss_tuple_sets.ipv6_sctp_en =
|
||||
hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 ?
|
||||
HCLGE_RSS_INPUT_TUPLE_SCTP_NO_PORT :
|
||||
HCLGE_RSS_INPUT_TUPLE_SCTP;
|
||||
vport[i].rss_tuple_sets.ipv6_fragment_en =
|
||||
HCLGE_RSS_INPUT_TUPLE_OTHER;
|
||||
|
|
|
@ -107,6 +107,8 @@
|
|||
#define HCLGE_D_IP_BIT BIT(2)
|
||||
#define HCLGE_S_IP_BIT BIT(3)
|
||||
#define HCLGE_V_TAG_BIT BIT(4)
|
||||
#define HCLGE_RSS_INPUT_TUPLE_SCTP_NO_PORT \
|
||||
(HCLGE_D_IP_BIT | HCLGE_S_IP_BIT | HCLGE_V_TAG_BIT)
|
||||
|
||||
#define HCLGE_RSS_TC_SIZE_0 1
|
||||
#define HCLGE_RSS_TC_SIZE_1 2
|
||||
|
|
|
@ -917,8 +917,8 @@ static int hclgevf_set_rss_tuple(struct hnae3_handle *handle,
|
|||
req->ipv4_sctp_en = tuple_sets;
|
||||
break;
|
||||
case SCTP_V6_FLOW:
|
||||
if ((nfc->data & RXH_L4_B_0_1) ||
|
||||
(nfc->data & RXH_L4_B_2_3))
|
||||
if (hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 &&
|
||||
(nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)))
|
||||
return -EINVAL;
|
||||
|
||||
req->ipv6_sctp_en = tuple_sets;
|
||||
|
@ -2502,7 +2502,10 @@ static void hclgevf_rss_init_cfg(struct hclgevf_dev *hdev)
|
|||
tuple_sets->ipv4_fragment_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
|
||||
tuple_sets->ipv6_tcp_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
|
||||
tuple_sets->ipv6_udp_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
|
||||
tuple_sets->ipv6_sctp_en = HCLGEVF_RSS_INPUT_TUPLE_SCTP;
|
||||
tuple_sets->ipv6_sctp_en =
|
||||
hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 ?
|
||||
HCLGEVF_RSS_INPUT_TUPLE_SCTP_NO_PORT :
|
||||
HCLGEVF_RSS_INPUT_TUPLE_SCTP;
|
||||
tuple_sets->ipv6_fragment_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER;
|
||||
}
|
||||
|
||||
|
|
|
@ -122,6 +122,8 @@
|
|||
#define HCLGEVF_D_IP_BIT BIT(2)
|
||||
#define HCLGEVF_S_IP_BIT BIT(3)
|
||||
#define HCLGEVF_V_TAG_BIT BIT(4)
|
||||
#define HCLGEVF_RSS_INPUT_TUPLE_SCTP_NO_PORT \
|
||||
(HCLGEVF_D_IP_BIT | HCLGEVF_S_IP_BIT | HCLGEVF_V_TAG_BIT)
|
||||
|
||||
#define HCLGEVF_STATS_TIMER_INTERVAL 36U
|
||||
|
||||
|
|
|
@ -4432,7 +4432,7 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
|
|||
struct bpf_prog *old_prog;
|
||||
|
||||
if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) {
|
||||
NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
|
||||
NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP");
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
|
|
|
@ -871,8 +871,10 @@ static int cgx_lmac_init(struct cgx *cgx)
|
|||
if (!lmac)
|
||||
return -ENOMEM;
|
||||
lmac->name = kcalloc(1, sizeof("cgx_fwi_xxx_yyy"), GFP_KERNEL);
|
||||
if (!lmac->name)
|
||||
return -ENOMEM;
|
||||
if (!lmac->name) {
|
||||
err = -ENOMEM;
|
||||
goto err_lmac_free;
|
||||
}
|
||||
sprintf(lmac->name, "cgx_fwi_%d_%d", cgx->cgx_id, i);
|
||||
lmac->lmac_id = i;
|
||||
lmac->cgx = cgx;
|
||||
|
@ -883,7 +885,7 @@ static int cgx_lmac_init(struct cgx *cgx)
|
|||
CGX_LMAC_FWI + i * 9),
|
||||
cgx_fwi_event_handler, 0, lmac->name, lmac);
|
||||
if (err)
|
||||
return err;
|
||||
goto err_irq;
|
||||
|
||||
/* Enable interrupt */
|
||||
cgx_write(cgx, lmac->lmac_id, CGXX_CMRX_INT_ENA_W1S,
|
||||
|
@ -895,6 +897,12 @@ static int cgx_lmac_init(struct cgx *cgx)
|
|||
}
|
||||
|
||||
return cgx_lmac_verify_fwi_version(cgx);
|
||||
|
||||
err_irq:
|
||||
kfree(lmac->name);
|
||||
err_lmac_free:
|
||||
kfree(lmac);
|
||||
return err;
|
||||
}
|
||||
|
||||
static int cgx_lmac_exit(struct cgx *cgx)
|
||||
|
|
|
@ -626,6 +626,11 @@ bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe,
|
|||
if (!reg_c0)
|
||||
return true;
|
||||
|
||||
/* If reg_c0 is not equal to the default flow tag then skb->mark
|
||||
* is not supported and must be reset back to 0.
|
||||
*/
|
||||
skb->mark = 0;
|
||||
|
||||
priv = netdev_priv(skb->dev);
|
||||
esw = priv->mdev->priv.eswitch;
|
||||
|
||||
|
|
|
@ -118,16 +118,17 @@ struct mlx5_ct_tuple {
|
|||
u16 zone;
|
||||
};
|
||||
|
||||
struct mlx5_ct_shared_counter {
|
||||
struct mlx5_ct_counter {
|
||||
struct mlx5_fc *counter;
|
||||
refcount_t refcount;
|
||||
bool is_shared;
|
||||
};
|
||||
|
||||
struct mlx5_ct_entry {
|
||||
struct rhash_head node;
|
||||
struct rhash_head tuple_node;
|
||||
struct rhash_head tuple_nat_node;
|
||||
struct mlx5_ct_shared_counter *shared_counter;
|
||||
struct mlx5_ct_counter *counter;
|
||||
unsigned long cookie;
|
||||
unsigned long restore_cookie;
|
||||
struct mlx5_ct_tuple tuple;
|
||||
|
@ -394,13 +395,14 @@ mlx5_tc_ct_set_tuple_match(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec,
|
|||
}
|
||||
|
||||
static void
|
||||
mlx5_tc_ct_shared_counter_put(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_entry *entry)
|
||||
mlx5_tc_ct_counter_put(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_entry *entry)
|
||||
{
|
||||
if (!refcount_dec_and_test(&entry->shared_counter->refcount))
|
||||
if (entry->counter->is_shared &&
|
||||
!refcount_dec_and_test(&entry->counter->refcount))
|
||||
return;
|
||||
|
||||
mlx5_fc_destroy(ct_priv->dev, entry->shared_counter->counter);
|
||||
kfree(entry->shared_counter);
|
||||
mlx5_fc_destroy(ct_priv->dev, entry->counter->counter);
|
||||
kfree(entry->counter);
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -699,7 +701,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
|
|||
attr->dest_ft = ct_priv->post_ct;
|
||||
attr->ft = nat ? ct_priv->ct_nat : ct_priv->ct;
|
||||
attr->outer_match_level = MLX5_MATCH_L4;
|
||||
attr->counter = entry->shared_counter->counter;
|
||||
attr->counter = entry->counter->counter;
|
||||
attr->flags |= MLX5_ESW_ATTR_FLAG_NO_IN_PORT;
|
||||
|
||||
mlx5_tc_ct_set_tuple_match(netdev_priv(ct_priv->netdev), spec, flow_rule);
|
||||
|
@ -732,13 +734,34 @@ err_attr:
|
|||
return err;
|
||||
}
|
||||
|
||||
static struct mlx5_ct_shared_counter *
|
||||
static struct mlx5_ct_counter *
|
||||
mlx5_tc_ct_counter_create(struct mlx5_tc_ct_priv *ct_priv)
|
||||
{
|
||||
struct mlx5_ct_counter *counter;
|
||||
int ret;
|
||||
|
||||
counter = kzalloc(sizeof(*counter), GFP_KERNEL);
|
||||
if (!counter)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
counter->is_shared = false;
|
||||
counter->counter = mlx5_fc_create(ct_priv->dev, true);
|
||||
if (IS_ERR(counter->counter)) {
|
||||
ct_dbg("Failed to create counter for ct entry");
|
||||
ret = PTR_ERR(counter->counter);
|
||||
kfree(counter);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
return counter;
|
||||
}
|
||||
|
||||
static struct mlx5_ct_counter *
|
||||
mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv,
|
||||
struct mlx5_ct_entry *entry)
|
||||
{
|
||||
struct mlx5_ct_tuple rev_tuple = entry->tuple;
|
||||
struct mlx5_ct_shared_counter *shared_counter;
|
||||
struct mlx5_core_dev *dev = ct_priv->dev;
|
||||
struct mlx5_ct_counter *shared_counter;
|
||||
struct mlx5_ct_entry *rev_entry;
|
||||
__be16 tmp_port;
|
||||
int ret;
|
||||
|
@ -767,25 +790,20 @@ mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv,
|
|||
rev_entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, &rev_tuple,
|
||||
tuples_ht_params);
|
||||
if (rev_entry) {
|
||||
if (refcount_inc_not_zero(&rev_entry->shared_counter->refcount)) {
|
||||
if (refcount_inc_not_zero(&rev_entry->counter->refcount)) {
|
||||
mutex_unlock(&ct_priv->shared_counter_lock);
|
||||
return rev_entry->shared_counter;
|
||||
return rev_entry->counter;
|
||||
}
|
||||
}
|
||||
mutex_unlock(&ct_priv->shared_counter_lock);
|
||||
|
||||
shared_counter = kzalloc(sizeof(*shared_counter), GFP_KERNEL);
|
||||
if (!shared_counter)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
shared_counter->counter = mlx5_fc_create(dev, true);
|
||||
if (IS_ERR(shared_counter->counter)) {
|
||||
ct_dbg("Failed to create counter for ct entry");
|
||||
ret = PTR_ERR(shared_counter->counter);
|
||||
kfree(shared_counter);
|
||||
shared_counter = mlx5_tc_ct_counter_create(ct_priv);
|
||||
if (IS_ERR(shared_counter)) {
|
||||
ret = PTR_ERR(shared_counter);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
shared_counter->is_shared = true;
|
||||
refcount_set(&shared_counter->refcount, 1);
|
||||
return shared_counter;
|
||||
}
|
||||
|
@ -798,10 +816,13 @@ mlx5_tc_ct_entry_add_rules(struct mlx5_tc_ct_priv *ct_priv,
|
|||
{
|
||||
int err;
|
||||
|
||||
entry->shared_counter = mlx5_tc_ct_shared_counter_get(ct_priv, entry);
|
||||
if (IS_ERR(entry->shared_counter)) {
|
||||
err = PTR_ERR(entry->shared_counter);
|
||||
ct_dbg("Failed to create counter for ct entry");
|
||||
if (nf_ct_acct_enabled(dev_net(ct_priv->netdev)))
|
||||
entry->counter = mlx5_tc_ct_counter_create(ct_priv);
|
||||
else
|
||||
entry->counter = mlx5_tc_ct_shared_counter_get(ct_priv, entry);
|
||||
|
||||
if (IS_ERR(entry->counter)) {
|
||||
err = PTR_ERR(entry->counter);
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -820,7 +841,7 @@ mlx5_tc_ct_entry_add_rules(struct mlx5_tc_ct_priv *ct_priv,
|
|||
err_nat:
|
||||
mlx5_tc_ct_entry_del_rule(ct_priv, entry, false);
|
||||
err_orig:
|
||||
mlx5_tc_ct_shared_counter_put(ct_priv, entry);
|
||||
mlx5_tc_ct_counter_put(ct_priv, entry);
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -918,7 +939,7 @@ mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv,
|
|||
rhashtable_remove_fast(&ct_priv->ct_tuples_ht, &entry->tuple_node,
|
||||
tuples_ht_params);
|
||||
mutex_unlock(&ct_priv->shared_counter_lock);
|
||||
mlx5_tc_ct_shared_counter_put(ct_priv, entry);
|
||||
mlx5_tc_ct_counter_put(ct_priv, entry);
|
||||
|
||||
}
|
||||
|
||||
|
@ -956,7 +977,7 @@ mlx5_tc_ct_block_flow_offload_stats(struct mlx5_ct_ft *ft,
|
|||
if (!entry)
|
||||
return -ENOENT;
|
||||
|
||||
mlx5_fc_query_cached(entry->shared_counter->counter, &bytes, &packets, &lastuse);
|
||||
mlx5_fc_query_cached(entry->counter->counter, &bytes, &packets, &lastuse);
|
||||
flow_stats_update(&f->stats, bytes, packets, 0, lastuse,
|
||||
FLOW_ACTION_HW_STATS_DELAYED);
|
||||
|
||||
|
|
|
@ -371,6 +371,15 @@ struct mlx5e_swp_spec {
|
|||
u8 tun_l4_proto;
|
||||
};
|
||||
|
||||
static inline void mlx5e_eseg_swp_offsets_add_vlan(struct mlx5_wqe_eth_seg *eseg)
|
||||
{
|
||||
/* SWP offsets are in 2-bytes words */
|
||||
eseg->swp_outer_l3_offset += VLAN_HLEN / 2;
|
||||
eseg->swp_outer_l4_offset += VLAN_HLEN / 2;
|
||||
eseg->swp_inner_l3_offset += VLAN_HLEN / 2;
|
||||
eseg->swp_inner_l4_offset += VLAN_HLEN / 2;
|
||||
}
|
||||
|
||||
static inline void
|
||||
mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg,
|
||||
struct mlx5e_swp_spec *swp_spec)
|
||||
|
|
|
@ -51,7 +51,7 @@ static inline bool mlx5_geneve_tx_allowed(struct mlx5_core_dev *mdev)
|
|||
}
|
||||
|
||||
static inline void
|
||||
mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
|
||||
mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg, u16 ihs)
|
||||
{
|
||||
struct mlx5e_swp_spec swp_spec = {};
|
||||
unsigned int offset = 0;
|
||||
|
@ -85,6 +85,8 @@ mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
|
|||
}
|
||||
|
||||
mlx5e_set_eseg_swp(skb, eseg, &swp_spec);
|
||||
if (skb_vlan_tag_present(skb) && ihs)
|
||||
mlx5e_eseg_swp_offsets_add_vlan(eseg);
|
||||
}
|
||||
|
||||
#else
|
||||
|
@ -163,7 +165,7 @@ static inline unsigned int mlx5e_accel_tx_ids_len(struct mlx5e_txqsq *sq,
|
|||
|
||||
static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv,
|
||||
struct sk_buff *skb,
|
||||
struct mlx5_wqe_eth_seg *eseg)
|
||||
struct mlx5_wqe_eth_seg *eseg, u16 ihs)
|
||||
{
|
||||
#ifdef CONFIG_MLX5_EN_IPSEC
|
||||
if (xfrm_offload(skb))
|
||||
|
@ -172,7 +174,7 @@ static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv,
|
|||
|
||||
#if IS_ENABLED(CONFIG_GENEVE)
|
||||
if (skb->encapsulation)
|
||||
mlx5e_tx_tunnel_accel(skb, eseg);
|
||||
mlx5e_tx_tunnel_accel(skb, eseg, ihs);
|
||||
#endif
|
||||
|
||||
return true;
|
||||
|
|
|
@ -1010,6 +1010,22 @@ static int mlx5e_get_link_ksettings(struct net_device *netdev,
|
|||
return mlx5e_ethtool_get_link_ksettings(priv, link_ksettings);
|
||||
}
|
||||
|
||||
static int mlx5e_speed_validate(struct net_device *netdev, bool ext,
|
||||
const unsigned long link_modes, u8 autoneg)
|
||||
{
|
||||
/* Extended link-mode has no speed limitations. */
|
||||
if (ext)
|
||||
return 0;
|
||||
|
||||
if ((link_modes & MLX5E_PROT_MASK(MLX5E_56GBASE_R4)) &&
|
||||
autoneg != AUTONEG_ENABLE) {
|
||||
netdev_err(netdev, "%s: 56G link speed requires autoneg enabled\n",
|
||||
__func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static u32 mlx5e_ethtool2ptys_adver_link(const unsigned long *link_modes)
|
||||
{
|
||||
u32 i, ptys_modes = 0;
|
||||
|
@ -1103,13 +1119,9 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
|
|||
link_modes = autoneg == AUTONEG_ENABLE ? ethtool2ptys_adver_func(adver) :
|
||||
mlx5e_port_speed2linkmodes(mdev, speed, !ext);
|
||||
|
||||
if ((link_modes & MLX5E_PROT_MASK(MLX5E_56GBASE_R4)) &&
|
||||
autoneg != AUTONEG_ENABLE) {
|
||||
netdev_err(priv->netdev, "%s: 56G link speed requires autoneg enabled\n",
|
||||
__func__);
|
||||
err = -EINVAL;
|
||||
err = mlx5e_speed_validate(priv->netdev, ext, link_modes, autoneg);
|
||||
if (err)
|
||||
goto out;
|
||||
}
|
||||
|
||||
link_modes = link_modes & eproto.cap;
|
||||
if (!link_modes) {
|
||||
|
|
|
@ -942,6 +942,7 @@ static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc,
|
|||
in = kvzalloc(inlen, GFP_KERNEL);
|
||||
if (!in) {
|
||||
kfree(ft->g);
|
||||
ft->g = NULL;
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
|
@ -1087,6 +1088,7 @@ static int mlx5e_create_inner_ttc_table_groups(struct mlx5e_ttc_table *ttc)
|
|||
in = kvzalloc(inlen, GFP_KERNEL);
|
||||
if (!in) {
|
||||
kfree(ft->g);
|
||||
ft->g = NULL;
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
|
@ -1390,6 +1392,7 @@ err_destroy_groups:
|
|||
ft->g[ft->num_groups] = NULL;
|
||||
mlx5e_destroy_groups(ft);
|
||||
kvfree(in);
|
||||
kfree(ft->g);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
|
|
@ -3161,7 +3161,8 @@ static void mlx5e_modify_admin_state(struct mlx5_core_dev *mdev,
|
|||
|
||||
mlx5_set_port_admin_status(mdev, state);
|
||||
|
||||
if (mlx5_eswitch_mode(mdev) != MLX5_ESWITCH_LEGACY)
|
||||
if (mlx5_eswitch_mode(mdev) == MLX5_ESWITCH_OFFLOADS ||
|
||||
!MLX5_CAP_GEN(mdev, uplink_follow))
|
||||
return;
|
||||
|
||||
if (state == MLX5_PORT_UP)
|
||||
|
|
|
@ -682,9 +682,9 @@ void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq)
|
|||
|
||||
static bool mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq,
|
||||
struct sk_buff *skb, struct mlx5e_accel_tx_state *accel,
|
||||
struct mlx5_wqe_eth_seg *eseg)
|
||||
struct mlx5_wqe_eth_seg *eseg, u16 ihs)
|
||||
{
|
||||
if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg)))
|
||||
if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg, ihs)))
|
||||
return false;
|
||||
|
||||
mlx5e_txwqe_build_eseg_csum(sq, skb, accel, eseg);
|
||||
|
@ -714,7 +714,8 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
|
|||
if (mlx5e_tx_skb_supports_mpwqe(skb, &attr)) {
|
||||
struct mlx5_wqe_eth_seg eseg = {};
|
||||
|
||||
if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg)))
|
||||
if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg,
|
||||
attr.ihs)))
|
||||
return NETDEV_TX_OK;
|
||||
|
||||
mlx5e_sq_xmit_mpwqe(sq, skb, &eseg, netdev_xmit_more());
|
||||
|
@ -731,7 +732,7 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
|
|||
/* May update the WQE, but may not post other WQEs. */
|
||||
mlx5e_accel_tx_finish(sq, wqe, &accel,
|
||||
(struct mlx5_wqe_inline_seg *)(wqe->data + wqe_attr.ds_cnt_inl));
|
||||
if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth)))
|
||||
if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth, attr.ihs)))
|
||||
return NETDEV_TX_OK;
|
||||
|
||||
mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, netdev_xmit_more());
|
||||
|
|
|
@ -95,22 +95,21 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
|
|||
return 0;
|
||||
}
|
||||
|
||||
if (!IS_ERR_OR_NULL(vport->egress.acl))
|
||||
return 0;
|
||||
if (!vport->egress.acl) {
|
||||
vport->egress.acl = esw_acl_table_create(esw, vport->vport,
|
||||
MLX5_FLOW_NAMESPACE_ESW_EGRESS,
|
||||
table_size);
|
||||
if (IS_ERR(vport->egress.acl)) {
|
||||
err = PTR_ERR(vport->egress.acl);
|
||||
vport->egress.acl = NULL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
vport->egress.acl = esw_acl_table_create(esw, vport->vport,
|
||||
MLX5_FLOW_NAMESPACE_ESW_EGRESS,
|
||||
table_size);
|
||||
if (IS_ERR(vport->egress.acl)) {
|
||||
err = PTR_ERR(vport->egress.acl);
|
||||
vport->egress.acl = NULL;
|
||||
goto out;
|
||||
err = esw_acl_egress_lgcy_groups_create(esw, vport);
|
||||
if (err)
|
||||
goto out;
|
||||
}
|
||||
|
||||
err = esw_acl_egress_lgcy_groups_create(esw, vport);
|
||||
if (err)
|
||||
goto out;
|
||||
|
||||
esw_debug(esw->dev,
|
||||
"vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
|
||||
vport->vport, vport->info.vlan, vport->info.qos);
|
||||
|
|
|
@ -564,7 +564,9 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
|
|||
struct mlx5_core_dev *tmp_dev;
|
||||
int i, err;
|
||||
|
||||
if (!MLX5_CAP_GEN(dev, vport_group_manager))
|
||||
if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
|
||||
!MLX5_CAP_GEN(dev, lag_master) ||
|
||||
MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_MAX_PORTS)
|
||||
return;
|
||||
|
||||
tmp_dev = mlx5_get_next_phys_dev(dev);
|
||||
|
@ -582,12 +584,9 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev)
|
|||
if (mlx5_lag_dev_add_pf(ldev, dev, netdev) < 0)
|
||||
return;
|
||||
|
||||
for (i = 0; i < MLX5_MAX_PORTS; i++) {
|
||||
tmp_dev = ldev->pf[i].dev;
|
||||
if (!tmp_dev || !MLX5_CAP_GEN(tmp_dev, lag_master) ||
|
||||
MLX5_CAP_GEN(tmp_dev, num_lag_ports) != MLX5_MAX_PORTS)
|
||||
for (i = 0; i < MLX5_MAX_PORTS; i++)
|
||||
if (!ldev->pf[i].dev)
|
||||
break;
|
||||
}
|
||||
|
||||
if (i >= MLX5_MAX_PORTS)
|
||||
ldev->flags |= MLX5_LAG_FLAG_READY;
|
||||
|
|
|
@ -1368,8 +1368,10 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
|
|||
MLX5_COREDEV_VF : MLX5_COREDEV_PF;
|
||||
|
||||
dev->priv.adev_idx = mlx5_adev_idx_alloc();
|
||||
if (dev->priv.adev_idx < 0)
|
||||
return dev->priv.adev_idx;
|
||||
if (dev->priv.adev_idx < 0) {
|
||||
err = dev->priv.adev_idx;
|
||||
goto adev_init_err;
|
||||
}
|
||||
|
||||
err = mlx5_mdev_init(dev, prof_sel);
|
||||
if (err)
|
||||
|
@ -1403,6 +1405,7 @@ pci_init_err:
|
|||
mlx5_mdev_uninit(dev);
|
||||
mdev_init_err:
|
||||
mlx5_adev_idx_free(dev->priv.adev_idx);
|
||||
adev_init_err:
|
||||
mlx5_devlink_free(devlink);
|
||||
|
||||
return err;
|
||||
|
|
|
@ -116,7 +116,7 @@ free:
|
|||
static void mlx5_rdma_del_roce_addr(struct mlx5_core_dev *dev)
|
||||
{
|
||||
mlx5_core_roce_gid_set(dev, 0, 0, 0,
|
||||
NULL, NULL, false, 0, 0);
|
||||
NULL, NULL, false, 0, 1);
|
||||
}
|
||||
|
||||
static void mlx5_rdma_make_default_gid(struct mlx5_core_dev *dev, union ib_gid *gid)
|
||||
|
|
|
@ -506,10 +506,14 @@ static int mac_sonic_platform_probe(struct platform_device *pdev)
|
|||
|
||||
err = register_netdev(dev);
|
||||
if (err)
|
||||
goto out;
|
||||
goto undo_probe;
|
||||
|
||||
return 0;
|
||||
|
||||
undo_probe:
|
||||
dma_free_coherent(lp->device,
|
||||
SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
|
||||
lp->descriptors, lp->descriptors_laddr);
|
||||
out:
|
||||
free_netdev(dev);
|
||||
|
||||
|
@ -584,12 +588,16 @@ static int mac_sonic_nubus_probe(struct nubus_board *board)
|
|||
|
||||
err = register_netdev(ndev);
|
||||
if (err)
|
||||
goto out;
|
||||
goto undo_probe;
|
||||
|
||||
nubus_set_drvdata(board, ndev);
|
||||
|
||||
return 0;
|
||||
|
||||
undo_probe:
|
||||
dma_free_coherent(lp->device,
|
||||
SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
|
||||
lp->descriptors, lp->descriptors_laddr);
|
||||
out:
|
||||
free_netdev(ndev);
|
||||
return err;
|
||||
|
|
|
@ -229,11 +229,14 @@ int xtsonic_probe(struct platform_device *pdev)
|
|||
sonic_msg_init(dev);
|
||||
|
||||
if ((err = register_netdev(dev)))
|
||||
goto out1;
|
||||
goto undo_probe1;
|
||||
|
||||
return 0;
|
||||
|
||||
out1:
|
||||
undo_probe1:
|
||||
dma_free_coherent(lp->device,
|
||||
SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
|
||||
lp->descriptors, lp->descriptors_laddr);
|
||||
release_region(dev->base_addr, SONIC_MEM_SIZE);
|
||||
out:
|
||||
free_netdev(dev);
|
||||
|
|
|
@ -78,6 +78,7 @@ config QED
|
|||
depends on PCI
|
||||
select ZLIB_INFLATE
|
||||
select CRC8
|
||||
select CRC32
|
||||
select NET_DEVLINK
|
||||
help
|
||||
This enables the support for Marvell FastLinQ adapters family.
|
||||
|
|
|
@ -64,6 +64,7 @@ struct emac_variant {
|
|||
* @variant: reference to the current board variant
|
||||
* @regmap: regmap for using the syscon
|
||||
* @internal_phy_powered: Does the internal PHY is enabled
|
||||
* @use_internal_phy: Is the internal PHY selected for use
|
||||
* @mux_handle: Internal pointer used by mdio-mux lib
|
||||
*/
|
||||
struct sunxi_priv_data {
|
||||
|
@ -74,6 +75,7 @@ struct sunxi_priv_data {
|
|||
const struct emac_variant *variant;
|
||||
struct regmap_field *regmap_field;
|
||||
bool internal_phy_powered;
|
||||
bool use_internal_phy;
|
||||
void *mux_handle;
|
||||
};
|
||||
|
||||
|
@ -539,8 +541,11 @@ static const struct stmmac_dma_ops sun8i_dwmac_dma_ops = {
|
|||
.dma_interrupt = sun8i_dwmac_dma_interrupt,
|
||||
};
|
||||
|
||||
static int sun8i_dwmac_power_internal_phy(struct stmmac_priv *priv);
|
||||
|
||||
static int sun8i_dwmac_init(struct platform_device *pdev, void *priv)
|
||||
{
|
||||
struct net_device *ndev = platform_get_drvdata(pdev);
|
||||
struct sunxi_priv_data *gmac = priv;
|
||||
int ret;
|
||||
|
||||
|
@ -554,13 +559,25 @@ static int sun8i_dwmac_init(struct platform_device *pdev, void *priv)
|
|||
|
||||
ret = clk_prepare_enable(gmac->tx_clk);
|
||||
if (ret) {
|
||||
if (gmac->regulator)
|
||||
regulator_disable(gmac->regulator);
|
||||
dev_err(&pdev->dev, "Could not enable AHB clock\n");
|
||||
return ret;
|
||||
goto err_disable_regulator;
|
||||
}
|
||||
|
||||
if (gmac->use_internal_phy) {
|
||||
ret = sun8i_dwmac_power_internal_phy(netdev_priv(ndev));
|
||||
if (ret)
|
||||
goto err_disable_clk;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err_disable_clk:
|
||||
clk_disable_unprepare(gmac->tx_clk);
|
||||
err_disable_regulator:
|
||||
if (gmac->regulator)
|
||||
regulator_disable(gmac->regulator);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void sun8i_dwmac_core_init(struct mac_device_info *hw,
|
||||
|
@ -831,7 +848,6 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child,
|
|||
struct sunxi_priv_data *gmac = priv->plat->bsp_priv;
|
||||
u32 reg, val;
|
||||
int ret = 0;
|
||||
bool need_power_ephy = false;
|
||||
|
||||
if (current_child ^ desired_child) {
|
||||
regmap_field_read(gmac->regmap_field, ®);
|
||||
|
@ -839,13 +855,12 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child,
|
|||
case DWMAC_SUN8I_MDIO_MUX_INTERNAL_ID:
|
||||
dev_info(priv->device, "Switch mux to internal PHY");
|
||||
val = (reg & ~H3_EPHY_MUX_MASK) | H3_EPHY_SELECT;
|
||||
|
||||
need_power_ephy = true;
|
||||
gmac->use_internal_phy = true;
|
||||
break;
|
||||
case DWMAC_SUN8I_MDIO_MUX_EXTERNAL_ID:
|
||||
dev_info(priv->device, "Switch mux to external PHY");
|
||||
val = (reg & ~H3_EPHY_MUX_MASK) | H3_EPHY_SHUTDOWN;
|
||||
need_power_ephy = false;
|
||||
gmac->use_internal_phy = false;
|
||||
break;
|
||||
default:
|
||||
dev_err(priv->device, "Invalid child ID %x\n",
|
||||
|
@ -853,7 +868,7 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child,
|
|||
return -EINVAL;
|
||||
}
|
||||
regmap_field_write(gmac->regmap_field, val);
|
||||
if (need_power_ephy) {
|
||||
if (gmac->use_internal_phy) {
|
||||
ret = sun8i_dwmac_power_internal_phy(priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
@ -883,22 +898,23 @@ static int sun8i_dwmac_register_mdio_mux(struct stmmac_priv *priv)
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
|
||||
static int sun8i_dwmac_set_syscon(struct device *dev,
|
||||
struct plat_stmmacenet_data *plat)
|
||||
{
|
||||
struct sunxi_priv_data *gmac = priv->plat->bsp_priv;
|
||||
struct device_node *node = priv->device->of_node;
|
||||
struct sunxi_priv_data *gmac = plat->bsp_priv;
|
||||
struct device_node *node = dev->of_node;
|
||||
int ret;
|
||||
u32 reg, val;
|
||||
|
||||
ret = regmap_field_read(gmac->regmap_field, &val);
|
||||
if (ret) {
|
||||
dev_err(priv->device, "Fail to read from regmap field.\n");
|
||||
dev_err(dev, "Fail to read from regmap field.\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
reg = gmac->variant->default_syscon_value;
|
||||
if (reg != val)
|
||||
dev_warn(priv->device,
|
||||
dev_warn(dev,
|
||||
"Current syscon value is not the default %x (expect %x)\n",
|
||||
val, reg);
|
||||
|
||||
|
@ -911,9 +927,9 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
|
|||
/* Force EPHY xtal frequency to 24MHz. */
|
||||
reg |= H3_EPHY_CLK_SEL;
|
||||
|
||||
ret = of_mdio_parse_addr(priv->device, priv->plat->phy_node);
|
||||
ret = of_mdio_parse_addr(dev, plat->phy_node);
|
||||
if (ret < 0) {
|
||||
dev_err(priv->device, "Could not parse MDIO addr\n");
|
||||
dev_err(dev, "Could not parse MDIO addr\n");
|
||||
return ret;
|
||||
}
|
||||
/* of_mdio_parse_addr returns a valid (0 ~ 31) PHY
|
||||
|
@ -929,17 +945,17 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
|
|||
|
||||
if (!of_property_read_u32(node, "allwinner,tx-delay-ps", &val)) {
|
||||
if (val % 100) {
|
||||
dev_err(priv->device, "tx-delay must be a multiple of 100\n");
|
||||
dev_err(dev, "tx-delay must be a multiple of 100\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
val /= 100;
|
||||
dev_dbg(priv->device, "set tx-delay to %x\n", val);
|
||||
dev_dbg(dev, "set tx-delay to %x\n", val);
|
||||
if (val <= gmac->variant->tx_delay_max) {
|
||||
reg &= ~(gmac->variant->tx_delay_max <<
|
||||
SYSCON_ETXDC_SHIFT);
|
||||
reg |= (val << SYSCON_ETXDC_SHIFT);
|
||||
} else {
|
||||
dev_err(priv->device, "Invalid TX clock delay: %d\n",
|
||||
dev_err(dev, "Invalid TX clock delay: %d\n",
|
||||
val);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -947,17 +963,17 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
|
|||
|
||||
if (!of_property_read_u32(node, "allwinner,rx-delay-ps", &val)) {
|
||||
if (val % 100) {
|
||||
dev_err(priv->device, "rx-delay must be a multiple of 100\n");
|
||||
dev_err(dev, "rx-delay must be a multiple of 100\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
val /= 100;
|
||||
dev_dbg(priv->device, "set rx-delay to %x\n", val);
|
||||
dev_dbg(dev, "set rx-delay to %x\n", val);
|
||||
if (val <= gmac->variant->rx_delay_max) {
|
||||
reg &= ~(gmac->variant->rx_delay_max <<
|
||||
SYSCON_ERXDC_SHIFT);
|
||||
reg |= (val << SYSCON_ERXDC_SHIFT);
|
||||
} else {
|
||||
dev_err(priv->device, "Invalid RX clock delay: %d\n",
|
||||
dev_err(dev, "Invalid RX clock delay: %d\n",
|
||||
val);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -968,7 +984,7 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
|
|||
if (gmac->variant->support_rmii)
|
||||
reg &= ~SYSCON_RMII_EN;
|
||||
|
||||
switch (priv->plat->interface) {
|
||||
switch (plat->interface) {
|
||||
case PHY_INTERFACE_MODE_MII:
|
||||
/* default */
|
||||
break;
|
||||
|
@ -982,8 +998,8 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
|
|||
reg |= SYSCON_RMII_EN | SYSCON_ETCS_EXT_GMII;
|
||||
break;
|
||||
default:
|
||||
dev_err(priv->device, "Unsupported interface mode: %s",
|
||||
phy_modes(priv->plat->interface));
|
||||
dev_err(dev, "Unsupported interface mode: %s",
|
||||
phy_modes(plat->interface));
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
|
@ -1004,17 +1020,10 @@ static void sun8i_dwmac_exit(struct platform_device *pdev, void *priv)
|
|||
struct sunxi_priv_data *gmac = priv;
|
||||
|
||||
if (gmac->variant->soc_has_internal_phy) {
|
||||
/* sun8i_dwmac_exit could be called with mdiomux uninit */
|
||||
if (gmac->mux_handle)
|
||||
mdio_mux_uninit(gmac->mux_handle);
|
||||
if (gmac->internal_phy_powered)
|
||||
sun8i_dwmac_unpower_internal_phy(gmac);
|
||||
}
|
||||
|
||||
sun8i_dwmac_unset_syscon(gmac);
|
||||
|
||||
reset_control_put(gmac->rst_ephy);
|
||||
|
||||
clk_disable_unprepare(gmac->tx_clk);
|
||||
|
||||
if (gmac->regulator)
|
||||
|
@ -1049,16 +1058,11 @@ static struct mac_device_info *sun8i_dwmac_setup(void *ppriv)
|
|||
{
|
||||
struct mac_device_info *mac;
|
||||
struct stmmac_priv *priv = ppriv;
|
||||
int ret;
|
||||
|
||||
mac = devm_kzalloc(priv->device, sizeof(*mac), GFP_KERNEL);
|
||||
if (!mac)
|
||||
return NULL;
|
||||
|
||||
ret = sun8i_dwmac_set_syscon(priv);
|
||||
if (ret)
|
||||
return NULL;
|
||||
|
||||
mac->pcsr = priv->ioaddr;
|
||||
mac->mac = &sun8i_dwmac_ops;
|
||||
mac->dma = &sun8i_dwmac_dma_ops;
|
||||
|
@ -1134,10 +1138,6 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
plat_dat = stmmac_probe_config_dt(pdev, &stmmac_res.mac);
|
||||
if (IS_ERR(plat_dat))
|
||||
return PTR_ERR(plat_dat);
|
||||
|
||||
gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL);
|
||||
if (!gmac)
|
||||
return -ENOMEM;
|
||||
|
@ -1201,11 +1201,15 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
|
|||
ret = of_get_phy_mode(dev->of_node, &interface);
|
||||
if (ret)
|
||||
return -EINVAL;
|
||||
plat_dat->interface = interface;
|
||||
|
||||
plat_dat = stmmac_probe_config_dt(pdev, &stmmac_res.mac);
|
||||
if (IS_ERR(plat_dat))
|
||||
return PTR_ERR(plat_dat);
|
||||
|
||||
/* platform data specifying hardware features and callbacks.
|
||||
* hardware features were copied from Allwinner drivers.
|
||||
*/
|
||||
plat_dat->interface = interface;
|
||||
plat_dat->rx_coe = STMMAC_RX_COE_TYPE2;
|
||||
plat_dat->tx_coe = 1;
|
||||
plat_dat->has_sun8i = true;
|
||||
|
@ -1214,9 +1218,13 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
|
|||
plat_dat->exit = sun8i_dwmac_exit;
|
||||
plat_dat->setup = sun8i_dwmac_setup;
|
||||
|
||||
ret = sun8i_dwmac_set_syscon(&pdev->dev, plat_dat);
|
||||
if (ret)
|
||||
goto dwmac_deconfig;
|
||||
|
||||
ret = sun8i_dwmac_init(pdev, plat_dat->bsp_priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
goto dwmac_syscon;
|
||||
|
||||
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
|
||||
if (ret)
|
||||
|
@ -1230,7 +1238,7 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
|
|||
if (gmac->variant->soc_has_internal_phy) {
|
||||
ret = get_ephy_nodes(priv);
|
||||
if (ret)
|
||||
goto dwmac_exit;
|
||||
goto dwmac_remove;
|
||||
ret = sun8i_dwmac_register_mdio_mux(priv);
|
||||
if (ret) {
|
||||
dev_err(&pdev->dev, "Failed to register mux\n");
|
||||
|
@ -1239,15 +1247,42 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
|
|||
} else {
|
||||
ret = sun8i_dwmac_reset(priv);
|
||||
if (ret)
|
||||
goto dwmac_exit;
|
||||
goto dwmac_remove;
|
||||
}
|
||||
|
||||
return ret;
|
||||
dwmac_mux:
|
||||
sun8i_dwmac_unset_syscon(gmac);
|
||||
reset_control_put(gmac->rst_ephy);
|
||||
clk_put(gmac->ephy_clk);
|
||||
dwmac_remove:
|
||||
stmmac_dvr_remove(&pdev->dev);
|
||||
dwmac_exit:
|
||||
sun8i_dwmac_exit(pdev, gmac);
|
||||
dwmac_syscon:
|
||||
sun8i_dwmac_unset_syscon(gmac);
|
||||
dwmac_deconfig:
|
||||
stmmac_remove_config_dt(pdev, plat_dat);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int sun8i_dwmac_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct net_device *ndev = platform_get_drvdata(pdev);
|
||||
struct stmmac_priv *priv = netdev_priv(ndev);
|
||||
struct sunxi_priv_data *gmac = priv->plat->bsp_priv;
|
||||
|
||||
if (gmac->variant->soc_has_internal_phy) {
|
||||
mdio_mux_uninit(gmac->mux_handle);
|
||||
sun8i_dwmac_unpower_internal_phy(gmac);
|
||||
reset_control_put(gmac->rst_ephy);
|
||||
clk_put(gmac->ephy_clk);
|
||||
}
|
||||
|
||||
stmmac_pltfr_remove(pdev);
|
||||
return ret;
|
||||
sun8i_dwmac_unset_syscon(gmac);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct of_device_id sun8i_dwmac_match[] = {
|
||||
|
@ -1269,7 +1304,7 @@ MODULE_DEVICE_TABLE(of, sun8i_dwmac_match);
|
|||
|
||||
static struct platform_driver sun8i_dwmac_driver = {
|
||||
.probe = sun8i_dwmac_probe,
|
||||
.remove = stmmac_pltfr_remove,
|
||||
.remove = sun8i_dwmac_remove,
|
||||
.driver = {
|
||||
.name = "dwmac-sun8i",
|
||||
.pm = &stmmac_pltfr_pm_ops,
|
||||
|
|
|
@@ -1199,7 +1199,10 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
     * accordingly. Otherwise, we should check here.
     */
    if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
        delayed_ndp_size = ALIGN(ctx->max_ndp_size, ctx->tx_ndp_modulus);
        delayed_ndp_size = ctx->max_ndp_size +
            max_t(u32,
                  ctx->tx_ndp_modulus,
                  ctx->tx_modulus + ctx->tx_remainder) - 1;
    else
        delayed_ndp_size = 0;

@@ -1410,7 +1413,8 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
    if (!(dev->driver_info->flags & FLAG_SEND_ZLP) &&
        skb_out->len > ctx->min_tx_pkt) {
        padding_count = ctx->tx_curr_size - skb_out->len;
        skb_put_zero(skb_out, padding_count);
        if (!WARN_ON(padding_count > ctx->tx_curr_size))
            skb_put_zero(skb_out, padding_count);
    } else if (skb_out->len < ctx->tx_curr_size &&
               (skb_out->len % dev->maxpacket) == 0) {
        skb_put_u8(skb_out, 0);	/* force short packet */

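The delayed_ndp_size change above reserves tail room for the worst case: the delayed NDP may have to be aligned either to tx_ndp_modulus or to the datagram alignment (tx_modulus + tx_remainder), so the reservation is max_ndp_size plus the larger alignment minus one. A minimal stand-alone sketch of that bound (illustrative only, not the driver code; the function name and parameter types here are made up):

    #include <stdint.h>

    /* Worst-case tail-room reservation, mirroring the kernel expression:
     * max_ndp_size + max(tx_ndp_modulus, tx_modulus + tx_remainder) - 1.
     */
    static uint32_t worst_case_ndp_reserve(uint32_t max_ndp_size,
                                           uint32_t tx_ndp_modulus,
                                           uint32_t tx_modulus,
                                           uint32_t tx_remainder)
    {
        uint32_t datagram_align = tx_modulus + tx_remainder;
        uint32_t align = tx_ndp_modulus > datagram_align ?
                         tx_ndp_modulus : datagram_align;

        return max_ndp_size + align - 1;
    }
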
@@ -282,6 +282,7 @@ config SLIC_DS26522
	tristate "Slic Maxim ds26522 card support"
	depends on SPI
	depends on FSL_SOC || ARCH_MXC || ARCH_LAYERSCAPE || COMPILE_TEST
	select BITREVERSE
	help
	  This module initializes and configures the slic maxim card
	  in T1 or E1 mode.

@@ -2,6 +2,7 @@
config WIL6210
	tristate "Wilocity 60g WiFi card wil6210 support"
	select WANT_DEV_COREDUMP
	select CRC32
	depends on CFG80211
	depends on PCI
	default n

@@ -64,6 +64,7 @@ config DP83640_PHY
	depends on NETWORK_PHY_TIMESTAMPING
	depends on PHYLIB
	depends on PTP_1588_CLOCK
	select CRC32
	help
	  Supports the DP83640 PHYTER with IEEE 1588 features.

@@ -78,6 +79,7 @@ config DP83640_PHY
config PTP_1588_CLOCK_INES
	tristate "ZHAW InES PTP time stamping IP core"
	depends on NETWORK_PHY_TIMESTAMPING
	depends on HAS_IOMEM
	depends on PHYLIB
	depends on PTP_1588_CLOCK
	help

@@ -1079,7 +1079,8 @@ struct qeth_card *qeth_get_card_by_busid(char *bus_id);
void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
                              int clear_start_mask);
int qeth_threads_running(struct qeth_card *, unsigned long);
int qeth_set_offline(struct qeth_card *card, bool resetting);
int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc,
                     bool resetting);

int qeth_send_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *,
                      int (*reply_cb)

@@ -5507,12 +5507,12 @@ out:
    return rc;
}

static int qeth_set_online(struct qeth_card *card)
static int qeth_set_online(struct qeth_card *card,
                           const struct qeth_discipline *disc)
{
    bool carrier_ok;
    int rc;

    mutex_lock(&card->discipline_mutex);
    mutex_lock(&card->conf_mutex);
    QETH_CARD_TEXT(card, 2, "setonlin");

@@ -5529,7 +5529,7 @@ static int qeth_set_online(struct qeth_card *card)
    /* no need for locking / error handling at this early stage: */
    qeth_set_real_num_tx_queues(card, qeth_tx_actual_queues(card));

    rc = card->discipline->set_online(card, carrier_ok);
    rc = disc->set_online(card, carrier_ok);
    if (rc)
        goto err_online;

@@ -5537,7 +5537,6 @@ static int qeth_set_online(struct qeth_card *card)
    kobject_uevent(&card->gdev->dev.kobj, KOBJ_CHANGE);

    mutex_unlock(&card->conf_mutex);
    mutex_unlock(&card->discipline_mutex);
    return 0;

err_online:

@@ -5552,15 +5551,14 @@ err_hardsetup:
    qdio_free(CARD_DDEV(card));

    mutex_unlock(&card->conf_mutex);
    mutex_unlock(&card->discipline_mutex);
    return rc;
}

int qeth_set_offline(struct qeth_card *card, bool resetting)
int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc,
                     bool resetting)
{
    int rc, rc2, rc3;

    mutex_lock(&card->discipline_mutex);
    mutex_lock(&card->conf_mutex);
    QETH_CARD_TEXT(card, 3, "setoffl");

@@ -5581,7 +5579,7 @@ int qeth_set_offline(struct qeth_card *card, bool resetting)

    cancel_work_sync(&card->rx_mode_work);

    card->discipline->set_offline(card);
    disc->set_offline(card);

    qeth_qdio_clear_card(card, 0);
    qeth_drain_output_queues(card);

@@ -5602,16 +5600,19 @@ int qeth_set_offline(struct qeth_card *card, bool resetting)
    kobject_uevent(&card->gdev->dev.kobj, KOBJ_CHANGE);

    mutex_unlock(&card->conf_mutex);
    mutex_unlock(&card->discipline_mutex);
    return 0;
}
EXPORT_SYMBOL_GPL(qeth_set_offline);

static int qeth_do_reset(void *data)
{
    const struct qeth_discipline *disc;
    struct qeth_card *card = data;
    int rc;

    /* Lock-free, other users will block until we are done. */
    disc = card->discipline;

    QETH_CARD_TEXT(card, 2, "recover1");
    if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD))
        return 0;

@@ -5619,8 +5620,8 @@ static int qeth_do_reset(void *data)
    dev_warn(&card->gdev->dev,
             "A recovery process has been started for the device\n");

    qeth_set_offline(card, true);
    rc = qeth_set_online(card);
    qeth_set_offline(card, disc, true);
    rc = qeth_set_online(card, disc);
    if (!rc) {
        dev_info(&card->gdev->dev,
                 "Device successfully recovered!\n");

@@ -6584,6 +6585,7 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev)
        break;
    default:
        card->info.layer_enforced = true;
        /* It's so early that we don't need the discipline_mutex yet. */
        rc = qeth_core_load_discipline(card, enforced_disc);
        if (rc)
            goto err_load;

@@ -6616,10 +6618,12 @@ static void qeth_core_remove_device(struct ccwgroup_device *gdev)

    QETH_CARD_TEXT(card, 2, "removedv");

    mutex_lock(&card->discipline_mutex);
    if (card->discipline) {
        card->discipline->remove(gdev);
        qeth_core_free_discipline(card);
    }
    mutex_unlock(&card->discipline_mutex);

    qeth_free_qdio_queues(card);

@@ -6634,6 +6638,7 @@ static int qeth_core_set_online(struct ccwgroup_device *gdev)
    int rc = 0;
    enum qeth_discipline_id def_discipline;

    mutex_lock(&card->discipline_mutex);
    if (!card->discipline) {
        def_discipline = IS_IQD(card) ? QETH_DISCIPLINE_LAYER3 :
                                        QETH_DISCIPLINE_LAYER2;

@@ -6647,16 +6652,23 @@ static int qeth_core_set_online(struct ccwgroup_device *gdev)
        }
    }

    rc = qeth_set_online(card);
    rc = qeth_set_online(card, card->discipline);

err:
    mutex_unlock(&card->discipline_mutex);
    return rc;
}

static int qeth_core_set_offline(struct ccwgroup_device *gdev)
{
    struct qeth_card *card = dev_get_drvdata(&gdev->dev);
    int rc;

    return qeth_set_offline(card, false);
    mutex_lock(&card->discipline_mutex);
    rc = qeth_set_offline(card, card->discipline, false);
    mutex_unlock(&card->discipline_mutex);

    return rc;
}

static void qeth_core_shutdown(struct ccwgroup_device *gdev)

@@ -2208,7 +2208,7 @@ static void qeth_l2_remove_device(struct ccwgroup_device *gdev)
    wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0);

    if (gdev->state == CCWGROUP_ONLINE)
        qeth_set_offline(card, false);
        qeth_set_offline(card, card->discipline, false);

    cancel_work_sync(&card->close_dev_work);
    if (card->dev->reg_state == NETREG_REGISTERED)

@@ -1813,7 +1813,7 @@ static netdev_features_t qeth_l3_osa_features_check(struct sk_buff *skb,
                                                    struct net_device *dev,
                                                    netdev_features_t features)
{
    if (qeth_get_ip_version(skb) != 4)
    if (vlan_get_protocol(skb) != htons(ETH_P_IP))
        features &= ~NETIF_F_HW_VLAN_CTAG_TX;
    return qeth_features_check(skb, dev, features);
}

@@ -1971,7 +1971,7 @@ static void qeth_l3_remove_device(struct ccwgroup_device *cgdev)
    wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0);

    if (cgdev->state == CCWGROUP_ONLINE)
        qeth_set_offline(card, false);
        qeth_set_offline(card, card->discipline, false);

    cancel_work_sync(&card->close_dev_work);
    if (card->dev->reg_state == NETREG_REGISTERED)

@@ -1280,7 +1280,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
    u8 ece_support[0x1];
    u8 reserved_at_a4[0x7];
    u8 log_max_srq[0x5];
    u8 reserved_at_b0[0x2];
    u8 reserved_at_b0[0x1];
    u8 uplink_follow[0x1];
    u8 ts_cqe_to_dest_cqn[0x1];
    u8 reserved_at_b3[0xd];

@@ -75,8 +75,9 @@ struct rtnl_link_stats {
 *
 * @rx_dropped: Number of packets received but not processed,
 *   e.g. due to lack of resources or unsupported protocol.
 *   For hardware interfaces this counter should not include packets
 *   dropped by the device which are counted separately in
 *   For hardware interfaces this counter may include packets discarded
 *   due to L2 address filtering but should not include packets dropped
 *   by the device due to buffer exhaustion which are counted separately in
 *   @rx_missed_errors (since procfs folds those two counters together).
 *
 * @tx_dropped: Number of packets dropped on their way to transmission,

@@ -159,6 +159,7 @@ again:
    }

    /* set info->task and info->tid */
    info->task = curr_task;
    if (curr_tid == info->tid) {
        curr_fd = info->fd;
    } else {

@@ -284,7 +284,8 @@ static int register_vlan_device(struct net_device *real_dev, u16 vlan_id)
    return 0;

out_free_newdev:
    if (new_dev->reg_state == NETREG_UNINITIALIZED)
    if (new_dev->reg_state == NETREG_UNINITIALIZED ||
        new_dev->reg_state == NETREG_UNREGISTERED)
        free_netdev(new_dev);
    return err;
}

@@ -302,7 +302,7 @@ static int __ip_finish_output(struct net *net, struct sock *sk, struct sk_buff *
    if (skb_is_gso(skb))
        return ip_finish_output_gso(net, sk, skb, mtu);

    if (skb->len > mtu || (IPCB(skb)->flags & IPSKB_FRAG_PMTU))
    if (skb->len > mtu || IPCB(skb)->frag_max_size)
        return ip_fragment(net, sk, skb, mtu, ip_finish_output2);

    return ip_finish_output2(net, sk, skb);

@@ -759,8 +759,11 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
        goto tx_error;
    }

    if (tnl_update_pmtu(dev, skb, rt, tnl_params->frag_off, inner_iph,
                        0, 0, false)) {
    df = tnl_params->frag_off;
    if (skb->protocol == htons(ETH_P_IP) && !tunnel->ignore_df)
        df |= (inner_iph->frag_off & htons(IP_DF));

    if (tnl_update_pmtu(dev, skb, rt, df, inner_iph, 0, 0, false)) {
        ip_rt_put(rt);
        goto tx_error;
    }

@@ -788,10 +791,6 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
        ttl = ip4_dst_hoplimit(&rt->dst);
    }

    df = tnl_params->frag_off;
    if (skb->protocol == htons(ETH_P_IP) && !tunnel->ignore_df)
        df |= (inner_iph->frag_off&htons(IP_DF));

    max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
                   + rt->dst.header_len + ip_encap_hlen(&tunnel->encap);
    if (max_headroom > dev->needed_headroom)

@@ -627,7 +627,7 @@ static int nh_check_attr_group(struct net *net, struct nlattr *tb[],
    for (i = NHA_GROUP_TYPE + 1; i < __NHA_MAX; ++i) {
        if (!tb[i])
            continue;
        if (tb[NHA_FDB])
        if (i == NHA_FDB)
            continue;
        NL_SET_ERR_MSG(extack,
                       "No other attributes can be set in nexthop groups");

@@ -1459,8 +1459,10 @@ static struct nexthop *nexthop_create_group(struct net *net,
    return nh;

out_no_nh:
    for (; i >= 0; --i)
    for (i--; i >= 0; --i) {
        list_del(&nhg->nh_entries[i].nh_list);
        nexthop_put(nhg->nh_entries[i].nh);
    }

    kfree(nhg->spare);
    kfree(nhg);

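The nexthop error-path change above fixes a classic unwind off-by-one: entry i is the one whose setup failed and was never linked, so cleanup has to start at i - 1. A minimal stand-alone sketch of the pattern (illustrative only; the allocation example below is made up and is not the nexthop code):

    #include <stdlib.h>

    /* If allocating slot i fails, only slots 0..i-1 are valid, so the
     * unwind loop must pre-decrement i before releasing anything.
     */
    static int alloc_all(void **slots, int n, size_t size)
    {
        int i;

        for (i = 0; i < n; i++) {
            slots[i] = malloc(size);
            if (!slots[i])
                goto undo;
        }
        return 0;

    undo:
        for (i--; i >= 0; --i)    /* never touch slots[i]; it was not set up */
            free(slots[i]);
        return -1;
    }
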
@@ -1025,6 +1025,8 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn,
{
    struct fib6_table *table = rt->fib6_table;

    /* Flush all cached dst in exception table */
    rt6_flush_exceptions(rt);
    fib6_drop_pcpu_from(rt, table);

    if (rt->nh && !list_empty(&rt->nh_list))

@@ -1927,9 +1929,6 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
    net->ipv6.rt6_stats->fib_rt_entries--;
    net->ipv6.rt6_stats->fib_discarded_routes++;

    /* Flush all cached dst in exception table */
    rt6_flush_exceptions(rt);

    /* Reset round-robin state, if necessary */
    if (rcu_access_pointer(fn->rr_ptr) == rt)
        fn->rr_ptr = NULL;

@@ -755,7 +755,7 @@ static void qrtr_ns_data_ready(struct sock *sk)
    queue_work(qrtr_ns.workqueue, &qrtr_ns.work);
}

void qrtr_ns_init(void)
int qrtr_ns_init(void)
{
    struct sockaddr_qrtr sq;
    int ret;

@@ -766,7 +766,7 @@ void qrtr_ns_init(void)
    ret = sock_create_kern(&init_net, AF_QIPCRTR, SOCK_DGRAM,
                           PF_QIPCRTR, &qrtr_ns.sock);
    if (ret < 0)
        return;
        return ret;

    ret = kernel_getsockname(qrtr_ns.sock, (struct sockaddr *)&sq);
    if (ret < 0) {

@@ -797,12 +797,13 @@ void qrtr_ns_init(void)
    if (ret < 0)
        goto err_wq;

    return;
    return 0;

err_wq:
    destroy_workqueue(qrtr_ns.workqueue);
err_sock:
    sock_release(qrtr_ns.sock);
    return ret;
}
EXPORT_SYMBOL_GPL(qrtr_ns_init);

@@ -1287,13 +1287,19 @@ static int __init qrtr_proto_init(void)
        return rc;

    rc = sock_register(&qrtr_family);
    if (rc) {
        proto_unregister(&qrtr_proto);
        return rc;
    }
    if (rc)
        goto err_proto;

    qrtr_ns_init();
    rc = qrtr_ns_init();
    if (rc)
        goto err_sock;

    return 0;

err_sock:
    sock_unregister(qrtr_family.family);
err_proto:
    proto_unregister(&qrtr_proto);
    return rc;
}
postcore_initcall(qrtr_proto_init);

@@ -29,7 +29,7 @@ void qrtr_endpoint_unregister(struct qrtr_endpoint *ep);

int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len);

void qrtr_ns_init(void);
int qrtr_ns_init(void);

void qrtr_ns_remove(void);

@@ -21,6 +21,7 @@ config CFG80211
	tristate "cfg80211 - wireless configuration API"
	depends on RFKILL || !RFKILL
	select FW_LOADER
	select CRC32
	# may need to update this when certificates are changed and are
	# using a different algorithm, though right now they shouldn't
	# (this is here rather than below to allow it to be a module)

@@ -11,7 +11,6 @@
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <net/if.h>
#include <linux/if.h>
#include <linux/rtnetlink.h>
#include <linux/socket.h>
#include <linux/tc_act/tc_bpf.h>

@@ -139,6 +139,8 @@ int eprintf(int level, int var, const char *fmt, ...)
#define pr_debug2(fmt, ...) pr_debugN(2, pr_fmt(fmt), ##__VA_ARGS__)
#define pr_err(fmt, ...) \
	eprintf(0, verbose, pr_fmt(fmt), ##__VA_ARGS__)
#define pr_info(fmt, ...) \
	eprintf(0, verbose, pr_fmt(fmt), ##__VA_ARGS__)

static bool is_btf_id(const char *name)
{

@@ -472,7 +474,7 @@ static int symbols_resolve(struct object *obj)
    int nr_funcs = obj->nr_funcs;
    int err, type_id;
    struct btf *btf;
    __u32 nr;
    __u32 nr_types;

    btf = btf__parse(obj->btf ?: obj->path, NULL);
    err = libbpf_get_error(btf);

@@ -483,12 +485,12 @@ static int symbols_resolve(struct object *obj)
    }

    err = -1;
    nr = btf__get_nr_types(btf);
    nr_types = btf__get_nr_types(btf);

    /*
     * Iterate all the BTF types and search for collected symbol IDs.
     */
    for (type_id = 1; type_id <= nr; type_id++) {
    for (type_id = 1; type_id <= nr_types; type_id++) {
        const struct btf_type *type;
        struct rb_root *root;
        struct btf_id *id;

@@ -526,8 +528,13 @@ static int symbols_resolve(struct object *obj)

        id = btf_id__find(root, str);
        if (id) {
            id->id = type_id;
            (*nr)--;
            if (id->id) {
                pr_info("WARN: multiple IDs found for '%s': %d, %d - using %d\n",
                        str, id->id, type_id, id->id);
            } else {
                id->id = type_id;
                (*nr)--;
            }
        }
    }

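The resolve_btfids hunk above makes duplicate matches explicit: the first type ID resolved for a symbol is kept, and later matches only emit a warning instead of silently overwriting it. A minimal stand-alone sketch of that policy (illustrative only; the struct and function names below are made up, not the tool's code):

    #include <stdio.h>
    #include <string.h>

    struct sym {
        const char *name;
        int id;            /* 0 means not resolved yet */
    };

    /* Keep the first resolved ID, warn about later conflicting matches. */
    static void record_match(struct sym *s, const char *name, int type_id)
    {
        if (strcmp(s->name, name) != 0)
            return;
        if (s->id) {
            fprintf(stderr, "WARN: multiple IDs found for '%s': %d, %d - using %d\n",
                    name, s->id, type_id, s->id);
            return;
        }
        s->id = type_id;
    }
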
@@ -4,7 +4,7 @@
 * Copyright 2020 Google LLC.
 */

#include "vmlinux.h"
#include <linux/bpf.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

@@ -869,7 +869,7 @@ ipv6_torture()
	pid3=$!
	ip netns exec me ping -f 2001:db8:101::2 >/dev/null 2>&1 &
	pid4=$!
	ip netns exec me mausezahn veth1 -B 2001:db8:101::2 -A 2001:db8:91::1 -c 0 -t tcp "dp=1-1023, flags=syn" >/dev/null 2>&1 &
	ip netns exec me mausezahn -6 veth1 -B 2001:db8:101::2 -A 2001:db8:91::1 -c 0 -t tcp "dp=1-1023, flags=syn" >/dev/null 2>&1 &
	pid5=$!

	sleep 300

@@ -162,7 +162,15 @@
# - list_flush_ipv6_exception
#	Using the same topology as in pmtu_ipv6, create exceptions, and check
#	they are shown when listing exception caches, gone after flushing them
#
# - pmtu_ipv4_route_change
#	Use the same topology as in pmtu_ipv4, but issue a route replacement
#	command and delete the corresponding device afterward. This tests for
#	proper cleanup of the PMTU exceptions by the route replacement path.
#	Device unregistration should complete successfully
#
# - pmtu_ipv6_route_change
#	Same as above but with IPv6

# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4

@@ -224,7 +232,9 @@ tests="
	cleanup_ipv4_exception		ipv4: cleanup of cached exceptions	1
	cleanup_ipv6_exception		ipv6: cleanup of cached exceptions	1
	list_flush_ipv4_exception	ipv4: list and flush cached exceptions	1
	list_flush_ipv6_exception	ipv6: list and flush cached exceptions	1"
	list_flush_ipv6_exception	ipv6: list and flush cached exceptions	1
	pmtu_ipv4_route_change		ipv4: PMTU exception w/route replace	1
	pmtu_ipv6_route_change		ipv6: PMTU exception w/route replace	1"

NS_A="ns-A"
NS_B="ns-B"

@@ -1782,6 +1792,63 @@ test_list_flush_ipv6_exception() {
	return ${fail}
}

test_pmtu_ipvX_route_change() {
	family=${1}

	setup namespaces routing || return 2
	trace "${ns_a}" veth_A-R1 "${ns_r1}" veth_R1-A \
	      "${ns_r1}" veth_R1-B "${ns_b}" veth_B-R1 \
	      "${ns_a}" veth_A-R2 "${ns_r2}" veth_R2-A \
	      "${ns_r2}" veth_R2-B "${ns_b}" veth_B-R2

	if [ ${family} -eq 4 ]; then
		ping=ping
		dst1="${prefix4}.${b_r1}.1"
		dst2="${prefix4}.${b_r2}.1"
		gw="${prefix4}.${a_r1}.2"
	else
		ping=${ping6}
		dst1="${prefix6}:${b_r1}::1"
		dst2="${prefix6}:${b_r2}::1"
		gw="${prefix6}:${a_r1}::2"
	fi

	# Set up initial MTU values
	mtu "${ns_a}" veth_A-R1 2000
	mtu "${ns_r1}" veth_R1-A 2000
	mtu "${ns_r1}" veth_R1-B 1400
	mtu "${ns_b}" veth_B-R1 1400

	mtu "${ns_a}" veth_A-R2 2000
	mtu "${ns_r2}" veth_R2-A 2000
	mtu "${ns_r2}" veth_R2-B 1500
	mtu "${ns_b}" veth_B-R2 1500

	# Create route exceptions
	run_cmd ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst1}
	run_cmd ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst2}

	# Check that exceptions have been created with the correct PMTU
	pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst1})"
	check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
	pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst2})"
	check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1

	# Replace the route from A to R1
	run_cmd ${ns_a} ip route change default via ${gw}

	# Delete the device in A
	run_cmd ${ns_a} ip link del "veth_A-R1"
}

test_pmtu_ipv4_route_change() {
	test_pmtu_ipvX_route_change 4
}

test_pmtu_ipv6_route_change() {
	test_pmtu_ipvX_route_change 6
}

usage() {
	echo
	echo "$0 [OPTIONS] [TEST]..."

@@ -5,6 +5,14 @@

readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)"

# set global exit status, but never reset nonzero one.
check_err()
{
	if [ $ret -eq 0 ]; then
		ret=$1
	fi
}

cleanup() {
	local -r jobs="$(jobs -p)"
	local -r ns="$(ip netns list|grep $PEER_NS)"

@@ -44,7 +52,9 @@ run_one() {
	# Hack: let bg programs complete the startup
	sleep 0.1
	./udpgso_bench_tx ${tx_args}
	ret=$?
	wait $(jobs -p)
	return $ret
}

run_test() {

@@ -87,8 +97,10 @@ run_one_nat() {

	sleep 0.1
	./udpgso_bench_tx ${tx_args}
	ret=$?
	kill -INT $pid
	wait $(jobs -p)
	return $ret
}

run_one_2sock() {

@@ -110,7 +122,9 @@ run_one_2sock() {
	sleep 0.1
	# first UDP GSO socket should be closed at this point
	./udpgso_bench_tx ${tx_args}
	ret=$?
	wait $(jobs -p)
	return $ret
}

run_nat_test() {

@@ -131,36 +145,54 @@ run_all() {
	local -r core_args="-l 4"
	local -r ipv4_args="${core_args} -4 -D 192.168.1.1"
	local -r ipv6_args="${core_args} -6 -D 2001:db8::1"
	ret=0

	echo "ipv4"
	run_test "no GRO" "${ipv4_args} -M 10 -s 1400" "-4 -n 10 -l 1400"
	check_err $?

	# explicitly check we are not receiving UDP_SEGMENT cmsg (-S -1)
	# when GRO does not take place
	run_test "no GRO chk cmsg" "${ipv4_args} -M 10 -s 1400" "-4 -n 10 -l 1400 -S -1"
	check_err $?

	# the GSO packets are aggregated because:
	# * veth schedule napi after each xmit
	# * segmentation happens in BH context, veth napi poll is delayed after
	#   the transmission of the last segment
	run_test "GRO" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720"
	check_err $?
	run_test "GRO chk cmsg" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720 -S 1472"
	check_err $?
	run_test "GRO with custom segment size" "${ipv4_args} -M 1 -s 14720 -S 500 " "-4 -n 1 -l 14720"
	check_err $?
	run_test "GRO with custom segment size cmsg" "${ipv4_args} -M 1 -s 14720 -S 500 " "-4 -n 1 -l 14720 -S 500"
	check_err $?

	run_nat_test "bad GRO lookup" "${ipv4_args} -M 1 -s 14720 -S 0" "-n 10 -l 1472"
	check_err $?
	run_2sock_test "multiple GRO socks" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720 -S 1472"
	check_err $?

	echo "ipv6"
	run_test "no GRO" "${ipv6_args} -M 10 -s 1400" "-n 10 -l 1400"
	check_err $?
	run_test "no GRO chk cmsg" "${ipv6_args} -M 10 -s 1400" "-n 10 -l 1400 -S -1"
	check_err $?
	run_test "GRO" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 1 -l 14520"
	check_err $?
	run_test "GRO chk cmsg" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 1 -l 14520 -S 1452"
	check_err $?
	run_test "GRO with custom segment size" "${ipv6_args} -M 1 -s 14520 -S 500" "-n 1 -l 14520"
	check_err $?
	run_test "GRO with custom segment size cmsg" "${ipv6_args} -M 1 -s 14520 -S 500" "-n 1 -l 14520 -S 500"
	check_err $?

	run_nat_test "bad GRO lookup" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 10 -l 1452"
	check_err $?
	run_2sock_test "multiple GRO socks" "${ipv6_args} -M 1 -s 14520 -S 0 " "-n 1 -l 14520 -S 1452"
	check_err $?
	return $ret
}

if [ ! -f ../bpf/xdp_dummy.o ]; then

@@ -180,3 +212,5 @@ elif [[ $1 == "__subprocess_2sock" ]]; then
	shift
	run_one_2sock $@
fi

exit $?

@@ -4,7 +4,8 @@
TEST_PROGS := nft_trans_stress.sh nft_nat.sh bridge_brouter.sh \
	conntrack_icmp_related.sh nft_flowtable.sh ipvs.sh \
	nft_concat_range.sh nft_conntrack_helper.sh \
	nft_queue.sh nft_meta.sh
	nft_queue.sh nft_meta.sh \
	ipip-conntrack-mtu.sh

LDLIBS = -lmnl
TEST_GEN_FILES = nf-queue

@@ -0,0 +1,206 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0

# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4

# Conntrack needs to reassemble fragments in order to have complete
# packets for rule matching.  Reassembly can lead to packet loss.

# Consider the following setup:
# +--------+       +---------+       +--------+
# |Router A|-------|Wanrouter|-------|Router B|
# |        |.IPIP..|         |..IPIP.|        |
# +--------+       +---------+       +--------+
#      /          mtu 1400           \
#     /                               \
#+--------+                           +--------+
#|Client A|                           |Client B|
#|        |                           |        |
#+--------+                           +--------+

# Router A and Router B use IPIP tunnel interfaces to tunnel traffic
# between Client A and Client B over WAN. Wanrouter has MTU 1400 set
# on its interfaces.

rnd=$(mktemp -u XXXXXXXX)
rx=$(mktemp)

r_a="ns-ra-$rnd"
r_b="ns-rb-$rnd"
r_w="ns-rw-$rnd"
c_a="ns-ca-$rnd"
c_b="ns-cb-$rnd"

checktool (){
	if ! $1 > /dev/null 2>&1; then
		echo "SKIP: Could not $2"
		exit $ksft_skip
	fi
}

checktool "iptables --version" "run test without iptables"
checktool "ip -Version" "run test without ip tool"
checktool "which nc" "run test without nc (netcat)"
checktool "ip netns add ${r_a}" "create net namespace"

for n in ${r_b} ${r_w} ${c_a} ${c_b};do
	ip netns add ${n}
done

cleanup() {
	for n in ${r_a} ${r_b} ${r_w} ${c_a} ${c_b};do
		ip netns del ${n}
	done
	rm -f ${rx}
}

trap cleanup EXIT

test_path() {
	msg="$1"

	ip netns exec ${c_b} nc -n -w 3 -q 3 -u -l -p 5000 > ${rx} < /dev/null &

	sleep 1
	for i in 1 2 3; do
		head -c1400 /dev/zero | tr "\000" "a" | ip netns exec ${c_a} nc -n -w 1 -u 192.168.20.2 5000
	done

	wait

	bytes=$(wc -c < ${rx})

	if [ $bytes -eq 1400 ];then
		echo "OK: PMTU $msg connection tracking"
	else
		echo "FAIL: PMTU $msg connection tracking: got $bytes, expected 1400"
		exit 1
	fi
}

# Detailed setup for Router A
# ---------------------------
# Interfaces:
# eth0: 10.2.2.1/24
# eth1: 192.168.10.1/24
# ipip0: No IP address, local 10.2.2.1 remote 10.4.4.1
# Routes:
# 192.168.20.0/24 dev ipip0 (192.168.20.0/24 is subnet of Client B)
# 10.4.4.1 via 10.2.2.254 (Router B via Wanrouter)
# No iptables rules at all.

ip link add veth0 netns ${r_a} type veth peer name veth0 netns ${r_w}
ip link add veth1 netns ${r_a} type veth peer name veth0 netns ${c_a}

l_addr="10.2.2.1"
r_addr="10.4.4.1"
ip netns exec ${r_a} ip link add ipip0 type ipip local ${l_addr} remote ${r_addr} mode ipip || exit $ksft_skip

for dev in lo veth0 veth1 ipip0; do
	ip -net ${r_a} link set $dev up
done

ip -net ${r_a} addr add 10.2.2.1/24 dev veth0
ip -net ${r_a} addr add 192.168.10.1/24 dev veth1

ip -net ${r_a} route add 192.168.20.0/24 dev ipip0
ip -net ${r_a} route add 10.4.4.0/24 via 10.2.2.254

ip netns exec ${r_a} sysctl -q net.ipv4.conf.all.forwarding=1 > /dev/null

# Detailed setup for Router B
# ---------------------------
# Interfaces:
# eth0: 10.4.4.1/24
# eth1: 192.168.20.1/24
# ipip0: No IP address, local 10.4.4.1 remote 10.2.2.1
# Routes:
# 192.168.10.0/24 dev ipip0 (192.168.10.0/24 is subnet of Client A)
# 10.2.2.1 via 10.4.4.254 (Router A via Wanrouter)
# No iptables rules at all.

ip link add veth0 netns ${r_b} type veth peer name veth1 netns ${r_w}
ip link add veth1 netns ${r_b} type veth peer name veth0 netns ${c_b}

l_addr="10.4.4.1"
r_addr="10.2.2.1"

ip netns exec ${r_b} ip link add ipip0 type ipip local ${l_addr} remote ${r_addr} mode ipip || exit $ksft_skip

for dev in lo veth0 veth1 ipip0; do
	ip -net ${r_b} link set $dev up
done

ip -net ${r_b} addr add 10.4.4.1/24 dev veth0
ip -net ${r_b} addr add 192.168.20.1/24 dev veth1

ip -net ${r_b} route add 192.168.10.0/24 dev ipip0
ip -net ${r_b} route add 10.2.2.0/24 via 10.4.4.254
ip netns exec ${r_b} sysctl -q net.ipv4.conf.all.forwarding=1 > /dev/null

# Client A
ip -net ${c_a} addr add 192.168.10.2/24 dev veth0
ip -net ${c_a} link set dev lo up
ip -net ${c_a} link set dev veth0 up
ip -net ${c_a} route add default via 192.168.10.1

# Client B
ip -net ${c_b} addr add 192.168.20.2/24 dev veth0
ip -net ${c_b} link set dev veth0 up
ip -net ${c_b} link set dev lo up
ip -net ${c_b} route add default via 192.168.20.1

# Wan
ip -net ${r_w} addr add 10.2.2.254/24 dev veth0
ip -net ${r_w} addr add 10.4.4.254/24 dev veth1

ip -net ${r_w} link set dev lo up
ip -net ${r_w} link set dev veth0 up mtu 1400
ip -net ${r_w} link set dev veth1 up mtu 1400

ip -net ${r_a} link set dev veth0 mtu 1400
ip -net ${r_b} link set dev veth0 mtu 1400

ip netns exec ${r_w} sysctl -q net.ipv4.conf.all.forwarding=1 > /dev/null

# Path MTU discovery
# ------------------
# Running tracepath from Client A to Client B shows PMTU discovery is working
# as expected:
#
# clienta:~# tracepath 192.168.20.2
# 1?: [LOCALHOST] pmtu 1500
# 1: 192.168.10.1 0.867ms
# 1: 192.168.10.1 0.302ms
# 2: 192.168.10.1 0.312ms pmtu 1480
# 2: no reply
# 3: 192.168.10.1 0.510ms pmtu 1380
# 3: 192.168.20.2 2.320ms reached
# Resume: pmtu 1380 hops 3 back 3

# ip netns exec ${c_a} traceroute --mtu 192.168.20.2

# Router A has learned PMTU (1400) to Router B from Wanrouter.
# Client A has learned PMTU (1400 - IPIP overhead = 1380) to Client B
# from Router A.

#Send large UDP packet
#---------------------
#Now we send a 1400 bytes UDP packet from Client A to Client B:

# clienta:~# head -c1400 /dev/zero | tr "\000" "a" | nc -u 192.168.20.2 5000
test_path "without"

# The IPv4 stack on Client A already knows the PMTU to Client B, so the
# UDP packet is sent as two fragments (1380 + 20). Router A forwards the
# fragments between eth1 and ipip0. The fragments fit into the tunnel and
# reach their destination.

#When sending the large UDP packet again, Router A now reassembles the
#fragments before routing the packet over ipip0. The resulting IPIP
#packet is too big (1400) for the tunnel PMTU (1380) to Router B, it is
#dropped on Router A before sending.

ip netns exec ${r_a} iptables -A FORWARD -m conntrack --ctstate NEW
test_path "with"