Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from David Miller:

 1) Fix free/alloc races in batmanadv, from Sven Eckelmann.

 2) Several leaks and other fixes in kTLS support of mlx5 driver, from
    Tariq Toukan.

 3) BPF devmap_hash cost calculation can overflow on 32-bit, from Toke
    Høiland-Jørgensen.

 4) Add an r8152 device ID, from Kazutoshi Noguchi.

 5) Missing include in ipv6's addrconf.c, from Ben Dooks.

 6) Use siphash in flow dissector, from Eric Dumazet. Attackers can
    easily infer the 32-bit secret otherwise etc.

 7) Several netdevice nesting depth fixes from Taehee Yoo.

 8) Fix several KCSAN reported errors, from Eric Dumazet. For example,
    when doing lockless skb_queue_empty() checks, and accessing
    sk_napi_id/sk_incoming_cpu lockless as well.

 9) Fix jumbo packet handling in RXRPC, from David Howells.

10) Bump SOMAXCONN and tcp_max_syn_backlog values, from Eric Dumazet.

11) Fix DMA synchronization in gve driver, from Yangchun Fu.

12) Several bpf offload fixes, from Jakub Kicinski.

13) Fix sk_page_frag() recursion during memory reclaim, from Tejun Heo.

14) Fix ping latency during high traffic rates in hisilicon driver,
    from Jiangfeng Xiao.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (146 commits)
  net: fix installing orphaned programs
  net: cls_bpf: fix NULL deref on offload filter removal
  selftests: bpf: Skip write only files in debugfs
  selftests: net: reuseport_dualstack: fix uninitalized parameter
  r8169: fix wrong PHY ID issue with RTL8168dp
  net: dsa: bcm_sf2: Fix IMP setup for port different than 8
  net: phylink: Fix phylink_dbg() macro
  gve: Fixes DMA synchronization.
  inet: stop leaking jiffies on the wire
  ixgbe: Remove duplicate clear_bit() call
  Documentation: networking: device drivers: Remove stray asterisks
  e1000: fix memory leaks
  i40e: Fix receive buffer starvation for AF_XDP
  igb: Fix constant media auto sense switching when no cable is connected
  net: ethernet: arc: add the missed clk_disable_unprepare
  igb: Enable media autosense for the i350.
  igb/igc: Don't warn on fatal read failures when the device is removed
  tcp: increase tcp_max_syn_backlog max value
  net: increase SOMAXCONN to 4096
  netdevsim: Fix use-after-free during device dismantle
  ...
commit 1204c70d9d
@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-==============================================================
-Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
-==============================================================
+=============================================================
+Linux Base Driver for the Intel(R) PRO/100 Family of Adapters
+=============================================================
 
 June 1, 2018
 
@@ -21,7 +21,7 @@ Contents
 In This Release
 ===============
 
-This file describes the Linux* Base Driver for the Intel(R) PRO/100 Family of
+This file describes the Linux Base Driver for the Intel(R) PRO/100 Family of
 Adapters. This driver includes support for Itanium(R)2-based systems.
 
 For questions related to hardware requirements, refer to the documentation
@@ -138,9 +138,9 @@ version 1.6 or later is required for this functionality.
 The latest release of ethtool can be found from
 https://www.kernel.org/pub/software/network/ethtool/
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is provided through the ethtool* utility. For instructions on
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is provided through the ethtool utility. For instructions on
 enabling WoL with ethtool, refer to the ethtool man page. WoL will be
 enabled on the system during the next shut down or reboot. For this
 driver version, in order to enable WoL, the e100 driver must be loaded

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-===========================================================
-Linux* Base Driver for Intel(R) Ethernet Network Connection
-===========================================================
+==========================================================
+Linux Base Driver for Intel(R) Ethernet Network Connection
+==========================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 1999 - 2013 Intel Corporation.
@@ -438,10 +438,10 @@ ethtool
 The latest release of ethtool can be found from
 https://www.kernel.org/pub/software/network/ethtool/
 
-Enabling Wake on LAN* (WoL)
----------------------------
+Enabling Wake on LAN (WoL)
+--------------------------
 
-WoL is configured through the ethtool* utility.
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot.
 For this driver version, in order to enable WoL, the e1000 driver must be

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-======================================================
-Linux* Driver for Intel(R) Ethernet Network Connection
-======================================================
+=====================================================
+Linux Driver for Intel(R) Ethernet Network Connection
+=====================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 2008-2018 Intel Corporation.
@@ -338,7 +338,7 @@ and higher cannot be forced. Use the autonegotiation advertising setting to
 manually set devices for 1 Gbps and higher.
 
 Speed, duplex, and autonegotiation advertising are configured through the
-ethtool* utility.
+ethtool utility.
 
 Caution: Only experienced network administrators should force speed and duplex
 or change autonegotiation advertising manually. The settings at the switch must
@@ -351,9 +351,9 @@ will not attempt to auto-negotiate with its link partner since those adapters
 operate only in full duplex and only at their native speed.
 
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is configured through the ethtool* utility.
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot. For
 this driver version, in order to enable WoL, the e1000e driver must be loaded

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-==============================================================
-Linux* Base Driver for Intel(R) Ethernet Multi-host Controller
-==============================================================
+=============================================================
+Linux Base Driver for Intel(R) Ethernet Multi-host Controller
+=============================================================
 
 August 20, 2018
 Copyright(c) 2015-2018 Intel Corporation.
@@ -120,8 +120,8 @@ rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 m|v|t|s|d|f|n|r
 Known Issues/Troubleshooting
 ============================
 
-Enabling SR-IOV in a 64-bit Microsoft* Windows Server* 2012/R2 guest OS under Linux KVM
----------------------------------------------------------------------------------------
+Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS under Linux KVM
+-------------------------------------------------------------------------------------
 KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
 includes traditional PCIe devices, as well as SR-IOV-capable devices based on
 the Intel Ethernet Controller XL710.

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-==================================================================
-Linux* Base Driver for the Intel(R) Ethernet Controller 700 Series
-==================================================================
+=================================================================
+Linux Base Driver for the Intel(R) Ethernet Controller 700 Series
+=================================================================
 
 Intel 40 Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
@@ -384,7 +384,7 @@ NOTE: You cannot set the speed for devices based on the Intel(R) Ethernet
 Network Adapter XXV710 based devices.
 
 Speed, duplex, and autonegotiation advertising are configured through the
-ethtool* utility.
+ethtool utility.
 
 Caution: Only experienced network administrators should force speed and duplex
 or change autonegotiation advertising manually. The settings at the switch must

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-==================================================================
-Linux* Base Driver for Intel(R) Ethernet Adaptive Virtual Function
-==================================================================
+=================================================================
+Linux Base Driver for Intel(R) Ethernet Adaptive Virtual Function
+=================================================================
 
 Intel Ethernet Adaptive Virtual Function Linux driver.
 Copyright(c) 2013-2018 Intel Corporation.
@@ -19,7 +19,7 @@ Contents
 Overview
 ========
 
-This file describes the iavf Linux* Base Driver. This driver was formerly
+This file describes the iavf Linux Base Driver. This driver was formerly
 called i40evf.
 
 The iavf driver supports the below mentioned virtual function devices and

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-===================================================================
-Linux* Base Driver for the Intel(R) Ethernet Connection E800 Series
-===================================================================
+==================================================================
+Linux Base Driver for the Intel(R) Ethernet Connection E800 Series
+==================================================================
 
 Intel ice Linux driver.
 Copyright(c) 2018 Intel Corporation.

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-===========================================================
-Linux* Base Driver for Intel(R) Ethernet Network Connection
-===========================================================
+==========================================================
+Linux Base Driver for Intel(R) Ethernet Network Connection
+==========================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
@@ -129,9 +129,9 @@ version is required for this functionality. Download it at:
 https://www.kernel.org/pub/software/network/ethtool/
 
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is configured through the ethtool* utility.
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot. For
 this driver version, in order to enable WoL, the igb driver must be loaded

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-============================================================
-Linux* Base Virtual Function Driver for Intel(R) 1G Ethernet
-============================================================
+===========================================================
+Linux Base Virtual Function Driver for Intel(R) 1G Ethernet
+===========================================================
 
 Intel Gigabit Virtual Function Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-=============================================================================
-Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
-=============================================================================
+===========================================================================
+Linux Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
+===========================================================================
 
 Intel 10 Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
@@ -519,8 +519,8 @@ The offload is also supported for ixgbe's VFs, but the VF must be set as
 Known Issues/Troubleshooting
 ============================
 
-Enabling SR-IOV in a 64-bit Microsoft* Windows Server* 2012/R2 guest OS
------------------------------------------------------------------------
+Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS
+---------------------------------------------------------------------
 Linux KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
 This includes traditional PCIe devices, as well as SR-IOV-capable devices based
 on the Intel Ethernet Controller XL710.

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-=============================================================
-Linux* Base Virtual Function Driver for Intel(R) 10G Ethernet
-=============================================================
+============================================================
+Linux Base Virtual Function Driver for Intel(R) 10G Ethernet
+============================================================
 
 Intel 10 Gigabit Virtual Function Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.

@@ -1,8 +1,8 @@
 .. SPDX-License-Identifier: GPL-2.0+
 
-==========================================================
-Linux* Driver for the Pensando(R) Ethernet adapter family
-==========================================================
+========================================================
+Linux Driver for the Pensando(R) Ethernet adapter family
+========================================================
 
 Pensando Linux Ethernet driver.
 Copyright(c) 2019 Pensando Systems, Inc

@@ -207,8 +207,8 @@ TCP variables:
 
 somaxconn - INTEGER
     Limit of socket listen() backlog, known in userspace as SOMAXCONN.
-    Defaults to 128. See also tcp_max_syn_backlog for additional tuning
-    for TCP sockets.
+    Defaults to 4096. (Was 128 before linux-5.4)
+    See also tcp_max_syn_backlog for additional tuning for TCP sockets.
 
 tcp_abort_on_overflow - BOOLEAN
     If listening service is too slow to accept new connections,
@@ -408,11 +408,14 @@ tcp_max_orphans - INTEGER
     up to ~64K of unswappable memory.
 
 tcp_max_syn_backlog - INTEGER
-    Maximal number of remembered connection requests, which have not
-    received an acknowledgment from connecting client.
+    Maximal number of remembered connection requests (SYN_RECV),
+    which have not received an acknowledgment from connecting client.
+    This is a per-listener limit.
     The minimal value is 128 for low memory machines, and it will
     increase in proportion to the memory of machine.
     If server suffers from overload, try increasing this number.
+    Remember to also check /proc/sys/net/core/somaxconn
+    A SYN_RECV request socket consumes about 304 bytes of memory.
 
 tcp_max_tw_buckets - INTEGER
     Maximal number of timewait sockets held by system simultaneously.

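A quick illustration of what the somaxconn change above means in practice:
the backlog argument passed to listen() is silently capped at
net.core.somaxconn, so a server asking for a deep accept queue now gets it on
a 5.4+ kernel without any sysctl tuning. A minimal sketch using plain POSIX
socket calls (port number and backlog value are arbitrary):

    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(8080),            /* arbitrary port */
            .sin_addr.s_addr = htonl(INADDR_ANY),
        };

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        /* The kernel clamps this 16384 to net.core.somaxconn: 128 on
         * pre-5.4 kernels, 4096 after this change. listen() returns 0
         * either way; the clamp is silent.
         */
        listen(fd, 16384);
        return 0;
    }
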
@@ -11408,7 +11408,6 @@ F: include/trace/events/tcp.h
 NETWORKING [TLS]
 M: Boris Pismenny <borisp@mellanox.com>
 M: Aviad Yehezkel <aviadye@mellanox.com>
 M: Dave Watson <davejwatson@fb.com>
 M: John Fastabend <john.fastabend@gmail.com>
 M: Daniel Borkmann <daniel@iogearbox.net>
 M: Jakub Kicinski <jakub.kicinski@netronome.com>

@@ -1297,7 +1297,7 @@ static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt)
     tp->write_seq = snd_isn;
     tp->snd_nxt = snd_isn;
     tp->snd_una = snd_isn;
-    inet_sk(sk)->inet_id = tp->write_seq ^ jiffies;
+    inet_sk(sk)->inet_id = prandom_u32();
     assign_rxopt(sk, opt);
 
     if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))

@@ -1702,7 +1702,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
         return peekmsg(sk, msg, len, nonblock, flags);
 
     if (sk_can_busy_loop(sk) &&
-        skb_queue_empty(&sk->sk_receive_queue) &&
+        skb_queue_empty_lockless(&sk->sk_receive_queue) &&
         sk->sk_state == TCP_ESTABLISHED)
         sk_busy_loop(sk, nonblock);
 

@@ -744,7 +744,7 @@ capi_poll(struct file *file, poll_table *wait)
 
     poll_wait(file, &(cdev->recvwait), wait);
     mask = EPOLLOUT | EPOLLWRNORM;
-    if (!skb_queue_empty(&cdev->recvqueue))
+    if (!skb_queue_empty_lockless(&cdev->recvqueue))
         mask |= EPOLLIN | EPOLLRDNORM;
     return mask;
 }

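For context on the skb_queue_empty() to skb_queue_empty_lockless()
conversions above: an sk_buff_head is a circular list whose head points to
itself when empty, and the lockless variant reads that pointer with
READ_ONCE() so the check can run safely without holding the queue lock. A
simplified rendering of the helper (the real one lives in
include/linux/skbuff.h; this sketch is not the verbatim kernel source):

    /* An empty sk_buff_head points back at itself. Checking emptiness
     * without the queue lock is only safe if the pointer is loaded
     * exactly once, hence READ_ONCE(): the compiler may neither tear
     * the load nor re-read a value a concurrent writer is updating.
     */
    static inline bool skb_queue_empty_lockless(const struct sk_buff_head *list)
    {
        return READ_ONCE(list->next) == (const struct sk_buff *)list;
    }
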
@@ -952,7 +952,7 @@ static int alb_upper_dev_walk(struct net_device *upper, void *_data)
     struct bond_vlan_tag *tags;
 
     if (is_vlan_dev(upper) &&
-        bond->nest_level == vlan_get_encap_level(upper) - 1) {
+        bond->dev->lower_level == upper->lower_level - 1) {
         if (upper->addr_assign_type == NET_ADDR_STOLEN) {
             alb_send_lp_vid(slave, mac_addr,
                     vlan_dev_vlan_proto(upper),

@@ -1733,8 +1733,6 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
         goto err_upper_unlink;
     }
 
-    bond->nest_level = dev_get_nest_level(bond_dev) + 1;
-
     /* If the mode uses primary, then the following is handled by
      * bond_change_active_slave().
      */
@@ -1816,7 +1814,8 @@ err_detach:
     slave_disable_netpoll(new_slave);
 
 err_close:
-    slave_dev->priv_flags &= ~IFF_BONDING;
+    if (!netif_is_bond_master(slave_dev))
+        slave_dev->priv_flags &= ~IFF_BONDING;
     dev_close(slave_dev);
 
 err_restore_mac:
@@ -1956,9 +1955,6 @@ static int __bond_release_one(struct net_device *bond_dev,
     if (!bond_has_slaves(bond)) {
         bond_set_carrier(bond);
         eth_hw_addr_random(bond_dev);
-        bond->nest_level = SINGLE_DEPTH_NESTING;
-    } else {
-        bond->nest_level = dev_get_nest_level(bond_dev) + 1;
     }
 
     unblock_netpoll_tx();
@@ -2017,7 +2013,8 @@ static int __bond_release_one(struct net_device *bond_dev,
     else
         dev_set_mtu(slave_dev, slave->original_mtu);
 
-    slave_dev->priv_flags &= ~IFF_BONDING;
+    if (!netif_is_bond_master(slave_dev))
+        slave_dev->priv_flags &= ~IFF_BONDING;
 
     bond_free_slave(slave);
 
@@ -3442,13 +3439,6 @@ static void bond_fold_stats(struct rtnl_link_stats64 *_res,
     }
 }
 
-static int bond_get_nest_level(struct net_device *bond_dev)
-{
-    struct bonding *bond = netdev_priv(bond_dev);
-
-    return bond->nest_level;
-}
-
 static void bond_get_stats(struct net_device *bond_dev,
                struct rtnl_link_stats64 *stats)
 {
@@ -3457,7 +3447,7 @@ static void bond_get_stats(struct net_device *bond_dev,
     struct list_head *iter;
     struct slave *slave;
 
-    spin_lock_nested(&bond->stats_lock, bond_get_nest_level(bond_dev));
+    spin_lock(&bond->stats_lock);
     memcpy(stats, &bond->bond_stats, sizeof(*stats));
 
     rcu_read_lock();
@@ -4268,7 +4258,6 @@ static const struct net_device_ops bond_netdev_ops = {
     .ndo_neigh_setup = bond_neigh_setup,
     .ndo_vlan_rx_add_vid = bond_vlan_rx_add_vid,
     .ndo_vlan_rx_kill_vid = bond_vlan_rx_kill_vid,
-    .ndo_get_lock_subclass = bond_get_nest_level,
 #ifdef CONFIG_NET_POLL_CONTROLLER
     .ndo_netpoll_setup = bond_netpoll_setup,
     .ndo_netpoll_cleanup = bond_netpoll_cleanup,
@@ -4296,7 +4285,6 @@ void bond_setup(struct net_device *bond_dev)
     struct bonding *bond = netdev_priv(bond_dev);
 
     spin_lock_init(&bond->mode_lock);
-    spin_lock_init(&bond->stats_lock);
     bond->params = bonding_defaults;
 
     /* Initialize pointers */
@@ -4365,6 +4353,7 @@ static void bond_uninit(struct net_device *bond_dev)
 
     list_del(&bond->bond_list);
 
+    lockdep_unregister_key(&bond->stats_lock_key);
     bond_debug_unregister(bond);
 }
 
@@ -4768,8 +4757,9 @@ static int bond_init(struct net_device *bond_dev)
     if (!bond->wq)
         return -ENOMEM;
 
-    bond->nest_level = SINGLE_DEPTH_NESTING;
-    netdev_lockdep_set_classes(bond_dev);
+    spin_lock_init(&bond->stats_lock);
+    lockdep_register_key(&bond->stats_lock_key);
+    lockdep_set_class(&bond->stats_lock, &bond->stats_lock_key);
 
     list_add_tail(&bond->bond_list, &bn->dev_list);
 

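The bonding hunks above replace the fragile per-bond nest_level bookkeeping
with a dynamically registered lockdep key, so each bond's stats_lock gets its
own lock class no matter how deeply devices are stacked. The general pattern,
sketched with a hypothetical driver structure (the lockdep calls themselves
are the real kernel API; everything else here is illustrative):

    struct my_dev {
        spinlock_t stats_lock;
        struct lock_class_key stats_lock_key;   /* one key per instance */
    };

    static void my_dev_init(struct my_dev *d)
    {
        spin_lock_init(&d->stats_lock);
        /* Register a fresh key and attach it: each device instance now
         * has its own lock class, so stacked-device locking no longer
         * needs spin_lock_nested() with a hand-maintained depth.
         */
        lockdep_register_key(&d->stats_lock_key);
        lockdep_set_class(&d->stats_lock, &d->stats_lock_key);
    }

    static void my_dev_uninit(struct my_dev *d)
    {
        /* The key must be unregistered before its memory is freed. */
        lockdep_unregister_key(&d->stats_lock_key);
    }
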
@@ -37,22 +37,11 @@ static void bcm_sf2_imp_setup(struct dsa_switch *ds, int port)
     unsigned int i;
     u32 reg, offset;
 
-    if (priv->type == BCM7445_DEVICE_ID)
-        offset = CORE_STS_OVERRIDE_IMP;
-    else
-        offset = CORE_STS_OVERRIDE_IMP2;
-
     /* Enable the port memories */
     reg = core_readl(priv, CORE_MEM_PSM_VDD_CTRL);
     reg &= ~P_TXQ_PSM_VDD(port);
     core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
 
-    /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
-    reg = core_readl(priv, CORE_IMP_CTL);
-    reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
-    reg &= ~(RX_DIS | TX_DIS);
-    core_writel(priv, reg, CORE_IMP_CTL);
-
     /* Enable forwarding */
     core_writel(priv, SW_FWDG_EN, CORE_SWMODE);
 
@@ -71,10 +60,27 @@ static void bcm_sf2_imp_setup(struct dsa_switch *ds, int port)
 
     b53_brcm_hdr_setup(ds, port);
 
-    /* Force link status for IMP port */
-    reg = core_readl(priv, offset);
-    reg |= (MII_SW_OR | LINK_STS);
-    core_writel(priv, reg, offset);
+    if (port == 8) {
+        if (priv->type == BCM7445_DEVICE_ID)
+            offset = CORE_STS_OVERRIDE_IMP;
+        else
+            offset = CORE_STS_OVERRIDE_IMP2;
+
+        /* Force link status for IMP port */
+        reg = core_readl(priv, offset);
+        reg |= (MII_SW_OR | LINK_STS);
+        core_writel(priv, reg, offset);
+
+        /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
+        reg = core_readl(priv, CORE_IMP_CTL);
+        reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
+        reg &= ~(RX_DIS | TX_DIS);
+        core_writel(priv, reg, CORE_IMP_CTL);
+    } else {
+        reg = core_readl(priv, CORE_G_PCTL_PORT(port));
+        reg &= ~(RX_DIS | TX_DIS);
+        core_writel(priv, reg, CORE_G_PCTL_PORT(port));
+    }
 }
 
 static void bcm_sf2_gphy_enable_set(struct dsa_switch *ds, bool enable)

@@ -26,8 +26,8 @@ config NET_DSA_SJA1105_PTP
 
 config NET_DSA_SJA1105_TAS
     bool "Support for the Time-Aware Scheduler on NXP SJA1105"
-    depends on NET_DSA_SJA1105
-    depends on NET_SCH_TAPRIO
+    depends on NET_DSA_SJA1105 && NET_SCH_TAPRIO
+    depends on NET_SCH_TAPRIO=y || NET_DSA_SJA1105=m
     help
       This enables support for the TTEthernet-based egress scheduling
       engine in the SJA1105 DSA driver, which is controlled using a

@@ -256,6 +256,9 @@ static int emac_rockchip_remove(struct platform_device *pdev)
     if (priv->regulator)
         regulator_disable(priv->regulator);
 
+    if (priv->soc_data->need_div_macclk)
+        clk_disable_unprepare(priv->macclk);
+
     free_netdev(ndev);
     return err;
 }

@@ -10382,7 +10382,8 @@ static void bnxt_cleanup_pci(struct bnxt *bp)
 {
     bnxt_unmap_bars(bp, bp->pdev);
     pci_release_regions(bp->pdev);
-    pci_disable_device(bp->pdev);
+    if (pci_is_enabled(bp->pdev))
+        pci_disable_device(bp->pdev);
 }
 
 static void bnxt_init_dflt_coal(struct bnxt *bp)
@@ -10669,14 +10670,11 @@ static void bnxt_fw_reset_task(struct work_struct *work)
             bp->fw_reset_state = BNXT_FW_RESET_STATE_RESET_FW;
         }
         /* fall through */
-    case BNXT_FW_RESET_STATE_RESET_FW: {
-        u32 wait_dsecs = bp->fw_health->post_reset_wait_dsecs;
-
+    case BNXT_FW_RESET_STATE_RESET_FW:
         bnxt_reset_all(bp);
         bp->fw_reset_state = BNXT_FW_RESET_STATE_ENABLE_DEV;
-        bnxt_queue_fw_reset_work(bp, wait_dsecs * HZ / 10);
+        bnxt_queue_fw_reset_work(bp, bp->fw_reset_min_dsecs * HZ / 10);
         return;
-    }
     case BNXT_FW_RESET_STATE_ENABLE_DEV:
         if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state) &&
             bp->fw_health) {

@@ -29,25 +29,20 @@ static int bnxt_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
     val = bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG);
     health_status = val & 0xffff;
 
-    if (health_status == BNXT_FW_STATUS_HEALTHY) {
-        rc = devlink_fmsg_string_pair_put(fmsg, "FW status",
-                          "Healthy;");
-        if (rc)
-            return rc;
-    } else if (health_status < BNXT_FW_STATUS_HEALTHY) {
-        rc = devlink_fmsg_string_pair_put(fmsg, "FW status",
-                          "Not yet completed initialization;");
+    if (health_status < BNXT_FW_STATUS_HEALTHY) {
+        rc = devlink_fmsg_string_pair_put(fmsg, "Description",
+                          "Not yet completed initialization");
         if (rc)
             return rc;
     } else if (health_status > BNXT_FW_STATUS_HEALTHY) {
-        rc = devlink_fmsg_string_pair_put(fmsg, "FW status",
-                          "Encountered fatal error and cannot recover;");
+        rc = devlink_fmsg_string_pair_put(fmsg, "Description",
+                          "Encountered fatal error and cannot recover");
         if (rc)
            return rc;
     }
 
     if (val >> 16) {
-        rc = devlink_fmsg_u32_pair_put(fmsg, "Error", val >> 16);
+        rc = devlink_fmsg_u32_pair_put(fmsg, "Error code", val >> 16);
         if (rc)
             return rc;
     }
@@ -215,25 +210,68 @@ enum bnxt_dl_param_id {
 
 static const struct bnxt_dl_nvm_param nvm_params[] = {
     {DEVLINK_PARAM_GENERIC_ID_ENABLE_SRIOV, NVM_OFF_ENABLE_SRIOV,
-     BNXT_NVM_SHARED_CFG, 1},
+     BNXT_NVM_SHARED_CFG, 1, 1},
     {DEVLINK_PARAM_GENERIC_ID_IGNORE_ARI, NVM_OFF_IGNORE_ARI,
-     BNXT_NVM_SHARED_CFG, 1},
+     BNXT_NVM_SHARED_CFG, 1, 1},
     {DEVLINK_PARAM_GENERIC_ID_MSIX_VEC_PER_PF_MAX,
-     NVM_OFF_MSIX_VEC_PER_PF_MAX, BNXT_NVM_SHARED_CFG, 10},
+     NVM_OFF_MSIX_VEC_PER_PF_MAX, BNXT_NVM_SHARED_CFG, 10, 4},
     {DEVLINK_PARAM_GENERIC_ID_MSIX_VEC_PER_PF_MIN,
-     NVM_OFF_MSIX_VEC_PER_PF_MIN, BNXT_NVM_SHARED_CFG, 7},
+     NVM_OFF_MSIX_VEC_PER_PF_MIN, BNXT_NVM_SHARED_CFG, 7, 4},
     {BNXT_DEVLINK_PARAM_ID_GRE_VER_CHECK, NVM_OFF_DIS_GRE_VER_CHECK,
-     BNXT_NVM_SHARED_CFG, 1},
+     BNXT_NVM_SHARED_CFG, 1, 1},
 };
 
+union bnxt_nvm_data {
+    u8 val8;
+    __le32 val32;
+};
+
+static void bnxt_copy_to_nvm_data(union bnxt_nvm_data *dst,
+                  union devlink_param_value *src,
+                  int nvm_num_bits, int dl_num_bytes)
+{
+    u32 val32 = 0;
+
+    if (nvm_num_bits == 1) {
+        dst->val8 = src->vbool;
+        return;
+    }
+    if (dl_num_bytes == 4)
+        val32 = src->vu32;
+    else if (dl_num_bytes == 2)
+        val32 = (u32)src->vu16;
+    else if (dl_num_bytes == 1)
+        val32 = (u32)src->vu8;
+    dst->val32 = cpu_to_le32(val32);
+}
+
+static void bnxt_copy_from_nvm_data(union devlink_param_value *dst,
+                    union bnxt_nvm_data *src,
+                    int nvm_num_bits, int dl_num_bytes)
+{
+    u32 val32;
+
+    if (nvm_num_bits == 1) {
+        dst->vbool = src->val8;
+        return;
+    }
+    val32 = le32_to_cpu(src->val32);
+    if (dl_num_bytes == 4)
+        dst->vu32 = val32;
+    else if (dl_num_bytes == 2)
+        dst->vu16 = (u16)val32;
+    else if (dl_num_bytes == 1)
+        dst->vu8 = (u8)val32;
+}
+
 static int bnxt_hwrm_nvm_req(struct bnxt *bp, u32 param_id, void *msg,
                  int msg_len, union devlink_param_value *val)
 {
     struct hwrm_nvm_get_variable_input *req = msg;
-    void *data_addr = NULL, *buf = NULL;
     struct bnxt_dl_nvm_param nvm_param;
-    int bytesize, idx = 0, rc, i;
+    union bnxt_nvm_data *data;
     dma_addr_t data_dma_addr;
+    int idx = 0, rc, i;
 
     /* Get/Set NVM CFG parameter is supported only on PFs */
     if (BNXT_VF(bp))
@@ -254,47 +292,31 @@ static int bnxt_hwrm_nvm_req(struct bnxt *bp, u32 param_id, void *msg,
     else if (nvm_param.dir_type == BNXT_NVM_FUNC_CFG)
         idx = bp->pf.fw_fid - BNXT_FIRST_PF_FID;
 
-    bytesize = roundup(nvm_param.num_bits, BITS_PER_BYTE) / BITS_PER_BYTE;
-    switch (bytesize) {
-    case 1:
-        if (nvm_param.num_bits == 1)
-            buf = &val->vbool;
-        else
-            buf = &val->vu8;
-        break;
-    case 2:
-        buf = &val->vu16;
-        break;
-    case 4:
-        buf = &val->vu32;
-        break;
-    default:
-        return -EFAULT;
-    }
-
-    data_addr = dma_alloc_coherent(&bp->pdev->dev, bytesize,
-                       &data_dma_addr, GFP_KERNEL);
-    if (!data_addr)
+    data = dma_alloc_coherent(&bp->pdev->dev, sizeof(*data),
+                  &data_dma_addr, GFP_KERNEL);
+    if (!data)
         return -ENOMEM;
 
     req->dest_data_addr = cpu_to_le64(data_dma_addr);
-    req->data_len = cpu_to_le16(nvm_param.num_bits);
+    req->data_len = cpu_to_le16(nvm_param.nvm_num_bits);
     req->option_num = cpu_to_le16(nvm_param.offset);
     req->index_0 = cpu_to_le16(idx);
     if (idx)
         req->dimensions = cpu_to_le16(1);
 
     if (req->req_type == cpu_to_le16(HWRM_NVM_SET_VARIABLE)) {
-        memcpy(data_addr, buf, bytesize);
+        bnxt_copy_to_nvm_data(data, val, nvm_param.nvm_num_bits,
+                      nvm_param.dl_num_bytes);
         rc = hwrm_send_message(bp, msg, msg_len, HWRM_CMD_TIMEOUT);
     } else {
         rc = hwrm_send_message_silent(bp, msg, msg_len,
                           HWRM_CMD_TIMEOUT);
+        if (!rc)
+            bnxt_copy_from_nvm_data(val, data,
+                        nvm_param.nvm_num_bits,
+                        nvm_param.dl_num_bytes);
     }
-    if (!rc && req->req_type == cpu_to_le16(HWRM_NVM_GET_VARIABLE))
-        memcpy(buf, data_addr, bytesize);
-
-    dma_free_coherent(&bp->pdev->dev, bytesize, data_addr, data_dma_addr);
+    dma_free_coherent(&bp->pdev->dev, sizeof(*data), data, data_dma_addr);
     if (rc == -EACCES)
         netdev_err(bp->dev, "PF does not have admin privileges to modify NVM config\n");
     return rc;

@@ -52,7 +52,8 @@ struct bnxt_dl_nvm_param {
     u16 id;
     u16 offset;
     u16 dir_type;
-    u16 num_bits;
+    u16 nvm_num_bits;
+    u8 dl_num_bytes;
 };
 
 void bnxt_devlink_health_report(struct bnxt *bp, unsigned long event);

@@ -695,10 +695,10 @@ static void uld_init(struct adapter *adap, struct cxgb4_lld_info *lld)
     lld->write_cmpl_support = adap->params.write_cmpl_support;
 }
 
-static void uld_attach(struct adapter *adap, unsigned int uld)
+static int uld_attach(struct adapter *adap, unsigned int uld)
 {
-    void *handle;
     struct cxgb4_lld_info lli;
+    void *handle;
 
     uld_init(adap, &lli);
     uld_queue_init(adap, uld, &lli);
@@ -708,7 +708,7 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
         dev_warn(adap->pdev_dev,
              "could not attach to the %s driver, error %ld\n",
              adap->uld[uld].name, PTR_ERR(handle));
-        return;
+        return PTR_ERR(handle);
     }
 
     adap->uld[uld].handle = handle;
@@ -716,22 +716,22 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
 
     if (adap->flags & CXGB4_FULL_INIT_DONE)
         adap->uld[uld].state_change(handle, CXGB4_STATE_UP);
+
+    return 0;
 }
 
-/**
- * cxgb4_register_uld - register an upper-layer driver
- * @type: the ULD type
- * @p: the ULD methods
+/* cxgb4_register_uld - register an upper-layer driver
+ * @type: the ULD type
+ * @p: the ULD methods
  *
- * Registers an upper-layer driver with this driver and notifies the ULD
- * about any presently available devices that support its type. Returns
- * %-EBUSY if a ULD of the same type is already registered.
+ * Registers an upper-layer driver with this driver and notifies the ULD
+ * about any presently available devices that support its type.
  */
 void cxgb4_register_uld(enum cxgb4_uld type,
             const struct cxgb4_uld_info *p)
 {
-    int ret = 0;
     struct adapter *adap;
+    int ret = 0;
 
     if (type >= CXGB4_ULD_MAX)
         return;
@@ -763,8 +763,12 @@ void cxgb4_register_uld(enum cxgb4_uld type,
         if (ret)
             goto free_irq;
         adap->uld[type] = *p;
-        uld_attach(adap, type);
+        ret = uld_attach(adap, type);
+        if (ret)
+            goto free_txq;
+        continue;
+free_txq:
+        release_sge_txq_uld(adap, type);
 free_irq:
         if (adap->flags & CXGB4_FULL_INIT_DONE)
             quiesce_rx_uld(adap, type);

@@ -3791,15 +3791,11 @@ int t4_sge_alloc_eth_txq(struct adapter *adap, struct sge_eth_txq *txq,
      * write the CIDX Updates into the Status Page at the end of the
      * TX Queue.
      */
-    c.autoequiqe_to_viid = htonl((dbqt
-                      ? FW_EQ_ETH_CMD_AUTOEQUIQE_F
-                      : FW_EQ_ETH_CMD_AUTOEQUEQE_F) |
+    c.autoequiqe_to_viid = htonl(FW_EQ_ETH_CMD_AUTOEQUEQE_F |
                      FW_EQ_ETH_CMD_VIID_V(pi->viid));
 
     c.fetchszm_to_iqid =
-        htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(dbqt
-                         ? HOSTFCMODE_INGRESS_QUEUE_X
-                         : HOSTFCMODE_STATUS_PAGE_X) |
+        htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(HOSTFCMODE_STATUS_PAGE_X) |
               FW_EQ_ETH_CMD_PCIECHN_V(pi->tx_chan) |
               FW_EQ_ETH_CMD_FETCHRO_F | FW_EQ_ETH_CMD_IQID_V(iqid));
 

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0
+/* SPDX-License-Identifier: GPL-2.0 */
 /* Register definitions for Gemini GMAC Ethernet device driver
  *
  * Copyright (C) 2006 Storlink, Corp.

@@ -727,6 +727,18 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
      */
     nfrags = skb_shinfo(skb)->nr_frags;
 
+    /* Setup HW checksumming */
+    csum_vlan = 0;
+    if (skb->ip_summed == CHECKSUM_PARTIAL &&
+        !ftgmac100_prep_tx_csum(skb, &csum_vlan))
+        goto drop;
+
+    /* Add VLAN tag */
+    if (skb_vlan_tag_present(skb)) {
+        csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG;
+        csum_vlan |= skb_vlan_tag_get(skb) & 0xffff;
+    }
+
     /* Get header len */
     len = skb_headlen(skb);
 
@@ -753,19 +765,6 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
     if (nfrags == 0)
         f_ctl_stat |= FTGMAC100_TXDES0_LTS;
     txdes->txdes3 = cpu_to_le32(map);
-
-    /* Setup HW checksumming */
-    csum_vlan = 0;
-    if (skb->ip_summed == CHECKSUM_PARTIAL &&
-        !ftgmac100_prep_tx_csum(skb, &csum_vlan))
-        goto drop;
-
-    /* Add VLAN tag */
-    if (skb_vlan_tag_present(skb)) {
-        csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG;
-        csum_vlan |= skb_vlan_tag_get(skb) & 0xffff;
-    }
-
     txdes->txdes1 = cpu_to_le32(csum_vlan);
 
     /* Next descriptor */

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright 2018 NXP
  */

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright 2013-2016 Freescale Semiconductor Inc.
  * Copyright 2016-2018 NXP

@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright 2013-2016 Freescale Semiconductor Inc.
  * Copyright 2016-2018 NXP

@@ -3558,7 +3558,7 @@ fec_probe(struct platform_device *pdev)
 
     for (i = 0; i < irq_cnt; i++) {
         snprintf(irq_name, sizeof(irq_name), "int%d", i);
-        irq = platform_get_irq_byname(pdev, irq_name);
+        irq = platform_get_irq_byname_optional(pdev, irq_name);
         if (irq < 0)
             irq = platform_get_irq(pdev, i);
         if (irq < 0) {

@@ -600,9 +600,9 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
 
     INIT_DELAYED_WORK(&fep->time_keep, fec_time_keep);
 
-    irq = platform_get_irq_byname(pdev, "pps");
+    irq = platform_get_irq_byname_optional(pdev, "pps");
     if (irq < 0)
-        irq = platform_get_irq(pdev, irq_idx);
+        irq = platform_get_irq_optional(pdev, irq_idx);
     /* Failure to get an irq is not fatal,
      * only the PTP_CLOCK_PPS clock events should stop
      */

@@ -289,6 +289,8 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
 
     len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD;
     page_info = &rx->data.page_info[idx];
+    dma_sync_single_for_cpu(&priv->pdev->dev, rx->data.qpl->page_buses[idx],
+                PAGE_SIZE, DMA_FROM_DEVICE);
 
     /* gvnic can only receive into registered segments. If the buffer
      * can't be recycled, our only choice is to copy the data out of

@@ -390,7 +390,21 @@ static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
     seg_desc->seg.seg_addr = cpu_to_be64(addr);
 }
 
-static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb)
+static void gve_dma_sync_for_device(struct device *dev, dma_addr_t *page_buses,
+                    u64 iov_offset, u64 iov_len)
+{
+    dma_addr_t dma;
+    u64 addr;
+
+    for (addr = iov_offset; addr < iov_offset + iov_len;
+         addr += PAGE_SIZE) {
+        dma = page_buses[addr / PAGE_SIZE];
+        dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
+    }
+}
+
+static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb,
+              struct device *dev)
 {
     int pad_bytes, hlen, hdr_nfrags, payload_nfrags, l4_hdr_offset;
     union gve_tx_desc *pkt_desc, *seg_desc;
@@ -432,6 +446,9 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb)
     skb_copy_bits(skb, 0,
               tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset,
               hlen);
+    gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses,
+                info->iov[hdr_nfrags - 1].iov_offset,
+                info->iov[hdr_nfrags - 1].iov_len);
     copy_offset = hlen;
 
     for (i = payload_iov; i < payload_nfrags + payload_iov; i++) {
@@ -445,6 +462,9 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb)
         skb_copy_bits(skb, copy_offset,
                   tx->tx_fifo.base + info->iov[i].iov_offset,
                   info->iov[i].iov_len);
+        gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses,
+                    info->iov[i].iov_offset,
+                    info->iov[i].iov_len);
         copy_offset += info->iov[i].iov_len;
     }
 
@@ -473,7 +493,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
         gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
         return NETDEV_TX_BUSY;
     }
-    nsegs = gve_tx_add_skb(tx, skb);
+    nsegs = gve_tx_add_skb(tx, skb, &priv->pdev->dev);
 
     netdev_tx_sent_queue(tx->netdev_txq, skb->len);
     skb_tx_timestamp(skb);

@@ -237,6 +237,7 @@ struct hip04_priv {
     dma_addr_t rx_phys[RX_DESC_NUM];
     unsigned int rx_head;
     unsigned int rx_buf_size;
+    unsigned int rx_cnt_remaining;
 
     struct device_node *phy_node;
     struct phy_device *phy;
@@ -575,7 +576,6 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
     struct hip04_priv *priv = container_of(napi, struct hip04_priv, napi);
     struct net_device *ndev = priv->ndev;
     struct net_device_stats *stats = &ndev->stats;
-    unsigned int cnt = hip04_recv_cnt(priv);
     struct rx_desc *desc;
     struct sk_buff *skb;
     unsigned char *buf;
@@ -588,8 +588,8 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
 
     /* clean up tx descriptors */
     tx_remaining = hip04_tx_reclaim(ndev, false);
-
-    while (cnt && !last) {
+    priv->rx_cnt_remaining += hip04_recv_cnt(priv);
+    while (priv->rx_cnt_remaining && !last) {
         buf = priv->rx_buf[priv->rx_head];
         skb = build_skb(buf, priv->rx_buf_size);
         if (unlikely(!skb)) {
@@ -635,11 +635,13 @@ refill:
         hip04_set_recv_desc(priv, phys);
 
         priv->rx_head = RX_NEXT(priv->rx_head);
-        if (rx >= budget)
+        if (rx >= budget) {
+            --priv->rx_cnt_remaining;
             goto done;
+        }
 
-        if (--cnt == 0)
-            cnt = hip04_recv_cnt(priv);
+        if (--priv->rx_cnt_remaining == 0)
+            priv->rx_cnt_remaining += hip04_recv_cnt(priv);
     }
 
     if (!(priv->reg_inten & RCV_INT)) {
@@ -724,6 +726,7 @@ static int hip04_mac_open(struct net_device *ndev)
     int i;
 
     priv->rx_head = 0;
+    priv->rx_cnt_remaining = 0;
     priv->tx_head = 0;
     priv->tx_tail = 0;
     hip04_reset_ppe(priv);
@@ -1038,7 +1041,6 @@ static int hip04_remove(struct platform_device *pdev)
 
     hip04_free_ring(ndev, d);
     unregister_netdev(ndev);
-    free_irq(ndev->irq, ndev);
     of_node_put(priv->phy_node);
     cancel_work_sync(&priv->tx_timeout_task);
     free_netdev(ndev);

@@ -607,6 +607,7 @@ static int e1000_set_ringparam(struct net_device *netdev,
         for (i = 0; i < adapter->num_rx_queues; i++)
             rxdr[i].count = rxdr->count;
 
+        err = 0;
         if (netif_running(adapter->netdev)) {
             /* Try to get new resources before deleting old */
             err = e1000_setup_all_rx_resources(adapter);
@@ -627,14 +628,13 @@ static int e1000_set_ringparam(struct net_device *netdev,
         adapter->rx_ring = rxdr;
         adapter->tx_ring = txdr;
         err = e1000_up(adapter);
-        if (err)
-            goto err_setup;
     }
     kfree(tx_old);
     kfree(rx_old);
 
     clear_bit(__E1000_RESETTING, &adapter->flags);
-    return 0;
+    return err;
 
 err_setup_tx:
     e1000_free_all_rx_resources(adapter);
 err_setup_rx:
@@ -646,7 +646,6 @@ err_alloc_rx:
 err_alloc_tx:
     if (netif_running(adapter->netdev))
         e1000_up(adapter);
-err_setup:
     clear_bit(__E1000_RESETTING, &adapter->flags);
     return err;
 }

@@ -157,11 +157,6 @@ static int i40e_xsk_umem_disable(struct i40e_vsi *vsi, u16 qid)
         err = i40e_queue_pair_enable(vsi, qid);
         if (err)
             return err;
-
-        /* Kick start the NAPI context so that receiving will start */
-        err = i40e_xsk_wakeup(vsi->netdev, qid, XDP_WAKEUP_RX);
-        if (err)
-            return err;
     }
 
     return 0;

@@ -466,7 +466,7 @@ static s32 igb_init_mac_params_82575(struct e1000_hw *hw)
               ? igb_setup_copper_link_82575
               : igb_setup_serdes_link_82575;
 
-    if (mac->type == e1000_82580) {
+    if (mac->type == e1000_82580 || mac->type == e1000_i350) {
         switch (hw->device_id) {
         /* feature not supported on these id's */
         case E1000_DEV_ID_DH89XXCC_SGMII:

@@ -753,7 +753,8 @@ u32 igb_rd32(struct e1000_hw *hw, u32 reg)
         struct net_device *netdev = igb->netdev;
         hw->hw_addr = NULL;
         netdev_err(netdev, "PCIe link lost\n");
-        WARN(1, "igb: Failed to read reg 0x%x!\n", reg);
+        WARN(pci_device_is_present(igb->pdev),
+             "igb: Failed to read reg 0x%x!\n", reg);
     }
 
     return value;
@@ -2064,7 +2065,8 @@ static void igb_check_swap_media(struct igb_adapter *adapter)
     if ((hw->phy.media_type == e1000_media_type_copper) &&
         (!(connsw & E1000_CONNSW_AUTOSENSE_EN))) {
         swap_now = true;
-    } else if (!(connsw & E1000_CONNSW_SERDESD)) {
+    } else if ((hw->phy.media_type != e1000_media_type_copper) &&
+           !(connsw & E1000_CONNSW_SERDESD)) {
         /* copper signal takes time to appear */
         if (adapter->copper_tries < 4) {
             adapter->copper_tries++;
@@ -2370,7 +2372,7 @@ void igb_reset(struct igb_adapter *adapter)
         adapter->ei.get_invariants(hw);
         adapter->flags &= ~IGB_FLAG_MEDIA_RESET;
     }
-    if ((mac->type == e1000_82575) &&
+    if ((mac->type == e1000_82575 || mac->type == e1000_i350) &&
         (adapter->flags & IGB_FLAG_MAS_ENABLE)) {
         igb_enable_mas(adapter);
     }

@@ -4047,7 +4047,8 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg)
         hw->hw_addr = NULL;
         netif_device_detach(netdev);
         netdev_err(netdev, "PCIe link lost, device now detached\n");
-        WARN(1, "igc: Failed to read reg 0x%x!\n", reg);
+        WARN(pci_device_is_present(igc->pdev),
+             "igc: Failed to read reg 0x%x!\n", reg);
     }
 
     return value;

@@ -4310,7 +4310,6 @@ static void ixgbe_set_rx_buffer_len(struct ixgbe_adapter *adapter)
         if (test_bit(__IXGBE_RX_FCOE, &rx_ring->state))
             set_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state);
 
-        clear_bit(__IXGBE_RX_BUILD_SKB_ENABLED, &rx_ring->state);
         if (adapter->flags2 & IXGBE_FLAG2_RX_LEGACY)
             continue;
 

@@ -160,16 +160,23 @@ static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
          (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS));
 }
 #else
-void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
-                struct mvneta_bm_pool *bm_pool, u8 port_map) {}
-void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool,
-             u8 port_map) {}
-int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf) { return 0; }
-int mvneta_bm_pool_refill(struct mvneta_bm *priv,
-              struct mvneta_bm_pool *bm_pool) {return 0; }
-struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id,
-                      enum mvneta_bm_type type, u8 port_id,
-                      int pkt_size) { return NULL; }
+static inline void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
+                      struct mvneta_bm_pool *bm_pool,
+                      u8 port_map) {}
+static inline void mvneta_bm_bufs_free(struct mvneta_bm *priv,
+                       struct mvneta_bm_pool *bm_pool,
+                       u8 port_map) {}
+static inline int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf)
+{ return 0; }
+static inline int mvneta_bm_pool_refill(struct mvneta_bm *priv,
+                    struct mvneta_bm_pool *bm_pool)
+{ return 0; }
+static inline struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv,
+                            u8 pool_id,
+                            enum mvneta_bm_type type,
+                            u8 port_id,
+                            int pkt_size)
+{ return NULL; }
 
 static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
                      struct mvneta_bm_pool *bm_pool,
@@ -178,7 +185,8 @@ static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
 static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
                     struct mvneta_bm_pool *bm_pool)
 { return 0; }
-struct mvneta_bm *mvneta_bm_get(struct device_node *node) { return NULL; }
-void mvneta_bm_put(struct mvneta_bm *priv) {}
+static inline struct mvneta_bm *mvneta_bm_get(struct device_node *node)
+{ return NULL; }
+static inline void mvneta_bm_put(struct mvneta_bm *priv) {}
 #endif /* CONFIG_MVNETA_BM */
 #endif

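The mvneta_bm hunk above applies the standard fix for stub helpers in a
header: when the Kconfig option is disabled, the no-op fallbacks must be
static inline, otherwise every translation unit that includes the header
emits its own external definition and the final link fails with
multiple-definition errors. A self-contained illustration of the pattern,
with hypothetical foo_* names:

    /* foo.h */
    #ifdef CONFIG_FOO
    int foo_setup(int id);          /* real implementation in foo.c */
    #else
    /* Without 'static inline', two .c files including this header would
     * each define foo_setup with external linkage, and linking them
     * together would fail with a multiple-definition error.
     */
    static inline int foo_setup(int id) { return 0; }
    #endif
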
@@ -471,12 +471,31 @@ void mlx4_init_quotas(struct mlx4_dev *dev)
         priv->mfunc.master.res_tracker.res_alloc[RES_MPT].quota[pf];
 }
 
-static int get_max_gauranteed_vfs_counter(struct mlx4_dev *dev)
+static int
+mlx4_calc_res_counter_guaranteed(struct mlx4_dev *dev,
+                 struct resource_allocator *res_alloc,
+                 int vf)
 {
-    /* reduce the sink counter */
-    return (dev->caps.max_counters - 1 -
-        (MLX4_PF_COUNTERS_PER_PORT * MLX4_MAX_PORTS))
-        / MLX4_MAX_PORTS;
+    struct mlx4_active_ports actv_ports;
+    int ports, counters_guaranteed;
+
+    /* For master, only allocate according to the number of phys ports */
+    if (vf == mlx4_master_func_num(dev))
+        return MLX4_PF_COUNTERS_PER_PORT * dev->caps.num_ports;
+
+    /* calculate real number of ports for the VF */
+    actv_ports = mlx4_get_active_ports(dev, vf);
+    ports = bitmap_weight(actv_ports.ports, dev->caps.num_ports);
+    counters_guaranteed = ports * MLX4_VF_COUNTERS_PER_PORT;
+
+    /* If we do not have enough counters for this VF, do not
+     * allocate any for it. '-1' to reduce the sink counter.
+     */
+    if ((res_alloc->res_reserved + counters_guaranteed) >
+        (dev->caps.max_counters - 1))
+        return 0;
+
+    return counters_guaranteed;
 }
 
 int mlx4_init_resource_tracker(struct mlx4_dev *dev)
@@ -484,7 +503,6 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
     struct mlx4_priv *priv = mlx4_priv(dev);
     int i, j;
     int t;
-    int max_vfs_guarantee_counter = get_max_gauranteed_vfs_counter(dev);
 
     priv->mfunc.master.res_tracker.slave_list =
         kcalloc(dev->num_slaves, sizeof(struct slave_list),
@@ -603,16 +621,8 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
                 break;
             case RES_COUNTER:
                 res_alloc->quota[t] = dev->caps.max_counters;
-                if (t == mlx4_master_func_num(dev))
-                    res_alloc->guaranteed[t] =
-                        MLX4_PF_COUNTERS_PER_PORT *
-                        MLX4_MAX_PORTS;
-                else if (t <= max_vfs_guarantee_counter)
-                    res_alloc->guaranteed[t] =
-                        MLX4_VF_COUNTERS_PER_PORT *
-                        MLX4_MAX_PORTS;
-                else
-                    res_alloc->guaranteed[t] = 0;
+                res_alloc->guaranteed[t] =
+                    mlx4_calc_res_counter_guaranteed(dev, res_alloc, t);
                 break;
             default:
                 break;

@@ -345,7 +345,7 @@ struct mlx5e_tx_wqe_info {
     u8 num_wqebbs;
     u8 num_dma;
 #ifdef CONFIG_MLX5_EN_TLS
-    skb_frag_t *resync_dump_frag;
+    struct page *resync_dump_frag_page;
 #endif
 };
 
@@ -410,6 +410,7 @@ struct mlx5e_txqsq {
     struct device *pdev;
     __be32 mkey_be;
     unsigned long state;
+    unsigned int hw_mtu;
     struct hwtstamp_config *tstamp;
     struct mlx5_clock *clock;
 

@@ -141,7 +141,7 @@ int mlx5e_hv_vhca_stats_create(struct mlx5e_priv *priv)
                "Failed to create hv vhca stats agent, err = %ld\n",
                PTR_ERR(agent));
 
-        kfree(priv->stats_agent.buf);
+        kvfree(priv->stats_agent.buf);
         return IS_ERR_OR_NULL(agent);
     }
 
@@ -157,5 +157,5 @@ void mlx5e_hv_vhca_stats_destroy(struct mlx5e_priv *priv)
         return;
 
     mlx5_hv_vhca_agent_destroy(priv->stats_agent.agent);
-    kfree(priv->stats_agent.buf);
+    kvfree(priv->stats_agent.buf);
 }

@@ -97,15 +97,19 @@ static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv,
     if (ret)
         return ret;
 
-    if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET)
+    if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET) {
+        ip_rt_put(rt);
         return -ENETUNREACH;
+    }
 #else
     return -EOPNOTSUPP;
 #endif
 
     ret = get_route_and_out_devs(priv, rt->dst.dev, route_dev, out_dev);
-    if (ret < 0)
+    if (ret < 0) {
+        ip_rt_put(rt);
         return ret;
+    }
 
     if (!(*out_ttl))
         *out_ttl = ip4_dst_hoplimit(&rt->dst);
@@ -149,8 +153,10 @@ static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv,
         *out_ttl = ip6_dst_hoplimit(dst);
 
     ret = get_route_and_out_devs(priv, dst->dev, route_dev, out_dev);
-    if (ret < 0)
+    if (ret < 0) {
+        dst_release(dst);
         return ret;
+    }
 #else
     return -EOPNOTSUPP;
 #endif

@@ -15,15 +15,14 @@
 #else
 /* TLS offload requires additional stop_room for:
  * - a resync SKB.
- * kTLS offload requires additional stop_room for:
- * - static params WQE,
- * - progress params WQE, and
- * - resync DUMP per frag.
+ * kTLS offload requires fixed additional stop_room for:
+ * - a static params WQE, and a progress params WQE.
+ * The additional MTU-depending room for the resync DUMP WQEs
+ * will be calculated and added in runtime.
  */
 #define MLX5E_SQ_TLS_ROOM \
     (MLX5_SEND_WQE_MAX_WQEBBS + \
-     MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS + \
-     MAX_SKB_FRAGS * MLX5E_KTLS_MAX_DUMP_WQEBBS)
+     MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS)
 #endif
 
 #define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start))
@@ -92,7 +91,7 @@ mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq, struct mlx5_wq_cyc *wq,
 
     /* fill sq frag edge with nops to avoid wqe wrapping two pages */
     for (; wi < edge_wi; wi++) {
-        wi->skb = NULL;
+        memset(wi, 0, sizeof(*wi));
         wi->num_wqebbs = 1;
         mlx5e_post_nop(wq, sq->sqn, &sq->pc);
     }

@@ -38,7 +38,7 @@ static int mlx5e_ktls_add(struct net_device *netdev, struct sock *sk,
         return -ENOMEM;
 
     tx_priv->expected_seq = start_offload_tcp_sn;
-    tx_priv->crypto_info = crypto_info;
+    tx_priv->crypto_info = *(struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
     mlx5e_set_ktls_tx_priv_ctx(tls_ctx, tx_priv);
 
     /* tc and underlay_qpn values are not in use for tls tis */

@@ -21,7 +21,14 @@
      MLX5_ST_SZ_BYTES(tls_progress_params))
 #define MLX5E_KTLS_PROGRESS_WQEBBS \
     (DIV_ROUND_UP(MLX5E_KTLS_PROGRESS_WQE_SZ, MLX5_SEND_WQE_BB))
-#define MLX5E_KTLS_MAX_DUMP_WQEBBS 2
+
+struct mlx5e_dump_wqe {
+    struct mlx5_wqe_ctrl_seg ctrl;
+    struct mlx5_wqe_data_seg data;
+};
+
+#define MLX5E_KTLS_DUMP_WQEBBS \
+    (DIV_ROUND_UP(sizeof(struct mlx5e_dump_wqe), MLX5_SEND_WQE_BB))
 
 enum {
     MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD = 0,
@@ -37,7 +44,7 @@ enum {
 
 struct mlx5e_ktls_offload_context_tx {
     struct tls_offload_context_tx *tx_ctx;
-    struct tls_crypto_info *crypto_info;
+    struct tls12_crypto_info_aes_gcm_128 crypto_info;
     u32 expected_seq;
     u32 tisn;
     u32 key_id;
@@ -86,14 +93,28 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
                      struct mlx5e_tx_wqe **wqe, u16 *pi);
 void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
                        struct mlx5e_tx_wqe_info *wi,
-                       struct mlx5e_sq_dma *dma);
+                       u32 *dma_fifo_cc);
 
+static inline u8
+mlx5e_ktls_dumps_num_wqebbs(struct mlx5e_txqsq *sq, unsigned int nfrags,
+                unsigned int sync_len)
+{
+    /* Given the MTU and sync_len, calculates an upper bound for the
+     * number of WQEBBs needed for the TX resync DUMP WQEs of a record.
+     */
+    return MLX5E_KTLS_DUMP_WQEBBS *
+           (nfrags + DIV_ROUND_UP(sync_len, sq->hw_mtu));
+}
 #else
 
 static inline void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv)
 {
 }
 
+static inline void
+mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
+                      struct mlx5e_tx_wqe_info *wi,
+                      u32 *dma_fifo_cc) {}
+
 #endif
 
 #endif /* __MLX5E_TLS_H__ */

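To make the kTLS stop-room arithmetic above concrete: a DUMP WQE is one
control segment plus one data segment (16 bytes each, assuming the usual
mlx5 segment sizes), so with a 64-byte basic block MLX5E_KTLS_DUMP_WQEBBS
evaluates to 1, and a resync of 4000 bytes spread over 3 frags at a
1500-byte hw_mtu budgets 1 * (3 + ceil(4000/1500)) = 6 WQEBBs. A standalone
re-computation under those assumed constants:

    #include <stdio.h>

    #define SEND_WQE_BB 64    /* basic block size, bytes (assumed) */
    #define DUMP_WQE_SZ 32    /* ctrl seg + data seg, 16 B each (assumed) */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
    #define DUMP_WQEBBS DIV_ROUND_UP(DUMP_WQE_SZ, SEND_WQE_BB)

    int main(void)
    {
        unsigned int nfrags = 3, sync_len = 4000, hw_mtu = 1500;

        /* mirrors mlx5e_ktls_dumps_num_wqebbs() from the hunk above */
        unsigned int wqebbs = DUMP_WQEBBS *
                      (nfrags + DIV_ROUND_UP(sync_len, hw_mtu));
        printf("resync DUMP budget: %u WQEBBs\n", wqebbs); /* prints 6 */
        return 0;
    }
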
@@ -24,17 +24,12 @@ enum {
static void
fill_static_params_ctx(void *ctx, struct mlx5e_ktls_offload_context_tx *priv_tx)
{
struct tls_crypto_info *crypto_info = priv_tx->crypto_info;
struct tls12_crypto_info_aes_gcm_128 *info;
struct tls12_crypto_info_aes_gcm_128 *info = &priv_tx->crypto_info;
char *initial_rn, *gcm_iv;
u16 salt_sz, rec_seq_sz;
char *salt, *rec_seq;
u8 tls_version;

if (WARN_ON(crypto_info->cipher_type != TLS_CIPHER_AES_GCM_128))
return;

info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
EXTRACT_INFO_FIELDS;

gcm_iv = MLX5_ADDR_OF(tls_static_params, ctx, gcm_iv);
@@ -108,16 +103,15 @@ build_progress_params(struct mlx5e_tx_wqe *wqe, u16 pc, u32 sqn,
}

static void tx_fill_wi(struct mlx5e_txqsq *sq,
u16 pi, u8 num_wqebbs,
skb_frag_t *resync_dump_frag,
u32 num_bytes)
u16 pi, u8 num_wqebbs, u32 num_bytes,
struct page *page)
{
struct mlx5e_tx_wqe_info *wi = &sq->db.wqe_info[pi];

wi->skb = NULL;
wi->num_wqebbs = num_wqebbs;
wi->resync_dump_frag = resync_dump_frag;
wi->num_bytes = num_bytes;
memset(wi, 0, sizeof(*wi));
wi->num_wqebbs = num_wqebbs;
wi->num_bytes = num_bytes;
wi->resync_dump_frag_page = page;
}

void mlx5e_ktls_tx_offload_set_pending(struct mlx5e_ktls_offload_context_tx *priv_tx)
@@ -145,7 +139,7 @@ post_static_params(struct mlx5e_txqsq *sq,

umr_wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_STATIC_UMR_WQE_SZ, &pi);
build_static_params(umr_wqe, sq->pc, sq->sqn, priv_tx, fence);
tx_fill_wi(sq, pi, MLX5E_KTLS_STATIC_WQEBBS, NULL, 0);
tx_fill_wi(sq, pi, MLX5E_KTLS_STATIC_WQEBBS, 0, NULL);
sq->pc += MLX5E_KTLS_STATIC_WQEBBS;
}

@@ -159,7 +153,7 @@ post_progress_params(struct mlx5e_txqsq *sq,

wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_PROGRESS_WQE_SZ, &pi);
build_progress_params(wqe, sq->pc, sq->sqn, priv_tx, fence);
tx_fill_wi(sq, pi, MLX5E_KTLS_PROGRESS_WQEBBS, NULL, 0);
tx_fill_wi(sq, pi, MLX5E_KTLS_PROGRESS_WQEBBS, 0, NULL);
sq->pc += MLX5E_KTLS_PROGRESS_WQEBBS;
}
@@ -169,6 +163,14 @@ mlx5e_ktls_tx_post_param_wqes(struct mlx5e_txqsq *sq,
bool skip_static_post, bool fence_first_post)
{
bool progress_fence = skip_static_post || !fence_first_post;
struct mlx5_wq_cyc *wq = &sq->wq;
u16 contig_wqebbs_room, pi;

pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
if (unlikely(contig_wqebbs_room <
MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS))
mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);

if (!skip_static_post)
post_static_params(sq, priv_tx, fence_first_post);
@@ -180,29 +182,36 @@ struct tx_sync_info {
u64 rcd_sn;
s32 sync_len;
int nr_frags;
skb_frag_t *frags[MAX_SKB_FRAGS];
skb_frag_t frags[MAX_SKB_FRAGS];
};

static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
u32 tcp_seq, struct tx_sync_info *info)
enum mlx5e_ktls_sync_retval {
MLX5E_KTLS_SYNC_DONE,
MLX5E_KTLS_SYNC_FAIL,
MLX5E_KTLS_SYNC_SKIP_NO_DATA,
};

static enum mlx5e_ktls_sync_retval
tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
u32 tcp_seq, struct tx_sync_info *info)
{
struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx;
enum mlx5e_ktls_sync_retval ret = MLX5E_KTLS_SYNC_DONE;
struct tls_record_info *record;
int remaining, i = 0;
unsigned long flags;
bool ret = true;

spin_lock_irqsave(&tx_ctx->lock, flags);
record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn);

if (unlikely(!record)) {
ret = false;
ret = MLX5E_KTLS_SYNC_FAIL;
goto out;
}

if (unlikely(tcp_seq < tls_record_start_seq(record))) {
if (!tls_record_is_start_marker(record))
ret = false;
ret = tls_record_is_start_marker(record) ?
MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL;
goto out;
}
@@ -211,13 +220,13 @@ static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
while (remaining > 0) {
skb_frag_t *frag = &record->frags[i];

__skb_frag_ref(frag);
get_page(skb_frag_page(frag));
remaining -= skb_frag_size(frag);
info->frags[i++] = frag;
info->frags[i++] = *frag;
}
/* reduce the part which will be sent with the original SKB */
if (remaining < 0)
skb_frag_size_add(info->frags[i - 1], remaining);
skb_frag_size_add(&info->frags[i - 1], remaining);
info->nr_frags = i;
out:
spin_unlock_irqrestore(&tx_ctx->lock, flags);
@@ -229,17 +238,12 @@ tx_post_resync_params(struct mlx5e_txqsq *sq,
struct mlx5e_ktls_offload_context_tx *priv_tx,
u64 rcd_sn)
{
struct tls_crypto_info *crypto_info = priv_tx->crypto_info;
struct tls12_crypto_info_aes_gcm_128 *info;
struct tls12_crypto_info_aes_gcm_128 *info = &priv_tx->crypto_info;
__be64 rn_be = cpu_to_be64(rcd_sn);
bool skip_static_post;
u16 rec_seq_sz;
char *rec_seq;

if (WARN_ON(crypto_info->cipher_type != TLS_CIPHER_AES_GCM_128))
return;

info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
rec_seq = info->rec_seq;
rec_seq_sz = sizeof(info->rec_seq);
@@ -250,11 +254,6 @@ tx_post_resync_params(struct mlx5e_txqsq *sq,
mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, skip_static_post, true);
}

struct mlx5e_dump_wqe {
struct mlx5_wqe_ctrl_seg ctrl;
struct mlx5_wqe_data_seg data;
};

static int
tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool first)
{
@@ -262,7 +261,6 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
struct mlx5_wqe_data_seg *dseg;
struct mlx5e_dump_wqe *wqe;
dma_addr_t dma_addr = 0;
u8 num_wqebbs;
u16 ds_cnt;
int fsz;
u16 pi;

@@ -270,7 +268,6 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
wqe = mlx5e_sq_fetch_wqe(sq, sizeof(*wqe), &pi);

ds_cnt = sizeof(*wqe) / MLX5_SEND_WQE_DS;
num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);

cseg = &wqe->ctrl;
dseg = &wqe->data;
@@ -291,24 +288,27 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
dseg->byte_count = cpu_to_be32(fsz);
mlx5e_dma_push(sq, dma_addr, fsz, MLX5E_DMA_MAP_PAGE);

tx_fill_wi(sq, pi, num_wqebbs, frag, fsz);
sq->pc += num_wqebbs;

WARN(num_wqebbs > MLX5E_KTLS_MAX_DUMP_WQEBBS,
"unexpected DUMP num_wqebbs, %d > %d",
num_wqebbs, MLX5E_KTLS_MAX_DUMP_WQEBBS);
tx_fill_wi(sq, pi, MLX5E_KTLS_DUMP_WQEBBS, fsz, skb_frag_page(frag));
sq->pc += MLX5E_KTLS_DUMP_WQEBBS;

return 0;
}

void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
struct mlx5e_tx_wqe_info *wi,
struct mlx5e_sq_dma *dma)
u32 *dma_fifo_cc)
{
struct mlx5e_sq_stats *stats = sq->stats;
struct mlx5e_sq_stats *stats;
struct mlx5e_sq_dma *dma;

if (!wi->resync_dump_frag_page)
return;

dma = mlx5e_dma_get(sq, (*dma_fifo_cc)++);
stats = sq->stats;

mlx5e_tx_dma_unmap(sq->pdev, dma);
__skb_frag_unref(wi->resync_dump_frag);
put_page(wi->resync_dump_frag_page);
stats->tls_dump_packets++;
stats->tls_dump_bytes += wi->num_bytes;
}
@@ -318,25 +318,31 @@ static void tx_post_fence_nop(struct mlx5e_txqsq *sq)
struct mlx5_wq_cyc *wq = &sq->wq;
u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);

tx_fill_wi(sq, pi, 1, NULL, 0);
tx_fill_wi(sq, pi, 1, 0, NULL);

mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc);
}

static struct sk_buff *
static enum mlx5e_ktls_sync_retval
mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
struct mlx5e_txqsq *sq,
struct sk_buff *skb,
int datalen,
u32 seq)
{
struct mlx5e_sq_stats *stats = sq->stats;
struct mlx5_wq_cyc *wq = &sq->wq;
enum mlx5e_ktls_sync_retval ret;
struct tx_sync_info info = {};
u16 contig_wqebbs_room, pi;
u8 num_wqebbs;
int i;
int i = 0;

if (!tx_sync_info_get(priv_tx, seq, &info)) {
ret = tx_sync_info_get(priv_tx, seq, &info);
if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) {
if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) {
stats->tls_skip_no_sync_data++;
return MLX5E_KTLS_SYNC_SKIP_NO_DATA;
}
/* We might get here if a retransmission reaches the driver
 * after the relevant record is acked.
 * It should be safe to drop the packet in this case
@@ -346,13 +352,8 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
}

if (unlikely(info.sync_len < 0)) {
u32 payload;
int headln;

headln = skb_transport_offset(skb) + tcp_hdrlen(skb);
payload = skb->len - headln;
if (likely(payload <= -info.sync_len))
return skb;
if (likely(datalen <= -info.sync_len))
return MLX5E_KTLS_SYNC_DONE;

stats->tls_drop_bypass_req++;
goto err_out;
@@ -360,30 +361,62 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,

stats->tls_ooo++;

num_wqebbs = MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS +
(info.nr_frags ? info.nr_frags * MLX5E_KTLS_MAX_DUMP_WQEBBS : 1);
tx_post_resync_params(sq, priv_tx, info.rcd_sn);

/* If no dump WQE was sent, we need to have a fence NOP WQE before the
 * actual data xmit.
 */
if (!info.nr_frags) {
tx_post_fence_nop(sq);
return MLX5E_KTLS_SYNC_DONE;
}

num_wqebbs = mlx5e_ktls_dumps_num_wqebbs(sq, info.nr_frags, info.sync_len);
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);

if (unlikely(contig_wqebbs_room < num_wqebbs))
mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);

tx_post_resync_params(sq, priv_tx, info.rcd_sn);

for (i = 0; i < info.nr_frags; i++)
if (tx_post_resync_dump(sq, info.frags[i], priv_tx->tisn, !i))
goto err_out;
for (; i < info.nr_frags; i++) {
unsigned int orig_fsz, frag_offset = 0, n = 0;
skb_frag_t *f = &info.frags[i];

/* If no dump WQE was sent, we need to have a fence NOP WQE before the
 * actual data xmit.
 */
if (!info.nr_frags)
tx_post_fence_nop(sq);
orig_fsz = skb_frag_size(f);

return skb;
do {
bool fence = !(i || frag_offset);
unsigned int fsz;

n++;
fsz = min_t(unsigned int, sq->hw_mtu, orig_fsz - frag_offset);
skb_frag_size_set(f, fsz);
if (tx_post_resync_dump(sq, f, priv_tx->tisn, fence)) {
page_ref_add(skb_frag_page(f), n - 1);
goto err_out;
}

skb_frag_off_add(f, fsz);
frag_offset += fsz;
} while (frag_offset < orig_fsz);

page_ref_add(skb_frag_page(f), n - 1);
}

return MLX5E_KTLS_SYNC_DONE;

err_out:
dev_kfree_skb_any(skb);
return NULL;
for (; i < info.nr_frags; i++)
/* The put_page() here undoes the page ref obtained in tx_sync_info_get().
 * Page refs obtained for the DUMP WQEs above (by page_ref_add) will be
 * released only upon their completions (or in mlx5e_free_txqsq_descs,
 * if channel closes).
 */
put_page(skb_frag_page(&info.frags[i]));

return MLX5E_KTLS_SYNC_FAIL;
}
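The new loop walks each fragment in hw_mtu-sized chunks, posting one DUMP WQE per chunk, and then takes n - 1 extra page references so every completion can drop exactly one. A standalone sketch of just the chunking arithmetic, with hypothetical names:

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Walk one fragment in mtu-sized chunks, as the resync DUMP loop does;
 * the number of extra page refs needed is the return value minus one. */
static unsigned int split_frag(unsigned int frag_size, unsigned int mtu)
{
	unsigned int offset = 0, n = 0;

	do {
		unsigned int chunk = MIN(mtu, frag_size - offset);

		n++;            /* one DUMP WQE per chunk */
		offset += chunk;
	} while (offset < frag_size);

	return n;
}

int main(void)
{
	printf("chunks: %u\n", split_frag(4096, 1500)); /* -> 3 */
	return 0;
}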
struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,

@@ -419,10 +452,15 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,

seq = ntohl(tcp_hdr(skb)->seq);
if (unlikely(priv_tx->expected_seq != seq)) {
skb = mlx5e_ktls_tx_handle_ooo(priv_tx, sq, skb, seq);
if (unlikely(!skb))
enum mlx5e_ktls_sync_retval ret =
mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq);

if (likely(ret == MLX5E_KTLS_SYNC_DONE))
*wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
else if (ret == MLX5E_KTLS_SYNC_FAIL)
goto err_out;
else /* ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA */
goto out;
*wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
}

priv_tx->expected_seq = seq + datalen;
@@ -1021,7 +1021,7 @@ static bool ext_link_mode_requested(const unsigned long *adver)
{
#define MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT ETHTOOL_LINK_MODE_50000baseKR_Full_BIT
int size = __ETHTOOL_LINK_MODE_MASK_NBITS - MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT;
__ETHTOOL_DECLARE_LINK_MODE_MASK(modes);
__ETHTOOL_DECLARE_LINK_MODE_MASK(modes) = {0,};

bitmap_set(modes, MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT, size);
return bitmap_intersects(modes, adver, __ETHTOOL_LINK_MODE_MASK_NBITS);
@@ -1128,6 +1128,7 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
sq->txq_ix = txq_ix;
sq->uar_map = mdev->mlx5e_res.bfreg.map;
sq->min_inline_mode = params->tx_min_inline_mode;
sq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
sq->stats = &c->priv->channel_stats[c->ix].sq[tc];
sq->stop_room = MLX5E_SQ_STOP_ROOM;
INIT_WORK(&sq->recover_work, mlx5e_tx_err_cqe_work);
@@ -1135,10 +1136,14 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
set_bit(MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE, &sq->state);
if (MLX5_IPSEC_DEV(c->priv->mdev))
set_bit(MLX5E_SQ_STATE_IPSEC, &sq->state);
#ifdef CONFIG_MLX5_EN_TLS
if (mlx5_accel_is_tls_device(c->priv->mdev)) {
set_bit(MLX5E_SQ_STATE_TLS, &sq->state);
sq->stop_room += MLX5E_SQ_TLS_ROOM;
sq->stop_room += MLX5E_SQ_TLS_ROOM +
mlx5e_ktls_dumps_num_wqebbs(sq, MAX_SKB_FRAGS,
TLS_MAX_PAYLOAD_SIZE);
}
#endif

param->wq.db_numa_node = cpu_to_node(c->cpu);
err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
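With this change the SQ reserves, up front, enough room for the worst-case resync of a full-size record (TLS_MAX_PAYLOAD_SIZE is 16 KB). A rough, hedged sketch of the accounting; all WQEBB constants here are illustrative placeholders, not the driver's real values:

#include <stdio.h>

#define SQ_STOP_ROOM     16                      /* generic worst-case packet */
#define SQ_TLS_ROOM      20                      /* params WQEs + max data WQE */
#define TLS_MAX_PAYLOAD  16384                   /* max TLS record payload */
#define HW_MTU           1500
#define MAX_SKB_FRAGS    17
#define DUMP_WQEBBS      1
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int room = SQ_STOP_ROOM + SQ_TLS_ROOM;

	/* worst-case resync DUMP room for one full-size record */
	room += DUMP_WQEBBS *
		(MAX_SKB_FRAGS + DIV_ROUND_UP(TLS_MAX_PAYLOAD, HW_MTU));
	printf("stop_room = %u WQEBBs\n", room);
	return 0;
}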
@@ -1349,9 +1354,13 @@ static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
/* last doorbell out, godspeed .. */
if (mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, 1)) {
u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
struct mlx5e_tx_wqe_info *wi;
struct mlx5e_tx_wqe *nop;

sq->db.wqe_info[pi].skb = NULL;
wi = &sq->db.wqe_info[pi];

memset(wi, 0, sizeof(*wi));
wi->num_wqebbs = 1;
nop = mlx5e_post_nop(wq, sq->sqn, &sq->pc);
mlx5e_notify_hw(wq, sq->pc, sq->uar_map, &nop->ctrl);
}
@@ -611,8 +611,8 @@ static void mlx5e_rep_update_flows(struct mlx5e_priv *priv,

mutex_lock(&esw->offloads.encap_tbl_lock);
encap_connected = !!(e->flags & MLX5_ENCAP_ENTRY_VALID);
if (e->compl_result || (encap_connected == neigh_connected &&
ether_addr_equal(e->h_dest, ha)))
if (e->compl_result < 0 || (encap_connected == neigh_connected &&
ether_addr_equal(e->h_dest, ha)))
goto unlock;

mlx5e_take_all_encap_flows(e, &flow_list);
@@ -1386,8 +1386,11 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state)))
return 0;

if (rq->cqd.left)
if (rq->cqd.left) {
work_done += mlx5e_decompress_cqes_cont(rq, cqwq, 0, budget);
if (rq->cqd.left || work_done >= budget)
goto out;
}

cqe = mlx5_cqwq_get_cqe(cqwq);
if (!cqe) {
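The shape of the fix is a standard budget-respecting NAPI poll: leftover decompression work is charged against the budget before any new CQE is fetched. A self-contained toy model of that control flow (names and counters are illustrative):

#include <stdio.h>

struct ctx { int leftover; int pending; };

/* Drain up to 'budget' leftover items; returns how many were drained. */
static int drain_leftover(struct ctx *c, int budget)
{
	int n = c->leftover < budget ? c->leftover : budget;

	c->leftover -= n;
	return n;
}

/* Budget-aware poll: leftover work from the previous poll is counted
 * against this poll's budget before any new completions are processed. */
static int poll(struct ctx *c, int budget)
{
	int work_done = 0;

	if (c->leftover) {
		work_done += drain_leftover(c, budget);
		if (c->leftover || work_done >= budget)
			return work_done;      /* budget exhausted */
	}
	while (work_done < budget && c->pending) {
		c->pending--;
		work_done++;
	}
	return work_done;
}

int main(void)
{
	struct ctx c = { .leftover = 70, .pending = 10 };

	printf("%d\n", poll(&c, 64));  /* -> 64, leftover work deferred */
	return 0;
}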
@@ -35,6 +35,7 @@
#include <linux/udp.h>
#include <net/udp.h>
#include "en.h"
#include "en/port.h"

enum {
MLX5E_ST_LINK_STATE,
@@ -80,22 +81,12 @@ static int mlx5e_test_link_state(struct mlx5e_priv *priv)

static int mlx5e_test_link_speed(struct mlx5e_priv *priv)
{
u32 out[MLX5_ST_SZ_DW(ptys_reg)];
u32 eth_proto_oper;
int i;
u32 speed;

if (!netif_carrier_ok(priv->netdev))
return 1;

if (mlx5_query_port_ptys(priv->mdev, out, sizeof(out), MLX5_PTYS_EN, 1))
return 1;

eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper);
for (i = 0; i < MLX5E_LINK_MODES_NUMBER; i++) {
if (eth_proto_oper & MLX5E_PROT_MASK(i))
return 0;
}
return 1;
return mlx5e_port_linkspeed(priv->mdev, &speed);
}

struct mlx5ehdr {
@@ -52,11 +52,12 @@ static const struct counter_desc sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_resync_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_no_sync_data) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_bypass_req) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_resync_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_skip_no_sync_data) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_no_sync_data) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_bypass_req) },
#endif

{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_lro_packets) },
@@ -288,11 +289,12 @@ static void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes;
s->tx_tls_ctx += sq_stats->tls_ctx;
s->tx_tls_ooo += sq_stats->tls_ooo;
s->tx_tls_resync_bytes += sq_stats->tls_resync_bytes;
s->tx_tls_drop_no_sync_data += sq_stats->tls_drop_no_sync_data;
s->tx_tls_drop_bypass_req += sq_stats->tls_drop_bypass_req;
s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes;
s->tx_tls_dump_packets += sq_stats->tls_dump_packets;
s->tx_tls_resync_bytes += sq_stats->tls_resync_bytes;
s->tx_tls_skip_no_sync_data += sq_stats->tls_skip_no_sync_data;
s->tx_tls_drop_no_sync_data += sq_stats->tls_drop_no_sync_data;
s->tx_tls_drop_bypass_req += sq_stats->tls_drop_bypass_req;
#endif
s->tx_cqes += sq_stats->cqes;
}
@@ -1472,10 +1474,12 @@ static const struct counter_desc sq_stats_desc[] = {
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_no_sync_data) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_bypass_req) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_resync_bytes) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_skip_no_sync_data) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_no_sync_data) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_bypass_req) },
#endif
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_none) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, stopped) },
@@ -129,11 +129,12 @@ struct mlx5e_sw_stats {
u64 tx_tls_encrypted_bytes;
u64 tx_tls_ctx;
u64 tx_tls_ooo;
u64 tx_tls_resync_bytes;
u64 tx_tls_drop_no_sync_data;
u64 tx_tls_drop_bypass_req;
u64 tx_tls_dump_packets;
u64 tx_tls_dump_bytes;
u64 tx_tls_resync_bytes;
u64 tx_tls_skip_no_sync_data;
u64 tx_tls_drop_no_sync_data;
u64 tx_tls_drop_bypass_req;
#endif

u64 rx_xsk_packets;

@@ -273,11 +274,12 @@ struct mlx5e_sq_stats {
u64 tls_encrypted_bytes;
u64 tls_ctx;
u64 tls_ooo;
u64 tls_resync_bytes;
u64 tls_drop_no_sync_data;
u64 tls_drop_bypass_req;
u64 tls_dump_packets;
u64 tls_dump_bytes;
u64 tls_resync_bytes;
u64 tls_skip_no_sync_data;
u64 tls_drop_no_sync_data;
u64 tls_drop_bypass_req;
#endif
/* less likely accessed in data path */
u64 csum_none;
@@ -1278,8 +1278,10 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
mlx5_eswitch_del_vlan_action(esw, attr);

for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++)
if (attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)
if (attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP) {
mlx5e_detach_encap(priv, flow, out_index);
kfree(attr->parse_attr->tun_info[out_index]);
}
kvfree(attr->parse_attr);

if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)

@@ -1559,6 +1561,7 @@ static void mlx5e_encap_dealloc(struct mlx5e_priv *priv, struct mlx5e_encap_entr
mlx5_packet_reformat_dealloc(priv->mdev, e->pkt_reformat);
}

kfree(e->tun_info);
kfree(e->encap_header);
kfree_rcu(e, rcu);
}
@@ -2972,6 +2975,13 @@ mlx5e_encap_get(struct mlx5e_priv *priv, struct encap_key *key,
return NULL;
}

static struct ip_tunnel_info *dup_tun_info(const struct ip_tunnel_info *tun_info)
{
size_t tun_size = sizeof(*tun_info) + tun_info->options_len;

return kmemdup(tun_info, tun_size, GFP_KERNEL);
}

static int mlx5e_attach_encap(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct net_device *mirred_dev,

@@ -3028,13 +3038,15 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
refcount_set(&e->refcnt, 1);
init_completion(&e->res_ready);

tun_info = dup_tun_info(tun_info);
if (!tun_info) {
err = -ENOMEM;
goto out_err_init;
}
e->tun_info = tun_info;
err = mlx5e_tc_tun_init_encap_attr(mirred_dev, priv, e, extack);
if (err) {
kfree(e);
e = NULL;
goto out_err;
}
if (err)
goto out_err_init;

INIT_LIST_HEAD(&e->flows);
hash_add_rcu(esw->offloads.encap_tbl, &e->encap_hlist, hash_key);

@@ -3075,6 +3087,12 @@ out_err:
if (e)
mlx5e_encap_put(priv, e);
return err;

out_err_init:
mutex_unlock(&esw->offloads.encap_tbl_lock);
kfree(tun_info);
kfree(e);
return err;
}

static int parse_tc_vlan_action(struct mlx5e_priv *priv,
@@ -3160,7 +3178,7 @@ static int add_vlan_pop_action(struct mlx5e_priv *priv,
struct mlx5_esw_flow_attr *attr,
u32 *action)
{
int nest_level = vlan_get_encap_level(attr->parse_attr->filter_dev);
int nest_level = attr->parse_attr->filter_dev->lower_level;
struct flow_action_entry vlan_act = {
.id = FLOW_ACTION_VLAN_POP,
};

@@ -3295,7 +3313,9 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
} else if (encap) {
parse_attr->mirred_ifindex[attr->out_count] =
out_dev->ifindex;
parse_attr->tun_info[attr->out_count] = info;
parse_attr->tun_info[attr->out_count] = dup_tun_info(info);
if (!parse_attr->tun_info[attr->out_count])
return -ENOMEM;
encap = false;
attr->dests[attr->out_count].flags |=
MLX5_ESW_DEST_ENCAP;
@@ -403,7 +403,10 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
static void mlx5e_dump_error_cqe(struct mlx5e_txqsq *sq,
struct mlx5_err_cqe *err_cqe)
{
u32 ci = mlx5_cqwq_get_ci(&sq->cq.wq);
struct mlx5_cqwq *wq = &sq->cq.wq;
u32 ci;

ci = mlx5_cqwq_ctr2ix(wq, wq->cc - 1);

netdev_err(sq->channel->netdev,
"Error cqe on cqn 0x%x, ci 0x%x, sqn 0x%x, opcode 0x%x, syndrome 0x%x, vendor syndrome 0x%x\n",
@@ -479,14 +482,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
skb = wi->skb;

if (unlikely(!skb)) {
#ifdef CONFIG_MLX5_EN_TLS
if (wi->resync_dump_frag) {
struct mlx5e_sq_dma *dma =
mlx5e_dma_get(sq, dma_fifo_cc++);

mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma);
}
#endif
mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc);
sqcc += wi->num_wqebbs;
continue;
}
@@ -542,29 +538,38 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq)
{
struct mlx5e_tx_wqe_info *wi;
struct sk_buff *skb;
u32 dma_fifo_cc;
u16 sqcc;
u16 ci;
int i;

while (sq->cc != sq->pc) {
ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->cc);
sqcc = sq->cc;
dma_fifo_cc = sq->dma_fifo_cc;

while (sqcc != sq->pc) {
ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc);
wi = &sq->db.wqe_info[ci];
skb = wi->skb;

if (!skb) { /* nop */
sq->cc++;
if (!skb) {
mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc);
sqcc += wi->num_wqebbs;
continue;
}

for (i = 0; i < wi->num_dma; i++) {
struct mlx5e_sq_dma *dma =
mlx5e_dma_get(sq, sq->dma_fifo_cc++);
mlx5e_dma_get(sq, dma_fifo_cc++);

mlx5e_tx_dma_unmap(sq->pdev, dma);
}

dev_kfree_skb_any(skb);
sq->cc += wi->num_wqebbs;
sqcc += wi->num_wqebbs;
}

sq->dma_fifo_cc = dma_fifo_cc;
sq->cc = sqcc;
}

#ifdef CONFIG_MLX5_CORE_IPOIB
@@ -285,7 +285,6 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,

mlx5_eswitch_set_rule_source_port(esw, spec, attr);

spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
if (attr->outer_match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
@@ -177,22 +177,32 @@ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
}

static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
const struct mlx5_flow_spec *spec)
{
u32 port_mask, port_value;

if (MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
return spec->flow_context.flow_source == MLX5_VPORT_UPLINK;

port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
misc_parameters.source_port);
port_value = MLX5_GET(fte_match_param, spec->match_value,
misc_parameters.source_port);
return (port_mask & port_value & 0xffff) == MLX5_VPORT_UPLINK;
}

bool
mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
struct mlx5_flow_act *flow_act,
struct mlx5_flow_spec *spec)
{
u32 port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
misc_parameters.source_port);
u32 port_value = MLX5_GET(fte_match_param, spec->match_value,
misc_parameters.source_port);

if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table))
return false;

/* push vlan on RX */
return (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) &&
((port_mask & port_value) == MLX5_VPORT_UPLINK);
mlx5_eswitch_offload_is_uplink_port(esw, spec);
}

struct mlx5_flow_handle *
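The new helper condenses a masked-match test: the rule targets the uplink only when the masked source-port value equals the uplink vport number. A tiny standalone illustration of masked matching (the vport value here is a made-up stand-in):

#include <stdio.h>
#include <stdint.h>

#define VPORT_UPLINK 0xffff   /* illustrative stand-in for MLX5_VPORT_UPLINK */

/* Masked-match test in the style of the new helper: the rule matches the
 * uplink only if the masked match value equals the uplink vport number. */
static int is_uplink(uint32_t port_mask, uint32_t port_value)
{
	return (port_mask & port_value & 0xffff) == VPORT_UPLINK;
}

int main(void)
{
	printf("%d\n", is_uplink(0xffff, 0xffff));  /* 1: uplink match */
	printf("%d\n", is_uplink(0xffff, 0x0001));  /* 0: some other vport */
	return 0;
}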
@@ -464,8 +464,10 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
}

err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn, &irqn);
if (err)
if (err) {
kvfree(in);
goto err_cqwq;
}

cqc = MLX5_ADDR_OF(create_cq_in, in, cq_context);
MLX5_SET(cqc, cqc, log_cq_size, ilog2(cq_size));
@@ -507,7 +507,8 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
MLX5_SET(dest_format_struct, in_dests,
destination_eswitch_owner_vhca_id,
dst->dest_attr.vport.vhca_id);
if (extended_dest) {
if (extended_dest &&
dst->dest_attr.vport.pkt_reformat) {
MLX5_SET(dest_format_struct, in_dests,
packet_reformat,
!!(dst->dest_attr.vport.flags &
@@ -572,7 +572,7 @@ mlx5_fw_fatal_reporter_dump(struct devlink_health_reporter *reporter,
return -ENOMEM;
err = mlx5_crdump_collect(dev, cr_data);
if (err)
return err;
goto free_data;

if (priv_ctx) {
struct mlx5_fw_reporter_ctx *fw_reporter_ctx = priv_ctx;
@@ -1186,7 +1186,7 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
if (err)
goto err_thermal_init;

if (mlxsw_driver->params_register && !reload)
if (mlxsw_driver->params_register)
devlink_params_publish(devlink);

return 0;

@@ -1259,7 +1259,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
return;
}

if (mlxsw_core->driver->params_unregister && !reload)
if (mlxsw_core->driver->params_unregister)
devlink_params_unpublish(devlink);
mlxsw_thermal_fini(mlxsw_core->thermal);
mlxsw_hwmon_fini(mlxsw_core->hwmon);
@@ -261,8 +261,15 @@ static int ocelot_vlan_vid_add(struct net_device *dev, u16 vid, bool pvid,
port->pvid = vid;

/* Untagged egress vlan clasification */
if (untagged)
if (untagged && port->vid != vid) {
if (port->vid) {
dev_err(ocelot->dev,
"Port already has a native VLAN: %d\n",
port->vid);
return -EBUSY;
}
port->vid = vid;
}

ocelot_vlan_port_apply(ocelot, port);

@@ -934,7 +941,7 @@ end:
static int ocelot_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
u16 vid)
{
return ocelot_vlan_vid_add(dev, vid, false, true);
return ocelot_vlan_vid_add(dev, vid, false, false);
}

static int ocelot_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
@@ -299,22 +299,6 @@ static void nfp_repr_clean(struct nfp_repr *repr)
nfp_port_free(repr->port);
}

static struct lock_class_key nfp_repr_netdev_xmit_lock_key;
static struct lock_class_key nfp_repr_netdev_addr_lock_key;

static void nfp_repr_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
void *_unused)
{
lockdep_set_class(&txq->_xmit_lock, &nfp_repr_netdev_xmit_lock_key);
}

static void nfp_repr_set_lockdep_class(struct net_device *dev)
{
lockdep_set_class(&dev->addr_list_lock, &nfp_repr_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, nfp_repr_set_lockdep_class_one, NULL);
}

int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
u32 cmsg_port_id, struct nfp_port *port,
struct net_device *pf_netdev)

@@ -324,8 +308,6 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
u32 repr_cap = nn->tlv_caps.repr_cap;
int err;

nfp_repr_set_lockdep_class(netdev);

repr->port = port;
repr->dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, GFP_KERNEL);
if (!repr->dst)
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */

#include <linux/printk.h>
#include <linux/dynamic_debug.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/rtnetlink.h>

@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */

#include <linux/printk.h>
#include <linux/dynamic_debug.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/utsname.h>
@@ -67,10 +67,9 @@
#define QED_ROCE_QPS (8192)
#define QED_ROCE_DPIS (8)
#define QED_RDMA_SRQS QED_ROCE_QPS
#define QED_NVM_CFG_SET_FLAGS 0xE
#define QED_NVM_CFG_SET_PF_FLAGS 0x1E
#define QED_NVM_CFG_GET_FLAGS 0xA
#define QED_NVM_CFG_GET_PF_FLAGS 0x1A
#define QED_NVM_CFG_MAX_ATTRS 50

static char version[] =
"QLogic FastLinQ 4xxxx Core Module qed " DRV_MODULE_VERSION "\n";

@@ -2255,6 +2254,7 @@ static int qed_nvm_flash_cfg_write(struct qed_dev *cdev, const u8 **data)
{
struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
u8 entity_id, len, buf[32];
bool need_nvm_init = true;
struct qed_ptt *ptt;
u16 cfg_id, count;
int rc = 0, i;

@@ -2271,8 +2271,10 @@ static int qed_nvm_flash_cfg_write(struct qed_dev *cdev, const u8 **data)

DP_VERBOSE(cdev, NETIF_MSG_DRV,
"Read config ids: num_attrs = %0d\n", count);
/* NVM CFG ID attributes */
for (i = 0; i < count; i++) {
/* NVM CFG ID attributes. Start loop index from 1 to avoid additional
 * arithmetic operations in the implementation.
 */
for (i = 1; i <= count; i++) {
cfg_id = *((u16 *)*data);
*data += 2;
entity_id = **data;

@@ -2282,8 +2284,21 @@ static int qed_nvm_flash_cfg_write(struct qed_dev *cdev, const u8 **data)
memcpy(buf, *data, len);
*data += len;

flags = entity_id ? QED_NVM_CFG_SET_PF_FLAGS :
QED_NVM_CFG_SET_FLAGS;
flags = 0;
if (need_nvm_init) {
flags |= QED_NVM_CFG_OPTION_INIT;
need_nvm_init = false;
}

/* Commit to flash and free the resources */
if (!(i % QED_NVM_CFG_MAX_ATTRS) || i == count) {
flags |= QED_NVM_CFG_OPTION_COMMIT |
QED_NVM_CFG_OPTION_FREE;
need_nvm_init = true;
}

if (entity_id)
flags |= QED_NVM_CFG_OPTION_ENTITY_SEL;

DP_VERBOSE(cdev, NETIF_MSG_DRV,
"cfg_id = %d entity = %d len = %d\n", cfg_id,
@@ -2005,7 +2005,7 @@ static void qed_iov_vf_mbx_stop_vport(struct qed_hwfn *p_hwfn,
(qed_iov_validate_active_txq(p_hwfn, vf))) {
vf->b_malicious = true;
DP_NOTICE(p_hwfn,
"VF [%02x] - considered malicious; Unable to stop RX/TX queuess\n",
"VF [%02x] - considered malicious; Unable to stop RX/TX queues\n",
vf->abs_vf_id);
status = PFVF_STATUS_MALICIOUS;
goto out;
@@ -1029,6 +1029,10 @@ static int r8168dp_2_mdio_read(struct rtl8169_private *tp, int reg)
{
int value;

/* Work around issue with chip reporting wrong PHY ID */
if (reg == MII_PHYSID2)
return 0xc912;

r8168dp_2_mdio_start(tp);

value = r8169_mdio_read(tp, reg);
@@ -2995,6 +2995,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
} else {
stmmac_set_desc_addr(priv, first, des);
tmp_pay_len = pay_len;
des += proto_hdr_len;
}

stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
@@ -1237,8 +1237,17 @@ static int fjes_probe(struct platform_device *plat_dev)
adapter->open_guard = false;

adapter->txrx_wq = alloc_workqueue(DRV_NAME "/txrx", WQ_MEM_RECLAIM, 0);
if (unlikely(!adapter->txrx_wq)) {
err = -ENOMEM;
goto err_free_netdev;
}

adapter->control_wq = alloc_workqueue(DRV_NAME "/control",
WQ_MEM_RECLAIM, 0);
if (unlikely(!adapter->control_wq)) {
err = -ENOMEM;
goto err_free_txrx_wq;
}

INIT_WORK(&adapter->tx_stall_task, fjes_tx_stall_task);
INIT_WORK(&adapter->raise_intr_rxdata_task,

@@ -1255,7 +1264,7 @@ static int fjes_probe(struct platform_device *plat_dev)
hw->hw_res.irq = platform_get_irq(plat_dev, 0);
err = fjes_hw_init(&adapter->hw);
if (err)
goto err_free_netdev;
goto err_free_control_wq;

/* setup MAC address (02:00:00:00:00:[epid])*/
netdev->dev_addr[0] = 2;

@@ -1277,6 +1286,10 @@ static int fjes_probe(struct platform_device *plat_dev)

err_hw_exit:
fjes_hw_exit(&adapter->hw);
err_free_control_wq:
destroy_workqueue(adapter->control_wq);
err_free_txrx_wq:
destroy_workqueue(adapter->txrx_wq);
err_free_netdev:
free_netdev(netdev);
err_out:
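The fix follows the usual kernel unwind-label idiom: each acquisition gets a label that releases everything acquired before it, and failure paths jump to the deepest applicable label so nothing leaks. A minimal standalone shape of the pattern (resources stubbed with malloc/free):

#include <stdlib.h>

struct res { int dummy; };

static struct res *acquire(void) { return malloc(sizeof(struct res)); }
static void release(struct res *r) { free(r); }

static int probe(void)
{
	struct res *a, *b, *c;
	int err = -1;

	a = acquire();
	if (!a)
		goto err_out;
	b = acquire();
	if (!b)
		goto err_free_a;          /* undo only what exists so far */
	c = acquire();
	if (!c)
		goto err_free_b;

	release(c); release(b); release(a);  /* success path for the demo */
	return 0;

err_free_b:
	release(b);
err_free_a:
	release(a);
err_out:
	return err;
}

int main(void) { return probe(); }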
@@ -107,27 +107,6 @@ struct bpqdev {

static LIST_HEAD(bpq_devices);

/*
 * bpqether network devices are paired with ethernet devices below them, so
 * form a special "super class" of normal ethernet devices; split their locks
 * off into a separate class since they always nest.
 */
static struct lock_class_key bpq_netdev_xmit_lock_key;
static struct lock_class_key bpq_netdev_addr_lock_key;

static void bpq_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
void *_unused)
{
lockdep_set_class(&txq->_xmit_lock, &bpq_netdev_xmit_lock_key);
}

static void bpq_set_lockdep_class(struct net_device *dev)
{
lockdep_set_class(&dev->addr_list_lock, &bpq_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, bpq_set_lockdep_class_one, NULL);
}

/* ------------------------------------------------------------------------ */

@@ -498,7 +477,6 @@ static int bpq_new_device(struct net_device *edev)
err = register_netdevice(ndev);
if (err)
goto error;
bpq_set_lockdep_class(ndev);

/* List protected by RTNL */
list_add_rcu(&bpq->bpq_list, &bpq_devices);
@@ -982,7 +982,7 @@ static int netvsc_attach(struct net_device *ndev,
if (netif_running(ndev)) {
ret = rndis_filter_open(nvdev);
if (ret)
return ret;
goto err;

rdev = nvdev->extension;
if (!rdev->link_state)

@@ -990,6 +990,13 @@ static int netvsc_attach(struct net_device *ndev,
}

return 0;

err:
netif_device_detach(ndev);

rndis_filter_device_remove(hdev, nvdev);

return ret;
}

static int netvsc_set_channels(struct net_device *net,

@@ -1807,8 +1814,10 @@ static int netvsc_set_features(struct net_device *ndev,

ret = rndis_filter_set_offload_params(ndev, nvdev, &offloads);

if (ret)
if (ret) {
features ^= NETIF_F_LRO;
ndev->features = features;
}

syncvf:
if (!vf_netdev)

@@ -2335,8 +2344,6 @@ static int netvsc_probe(struct hv_device *dev,
NETIF_F_HW_VLAN_CTAG_RX;
net->vlan_features = net->features;

netdev_lockdep_set_classes(net);

/* MTU range: 68 - 1500 or 65521 */
net->min_mtu = NETVSC_MTU_MIN;
if (nvdev->nvsp_version >= NVSP_PROTOCOL_VERSION_2)
@@ -131,8 +131,6 @@ static int ipvlan_init(struct net_device *dev)
dev->gso_max_segs = phy_dev->gso_max_segs;
dev->hard_header_len = phy_dev->hard_header_len;

netdev_lockdep_set_classes(dev);

ipvlan->pcpu_stats = netdev_alloc_pcpu_stats(struct ipvl_pcpu_stats);
if (!ipvlan->pcpu_stats)
return -ENOMEM;
@@ -267,7 +267,6 @@ struct macsec_dev {
struct pcpu_secy_stats __percpu *stats;
struct list_head secys;
struct gro_cells gro_cells;
unsigned int nest_level;
};

/**

@@ -2750,7 +2749,6 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,

#define MACSEC_FEATURES \
(NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST)
static struct lock_class_key macsec_netdev_addr_lock_key;

static int macsec_dev_init(struct net_device *dev)
{

@@ -2958,11 +2956,6 @@ static int macsec_get_iflink(const struct net_device *dev)
return macsec_priv(dev)->real_dev->ifindex;
}

static int macsec_get_nest_level(struct net_device *dev)
{
return macsec_priv(dev)->nest_level;
}

static const struct net_device_ops macsec_netdev_ops = {
.ndo_init = macsec_dev_init,
.ndo_uninit = macsec_dev_uninit,

@@ -2976,7 +2969,6 @@ static const struct net_device_ops macsec_netdev_ops = {
.ndo_start_xmit = macsec_start_xmit,
.ndo_get_stats64 = macsec_get_stats64,
.ndo_get_iflink = macsec_get_iflink,
.ndo_get_lock_subclass = macsec_get_nest_level,
};

static const struct device_type macsec_type = {

@@ -3001,12 +2993,10 @@ static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
static void macsec_free_netdev(struct net_device *dev)
{
struct macsec_dev *macsec = macsec_priv(dev);
struct net_device *real_dev = macsec->real_dev;

free_percpu(macsec->stats);
free_percpu(macsec->secy.tx_sc.stats);

dev_put(real_dev);
}

static void macsec_setup(struct net_device *dev)

@@ -3261,14 +3251,6 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
if (err < 0)
return err;

dev_hold(real_dev);

macsec->nest_level = dev_get_nest_level(real_dev) + 1;
netdev_lockdep_set_classes(dev);
lockdep_set_class_and_subclass(&dev->addr_list_lock,
&macsec_netdev_addr_lock_key,
macsec_get_nest_level(dev));

err = netdev_upper_dev_link(real_dev, dev, extack);
if (err < 0)
goto unregister;
@@ -852,8 +852,6 @@ static int macvlan_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
 * "super class" of normal network devices; split their locks off into a
 * separate class since they always nest.
 */
static struct lock_class_key macvlan_netdev_addr_lock_key;

#define ALWAYS_ON_OFFLOADS \
(NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE | \
NETIF_F_GSO_ROBUST | NETIF_F_GSO_ENCAP_ALL)

@@ -869,19 +867,6 @@ static struct lock_class_key macvlan_netdev_addr_lock_key;
#define MACVLAN_STATE_MASK \
((1<<__LINK_STATE_NOCARRIER) | (1<<__LINK_STATE_DORMANT))

static int macvlan_get_nest_level(struct net_device *dev)
{
return ((struct macvlan_dev *)netdev_priv(dev))->nest_level;
}

static void macvlan_set_lockdep_class(struct net_device *dev)
{
netdev_lockdep_set_classes(dev);
lockdep_set_class_and_subclass(&dev->addr_list_lock,
&macvlan_netdev_addr_lock_key,
macvlan_get_nest_level(dev));
}

static int macvlan_init(struct net_device *dev)
{
struct macvlan_dev *vlan = netdev_priv(dev);

@@ -900,8 +885,6 @@ static int macvlan_init(struct net_device *dev)
dev->gso_max_segs = lowerdev->gso_max_segs;
dev->hard_header_len = lowerdev->hard_header_len;

macvlan_set_lockdep_class(dev);

vlan->pcpu_stats = netdev_alloc_pcpu_stats(struct vlan_pcpu_stats);
if (!vlan->pcpu_stats)
return -ENOMEM;

@@ -1161,7 +1144,6 @@ static const struct net_device_ops macvlan_netdev_ops = {
.ndo_fdb_add = macvlan_fdb_add,
.ndo_fdb_del = macvlan_fdb_del,
.ndo_fdb_dump = ndo_dflt_fdb_dump,
.ndo_get_lock_subclass = macvlan_get_nest_level,
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = macvlan_dev_poll_controller,
.ndo_netpoll_setup = macvlan_dev_netpoll_setup,

@@ -1445,7 +1427,6 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
vlan->dev = dev;
vlan->port = port;
vlan->set_features = MACVLAN_FEATURES;
vlan->nest_level = dev_get_nest_level(lowerdev) + 1;

vlan->mode = MACVLAN_MODE_VEPA;
if (data && data[IFLA_MACVLAN_MODE])
@@ -806,9 +806,11 @@ static void nsim_dev_port_del_all(struct nsim_dev *nsim_dev)
{
struct nsim_dev_port *nsim_dev_port, *tmp;

mutex_lock(&nsim_dev->port_list_lock);
list_for_each_entry_safe(nsim_dev_port, tmp,
&nsim_dev->port_list, list)
__nsim_dev_port_del(nsim_dev_port);
mutex_unlock(&nsim_dev->port_list_lock);
}

int nsim_dev_probe(struct nsim_bus_dev *nsim_bus_dev)

@@ -822,14 +824,17 @@ int nsim_dev_probe(struct nsim_bus_dev *nsim_bus_dev)
return PTR_ERR(nsim_dev);
dev_set_drvdata(&nsim_bus_dev->dev, nsim_dev);

mutex_lock(&nsim_dev->port_list_lock);
for (i = 0; i < nsim_bus_dev->port_count; i++) {
err = __nsim_dev_port_add(nsim_dev, i);
if (err)
goto err_port_del_all;
}
mutex_unlock(&nsim_dev->port_list_lock);
return 0;

err_port_del_all:
mutex_unlock(&nsim_dev->port_list_lock);
nsim_dev_port_del_all(nsim_dev);
nsim_dev_destroy(nsim_dev);
return err;
@@ -87,8 +87,24 @@ struct phylink {
phylink_printk(KERN_WARNING, pl, fmt, ##__VA_ARGS__)
#define phylink_info(pl, fmt, ...) \
phylink_printk(KERN_INFO, pl, fmt, ##__VA_ARGS__)
#if defined(CONFIG_DYNAMIC_DEBUG)
#define phylink_dbg(pl, fmt, ...) \
do { \
if ((pl)->config->type == PHYLINK_NETDEV) \
netdev_dbg((pl)->netdev, fmt, ##__VA_ARGS__); \
else if ((pl)->config->type == PHYLINK_DEV) \
dev_dbg((pl)->dev, fmt, ##__VA_ARGS__); \
} while (0)
#elif defined(DEBUG)
#define phylink_dbg(pl, fmt, ...) \
phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__)
#else
#define phylink_dbg(pl, fmt, ...) \
({ \
if (0) \
phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__); \
})
#endif

/**
 * phylink_set_port_modes() - set the port type modes in the ethtool mask
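The interesting arm is the last one: wrapping the call in `if (0)` inside a GNU statement expression keeps the format string and arguments type-checked in every configuration while compiling to nothing. A tiny userspace illustration of the same trick (GNU C, as in the kernel):

#include <stdio.h>

/* Disabled-debug macro in the phylink style: the if (0) arm is still
 * parsed and type-checked, so bad format strings are caught even in
 * no-debug builds, but the optimizer emits no code for it. */
#define dbg(fmt, ...)                               \
	({                                          \
		if (0)                              \
			printf(fmt, ##__VA_ARGS__); \
	})

int main(void)
{
	dbg("value: %d\n", 42);   /* checked at compile time, emits nothing */
	return 0;
}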
@@ -327,6 +327,7 @@ static struct phy_driver smsc_phy_driver[] = {
.name = "SMSC LAN8740",

/* PHY_BASIC_FEATURES */
.flags = PHY_RST_AFTER_CLK_EN,

.probe = smsc_phy_probe,
@@ -1324,8 +1324,6 @@ static int ppp_dev_init(struct net_device *dev)
{
struct ppp *ppp;

netdev_lockdep_set_classes(dev);

ppp = netdev_priv(dev);
/* Let the netdevice take a reference on the ppp file. This ensures
 * that ppp_destroy_interface() won't run before the device gets
@@ -1615,7 +1615,6 @@ static int team_init(struct net_device *dev)
int err;

team->dev = dev;
mutex_init(&team->lock);
team_set_no_mode(team);

team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats);

@@ -1642,7 +1641,8 @@ static int team_init(struct net_device *dev)
goto err_options_register;
netif_carrier_off(dev);

netdev_lockdep_set_classes(dev);
lockdep_register_key(&team->team_lock_key);
__mutex_init(&team->lock, "team->team_lock_key", &team->team_lock_key);

return 0;

@@ -1673,6 +1673,7 @@ static void team_uninit(struct net_device *dev)
team_queue_override_fini(team);
mutex_unlock(&team->lock);
netdev_change_features(dev);
lockdep_unregister_key(&team->team_lock_key);
}

static void team_destructor(struct net_device *dev)

@@ -1976,8 +1977,15 @@ static int team_del_slave(struct net_device *dev, struct net_device *port_dev)
err = team_port_del(team, port_dev);
mutex_unlock(&team->lock);

if (!err)
netdev_change_features(dev);
if (err)
return err;

if (netif_is_team_master(port_dev)) {
lockdep_unregister_key(&team->team_lock_key);
lockdep_register_key(&team->team_lock_key);
lockdep_set_class(&team->lock, &team->team_lock_key);
}
netdev_change_features(dev);

return err;
}
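Giving each team instance its own dynamically registered lockdep key means stacking one team device under another no longer looks like a self-deadlock to lockdep, and re-registering the key when a nested master is removed resets its dependency history. A hedged kernel-style sketch of the per-instance-key pattern (not the team driver's actual code):

#include <linux/lockdep.h>
#include <linux/mutex.h>

/* Per-instance lock class: each object gets its own registered key, so
 * lockdep treats every instance's mutex as a distinct class. */
struct inst {
	struct mutex lock;
	struct lock_class_key key;
};

static void inst_init(struct inst *p)
{
	lockdep_register_key(&p->key);
	__mutex_init(&p->lock, "inst->lock", &p->key);
}

static void inst_fini(struct inst *p)
{
	mutex_destroy(&p->lock);
	lockdep_unregister_key(&p->key);
}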
@@ -787,6 +787,13 @@ static const struct usb_device_id products[] = {
.driver_info = 0,
},

/* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
{
USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
.driver_info = 0,
},

/* NVIDIA Tegra USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
{
USB_DEVICE_AND_INTERFACE_INFO(NVIDIA_VENDOR_ID, 0x09ff, USB_CLASS_COMM,
@@ -1264,8 +1264,11 @@ static void lan78xx_status(struct lan78xx_net *dev, struct urb *urb)
netif_dbg(dev, link, dev->net, "PHY INTR: 0x%08x\n", intdata);
lan78xx_defer_kevent(dev, EVENT_LINK_RESET);

if (dev->domain_data.phyirq > 0)
if (dev->domain_data.phyirq > 0) {
local_irq_disable();
generic_handle_irq(dev->domain_data.phyirq);
local_irq_enable();
}
} else
netdev_warn(dev->net,
"unexpected interrupt: 0x%08x\n", intdata);
@@ -5755,6 +5755,7 @@ static const struct usb_device_id rtl8152_table[] = {
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)},
{REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
{REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
{REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601)},
@@ -865,7 +865,6 @@ static int vrf_dev_init(struct net_device *dev)

/* similarly, oper state is irrelevant; set to up to avoid confusion */
dev->operstate = IF_OPER_UP;
netdev_lockdep_set_classes(dev);
return 0;

out_rth:
@@ -2487,9 +2487,11 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
vni = tunnel_id_to_key32(info->key.tun_id);
ifindex = 0;
dst_cache = &info->dst_cache;
if (info->options_len &&
info->key.tun_flags & TUNNEL_VXLAN_OPT)
if (info->key.tun_flags & TUNNEL_VXLAN_OPT) {
if (info->options_len < sizeof(*md))
goto drop;
md = ip_tunnel_info_opts(info);
}
ttl = info->key.ttl;
tos = info->key.tos;
label = info->key.label;

@@ -3566,10 +3568,13 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev,
{
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
struct vxlan_dev *vxlan = netdev_priv(dev);
struct net_device *remote_dev = NULL;
struct vxlan_fdb *f = NULL;
bool unregister = false;
struct vxlan_rdst *dst;
int err;

dst = &vxlan->default_dst;
err = vxlan_dev_configure(net, dev, conf, false, extack);
if (err)
return err;
@@ -3577,14 +3582,14 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev,
dev->ethtool_ops = &vxlan_ethtool_ops;

/* create an fdb entry for a valid default destination */
if (!vxlan_addr_any(&vxlan->default_dst.remote_ip)) {
if (!vxlan_addr_any(&dst->remote_ip)) {
err = vxlan_fdb_create(vxlan, all_zeros_mac,
&vxlan->default_dst.remote_ip,
&dst->remote_ip,
NUD_REACHABLE | NUD_PERMANENT,
vxlan->cfg.dst_port,
vxlan->default_dst.remote_vni,
vxlan->default_dst.remote_vni,
vxlan->default_dst.remote_ifindex,
dst->remote_vni,
dst->remote_vni,
dst->remote_ifindex,
NTF_SELF, &f);
if (err)
return err;

@@ -3595,26 +3600,41 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev,
goto errout;
unregister = true;

if (dst->remote_ifindex) {
remote_dev = __dev_get_by_index(net, dst->remote_ifindex);
if (!remote_dev)
goto errout;

err = netdev_upper_dev_link(remote_dev, dev, extack);
if (err)
goto errout;
}

err = rtnl_configure_link(dev, NULL);
if (err)
goto errout;
goto unlink;

if (f) {
vxlan_fdb_insert(vxlan, all_zeros_mac,
vxlan->default_dst.remote_vni, f);
vxlan_fdb_insert(vxlan, all_zeros_mac, dst->remote_vni, f);

/* notify default fdb entry */
err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f),
RTM_NEWNEIGH, true, extack);
if (err) {
vxlan_fdb_destroy(vxlan, f, false, false);
if (remote_dev)
netdev_upper_dev_unlink(remote_dev, dev);
goto unregister;
}
}

list_add(&vxlan->next, &vn->vxlan_list);
if (remote_dev)
dst->remote_dev = remote_dev;
return 0;

unlink:
if (remote_dev)
netdev_upper_dev_unlink(remote_dev, dev);
errout:
/* unregister_netdevice() destroys the default FDB entry with deletion
 * notification. But the addition notification was not sent yet, so
@@ -3932,11 +3952,12 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
struct netlink_ext_ack *extack)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_rdst *dst = &vxlan->default_dst;
struct net_device *lowerdev;
struct vxlan_config conf;
struct vxlan_rdst *dst;
int err;

dst = &vxlan->default_dst;
err = vxlan_nl2conf(tb, data, dev, &conf, true, extack);
if (err)
return err;

@@ -3946,6 +3967,14 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
if (err)
return err;

if (dst->remote_dev == lowerdev)
lowerdev = NULL;

err = netdev_adjacent_change_prepare(dst->remote_dev, lowerdev, dev,
extack);
if (err)
return err;

/* handle default dst entry */
if (!vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip)) {
u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, conf.vni);

@@ -3962,6 +3991,8 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
NTF_SELF, true, extack);
if (err) {
spin_unlock_bh(&vxlan->hash_lock[hash_index]);
netdev_adjacent_change_abort(dst->remote_dev,
lowerdev, dev);
return err;
}
}

@@ -3979,6 +4010,11 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
if (conf.age_interval != vxlan->cfg.age_interval)
mod_timer(&vxlan->age_timer, jiffies);

netdev_adjacent_change_commit(dst->remote_dev, lowerdev, dev);
if (lowerdev && lowerdev != dst->remote_dev) {
dst->remote_dev = lowerdev;
netdev_update_lockdep_key(lowerdev);
}
vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true);
return 0;
}

@@ -3991,6 +4027,8 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)

list_del(&vxlan->next);
unregister_netdevice_queue(dev, head);
if (vxlan->default_dst.remote_dev)
netdev_upper_dev_unlink(vxlan->default_dst.remote_dev, dev);
}

static size_t vxlan_get_size(const struct net_device *dev)
@@ -127,12 +127,12 @@ int i2400m_op_rfkill_sw_toggle(struct wimax_dev *wimax_dev,
"%d\n", result);
result = 0;
error_cmd:
kfree(cmd);
kfree_skb(ack_skb);
error_msg_to_dev:
error_alloc:
d_fnend(4, dev, "(wimax_dev %p state %d) = %d\n",
wimax_dev, state, result);
kfree(cmd);
return result;
}
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
@@ -520,7 +520,7 @@ struct iwl_scan_dwell {
 } __packed;
 
 /**
- * struct iwl_scan_config
+ * struct iwl_scan_config_v1
  * @flags: enum scan_config_flags
  * @tx_chains: valid_tx antenna - ANT_* definitions
  * @rx_chains: valid_rx antenna - ANT_* definitions
@@ -552,7 +552,7 @@ struct iwl_scan_config_v1 {
 #define SCAN_LB_LMAC_IDX 0
 #define SCAN_HB_LMAC_IDX 1
 
-struct iwl_scan_config {
+struct iwl_scan_config_v2 {
 	__le32 flags;
 	__le32 tx_chains;
 	__le32 rx_chains;
@@ -564,6 +564,24 @@ struct iwl_scan_config {
 	u8 bcast_sta_id;
 	u8 channel_flags;
 	u8 channel_array[];
 } __packed; /* SCAN_CONFIG_DB_CMD_API_S_2 */
 
+/**
+ * struct iwl_scan_config
+ * @enable_cam_mode: whether to enable CAM mode.
+ * @enable_promiscouos_mode: whether to enable promiscouos mode
+ * @bcast_sta_id: the index of the station in the fw
+ * @reserved: reserved
+ * @tx_chains: valid_tx antenna - ANT_* definitions
+ * @rx_chains: valid_rx antenna - ANT_* definitions
+ */
+struct iwl_scan_config {
+	u8 enable_cam_mode;
+	u8 enable_promiscouos_mode;
+	u8 bcast_sta_id;
+	u8 reserved;
+	__le32 tx_chains;
+	__le32 rx_chains;
+} __packed; /* SCAN_CONFIG_DB_CMD_API_S_3 */
+
 /**
--- a/drivers/net/wireless/intel/iwlwifi/fw/file.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h
@@ -288,6 +288,8 @@ typedef unsigned int __bitwise iwl_ucode_tlv_api_t;
  *	STA_CONTEXT_DOT11AX_API_S
  * @IWL_UCODE_TLV_CAPA_SAR_TABLE_VER: This ucode supports different sar
  *	version tables.
+ * @IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG: This ucode supports v3 of
+ *	SCAN_CONFIG_DB_CMD_API_S.
  *
  * @NUM_IWL_UCODE_TLV_API: number of bits used
  */
@@ -321,6 +323,7 @@ enum iwl_ucode_tlv_api {
 	IWL_UCODE_TLV_API_WOWLAN_TCP_SYN_WAKE	= (__force iwl_ucode_tlv_api_t)53,
 	IWL_UCODE_TLV_API_FTM_RTT_ACCURACY	= (__force iwl_ucode_tlv_api_t)54,
 	IWL_UCODE_TLV_API_SAR_TABLE_VER		= (__force iwl_ucode_tlv_api_t)55,
+	IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG	= (__force iwl_ucode_tlv_api_t)56,
 	IWL_UCODE_TLV_API_ADWELL_HB_DEF_N_AP	= (__force iwl_ucode_tlv_api_t)57,
 	IWL_UCODE_TLV_API_SCAN_EXT_CHAN_VER	= (__force iwl_ucode_tlv_api_t)58,
--- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
@@ -279,6 +279,7 @@
  *     Indicates MAC is entering a power-saving sleep power-down.
  *     Not a good time to access device-internal resources.
  */
 #define CSR_GP_CNTRL_REG_FLAG_INIT_DONE		(0x00000004)
 #define CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP	(0x00000010)
+#define CSR_GP_CNTRL_REG_FLAG_XTAL_ON		(0x00000400)
--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
@@ -449,6 +449,11 @@ enum {
 #define PERSISTENCE_BIT			BIT(12)
 #define PREG_WFPM_ACCESS		BIT(12)
 
+#define HPM_HIPM_GEN_CFG			0xA03458
+#define HPM_HIPM_GEN_CFG_CR_PG_EN		BIT(0)
+#define HPM_HIPM_GEN_CFG_CR_SLP_EN		BIT(1)
+#define HPM_HIPM_GEN_CFG_CR_FORCE_ACTIVE	BIT(10)
+
 #define UREG_DOORBELL_TO_ISR6		0xA05C04
 #define UREG_DOORBELL_TO_ISR6_NMI_BIT	BIT(0)
 #define UREG_DOORBELL_TO_ISR6_SUSPEND	BIT(18)
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
@@ -1405,6 +1405,12 @@ static inline bool iwl_mvm_is_scan_ext_chan_supported(struct iwl_mvm *mvm)
 			  IWL_UCODE_TLV_API_SCAN_EXT_CHAN_VER);
 }
 
+static inline bool iwl_mvm_is_reduced_config_scan_supported(struct iwl_mvm *mvm)
+{
+	return fw_has_api(&mvm->fw->ucode_capa,
+			  IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG);
+}
+
 static inline bool iwl_mvm_has_new_rx_stats_api(struct iwl_mvm *mvm)
 {
 	return fw_has_api(&mvm->fw->ucode_capa,
--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
@@ -1137,11 +1137,11 @@ static void iwl_mvm_fill_scan_config_v1(struct iwl_mvm *mvm, void *config,
 	iwl_mvm_fill_channels(mvm, cfg->channel_array, max_channels);
 }
 
-static void iwl_mvm_fill_scan_config(struct iwl_mvm *mvm, void *config,
-				     u32 flags, u8 channel_flags,
-				     u32 max_channels)
+static void iwl_mvm_fill_scan_config_v2(struct iwl_mvm *mvm, void *config,
+					u32 flags, u8 channel_flags,
+					u32 max_channels)
 {
-	struct iwl_scan_config *cfg = config;
+	struct iwl_scan_config_v2 *cfg = config;
 
 	cfg->flags = cpu_to_le32(flags);
 	cfg->tx_chains = cpu_to_le32(iwl_mvm_get_valid_tx_ant(mvm));
@@ -1185,7 +1185,7 @@ static void iwl_mvm_fill_scan_config(struct iwl_mvm *mvm, void *config,
 	iwl_mvm_fill_channels(mvm, cfg->channel_array, max_channels);
 }
 
-int iwl_mvm_config_scan(struct iwl_mvm *mvm)
+static int iwl_mvm_legacy_config_scan(struct iwl_mvm *mvm)
 {
 	void *cfg;
 	int ret, cmd_size;
@@ -1217,7 +1217,7 @@ int iwl_mvm_config_scan(struct iwl_mvm *mvm)
 	}
 
 	if (iwl_mvm_cdb_scan_api(mvm))
-		cmd_size = sizeof(struct iwl_scan_config);
+		cmd_size = sizeof(struct iwl_scan_config_v2);
 	else
 		cmd_size = sizeof(struct iwl_scan_config_v1);
 	cmd_size += num_channels;
@@ -1254,8 +1254,8 @@ int iwl_mvm_config_scan(struct iwl_mvm *mvm)
 		flags |= (iwl_mvm_is_scan_fragmented(hb_type)) ?
 			 SCAN_CONFIG_FLAG_SET_LMAC2_FRAGMENTED :
 			 SCAN_CONFIG_FLAG_CLEAR_LMAC2_FRAGMENTED;
-		iwl_mvm_fill_scan_config(mvm, cfg, flags, channel_flags,
-					 num_channels);
+		iwl_mvm_fill_scan_config_v2(mvm, cfg, flags, channel_flags,
+					    num_channels);
 	} else {
 		iwl_mvm_fill_scan_config_v1(mvm, cfg, flags, channel_flags,
 					    num_channels);
@@ -1277,6 +1277,30 @@ int iwl_mvm_config_scan(struct iwl_mvm *mvm)
 	return ret;
 }
 
+int iwl_mvm_config_scan(struct iwl_mvm *mvm)
+{
+	struct iwl_scan_config cfg;
+	struct iwl_host_cmd cmd = {
+		.id = iwl_cmd_id(SCAN_CFG_CMD, IWL_ALWAYS_LONG_GROUP, 0),
+		.len[0] = sizeof(cfg),
+		.data[0] = &cfg,
+		.dataflags[0] = IWL_HCMD_DFL_NOCOPY,
+	};
+
+	if (!iwl_mvm_is_reduced_config_scan_supported(mvm))
+		return iwl_mvm_legacy_config_scan(mvm);
+
+	memset(&cfg, 0, sizeof(cfg));
+
+	cfg.bcast_sta_id = mvm->aux_sta.sta_id;
+	cfg.tx_chains = cpu_to_le32(iwl_mvm_get_valid_tx_ant(mvm));
+	cfg.rx_chains = cpu_to_le32(iwl_mvm_scan_rx_ant(mvm));
+
+	IWL_DEBUG_SCAN(mvm, "Sending UMAC scan config\n");
+
+	return iwl_mvm_send_cmd(mvm, &cmd);
+}
+
 static int iwl_mvm_scan_uid_by_status(struct iwl_mvm *mvm, int status)
 {
 	int i;
--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
@@ -1482,6 +1482,13 @@ static void iwl_mvm_realloc_queues_after_restart(struct iwl_mvm *mvm,
 					   mvm_sta->sta_id, i);
 			txq_id = iwl_mvm_tvqm_enable_txq(mvm, mvm_sta->sta_id,
 							 i, wdg);
+			/*
+			 * on failures, just set it to IWL_MVM_INVALID_QUEUE
+			 * to try again later, we have no other good way of
+			 * failing here
+			 */
+			if (txq_id < 0)
+				txq_id = IWL_MVM_INVALID_QUEUE;
 			tid_data->txq_id = txq_id;
 
 			/*
@@ -1950,30 +1957,73 @@ void iwl_mvm_dealloc_int_sta(struct iwl_mvm *mvm, struct iwl_mvm_int_sta *sta)
 	sta->sta_id = IWL_MVM_INVALID_STA;
 }
 
-static void iwl_mvm_enable_aux_snif_queue(struct iwl_mvm *mvm, u16 *queue,
+static void iwl_mvm_enable_aux_snif_queue(struct iwl_mvm *mvm, u16 queue,
 					  u8 sta_id, u8 fifo)
 {
 	unsigned int wdg_timeout = iwlmvm_mod_params.tfd_q_hang_detect ?
 		mvm->trans->trans_cfg->base_params->wd_timeout :
 		IWL_WATCHDOG_DISABLED;
+	struct iwl_trans_txq_scd_cfg cfg = {
+		.fifo = fifo,
+		.sta_id = sta_id,
+		.tid = IWL_MAX_TID_COUNT,
+		.aggregate = false,
+		.frame_limit = IWL_FRAME_LIMIT,
+	};
 
-	if (iwl_mvm_has_new_tx_api(mvm)) {
-		int tvqm_queue =
-			iwl_mvm_tvqm_enable_txq(mvm, sta_id,
-						IWL_MAX_TID_COUNT,
-						wdg_timeout);
-		*queue = tvqm_queue;
-	} else {
-		struct iwl_trans_txq_scd_cfg cfg = {
-			.fifo = fifo,
-			.sta_id = sta_id,
-			.tid = IWL_MAX_TID_COUNT,
-			.aggregate = false,
-			.frame_limit = IWL_FRAME_LIMIT,
-		};
+	WARN_ON(iwl_mvm_has_new_tx_api(mvm));
 
-		iwl_mvm_enable_txq(mvm, NULL, *queue, 0, &cfg, wdg_timeout);
-	}
+	iwl_mvm_enable_txq(mvm, NULL, queue, 0, &cfg, wdg_timeout);
+}
+
+static int iwl_mvm_enable_aux_snif_queue_tvqm(struct iwl_mvm *mvm, u8 sta_id)
+{
+	unsigned int wdg_timeout = iwlmvm_mod_params.tfd_q_hang_detect ?
+		mvm->trans->trans_cfg->base_params->wd_timeout :
+		IWL_WATCHDOG_DISABLED;
+
+	WARN_ON(!iwl_mvm_has_new_tx_api(mvm));
+
+	return iwl_mvm_tvqm_enable_txq(mvm, sta_id, IWL_MAX_TID_COUNT,
+				       wdg_timeout);
+}
+
+static int iwl_mvm_add_int_sta_with_queue(struct iwl_mvm *mvm, int macidx,
+					  int maccolor,
+					  struct iwl_mvm_int_sta *sta,
+					  u16 *queue, int fifo)
+{
+	int ret;
+
+	/* Map queue to fifo - needs to happen before adding station */
+	if (!iwl_mvm_has_new_tx_api(mvm))
+		iwl_mvm_enable_aux_snif_queue(mvm, *queue, sta->sta_id, fifo);
+
+	ret = iwl_mvm_add_int_sta_common(mvm, sta, NULL, macidx, maccolor);
+	if (ret) {
+		if (!iwl_mvm_has_new_tx_api(mvm))
+			iwl_mvm_disable_txq(mvm, NULL, *queue,
+					    IWL_MAX_TID_COUNT, 0);
+		return ret;
+	}
+
+	/*
+	 * For 22000 firmware and on we cannot add queue to a station unknown
+	 * to firmware so enable queue here - after the station was added
+	 */
+	if (iwl_mvm_has_new_tx_api(mvm)) {
+		int txq;
+
+		txq = iwl_mvm_enable_aux_snif_queue_tvqm(mvm, sta->sta_id);
+		if (txq < 0) {
+			iwl_mvm_rm_sta_common(mvm, sta->sta_id);
+			return txq;
+		}
+
+		*queue = txq;
+	}
+
+	return 0;
+}
 
 int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm)
@@ -1989,59 +2039,26 @@ int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm)
 	if (ret)
 		return ret;
 
-	/* Map Aux queue to fifo - needs to happen before adding Aux station */
-	if (!iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->aux_queue,
-					      mvm->aux_sta.sta_id,
-					      IWL_MVM_TX_FIFO_MCAST);
-
-	ret = iwl_mvm_add_int_sta_common(mvm, &mvm->aux_sta, NULL,
-					 MAC_INDEX_AUX, 0);
+	ret = iwl_mvm_add_int_sta_with_queue(mvm, MAC_INDEX_AUX, 0,
+					     &mvm->aux_sta, &mvm->aux_queue,
+					     IWL_MVM_TX_FIFO_MCAST);
 	if (ret) {
 		iwl_mvm_dealloc_int_sta(mvm, &mvm->aux_sta);
 		return ret;
 	}
 
-	/*
-	 * For 22000 firmware and on we cannot add queue to a station unknown
-	 * to firmware so enable queue here - after the station was added
-	 */
-	if (iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->aux_queue,
-					      mvm->aux_sta.sta_id,
-					      IWL_MVM_TX_FIFO_MCAST);
-
 	return 0;
 }
 
 int iwl_mvm_add_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
 {
 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
-	int ret;
 
 	lockdep_assert_held(&mvm->mutex);
 
-	/* Map snif queue to fifo - must happen before adding snif station */
-	if (!iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->snif_queue,
-					      mvm->snif_sta.sta_id,
-					      IWL_MVM_TX_FIFO_BE);
-
-	ret = iwl_mvm_add_int_sta_common(mvm, &mvm->snif_sta, vif->addr,
-					 mvmvif->id, 0);
-	if (ret)
-		return ret;
-
-	/*
-	 * For 22000 firmware and on we cannot add queue to a station unknown
-	 * to firmware so enable queue here - after the station was added
-	 */
-	if (iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->snif_queue,
-					      mvm->snif_sta.sta_id,
-					      IWL_MVM_TX_FIFO_BE);
-
-	return 0;
+	return iwl_mvm_add_int_sta_with_queue(mvm, mvmvif->id, mvmvif->color,
+					      &mvm->snif_sta, &mvm->snif_queue,
+					      IWL_MVM_TX_FIFO_BE);
 }
 
 int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
@@ -2133,6 +2150,10 @@ int iwl_mvm_send_add_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
 		queue = iwl_mvm_tvqm_enable_txq(mvm, bsta->sta_id,
 						IWL_MAX_TID_COUNT,
 						wdg_timeout);
+		if (queue < 0) {
+			iwl_mvm_rm_sta_common(mvm, bsta->sta_id);
+			return queue;
+		}
 
 		if (vif->type == NL80211_IFTYPE_AP ||
 		    vif->type == NL80211_IFTYPE_ADHOC)
@@ -2307,10 +2328,8 @@ int iwl_mvm_add_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
 	}
 	ret = iwl_mvm_add_int_sta_common(mvm, msta, maddr,
 					 mvmvif->id, mvmvif->color);
-	if (ret) {
-		iwl_mvm_dealloc_int_sta(mvm, msta);
-		return ret;
-	}
+	if (ret)
+		goto err;
 
 	/*
 	 * Enable cab queue after the ADD_STA command is sent.
@@ -2323,6 +2342,10 @@ int iwl_mvm_add_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
 		int queue = iwl_mvm_tvqm_enable_txq(mvm, msta->sta_id,
 						    0,
 						    timeout);
+		if (queue < 0) {
+			ret = queue;
+			goto err;
+		}
 		mvmvif->cab_queue = queue;
 	} else if (!fw_has_api(&mvm->fw->ucode_capa,
 			       IWL_UCODE_TLV_API_STA_TYPE))
@@ -2330,6 +2353,9 @@ int iwl_mvm_add_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
 					   timeout);
 
 	return 0;
+err:
+	iwl_mvm_dealloc_int_sta(mvm, msta);
+	return ret;
 }
 
 static int __iwl_mvm_remove_sta_key(struct iwl_mvm *mvm, u8 sta_id,
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
@@ -573,20 +573,20 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
 	{IWL_PCI_DEVICE(0x2526, 0x0034, iwl9560_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x0038, iwl9560_2ac_160_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x003C, iwl9560_2ac_160_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x0060, iwl9460_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x0064, iwl9460_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9460_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9460_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x2526, 0x0060, iwl9461_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x2526, 0x0064, iwl9461_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9462_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x2526, 0x0210, iwl9260_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x0214, iwl9260_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x0230, iwl9560_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x0234, iwl9560_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x0238, iwl9560_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x023C, iwl9560_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x0260, iwl9460_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x0264, iwl9460_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9460_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9460_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x2526, 0x0260, iwl9461_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x2526, 0x0264, iwl9461_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9462_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x2526, 0x1010, iwl9260_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x1030, iwl9560_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x1210, iwl9260_2ac_cfg)},
@@ -603,7 +603,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
 	{IWL_PCI_DEVICE(0x2526, 0x401C, iwl9260_2ac_160_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_160_cfg)},
 	{IWL_PCI_DEVICE(0x2526, 0x4034, iwl9560_2ac_160_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9462_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x2526, 0x4234, iwl9560_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x2526, 0x42A4, iwl9462_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x2526, 0x6010, iwl9260_2ac_160_cfg)},
@@ -618,60 +618,61 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
 	{IWL_PCI_DEVICE(0x271B, 0x0210, iwl9160_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x271B, 0x0214, iwl9260_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x271C, 0x0214, iwl9260_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_160_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_160_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_160_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x00A4, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x0230, iwl9560_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x02A4, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x1010, iwl9260_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x1030, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x1210, iwl9260_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x1552, iwl9560_killer_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x2030, iwl9560_2ac_160_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x2034, iwl9560_2ac_160_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_160_cfg)},
-	{IWL_PCI_DEVICE(0x2720, 0x4034, iwl9560_2ac_160_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x4234, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x2720, 0x42A4, iwl9462_2ac_cfg_soc)},
-
-	{IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
-	{IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
+	{IWL_PCI_DEVICE(0x2720, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)},
+
+	{IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x1010, iwl9260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x1210, iwl9260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x30DC, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x1552, iwl9560_killer_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_soc)},
 
 	{IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_160_cfg_shared_clk)},
 	{IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg_shared_clk)},
@@ -1067,11 +1068,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		}
 	} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(iwl_trans->hw_rf_id) ==
 		   CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) &&
-		   ((cfg != &iwl_ax200_cfg_cc &&
-		     cfg != &killer1650x_2ax_cfg &&
-		     cfg != &killer1650w_2ax_cfg &&
-		     cfg != &iwl_ax201_cfg_quz_hr) ||
-		    iwl_trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0)) {
+		   iwl_trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0) {
 		u32 hw_status;
 
 		hw_status = iwl_read_prph(iwl_trans, UMAG_GEN_HW_STATUS);
Some files were not shown because too many files have changed in this diff.