Networking fixes for 5.11-rc6, including fixes from can, xfrm, wireless,
wireless-drivers and netfilter trees. Nothing scary, Intel WiFi-related
fixes seemed most notable to the users.

Current release - regressions:

 - dsa: microchip: ksz8795: fix KSZ8794 port map again to program the
   CPU port correctly

Current release - new code bugs:

 - iwlwifi: pcie: reschedule in long-running memory reads

Previous releases - regressions:

 - iwlwifi: dbg: don't try to overwrite read-only FW data
 - iwlwifi: provide gso_type to GSO packets
 - octeontx2: make sure the buffer is 128 byte aligned
 - tcp: make TCP_USER_TIMEOUT accurate for zero window probes
 - xfrm: fix wraparound in xfrm_policy_addr_delta()
 - xfrm: fix oops in xfrm_replay_advance_bmp due to a race between CPUs
   in presence of packet reorder
 - tcp: fix TLP timer not set when CA_STATE changes from DISORDER to OPEN
 - wext: fix NULL-ptr-dereference with cfg80211's lack of commit()

Previous releases - always broken:

 - igc: fix link speed advertising
 - stmmac: configure EHL PSE0 GbE and PSE1 GbE to 32 bits DMA addressing
 - team: protect features update by RCU to avoid deadlock
 - xfrm: fix disable_xfrm sysctl when used on xfrm interfaces themselves
 - fec: fix temporary RMII clock reset on link up
 - can: dev: prevent potential information leak in can_fill_info()

Misc:

 - mrp: fix bad packing of MRP test packet structures
 - uapi: fix big endian definition of ipv6_rpl_sr_hdr
 - add David Ahern to IPv4/IPv6 maintainers

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Merge tag 'net-5.11-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

* tag 'net-5.11-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (86 commits)
  rxrpc: Fix memory leak in rxrpc_lookup_local
  mlxsw: spectrum_span: Do not overwrite policer configuration
  selftests: forwarding: Specify interface when invoking mausezahn
  stmmac: intel: Configure EHL PSE0 GbE and PSE1 GbE to 32 bits DMA addressing
  net: usb: cdc_ether: added support for Thales Cinterion PLSx3 modem family.
  ibmvnic: Ensure that CRQ entry read are correctly ordered
  MAINTAINERS: add missing header for bonding
  net: decnet: fix netdev refcount leaking on error path
  net: switchdev: don't set port_obj_info->handled true when -EOPNOTSUPP
  can: dev: prevent potential information leak in can_fill_info()
  net: fec: Fix temporary RMII clock reset on link up
  net: lapb: Add locking to the lapb module
  team: protect features update by RCU to avoid deadlock
  MAINTAINERS: add David Ahern to IPv4/IPv6 maintainers
  net/mlx5: CT: Fix incorrect removal of tuple_nat_node from nat rhashtable
  net/mlx5e: Revert parameters on errors when changing MTU and LRO state without reset
  net/mlx5e: Revert parameters on errors when changing trust state without reset
  net/mlx5e: Correctly handle changing the number of queues when the interface is down
  net/mlx5e: Fix CT rule + encap slow path offload and deletion
  net/mlx5e: Disable hw-tc-offload when MLX5_CLS_ACT config is disabled
  ...
commit 909b447dcc
@@ -1807,12 +1807,24 @@ seg6_flowlabel - INTEGER

``conf/default/*``:
    Change the interface-specific default settings.

    These settings would be used during creating new interfaces.

``conf/all/*``:
    Change all the interface-specific settings.

    [XXX: Other special features than forwarding?]

conf/all/disable_ipv6 - BOOLEAN
    Changing this value is same as changing ``conf/default/disable_ipv6``
    setting and also all per-interface ``disable_ipv6`` settings to the same
    value.

    Reading this value does not have any particular meaning. It does not say
    whether IPv6 support is enabled or disabled. Returned value can be 1
    also in the case when some interface has ``disable_ipv6`` set to 0 and
    has configured IPv6 addresses.

conf/all/forwarding - BOOLEAN
    Enable global IPv6 forwarding between all interfaces.
@@ -3239,6 +3239,7 @@ L: netdev@vger.kernel.org
S:  Supported
W:  http://sourceforge.net/projects/bonding/
F:  drivers/net/bonding/
F:  include/net/bonding.h
F:  include/uapi/linux/if_bonding.h

BOSCH SENSORTEC BMA400 ACCELEROMETER IIO DRIVER
@@ -12412,6 +12413,7 @@ F: tools/testing/selftests/net/ipsec.c
NETWORKING [IPv4/IPv6]
M:  "David S. Miller" <davem@davemloft.net>
M:  Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
M:  David Ahern <dsahern@kernel.org>
L:  netdev@vger.kernel.org
S:  Maintained
T:  git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
@@ -1163,7 +1163,7 @@ static int can_fill_info(struct sk_buff *skb, const struct net_device *dev)
{
    struct can_priv *priv = netdev_priv(dev);
    struct can_ctrlmode cm = {.flags = priv->ctrlmode};
    struct can_berr_counter bec;
    struct can_berr_counter bec = { };
    enum can_state state = priv->state;

    if (priv->do_get_state)
@@ -509,15 +509,19 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
    /* Find our integrated MDIO bus node */
    dn = of_find_compatible_node(NULL, NULL, "brcm,unimac-mdio");
    priv->master_mii_bus = of_mdio_find_bus(dn);
    if (!priv->master_mii_bus)
    if (!priv->master_mii_bus) {
        of_node_put(dn);
        return -EPROBE_DEFER;
    }

    get_device(&priv->master_mii_bus->dev);
    priv->master_mii_dn = dn;

    priv->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
    if (!priv->slave_mii_bus)
    if (!priv->slave_mii_bus) {
        of_node_put(dn);
        return -ENOMEM;
    }

    priv->slave_mii_bus->priv = priv;
    priv->slave_mii_bus->name = "sf2 slave mii";
@@ -1187,6 +1187,20 @@ static const struct ksz_chip_data ksz8795_switch_chips[] = {
        .port_cnt = 5,          /* total cpu and user ports */
    },
    {
        /*
         * WARNING
         * =======
         * KSZ8794 is similar to KSZ8795, except the port map
         * contains a gap between external and CPU ports, the
         * port map is NOT continuous. The per-port register
         * map is shifted accordingly too, i.e. registers at
         * offset 0x40 are NOT used on KSZ8794 and they ARE
         * used on KSZ8795 for external port 3.
         *           external  cpu
         * KSZ8794   0,1,2      4
         * KSZ8795   0,1,2,3    4
         * KSZ8765   0,1,2,3    4
         */
        .chip_id = 0x8794,
        .dev_name = "KSZ8794",
        .num_vlans = 4096,
@@ -1220,9 +1234,13 @@ static int ksz8795_switch_init(struct ksz_device *dev)
        dev->num_vlans = chip->num_vlans;
        dev->num_alus = chip->num_alus;
        dev->num_statics = chip->num_statics;
        dev->port_cnt = chip->port_cnt;
        dev->port_cnt = fls(chip->cpu_ports);
        dev->cpu_port = fls(chip->cpu_ports) - 1;
        dev->phy_port_cnt = dev->port_cnt - 1;
        dev->cpu_ports = chip->cpu_ports;

        dev->host_mask = chip->cpu_ports;
        dev->port_mask = (BIT(dev->phy_port_cnt) - 1) |
                         chip->cpu_ports;
        break;
    }
}
@@ -1231,17 +1249,9 @@ static int ksz8795_switch_init(struct ksz_device *dev)
    if (!dev->cpu_ports)
        return -ENODEV;

    dev->port_mask = BIT(dev->port_cnt) - 1;
    dev->port_mask |= dev->host_mask;

    dev->reg_mib_cnt = KSZ8795_COUNTER_NUM;
    dev->mib_cnt = ARRAY_SIZE(mib_names);

    dev->phy_port_cnt = dev->port_cnt - 1;

    dev->cpu_port = dev->port_cnt - 1;
    dev->host_mask = BIT(dev->cpu_port);

    dev->ports = devm_kzalloc(dev->dev,
                              dev->port_cnt * sizeof(struct ksz_port),
                              GFP_KERNEL);
@@ -400,7 +400,7 @@ int ksz_switch_register(struct ksz_device *dev,
        gpiod_set_value_cansleep(dev->reset_gpio, 1);
        usleep_range(10000, 12000);
        gpiod_set_value_cansleep(dev->reset_gpio, 0);
        usleep_range(100, 1000);
        msleep(100);
    }

    mutex_init(&dev->dev_mutex);
@@ -434,7 +434,7 @@ int ksz_switch_register(struct ksz_device *dev,
            if (of_property_read_u32(port, "reg",
                                     &port_num))
                continue;
            if (port_num >= dev->port_cnt)
            if (!(dev->port_mask & BIT(port_num)))
                return -EINVAL;
            of_get_phy_mode(port,
                            &dev->ports[port_num].interface);
@@ -1158,11 +1158,9 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
#endif
    }
    if (!n || !n->dev)
        goto free_sk;
        goto free_dst;

    ndev = n->dev;
    if (!ndev)
        goto free_dst;
    if (is_vlan_dev(ndev))
        ndev = vlan_dev_real_dev(ndev);
@@ -1250,7 +1248,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
free_csk:
    chtls_sock_release(&csk->kref);
free_dst:
    neigh_release(n);
    if (n)
        neigh_release(n);
    dst_release(dst);
free_sk:
    inet_csk_prepare_forced_close(newsk);
@@ -462,6 +462,11 @@ struct bufdesc_ex {
 */
#define FEC_QUIRK_CLEAR_SETUP_MII   (1 << 17)

/* Some link partners do not tolerate the momentary reset of the REF_CLK
 * frequency when the RNCTL register is cleared by hardware reset.
 */
#define FEC_QUIRK_NO_HARD_RESET     (1 << 18)

struct bufdesc_prop {
    int qid;
    /* Address of Rx and Tx buffers */
@@ -100,7 +100,8 @@ static const struct fec_devinfo fec_imx27_info = {
static const struct fec_devinfo fec_imx28_info = {
    .quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
              FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
              FEC_QUIRK_HAS_FRREG | FEC_QUIRK_CLEAR_SETUP_MII,
              FEC_QUIRK_HAS_FRREG | FEC_QUIRK_CLEAR_SETUP_MII |
              FEC_QUIRK_NO_HARD_RESET,
};

static const struct fec_devinfo fec_imx6q_info = {
@@ -953,7 +954,8 @@ fec_restart(struct net_device *ndev)
     * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
     * instead of reset MAC itself.
     */
    if (fep->quirks & FEC_QUIRK_HAS_AVB) {
    if (fep->quirks & FEC_QUIRK_HAS_AVB ||
        ((fep->quirks & FEC_QUIRK_NO_HARD_RESET) && fep->link)) {
        writel(0, fep->hwp + FEC_ECNTRL);
    } else {
        writel(1, fep->hwp + FEC_ECNTRL);
@@ -2165,9 +2167,9 @@ static int fec_enet_mii_init(struct platform_device *pdev)
    fep->mii_bus->parent = &pdev->dev;

    err = of_mdiobus_register(fep->mii_bus, node);
    of_node_put(node);
    if (err)
        goto err_out_free_mdiobus;
    of_node_put(node);

    mii_cnt++;
@@ -2180,6 +2182,7 @@ static int fec_enet_mii_init(struct platform_device *pdev)
err_out_free_mdiobus:
    mdiobus_free(fep->mii_bus);
err_out:
    of_node_put(node);
    return err;
}
@@ -5084,6 +5084,12 @@ static void ibmvnic_tasklet(struct tasklet_struct *t)
    while (!done) {
        /* Pull all the valid messages off the CRQ */
        while ((crq = ibmvnic_next_crq(adapter)) != NULL) {
            /* This barrier makes sure ibmvnic_next_crq()'s
             * crq->generic.first & IBMVNIC_CRQ_CMD_RSP is loaded
             * before ibmvnic_handle_crq()'s
             * switch(gen_crq->first) and switch(gen_crq->cmd).
             */
            dma_rmb();
            ibmvnic_handle_crq(crq, adapter);
            crq->generic.first = 0;
        }
@@ -4046,20 +4046,16 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
        goto error_param;

    vf = &pf->vf[vf_id];
    vsi = pf->vsi[vf->lan_vsi_idx];

    /* When the VF is resetting wait until it is done.
     * It can take up to 200 milliseconds,
     * but wait for up to 300 milliseconds to be safe.
     * If the VF is indeed in reset, the vsi pointer has
     * to show on the newly loaded vsi under pf->vsi[id].
     * Acquire the VSI pointer only after the VF has been
     * properly initialized.
     */
    for (i = 0; i < 15; i++) {
        if (test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
            if (i > 0)
                vsi = pf->vsi[vf->lan_vsi_idx];
        if (test_bit(I40E_VF_STATE_INIT, &vf->vf_states))
            break;
        }
        msleep(20);
    }
    if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
@@ -4068,6 +4064,7 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
        ret = -EAGAIN;
        goto error_param;
    }
    vsi = pf->vsi[vf->lan_vsi_idx];

    if (is_multicast_ether_addr(mac)) {
        dev_err(&pf->pdev->dev,
@@ -68,7 +68,9 @@
#define ICE_INT_NAME_STR_LEN    (IFNAMSIZ + 16)
#define ICE_AQ_LEN              64
#define ICE_MBXSQ_LEN           64
#define ICE_MIN_MSIX            2
#define ICE_MIN_LAN_TXRX_MSIX   1
#define ICE_MIN_LAN_OICR_MSIX   1
#define ICE_MIN_MSIX            (ICE_MIN_LAN_TXRX_MSIX + ICE_MIN_LAN_OICR_MSIX)
#define ICE_FDIR_MSIX           1
#define ICE_NO_VSI              0xffff
#define ICE_VSI_MAP_CONTIG      0
@@ -3258,8 +3258,8 @@ ice_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
 */
static int ice_get_max_txq(struct ice_pf *pf)
{
    return min_t(int, num_online_cpus(),
                 pf->hw.func_caps.common_cap.num_txq);
    return min3(pf->num_lan_msix, (u16)num_online_cpus(),
                (u16)pf->hw.func_caps.common_cap.num_txq);
}

/**
@@ -3268,8 +3268,8 @@ static int ice_get_max_txq(struct ice_pf *pf)
 */
static int ice_get_max_rxq(struct ice_pf *pf)
{
    return min_t(int, num_online_cpus(),
                 pf->hw.func_caps.common_cap.num_rxq);
    return min3(pf->num_lan_msix, (u16)num_online_cpus(),
                (u16)pf->hw.func_caps.common_cap.num_rxq);
}

/**
@@ -1576,7 +1576,13 @@ ice_set_fdir_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
               sizeof(struct in6_addr));
        input->ip.v6.l4_header = fsp->h_u.usr_ip6_spec.l4_4_bytes;
        input->ip.v6.tc = fsp->h_u.usr_ip6_spec.tclass;
        input->ip.v6.proto = fsp->h_u.usr_ip6_spec.l4_proto;

        /* if no protocol requested, use IPPROTO_NONE */
        if (!fsp->m_u.usr_ip6_spec.l4_proto)
            input->ip.v6.proto = IPPROTO_NONE;
        else
            input->ip.v6.proto = fsp->h_u.usr_ip6_spec.l4_proto;

        memcpy(input->mask.v6.dst_ip, fsp->m_u.usr_ip6_spec.ip6dst,
               sizeof(struct in6_addr));
        memcpy(input->mask.v6.src_ip, fsp->m_u.usr_ip6_spec.ip6src,
@@ -161,8 +161,9 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)

    switch (vsi->type) {
    case ICE_VSI_PF:
        vsi->alloc_txq = min_t(int, ice_get_avail_txq_count(pf),
                               num_online_cpus());
        vsi->alloc_txq = min3(pf->num_lan_msix,
                              ice_get_avail_txq_count(pf),
                              (u16)num_online_cpus());
        if (vsi->req_txq) {
            vsi->alloc_txq = vsi->req_txq;
            vsi->num_txq = vsi->req_txq;
@@ -174,8 +175,9 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
        if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
            vsi->alloc_rxq = 1;
        } else {
            vsi->alloc_rxq = min_t(int, ice_get_avail_rxq_count(pf),
                                   num_online_cpus());
            vsi->alloc_rxq = min3(pf->num_lan_msix,
                                  ice_get_avail_rxq_count(pf),
                                  (u16)num_online_cpus());
            if (vsi->req_rxq) {
                vsi->alloc_rxq = vsi->req_rxq;
                vsi->num_rxq = vsi->req_rxq;
@@ -184,7 +186,9 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)

        pf->num_lan_rx = vsi->alloc_rxq;

        vsi->num_q_vectors = max_t(int, vsi->alloc_rxq, vsi->alloc_txq);
        vsi->num_q_vectors = min_t(int, pf->num_lan_msix,
                                   max_t(int, vsi->alloc_rxq,
                                         vsi->alloc_txq));
        break;
    case ICE_VSI_VF:
        vf = &pf->vf[vsi->vf_id];
@@ -3430,18 +3430,14 @@ static int ice_ena_msix_range(struct ice_pf *pf)
    if (v_actual < v_budget) {
        dev_warn(dev, "not enough OS MSI-X vectors. requested = %d, obtained = %d\n",
                 v_budget, v_actual);
/* 2 vectors each for LAN and RDMA (traffic + OICR), one for flow director */
#define ICE_MIN_LAN_VECS 2
#define ICE_MIN_RDMA_VECS 2
#define ICE_MIN_VECS (ICE_MIN_LAN_VECS + ICE_MIN_RDMA_VECS + 1)

        if (v_actual < ICE_MIN_LAN_VECS) {
        if (v_actual < ICE_MIN_MSIX) {
            /* error if we can't get minimum vectors */
            pci_disable_msix(pf->pdev);
            err = -ERANGE;
            goto msix_err;
        } else {
            pf->num_lan_msix = ICE_MIN_LAN_VECS;
            pf->num_lan_msix = ICE_MIN_LAN_TXRX_MSIX;
        }
    }
@@ -4884,9 +4880,15 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
        goto err_update_filters;
    }

    /* Add filter for new MAC. If filter exists, just return success */
    /* Add filter for new MAC. If filter exists, return success */
    status = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI);
    if (status == ICE_ERR_ALREADY_EXISTS) {
        /* Although this MAC filter is already present in hardware it's
         * possible in some cases (e.g. bonding) that dev_addr was
         * modified outside of the driver and needs to be restored back
         * to this value.
         */
        memcpy(netdev->dev_addr, mac, netdev->addr_len);
        netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac);
        return 0;
    }
@@ -1923,12 +1923,15 @@ int ice_tx_csum(struct ice_tx_buf *first, struct ice_tx_offload_params *off)
                  ICE_TX_CTX_EIPT_IPV4_NO_CSUM;
            l4_proto = ip.v4->protocol;
        } else if (first->tx_flags & ICE_TX_FLAGS_IPV6) {
            int ret;

            tunnel |= ICE_TX_CTX_EIPT_IPV6;
            exthdr = ip.hdr + sizeof(*ip.v6);
            l4_proto = ip.v6->nexthdr;
            if (l4.hdr != exthdr)
                ipv6_skip_exthdr(skb, exthdr - skb->data,
                                 &l4_proto, &frag_off);
                ret = ipv6_skip_exthdr(skb, exthdr - skb->data,
                                       &l4_proto, &frag_off);
            if (ret < 0)
                return -1;
        }

        /* define outer transport */
@@ -1675,12 +1675,18 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
    cmd->base.phy_address = hw->phy.addr;

    /* advertising link modes */
    ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half);
    ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full);
    ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half);
    ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full);
    ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full);
    ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full);
    if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF)
        ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half);
    if (hw->phy.autoneg_advertised & ADVERTISE_10_FULL)
        ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full);
    if (hw->phy.autoneg_advertised & ADVERTISE_100_HALF)
        ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half);
    if (hw->phy.autoneg_advertised & ADVERTISE_100_FULL)
        ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full);
    if (hw->phy.autoneg_advertised & ADVERTISE_1000_FULL)
        ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full);
    if (hw->phy.autoneg_advertised & ADVERTISE_2500_FULL)
        ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full);

    /* set autoneg settings */
    if (hw->mac.autoneg == 1) {
@@ -1792,6 +1798,12 @@ igc_ethtool_set_link_ksettings(struct net_device *netdev,

    ethtool_convert_link_mode_to_legacy_u32(&advertising,
                                            cmd->link_modes.advertising);
    /* Converting to legacy u32 drops ETHTOOL_LINK_MODE_2500baseT_Full_BIT.
     * We have to check this and convert it to ADVERTISE_2500_FULL
     * (aka ETHTOOL_LINK_MODE_2500baseX_Full_BIT) explicitly.
     */
    if (ethtool_link_ksettings_test_link_mode(cmd, advertising, 2500baseT_Full))
        advertising |= ADVERTISE_2500_FULL;

    if (cmd->base.autoneg == AUTONEG_ENABLE) {
        hw->mac.autoneg = 1;
@@ -478,10 +478,11 @@ dma_addr_t __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool)
    dma_addr_t iova;
    u8 *buf;

    buf = napi_alloc_frag(pool->rbsize);
    buf = napi_alloc_frag(pool->rbsize + OTX2_ALIGN);
    if (unlikely(!buf))
        return -ENOMEM;

    buf = PTR_ALIGN(buf, OTX2_ALIGN);
    iova = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
                                DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
    if (unlikely(dma_mapping_error(pfvf->dev, iova))) {
@@ -273,7 +273,7 @@ int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key

    err = devlink_fmsg_binary_pair_nest_start(fmsg, "data");
    if (err)
        return err;
        goto free_page;

    cmd = mlx5_rsc_dump_cmd_create(mdev, key);
    if (IS_ERR(cmd)) {
@@ -167,6 +167,12 @@ static const struct rhashtable_params tuples_nat_ht_params = {
    .min_size = 16 * 1024,
};

static bool
mlx5_tc_ct_entry_has_nat(struct mlx5_ct_entry *entry)
{
    return !!(entry->tuple_nat_node.next);
}

static int
mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule)
{
@@ -911,13 +917,13 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
err_insert:
    mlx5_tc_ct_entry_del_rules(ct_priv, entry);
err_rules:
    rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
                           &entry->tuple_nat_node, tuples_nat_ht_params);
    if (mlx5_tc_ct_entry_has_nat(entry))
        rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
                               &entry->tuple_nat_node, tuples_nat_ht_params);
err_tuple_nat:
    if (entry->tuple_node.next)
        rhashtable_remove_fast(&ct_priv->ct_tuples_ht,
                               &entry->tuple_node,
                               tuples_ht_params);
    rhashtable_remove_fast(&ct_priv->ct_tuples_ht,
                           &entry->tuple_node,
                           tuples_ht_params);
err_tuple:
err_set:
    kfree(entry);
@@ -932,7 +938,7 @@ mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv,
{
    mlx5_tc_ct_entry_del_rules(ct_priv, entry);
    mutex_lock(&ct_priv->shared_counter_lock);
    if (entry->tuple_node.next)
    if (mlx5_tc_ct_entry_has_nat(entry))
        rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
                               &entry->tuple_nat_node,
                               tuples_nat_ht_params);
@@ -76,7 +76,7 @@ static const struct counter_desc mlx5e_ipsec_sw_stats_desc[] = {

static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_sw)
{
    return NUM_IPSEC_SW_COUNTERS;
    return priv->ipsec ? NUM_IPSEC_SW_COUNTERS : 0;
}

static inline MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ipsec_sw) {}
@@ -105,7 +105,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(ipsec_sw)

static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_hw)
{
    return (mlx5_fpga_ipsec_device_caps(priv->mdev)) ? NUM_IPSEC_HW_COUNTERS : 0;
    return (priv->ipsec && mlx5_fpga_ipsec_device_caps(priv->mdev)) ? NUM_IPSEC_HW_COUNTERS : 0;
}

static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ipsec_hw)
@@ -1151,6 +1151,7 @@ static int mlx5e_set_trust_state(struct mlx5e_priv *priv, u8 trust_state)
{
    struct mlx5e_channels new_channels = {};
    bool reset_channels = true;
    bool opened;
    int err = 0;

    mutex_lock(&priv->state_lock);
@@ -1159,22 +1160,24 @@ static int mlx5e_set_trust_state(struct mlx5e_priv *priv, u8 trust_state)
    mlx5e_params_calc_trust_tx_min_inline_mode(priv->mdev, &new_channels.params,
                                               trust_state);

    if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
        priv->channels.params = new_channels.params;
    opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
    if (!opened)
        reset_channels = false;
    }

    /* Skip if tx_min_inline is the same */
    if (new_channels.params.tx_min_inline_mode ==
        priv->channels.params.tx_min_inline_mode)
        reset_channels = false;

    if (reset_channels)
    if (reset_channels) {
        err = mlx5e_safe_switch_channels(priv, &new_channels,
                                         mlx5e_update_trust_state_hw,
                                         &trust_state);
    else
    } else {
        err = mlx5e_update_trust_state_hw(priv, &trust_state);
        if (!err && !opened)
            priv->channels.params = new_channels.params;
    }

    mutex_unlock(&priv->state_lock);
@@ -447,12 +447,18 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
        goto out;
    }

    new_channels.params = priv->channels.params;
    new_channels.params = *cur_params;
    new_channels.params.num_channels = count;

    if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
        struct mlx5e_params old_params;

        old_params = *cur_params;
        *cur_params = new_channels.params;
        err = mlx5e_num_channels_changed(priv);
        if (err)
            *cur_params = old_params;

        goto out;
    }
@@ -3614,7 +3614,14 @@ static int mlx5e_setup_tc_mqprio(struct mlx5e_priv *priv,
    new_channels.params.num_tc = tc ? tc : 1;

    if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
        struct mlx5e_params old_params;

        old_params = priv->channels.params;
        priv->channels.params = new_channels.params;
        err = mlx5e_num_channels_changed(priv);
        if (err)
            priv->channels.params = old_params;

        goto out;
    }
@@ -3757,7 +3764,7 @@ static int set_feature_lro(struct net_device *netdev, bool enable)
    struct mlx5e_priv *priv = netdev_priv(netdev);
    struct mlx5_core_dev *mdev = priv->mdev;
    struct mlx5e_channels new_channels = {};
    struct mlx5e_params *old_params;
    struct mlx5e_params *cur_params;
    int err = 0;
    bool reset;
@@ -3770,8 +3777,8 @@ static int set_feature_lro(struct net_device *netdev, bool enable)
        goto out;
    }

    old_params = &priv->channels.params;
    if (enable && !MLX5E_GET_PFLAG(old_params, MLX5E_PFLAG_RX_STRIDING_RQ)) {
    cur_params = &priv->channels.params;
    if (enable && !MLX5E_GET_PFLAG(cur_params, MLX5E_PFLAG_RX_STRIDING_RQ)) {
        netdev_warn(netdev, "can't set LRO with legacy RQ\n");
        err = -EINVAL;
        goto out;
@@ -3779,18 +3786,23 @@ static int set_feature_lro(struct net_device *netdev, bool enable)

    reset = test_bit(MLX5E_STATE_OPENED, &priv->state);

    new_channels.params = *old_params;
    new_channels.params = *cur_params;
    new_channels.params.lro_en = enable;

    if (old_params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) {
        if (mlx5e_rx_mpwqe_is_linear_skb(mdev, old_params, NULL) ==
    if (cur_params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) {
        if (mlx5e_rx_mpwqe_is_linear_skb(mdev, cur_params, NULL) ==
            mlx5e_rx_mpwqe_is_linear_skb(mdev, &new_channels.params, NULL))
            reset = false;
    }

    if (!reset) {
        *old_params = new_channels.params;
        struct mlx5e_params old_params;

        old_params = *cur_params;
        *cur_params = new_channels.params;
        err = mlx5e_modify_tirs_lro(priv);
        if (err)
            *cur_params = old_params;
        goto out;
    }
@@ -4067,9 +4079,16 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
    }

    if (!reset) {
        unsigned int old_mtu = params->sw_mtu;

        params->sw_mtu = new_mtu;
        if (preactivate)
            preactivate(priv, NULL);
        if (preactivate) {
            err = preactivate(priv, NULL);
            if (err) {
                params->sw_mtu = old_mtu;
                goto out;
            }
        }
        netdev->mtu = params->sw_mtu;
        goto out;
    }
@@ -5027,7 +5046,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
        FT_CAP(modify_root) &&
        FT_CAP(identified_miss_table_mode) &&
        FT_CAP(flow_table_modify)) {
#ifdef CONFIG_MLX5_ESWITCH
#if IS_ENABLED(CONFIG_MLX5_CLS_ACT)
        netdev->hw_features |= NETIF_F_HW_TC;
#endif
#ifdef CONFIG_MLX5_EN_ARFS
@@ -737,7 +737,9 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)

    netdev->features |= NETIF_F_NETNS_LOCAL;

#if IS_ENABLED(CONFIG_MLX5_CLS_ACT)
    netdev->hw_features |= NETIF_F_HW_TC;
#endif
    netdev->hw_features |= NETIF_F_SG;
    netdev->hw_features |= NETIF_F_IP_CSUM;
    netdev->hw_features |= NETIF_F_IPV6_CSUM;
@@ -67,6 +67,7 @@
#include "lib/geneve.h"
#include "lib/fs_chains.h"
#include "diag/en_tc_tracepoint.h"
#include <asm/div64.h>

#define nic_chains(priv) ((priv)->fs.tc.chains)
#define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)
@@ -1162,6 +1163,9 @@ mlx5e_tc_offload_fdb_rules(struct mlx5_eswitch *esw,
    struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts;
    struct mlx5_flow_handle *rule;

    if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)
        return mlx5_eswitch_add_offloaded_rule(esw, spec, attr);

    if (flow_flag_test(flow, CT)) {
        mod_hdr_acts = &attr->parse_attr->mod_hdr_acts;
@ -1192,6 +1196,9 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw,
|
|||
{
|
||||
flow_flag_clear(flow, OFFLOADED);
|
||||
|
||||
if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)
|
||||
goto offload_rule_0;
|
||||
|
||||
if (flow_flag_test(flow, CT)) {
|
||||
mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr);
|
||||
return;
|
||||
|
@ -1200,6 +1207,7 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw,
|
|||
if (attr->esw_attr->split_count)
|
||||
mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr);
|
||||
|
||||
offload_rule_0:
|
||||
mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr);
|
||||
}
|
||||
|
||||
|
@ -2269,8 +2277,8 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
|
|||
BIT(FLOW_DISSECTOR_KEY_ENC_OPTS) |
|
||||
BIT(FLOW_DISSECTOR_KEY_MPLS))) {
|
||||
NL_SET_ERR_MSG_MOD(extack, "Unsupported key");
|
||||
netdev_warn(priv->netdev, "Unsupported key used: 0x%x\n",
|
||||
dissector->used_keys);
|
||||
netdev_dbg(priv->netdev, "Unsupported key used: 0x%x\n",
|
||||
dissector->used_keys);
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
|
@ -5007,13 +5015,13 @@ errout:
|
|||
return err;
|
||||
}
|
||||
|
||||
static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
|
||||
static int apply_police_params(struct mlx5e_priv *priv, u64 rate,
|
||||
struct netlink_ext_ack *extack)
|
||||
{
|
||||
struct mlx5e_rep_priv *rpriv = priv->ppriv;
|
||||
struct mlx5_eswitch *esw;
|
||||
u32 rate_mbps = 0;
|
||||
u16 vport_num;
|
||||
u32 rate_mbps;
|
||||
int err;
|
||||
|
||||
vport_num = rpriv->rep->vport;
|
||||
|
@ -5030,7 +5038,11 @@ static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
|
|||
* Moreover, if rate is non zero we choose to configure to a minimum of
|
||||
* 1 mbit/sec.
|
||||
*/
|
||||
rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0;
|
||||
if (rate) {
|
||||
rate = (rate * BITS_PER_BYTE) + 500000;
|
||||
rate_mbps = max_t(u32, do_div(rate, 1000000), 1);
|
||||
}
|
||||
|
||||
err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps);
|
||||
if (err)
|
||||
NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
|
||||
|
|
|
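The apply_police_params() hunk above widens the policer rate from u32 to u64: TC hands the rate over in bytes per second, and multiplying by 8 overflows 32-bit arithmetic once the policer exceeds roughly 500 Mbyte/s. A small userspace sketch of the bytes/sec-to-mbit/sec conversion before and after the widening (hypothetical helper names, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* max_t(u32, a, b) equivalent */
static uint32_t max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }

/* pre-fix math: rate * 8 is evaluated in 32 bits and silently wraps
 * for any rate above ~536 Mbyte/s (~4.29e9 bits/s) */
static uint32_t rate_to_mbps_u32(uint32_t rate_bytes)
{
	return rate_bytes ? max_u32((rate_bytes * 8 + 500000) / 1000000, 1) : 0;
}

/* post-fix math: the whole conversion is done in 64 bits, then the
 * mbit/s result (which does fit in 32 bits) is truncated at the end */
static uint32_t rate_to_mbps_u64(uint64_t rate_bytes)
{
	if (!rate_bytes)
		return 0;
	return max_u32((uint32_t)((rate_bytes * 8 + 500000) / 1000000), 1);
}
```

For a 5 Gbit/s policer (625000000 bytes/sec) the 32-bit math wraps and yields 705 mbit/s, while the 64-bit math yields the intended 5000 mbit/s.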
@@ -1141,6 +1141,7 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
 destroy_ft:
 	root->cmds->destroy_flow_table(root, ft);
 free_ft:
+	rhltable_destroy(&ft->fgs_hash);
 	kfree(ft);
 unlock_root:
 	mutex_unlock(&root->chain_lock);
@@ -58,7 +58,7 @@ struct fw_page {
 	struct rb_node		rb_node;
 	u64			addr;
 	struct page	       *page;
-	u16			func_id;
+	u32			function;
 	unsigned long		bitmask;
 	struct list_head	list;
 	unsigned		free_count;

@@ -74,12 +74,17 @@ enum {
 	MLX5_NUM_4K_IN_PAGE		= PAGE_SIZE / MLX5_ADAPTER_PAGE_SIZE,
 };

-static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func_id)
+static u32 get_function(u16 func_id, bool ec_function)
+{
+	return func_id & (ec_function << 16);
+}
+
+static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function)
 {
 	struct rb_root *root;
 	int err;

-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, function);
 	if (root)
 		return root;

@@ -87,7 +92,7 @@ static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func
 	if (!root)
 		return ERR_PTR(-ENOMEM);

-	err = xa_insert(&dev->priv.page_root_xa, func_id, root, GFP_KERNEL);
+	err = xa_insert(&dev->priv.page_root_xa, function, root, GFP_KERNEL);
 	if (err) {
 		kfree(root);
 		return ERR_PTR(err);

@@ -98,7 +103,7 @@ static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func
 	return root;
 }

-static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id)
+static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u32 function)
 {
 	struct rb_node *parent = NULL;
 	struct rb_root *root;

@@ -107,7 +112,7 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u
 	struct fw_page *tfp;
 	int i;

-	root = page_root_per_func_id(dev, func_id);
+	root = page_root_per_function(dev, function);
 	if (IS_ERR(root))
 		return PTR_ERR(root);

@@ -130,7 +135,7 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u

 	nfp->addr = addr;
 	nfp->page = page;
-	nfp->func_id = func_id;
+	nfp->function = function;
 	nfp->free_count = MLX5_NUM_4K_IN_PAGE;
 	for (i = 0; i < MLX5_NUM_4K_IN_PAGE; i++)
 		set_bit(i, &nfp->bitmask);

@@ -143,14 +148,14 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u
 }

 static struct fw_page *find_fw_page(struct mlx5_core_dev *dev, u64 addr,
-				    u32 func_id)
+				    u32 function)
 {
 	struct fw_page *result = NULL;
 	struct rb_root *root;
 	struct rb_node *tmp;
 	struct fw_page *tfp;

-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, function);
 	if (WARN_ON_ONCE(!root))
 		return NULL;

@@ -194,14 +199,14 @@ static int mlx5_cmd_query_pages(struct mlx5_core_dev *dev, u16 *func_id,
 	return err;
 }

-static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u16 func_id)
+static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function)
 {
 	struct fw_page *fp = NULL;
 	struct fw_page *iter;
 	unsigned n;

 	list_for_each_entry(iter, &dev->priv.free_list, list) {
-		if (iter->func_id != func_id)
+		if (iter->function != function)
 			continue;
 		fp = iter;
 	}

@@ -231,7 +236,7 @@ static void free_fwp(struct mlx5_core_dev *dev, struct fw_page *fwp,
 {
 	struct rb_root *root;

-	root = xa_load(&dev->priv.page_root_xa, fwp->func_id);
+	root = xa_load(&dev->priv.page_root_xa, fwp->function);
 	if (WARN_ON_ONCE(!root))
 		return;

@@ -244,12 +249,12 @@ static void free_fwp(struct mlx5_core_dev *dev, struct fw_page *fwp,
 	kfree(fwp);
 }

-static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 func_id)
+static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
 {
 	struct fw_page *fwp;
 	int n;

-	fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, func_id);
+	fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, function);
 	if (!fwp) {
 		mlx5_core_warn_rl(dev, "page not found\n");
 		return;

@@ -263,7 +268,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 func_id)
 	list_add(&fwp->list, &dev->priv.free_list);
 }

-static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id)
+static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
 {
 	struct device *device = mlx5_core_dma_dev(dev);
 	int nid = dev_to_node(device);

@@ -291,7 +296,7 @@ map:
 		goto map;
 	}

-	err = insert_page(dev, addr, page, func_id);
+	err = insert_page(dev, addr, page, function);
 	if (err) {
 		mlx5_core_err(dev, "failed to track allocated page\n");
 		dma_unmap_page(device, addr, PAGE_SIZE, DMA_BIDIRECTIONAL);

@@ -328,6 +333,7 @@ static void page_notify_fail(struct mlx5_core_dev *dev, u16 func_id,
 static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 		      int notify_fail, bool ec_function)
 {
+	u32 function = get_function(func_id, ec_function);
 	u32 out[MLX5_ST_SZ_DW(manage_pages_out)] = {0};
 	int inlen = MLX5_ST_SZ_BYTES(manage_pages_in);
 	u64 addr;

@@ -345,10 +351,10 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,

 	for (i = 0; i < npages; i++) {
 retry:
-		err = alloc_4k(dev, &addr, func_id);
+		err = alloc_4k(dev, &addr, function);
 		if (err) {
 			if (err == -ENOMEM)
-				err = alloc_system_page(dev, func_id);
+				err = alloc_system_page(dev, function);
 			if (err)
 				goto out_4k;

@@ -384,7 +390,7 @@ retry:

 out_4k:
 	for (i--; i >= 0; i--)
-		free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), func_id);
+		free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), function);
 out_free:
 	kvfree(in);
 	if (notify_fail)

@@ -392,14 +398,15 @@ out_free:
 	return err;
 }

-static void release_all_pages(struct mlx5_core_dev *dev, u32 func_id,
+static void release_all_pages(struct mlx5_core_dev *dev, u16 func_id,
 			      bool ec_function)
 {
+	u32 function = get_function(func_id, ec_function);
 	struct rb_root *root;
 	struct rb_node *p;
 	int npages = 0;

-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, function);
 	if (WARN_ON_ONCE(!root))
 		return;

@@ -446,6 +453,7 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
 	struct rb_root *root;
 	struct fw_page *fwp;
 	struct rb_node *p;
+	bool ec_function;
 	u32 func_id;
 	u32 npages;
 	u32 i = 0;

@@ -456,8 +464,9 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
 	/* No hard feelings, we want our pages back! */
 	npages = MLX5_GET(manage_pages_in, in, input_num_entries);
 	func_id = MLX5_GET(manage_pages_in, in, function_id);
+	ec_function = MLX5_GET(manage_pages_in, in, embedded_cpu_function);

-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, get_function(func_id, ec_function));
 	if (WARN_ON_ONCE(!root))
 		return -EEXIST;

@@ -473,9 +482,10 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
 	return 0;
 }

-static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
+static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 			 int *nclaimed, bool ec_function)
 {
+	u32 function = get_function(func_id, ec_function);
 	int outlen = MLX5_ST_SZ_BYTES(manage_pages_out);
 	u32 in[MLX5_ST_SZ_DW(manage_pages_in)] = {};
 	int num_claimed;

@@ -514,7 +524,7 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
 	}

 	for (i = 0; i < num_claimed; i++)
-		free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), func_id);
+		free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), function);

 	if (nclaimed)
 		*nclaimed = num_claimed;
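The pagealloc.c hunks above re-key the page_root_xa by a 32-bit composite of the 16-bit function id and the embedded-CPU flag, so ECPF and PF pages that share a func_id no longer land in the same tree. A userspace sketch of such a composite key (an illustrative helper, not the kernel's get_function(), which combines the same two fields with its own bitwise expression):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: fold a 16-bit function id and an "embedded CPU
 * function" flag into one 32-bit lookup key. The flag is placed above
 * the func_id bits, so the PF and ECPF namespaces cannot collide. */
static uint32_t make_page_key(uint16_t func_id, bool ec_function)
{
	return (uint32_t)func_id | ((uint32_t)ec_function << 16);
}
```

With this packing, func_id 5 on the embedded CPU (key 0x10005) is kept distinct from func_id 5 on the host PF (key 0x5).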
@@ -157,6 +157,7 @@ mlxsw_sp1_span_entry_cpu_deconfigure(struct mlxsw_sp_span_entry *span_entry)

 static const
 struct mlxsw_sp_span_entry_ops mlxsw_sp1_span_entry_ops_cpu = {
+	.is_static = true,
 	.can_handle = mlxsw_sp1_span_cpu_can_handle,
 	.parms_set = mlxsw_sp1_span_entry_cpu_parms,
 	.configure = mlxsw_sp1_span_entry_cpu_configure,

@@ -214,6 +215,7 @@ mlxsw_sp_span_entry_phys_deconfigure(struct mlxsw_sp_span_entry *span_entry)

 static const
 struct mlxsw_sp_span_entry_ops mlxsw_sp_span_entry_ops_phys = {
+	.is_static = true,
 	.can_handle = mlxsw_sp_port_dev_check,
 	.parms_set = mlxsw_sp_span_entry_phys_parms,
 	.configure = mlxsw_sp_span_entry_phys_configure,

@@ -721,6 +723,7 @@ mlxsw_sp2_span_entry_cpu_deconfigure(struct mlxsw_sp_span_entry *span_entry)

 static const
 struct mlxsw_sp_span_entry_ops mlxsw_sp2_span_entry_ops_cpu = {
+	.is_static = true,
 	.can_handle = mlxsw_sp2_span_cpu_can_handle,
 	.parms_set = mlxsw_sp2_span_entry_cpu_parms,
 	.configure = mlxsw_sp2_span_entry_cpu_configure,

@@ -1036,6 +1039,9 @@ static void mlxsw_sp_span_respin_work(struct work_struct *work)
 		if (!refcount_read(&curr->ref_count))
 			continue;

+		if (curr->ops->is_static)
+			continue;
+
 		err = curr->ops->parms_set(mlxsw_sp, curr->to_dev, &sparms);
 		if (err)
 			continue;
@@ -60,6 +60,7 @@ struct mlxsw_sp_span_entry {
 };

 struct mlxsw_sp_span_entry_ops {
+	bool is_static;
 	bool (*can_handle)(const struct net_device *to_dev);
 	int (*parms_set)(struct mlxsw_sp *mlxsw_sp,
 			 const struct net_device *to_dev,
@@ -129,7 +129,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
 			if (ret) {
 				dev_err(&pdev->dev,
 					"Failed to set tx_clk\n");
-				return ret;
+				goto err_remove_config_dt;
 			}
 		}
 	}

@@ -143,7 +143,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
 			if (ret) {
 				dev_err(&pdev->dev,
 					"Failed to set clk_ptp_ref\n");
-				return ret;
+				goto err_remove_config_dt;
 			}
 		}
 	}
@@ -375,6 +375,7 @@ static int ehl_pse0_common_data(struct pci_dev *pdev,
 				struct plat_stmmacenet_data *plat)
 {
 	plat->bus_id = 2;
+	plat->addr64 = 32;
 	return ehl_common_data(pdev, plat);
 }

@@ -406,6 +407,7 @@ static int ehl_pse1_common_data(struct pci_dev *pdev,
 				struct plat_stmmacenet_data *plat)
 {
 	plat->bus_id = 3;
+	plat->addr64 = 32;
 	return ehl_common_data(pdev, plat);
 }
@@ -992,7 +992,8 @@ static void __team_compute_features(struct team *team)
 	unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
 					IFF_XMIT_DST_RELEASE_PERM;

-	list_for_each_entry(port, &team->port_list, list) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(port, &team->port_list, list) {
 		vlan_features = netdev_increment_features(vlan_features,
 					port->dev->vlan_features,
 					TEAM_VLAN_FEATURES);

@@ -1006,6 +1007,7 @@ static void __team_compute_features(struct team *team)
 		if (port->dev->hard_header_len > max_hard_header_len)
 			max_hard_header_len = port->dev->hard_header_len;
 	}
+	rcu_read_unlock();

 	team->dev->vlan_features = vlan_features;
 	team->dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |

@@ -1020,9 +1022,7 @@ static void __team_compute_features(struct team *team)

 static void team_compute_features(struct team *team)
 {
-	mutex_lock(&team->lock);
 	__team_compute_features(team);
-	mutex_unlock(&team->lock);
 	netdev_change_features(team->dev);
 }
@@ -968,6 +968,12 @@ static const struct usb_device_id products[] = {
 			USB_CDC_SUBCLASS_ETHERNET,
 			USB_CDC_PROTO_NONE),
 	.driver_info = (unsigned long)&wwan_info,
 }, {
+	/* Cinterion PLS83/PLS63 modem by GEMALTO/THALES */
+	USB_DEVICE_AND_INTERFACE_INFO(0x1e2d, 0x0069, USB_CLASS_COMM,
+			USB_CDC_SUBCLASS_ETHERNET,
+			USB_CDC_PROTO_NONE),
+	.driver_info = (unsigned long)&wwan_info,
+}, {
 	USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET,
 			USB_CDC_PROTO_NONE),
@@ -1302,6 +1302,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x0b3c, 0xc00a, 6)},	/* Olivetti Olicard 160 */
 	{QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)},	/* Olivetti Olicard 500 */
 	{QMI_FIXED_INTF(0x1e2d, 0x0060, 4)},	/* Cinterion PLxx */
+	{QMI_QUIRK_SET_DTR(0x1e2d, 0x006f, 8)},	/* Cinterion PLS83/PLS63 */
 	{QMI_FIXED_INTF(0x1e2d, 0x0053, 4)},	/* Cinterion PHxx,PXxx */
 	{QMI_FIXED_INTF(0x1e2d, 0x0063, 10)},	/* Cinterion ALASxx (1 RmNet) */
 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 4)},	/* Cinterion PHxx,PXxx (2 RmNet) */
@@ -314,6 +314,7 @@ const struct iwl_cfg_trans_params iwl_ma_trans_cfg = {
 const char iwl_ax101_name[] = "Intel(R) Wi-Fi 6 AX101";
 const char iwl_ax200_name[] = "Intel(R) Wi-Fi 6 AX200 160MHz";
 const char iwl_ax201_name[] = "Intel(R) Wi-Fi 6 AX201 160MHz";
+const char iwl_ax203_name[] = "Intel(R) Wi-Fi 6 AX203";
 const char iwl_ax211_name[] = "Intel(R) Wi-Fi 6 AX211 160MHz";
 const char iwl_ax411_name[] = "Intel(R) Wi-Fi 6 AX411 160MHz";
 const char iwl_ma_name[] = "Intel(R) Wi-Fi 6";

@@ -340,6 +341,18 @@ const struct iwl_cfg iwl_qu_b0_hr1_b0 = {
 	.num_rbds = IWL_NUM_RBDS_22000_HE,
 };

+const struct iwl_cfg iwl_qu_b0_hr_b0 = {
+	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
+	IWL_DEVICE_22500,
+	/*
+	 * This device doesn't support receiving BlockAck with a large bitmap
+	 * so we need to restrict the size of transmitted aggregation to the
+	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
+	 */
+	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+	.num_rbds = IWL_NUM_RBDS_22000_HE,
+};
+
 const struct iwl_cfg iwl_ax201_cfg_qu_hr = {
 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
 	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,

@@ -366,6 +379,18 @@ const struct iwl_cfg iwl_qu_c0_hr1_b0 = {
 	.num_rbds = IWL_NUM_RBDS_22000_HE,
 };

+const struct iwl_cfg iwl_qu_c0_hr_b0 = {
+	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
+	IWL_DEVICE_22500,
+	/*
+	 * This device doesn't support receiving BlockAck with a large bitmap
+	 * so we need to restrict the size of transmitted aggregation to the
+	 * HT size; mac80211 would otherwise pick the HE max (256) by default.
+	 */
+	.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+	.num_rbds = IWL_NUM_RBDS_22000_HE,
+};
+
 const struct iwl_cfg iwl_ax201_cfg_qu_c0_hr_b0 = {
 	.name = "Intel(R) Wi-Fi 6 AX201 160MHz",
 	.fw_name_pre = IWL_QU_C_HR_B_FW_PRE,
@@ -80,19 +80,45 @@ static void *iwl_acpi_get_dsm_object(struct device *dev, int rev, int func,
 }

 /*
- * Evaluate a DSM with no arguments and a single u8 return value (inside a
- * buffer object), verify and return that value.
+ * Generic function to evaluate a DSM with no arguments
+ * and an integer return value,
+ * (as an integer object or inside a buffer object),
+ * verify and assign the value in the "value" parameter.
+ * return 0 in success and the appropriate errno otherwise.
  */
-int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func)
+static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func,
+				    u64 *value, size_t expected_size)
 {
 	union acpi_object *obj;
-	int ret;
+	int ret = 0;

 	obj = iwl_acpi_get_dsm_object(dev, rev, func, NULL);
-	if (IS_ERR(obj))
+	if (IS_ERR(obj)) {
+		IWL_DEBUG_DEV_RADIO(dev,
+				    "Failed to get DSM object. func= %d\n",
+				    func);
 		return -ENOENT;
+	}

-	if (obj->type != ACPI_TYPE_BUFFER) {
+	if (obj->type == ACPI_TYPE_INTEGER) {
+		*value = obj->integer.value;
+	} else if (obj->type == ACPI_TYPE_BUFFER) {
+		__le64 le_value = 0;
+
+		if (WARN_ON_ONCE(expected_size > sizeof(le_value)))
+			return -EINVAL;
+
+		/* if the buffer size doesn't match the expected size */
+		if (obj->buffer.length != expected_size)
+			IWL_DEBUG_DEV_RADIO(dev,
+					    "ACPI: DSM invalid buffer size, padding or truncating (%d)\n",
+					    obj->buffer.length);
+
+		/* assuming LE from Intel BIOS spec */
+		memcpy(&le_value, obj->buffer.pointer,
+		       min_t(size_t, expected_size, (size_t)obj->buffer.length));
+		*value = le64_to_cpu(le_value);
+	} else {
 		IWL_DEBUG_DEV_RADIO(dev,
 				    "ACPI: DSM method did not return a valid object, type=%d\n",
 				    obj->type);

@@ -100,15 +126,6 @@ int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func)
 		goto out;
 	}

-	if (obj->buffer.length != sizeof(u8)) {
-		IWL_DEBUG_DEV_RADIO(dev,
-				    "ACPI: DSM method returned invalid buffer, length=%d\n",
-				    obj->buffer.length);
-		ret = -EINVAL;
-		goto out;
-	}
-
-	ret = obj->buffer.pointer[0];
 	IWL_DEBUG_DEV_RADIO(dev,
 			    "ACPI: DSM method evaluated: func=%d, ret=%d\n",
 			    func, ret);

@@ -116,6 +133,24 @@ out:
 	ACPI_FREE(obj);
 	return ret;
 }
+
+/*
+ * Evaluate a DSM with no arguments and a u8 return value,
+ */
+int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func, u8 *value)
+{
+	int ret;
+	u64 val;
+
+	ret = iwl_acpi_get_dsm_integer(dev, rev, func, &val, sizeof(u8));
+
+	if (ret < 0)
+		return ret;
+
+	/* cast val (u64) to be u8 */
+	*value = (u8)val;
+	return 0;
+}
 IWL_EXPORT_SYMBOL(iwl_acpi_get_dsm_u8);

 union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev,
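The buffer path of iwl_acpi_get_dsm_integer() above copies min(expected, actual) bytes of a little-endian BIOS buffer into a zeroed u64, padding short buffers and truncating long ones instead of rejecting them. A portable userspace sketch of that decoding step (hypothetical helper name; the kernel code uses memcpy plus le64_to_cpu):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative: decode a little-endian byte buffer of arbitrary length
 * into a u64, copying at most min(expected_size, buf_len) bytes; the
 * remaining high bytes stay zero (the "padding" case). */
static uint64_t dsm_buffer_to_u64(const uint8_t *buf, size_t buf_len,
				  size_t expected_size)
{
	uint64_t value = 0;
	size_t n = buf_len < expected_size ? buf_len : expected_size;
	size_t i;

	/* assemble bytes LSB-first, i.e. little-endian */
	for (i = 0; i < n; i++)
		value |= (uint64_t)buf[i] << (8 * i);
	return value;
}
```

A one-byte buffer {0x2a} read with expected_size 1 yields 42; a two-byte buffer read with expected_size 1 is truncated to its first byte.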
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
  * Copyright (C) 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
  */
 #ifndef __iwl_fw_acpi__
 #define __iwl_fw_acpi__

@@ -99,7 +99,7 @@ struct iwl_fw_runtime;

 void *iwl_acpi_get_object(struct device *dev, acpi_string method);

-int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func);
+int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func, u8 *value);

 union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev,
 					 union acpi_object *data,

@@ -159,7 +159,8 @@ static inline void *iwl_acpi_get_dsm_object(struct device *dev, int rev,
 	return ERR_PTR(-ENOENT);
 }

-static inline int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func)
+static inline
+int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func, u8 *value)
 {
 	return -ENOENT;
 }
@@ -224,40 +224,46 @@ static int iwl_pnvm_parse(struct iwl_trans *trans, const u8 *data,
 int iwl_pnvm_load(struct iwl_trans *trans,
 		  struct iwl_notif_wait_data *notif_wait)
 {
-	const struct firmware *pnvm;
 	struct iwl_notification_wait pnvm_wait;
 	static const u16 ntf_cmds[] = { WIDE_ID(REGULATORY_AND_NVM_GROUP,
 						PNVM_INIT_COMPLETE_NTFY) };
-	char pnvm_name[64];
-	int ret;

 	/* if the SKU_ID is empty, there's nothing to do */
 	if (!trans->sku_id[0] && !trans->sku_id[1] && !trans->sku_id[2])
 		return 0;

-	/* if we already have it, nothing to do either */
-	if (trans->pnvm_loaded)
-		return 0;
+	/* load from disk only if we haven't done it (or tried) before */
+	if (!trans->pnvm_loaded) {
+		const struct firmware *pnvm;
+		char pnvm_name[64];
+		int ret;

-	/*
-	 * The prefix unfortunately includes a hyphen at the end, so
-	 * don't add the dot here...
-	 */
-	snprintf(pnvm_name, sizeof(pnvm_name), "%spnvm",
-		 trans->cfg->fw_name_pre);
+		/*
+		 * The prefix unfortunately includes a hyphen at the end, so
+		 * don't add the dot here...
+		 */
+		snprintf(pnvm_name, sizeof(pnvm_name), "%spnvm",
+			 trans->cfg->fw_name_pre);

-	/* ...but replace the hyphen with the dot here. */
-	if (strlen(trans->cfg->fw_name_pre) < sizeof(pnvm_name))
-		pnvm_name[strlen(trans->cfg->fw_name_pre) - 1] = '.';
+		/* ...but replace the hyphen with the dot here. */
+		if (strlen(trans->cfg->fw_name_pre) < sizeof(pnvm_name))
+			pnvm_name[strlen(trans->cfg->fw_name_pre) - 1] = '.';

-	ret = firmware_request_nowarn(&pnvm, pnvm_name, trans->dev);
-	if (ret) {
-		IWL_DEBUG_FW(trans, "PNVM file %s not found %d\n",
-			     pnvm_name, ret);
-	} else {
-		iwl_pnvm_parse(trans, pnvm->data, pnvm->size);
+		ret = firmware_request_nowarn(&pnvm, pnvm_name, trans->dev);
+		if (ret) {
+			IWL_DEBUG_FW(trans, "PNVM file %s not found %d\n",
+				     pnvm_name, ret);
+			/*
+			 * Pretend we've loaded it - at least we've tried and
+			 * couldn't load it at all, so there's no point in
+			 * trying again over and over.
+			 */
+			trans->pnvm_loaded = true;
+		} else {
+			iwl_pnvm_parse(trans, pnvm->data, pnvm->size);

-		release_firmware(pnvm);
+			release_firmware(pnvm);
+		}
 	}

 	iwl_init_notification_wait(notif_wait, &pnvm_wait,
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2005-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
  */
 #ifndef __IWL_CONFIG_H__

@@ -445,7 +445,7 @@ struct iwl_cfg {
 #define IWL_CFG_CORES_BT_GNSS		0x5

 #define IWL_SUBDEVICE_RF_ID(subdevice)	((u16)((subdevice) & 0x00F0) >> 4)
-#define IWL_SUBDEVICE_NO_160(subdevice)	((u16)((subdevice) & 0x0100) >> 9)
+#define IWL_SUBDEVICE_NO_160(subdevice)	((u16)((subdevice) & 0x0200) >> 9)
 #define IWL_SUBDEVICE_CORES(subdevice)	((u16)((subdevice) & 0x1C00) >> 10)

 struct iwl_dev_info {

@@ -491,6 +491,7 @@ extern const char iwl9260_killer_1550_name[];
 extern const char iwl9560_killer_1550i_name[];
 extern const char iwl9560_killer_1550s_name[];
 extern const char iwl_ax200_name[];
+extern const char iwl_ax203_name[];
 extern const char iwl_ax201_name[];
 extern const char iwl_ax101_name[];
 extern const char iwl_ax200_killer_1650w_name[];

@@ -574,6 +575,8 @@ extern const struct iwl_cfg iwl9560_2ac_cfg_soc;
 extern const struct iwl_cfg iwl_qu_b0_hr1_b0;
 extern const struct iwl_cfg iwl_qu_c0_hr1_b0;
 extern const struct iwl_cfg iwl_quz_a0_hr1_b0;
+extern const struct iwl_cfg iwl_qu_b0_hr_b0;
+extern const struct iwl_cfg iwl_qu_c0_hr_b0;
 extern const struct iwl_cfg iwl_ax200_cfg_cc;
 extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
 extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
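The IWL_SUBDEVICE_NO_160 hunk above is a one-bit mask fix: the macro shifts right by 9, which only extracts something useful if the mask selects bit 9 (0x0200). With the old 0x0100 mask the expression was always 0, so every subdevice matched as 160 MHz capable. Both variants, side by side (illustrative function forms of the two macro versions):

```c
#include <assert.h>
#include <stdint.h>

/* old macro body: mask selects bit 8, then shifts by 9 -> always 0 */
static uint16_t no_160_old(uint16_t subdevice)
{
	return (uint16_t)(subdevice & 0x0100) >> 9;
}

/* fixed macro body: mask selects bit 9, shift by 9 -> the actual flag */
static uint16_t no_160_fixed(uint16_t subdevice)
{
	return (uint16_t)(subdevice & 0x0200) >> 9;
}
```

Even for a subdevice id with every bit set, the old extractor returns 0, which is why the wrong devices were being matched.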
@@ -180,13 +180,6 @@ static int iwl_dbg_tlv_alloc_region(struct iwl_trans *trans,
 	if (le32_to_cpu(tlv->length) < sizeof(*reg))
 		return -EINVAL;

-	/* For safe using a string from FW make sure we have a
-	 * null terminator
-	 */
-	reg->name[IWL_FW_INI_MAX_NAME - 1] = 0;
-
-	IWL_DEBUG_FW(trans, "WRT: parsing region: %s\n", reg->name);
-
 	if (id >= IWL_FW_INI_MAX_REGION_ID) {
 		IWL_ERR(trans, "WRT: Invalid region id %u\n", id);
 		return -EINVAL;
@@ -150,16 +150,17 @@ u32 iwl_read_prph(struct iwl_trans *trans, u32 ofs)
 }
 IWL_EXPORT_SYMBOL(iwl_read_prph);

-void iwl_write_prph(struct iwl_trans *trans, u32 ofs, u32 val)
+void iwl_write_prph_delay(struct iwl_trans *trans, u32 ofs, u32 val, u32 delay_ms)
 {
 	unsigned long flags;

 	if (iwl_trans_grab_nic_access(trans, &flags)) {
+		mdelay(delay_ms);
 		iwl_write_prph_no_grab(trans, ofs, val);
 		iwl_trans_release_nic_access(trans, &flags);
 	}
 }
-IWL_EXPORT_SYMBOL(iwl_write_prph);
+IWL_EXPORT_SYMBOL(iwl_write_prph_delay);

 int iwl_poll_prph_bit(struct iwl_trans *trans, u32 addr,
 		      u32 bits, u32 mask, int timeout)

@@ -219,8 +220,8 @@ IWL_EXPORT_SYMBOL(iwl_clear_bits_prph);
 void iwl_force_nmi(struct iwl_trans *trans)
 {
 	if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_9000)
-		iwl_write_prph(trans, DEVICE_SET_NMI_REG,
-			       DEVICE_SET_NMI_VAL_DRV);
+		iwl_write_prph_delay(trans, DEVICE_SET_NMI_REG,
+				     DEVICE_SET_NMI_VAL_DRV, 1);
 	else if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
 		iwl_write_umac_prph(trans, UREG_NIC_SET_NMI_DRIVER,
 				    UREG_NIC_SET_NMI_DRIVER_NMI_FROM_DRIVER);
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2018-2019 Intel Corporation
+ * Copyright (C) 2018-2020 Intel Corporation
  */
 #ifndef __iwl_io_h__
 #define __iwl_io_h__

@@ -37,7 +37,13 @@ u32 iwl_read_prph_no_grab(struct iwl_trans *trans, u32 ofs);
 u32 iwl_read_prph(struct iwl_trans *trans, u32 ofs);
 void iwl_write_prph_no_grab(struct iwl_trans *trans, u32 ofs, u32 val);
 void iwl_write_prph64_no_grab(struct iwl_trans *trans, u64 ofs, u64 val);
-void iwl_write_prph(struct iwl_trans *trans, u32 ofs, u32 val);
+void iwl_write_prph_delay(struct iwl_trans *trans, u32 ofs,
+			  u32 val, u32 delay_ms);
+static inline void iwl_write_prph(struct iwl_trans *trans, u32 ofs, u32 val)
+{
+	iwl_write_prph_delay(trans, ofs, val, 0);
+}
+
 int iwl_poll_prph_bit(struct iwl_trans *trans, u32 addr,
 		      u32 bits, u32 mask, int timeout);
 void iwl_set_bits_prph(struct iwl_trans *trans, u32 ofs, u32 mask);
@@ -301,6 +301,12 @@
 #define RADIO_RSP_ADDR_POS		(6)
 #define RADIO_RSP_RD_CMD		(3)

+/* LTR control (Qu only) */
+#define HPM_MAC_LTR_CSR			0xa0348c
+#define HPM_MAC_LRT_ENABLE_ALL		0xf
+/* also uses CSR_LTR_* for values */
+#define HPM_UMAC_LTR			0xa03480
+
 /* FW monitor */
 #define MON_BUFF_SAMPLE_CTL		(0xa03c00)
 #define MON_BUFF_BASE_ADDR		(0xa03c1c)
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
  */

@@ -2032,8 +2032,6 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)

 	mutex_lock(&mvm->mutex);

-	clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
-
 	/* get the BSS vif pointer again */
 	vif = iwl_mvm_get_bss_vif(mvm);
 	if (IS_ERR_OR_NULL(vif))

@@ -2148,6 +2146,8 @@ out_iterate:
 			iwl_mvm_d3_disconnect_iter, keep ? vif : NULL);

 out:
+	clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
+
 	/* no need to reset the device in unified images, if successful */
 	if (unified_image && !ret) {
 		/* nothing else to do if we already sent D0I3_END_CMD */
@@ -459,7 +459,10 @@ static ssize_t iwl_dbgfs_os_device_timediff_read(struct file *file,
 	const size_t bufsz = sizeof(buf);
 	int pos = 0;

+	mutex_lock(&mvm->mutex);
 	iwl_mvm_get_sync_time(mvm, &curr_gp2, &curr_os);
+	mutex_unlock(&mvm->mutex);
+
 	do_div(curr_os, NSEC_PER_USEC);
 	diff = curr_os - curr_gp2;
 	pos += scnprintf(buf + pos, bufsz - pos, "diff=%lld\n", diff);
@@ -1090,20 +1090,22 @@ static void iwl_mvm_tas_init(struct iwl_mvm *mvm)

 static u8 iwl_mvm_eval_dsm_indonesia_5g2(struct iwl_mvm *mvm)
 {
+	u8 value;
+
 	int ret = iwl_acpi_get_dsm_u8((&mvm->fwrt)->dev, 0,
-				      DSM_FUNC_ENABLE_INDONESIA_5G2);
+				      DSM_FUNC_ENABLE_INDONESIA_5G2, &value);

 	if (ret < 0)
 		IWL_DEBUG_RADIO(mvm,
 				"Failed to evaluate DSM function ENABLE_INDONESIA_5G2, ret=%d\n",
 				ret);

-	else if (ret >= DSM_VALUE_INDONESIA_MAX)
+	else if (value >= DSM_VALUE_INDONESIA_MAX)
 		IWL_DEBUG_RADIO(mvm,
-				"DSM function ENABLE_INDONESIA_5G2 return invalid value, ret=%d\n",
-				ret);
+				"DSM function ENABLE_INDONESIA_5G2 return invalid value, value=%d\n",
+				value);

-	else if (ret == DSM_VALUE_INDONESIA_ENABLE) {
+	else if (value == DSM_VALUE_INDONESIA_ENABLE) {
 		IWL_DEBUG_RADIO(mvm,
 				"Evaluated DSM function ENABLE_INDONESIA_5G2: Enabling 5g2\n");
 		return DSM_VALUE_INDONESIA_ENABLE;

@@ -1114,25 +1116,26 @@ static u8 iwl_mvm_eval_dsm_indonesia_5g2(struct iwl_mvm *mvm)

 static u8 iwl_mvm_eval_dsm_disable_srd(struct iwl_mvm *mvm)
 {
+	u8 value;
 	int ret = iwl_acpi_get_dsm_u8((&mvm->fwrt)->dev, 0,
-				      DSM_FUNC_DISABLE_SRD);
+				      DSM_FUNC_DISABLE_SRD, &value);

 	if (ret < 0)
 		IWL_DEBUG_RADIO(mvm,
 				"Failed to evaluate DSM function DISABLE_SRD, ret=%d\n",
 				ret);

-	else if (ret >= DSM_VALUE_SRD_MAX)
+	else if (value >= DSM_VALUE_SRD_MAX)
 		IWL_DEBUG_RADIO(mvm,
-				"DSM function DISABLE_SRD return invalid value, ret=%d\n",
-				ret);
+				"DSM function DISABLE_SRD return invalid value, value=%d\n",
+				value);

-	else if (ret == DSM_VALUE_SRD_PASSIVE) {
+	else if (value == DSM_VALUE_SRD_PASSIVE) {
 		IWL_DEBUG_RADIO(mvm,
 				"Evaluated DSM function DISABLE_SRD: setting SRD to passive\n");
 		return DSM_VALUE_SRD_PASSIVE;

-	} else if (ret == DSM_VALUE_SRD_DISABLE) {
+	} else if (value == DSM_VALUE_SRD_DISABLE) {
 		IWL_DEBUG_RADIO(mvm,
 				"Evaluated DSM function DISABLE_SRD: disabling SRD\n");
 		return DSM_VALUE_SRD_DISABLE;
@@ -4194,6 +4194,9 @@ static void __iwl_mvm_unassign_vif_chanctx(struct iwl_mvm *mvm,
 	iwl_mvm_binding_remove_vif(mvm, vif);

 out:
+	if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD) &&
+	    switching_chanctx)
+		return;
 	mvmvif->phy_ctxt = NULL;
 	iwl_mvm_power_update_mac(mvm);
 }
@@ -791,6 +791,10 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
 	if (!mvm->scan_cmd)
 		goto out_free;

+	/* invalidate ids to prevent accidental removal of sta_id 0 */
+	mvm->aux_sta.sta_id = IWL_MVM_INVALID_STA;
+	mvm->snif_sta.sta_id = IWL_MVM_INVALID_STA;
+
 	/* Set EBS as successful as long as not stated otherwise by the FW. */
 	mvm->last_ebs_successful = true;

@@ -1205,6 +1209,7 @@ static void iwl_mvm_reprobe_wk(struct work_struct *wk)
 	reprobe = container_of(wk, struct iwl_mvm_reprobe, work);
 	if (device_reprobe(reprobe->dev))
 		dev_err(reprobe->dev, "reprobe failed!\n");
+	put_device(reprobe->dev);
 	kfree(reprobe);
 	module_put(THIS_MODULE);
 }

@@ -1255,7 +1260,7 @@ void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error)
 			module_put(THIS_MODULE);
 			return;
 		}
-		reprobe->dev = mvm->trans->dev;
+		reprobe->dev = get_device(mvm->trans->dev);
 		INIT_WORK(&reprobe->work, iwl_mvm_reprobe_wk);
 		schedule_work(&reprobe->work);
 	} else if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
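The two `ops.c` hunks above pair a `get_device()` at the point where the reprobe work is scheduled with a `put_device()` when the work has run, so the device cannot go away while the work is pending. A minimal userspace sketch of that pattern, with purely illustrative names (none of this is kernel API):

```c
#include <assert.h>

/* Toy stand-in for a refcounted device. */
struct fake_device {
	int refcnt;
};

static struct fake_device *dev_get(struct fake_device *d)
{
	d->refcnt++;	/* analogous to get_device(): pin the object */
	return d;
}

static void dev_put(struct fake_device *d)
{
	d->refcnt--;	/* analogous to put_device(): release the pin */
}

/* The deferred work drops the reference taken at schedule time,
 * mirroring the put_device() added to iwl_mvm_reprobe_wk(). */
static void reprobe_work(struct fake_device *d)
{
	/* ... reprobe the device ... */
	dev_put(d);
}
```

The invariant is simple: every scheduled work item owns exactly one reference, taken before the handoff and dropped by the handler, regardless of which runs first.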
@@ -2057,6 +2057,9 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)

 	lockdep_assert_held(&mvm->mutex);

+	if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA))
+		return -EINVAL;
+
 	iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
 	ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id);
 	if (ret)

@@ -2071,6 +2074,9 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)

 	lockdep_assert_held(&mvm->mutex);

+	if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA))
+		return -EINVAL;
+
 	iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
 	ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
 	if (ret)
@@ -773,6 +773,7 @@ iwl_mvm_tx_tso_segment(struct sk_buff *skb, unsigned int num_subframes,

 	next = skb_gso_segment(skb, netdev_flags);
 	skb_shinfo(skb)->gso_size = mss;
+	skb_shinfo(skb)->gso_type = ipv4 ? SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
 	if (WARN_ON_ONCE(IS_ERR(next)))
 		return -EINVAL;
 	else if (next)

@@ -795,6 +796,8 @@ iwl_mvm_tx_tso_segment(struct sk_buff *skb, unsigned int num_subframes,

 		if (tcp_payload_len > mss) {
 			skb_shinfo(tmp)->gso_size = mss;
+			skb_shinfo(tmp)->gso_type = ipv4 ? SKB_GSO_TCPV4 :
+							   SKB_GSO_TCPV6;
 		} else {
 			if (qos) {
 				u8 *qc;
@@ -75,6 +75,15 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
 			       const struct fw_img *fw)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+	u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
+		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
+		      u32_encode_bits(250,
+				      CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
+		      CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
+		      u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+				      CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
+		      u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
 	struct iwl_context_info_gen3 *ctxt_info_gen3;
 	struct iwl_prph_scratch *prph_scratch;
 	struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl;

@@ -189,8 +198,10 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
 	/* Allocate IML */
 	iml_img = dma_alloc_coherent(trans->dev, trans->iml_len,
 				     &trans_pcie->iml_dma_addr, GFP_KERNEL);
-	if (!iml_img)
-		return -ENOMEM;
+	if (!iml_img) {
+		ret = -ENOMEM;
+		goto err_free_ctxt_info;
+	}

 	memcpy(iml_img, trans->iml, trans->iml_len);

@@ -206,23 +217,19 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
 	iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL,
 		    CSR_AUTO_FUNC_BOOT_ENA);

-	if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) {
-		/*
-		 * The firmware initializes this again later (to a smaller
-		 * value), but for the boot process initialize the LTR to
-		 * ~250 usec.
-		 */
-		u32 val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
-			  u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
-					  CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
-			  u32_encode_bits(250,
-					  CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
-			  CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
-			  u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
-					  CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
-			  u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
-
-		iwl_write32(trans, CSR_LTR_LONG_VAL_AD, val);
+	/*
+	 * To workaround hardware latency issues during the boot process,
+	 * initialize the LTR to ~250 usec (see ltr_val above).
+	 * The firmware initializes this again later (to a smaller value).
+	 */
+	if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
+	     trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
+	    !trans->trans_cfg->integrated) {
+		iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
+	} else if (trans->trans_cfg->integrated &&
+		   trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
+		iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
+		iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
 	}

 	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)

@@ -232,6 +239,11 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,

 	return 0;

+err_free_ctxt_info:
+	dma_free_coherent(trans->dev, sizeof(*trans_pcie->ctxt_info_gen3),
+			  trans_pcie->ctxt_info_gen3,
+			  trans_pcie->ctxt_info_dma_addr);
+	trans_pcie->ctxt_info_gen3 = NULL;
 err_free_prph_info:
 	dma_free_coherent(trans->dev,
 			  sizeof(*prph_info),

@@ -294,6 +306,9 @@ int iwl_trans_pcie_ctx_info_gen3_set_pnvm(struct iwl_trans *trans,
 		return ret;
 	}

+	if (WARN_ON(prph_sc_ctrl->pnvm_cfg.pnvm_size))
+		return -EBUSY;
+
 	prph_sc_ctrl->pnvm_cfg.pnvm_base_addr =
 		cpu_to_le64(trans_pcie->pnvm_dram.physical);
 	prph_sc_ctrl->pnvm_cfg.pnvm_size =
@@ -910,6 +910,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
 		      IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_qu_b0_hr1_b0, iwl_ax101_name),
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
+		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY,
+		      iwl_qu_b0_hr_b0, iwl_ax203_name),

 	/* Qu C step */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,

@@ -917,6 +922,11 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
 		      IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
 		      IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_qu_c0_hr1_b0, iwl_ax101_name),
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
+		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY,
+		      iwl_qu_c0_hr_b0, iwl_ax203_name),

 	/* QuZ */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
@@ -2107,7 +2107,8 @@ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,

 	while (offs < dwords) {
 		/* limit the time we spin here under lock to 1/2s */
-		ktime_t timeout = ktime_add_us(ktime_get(), 500 * USEC_PER_MSEC);
+		unsigned long end = jiffies + HZ / 2;
+		bool resched = false;

 		if (iwl_trans_grab_nic_access(trans, &flags)) {
 			iwl_write32(trans, HBUS_TARG_MEM_RADDR,

@@ -2118,14 +2119,15 @@ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
 						HBUS_TARG_MEM_RDAT);
 				offs++;

-				/* calling ktime_get is expensive so
-				 * do it once in 128 reads
-				 */
-				if (offs % 128 == 0 && ktime_after(ktime_get(),
-								   timeout))
+				if (time_after(jiffies, end)) {
+					resched = true;
 					break;
+				}
 			}
 			iwl_trans_release_nic_access(trans, &flags);
+
+			if (resched)
+				cond_resched();
 		} else {
 			return -EBUSY;
 		}
@@ -201,6 +201,11 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	struct iwl_txq *txq = trans->txqs.txq[txq_id];

+	if (!txq) {
+		IWL_ERR(trans, "Trying to free a queue that wasn't allocated?\n");
+		return;
+	}
+
 	spin_lock_bh(&txq->lock);
 	while (txq->write_ptr != txq->read_ptr) {
 		IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
@@ -142,26 +142,25 @@ void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
 	 * idx is bounded by n_window
 	 */
 	int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
+	struct sk_buff *skb;

 	lockdep_assert_held(&txq->lock);

+	if (!txq->entries)
+		return;
+
 	iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta,
 			       iwl_txq_get_tfd(trans, txq, idx));

 	/* free SKB */
-	if (txq->entries) {
-		struct sk_buff *skb;
-
-		skb = txq->entries[idx].skb;
+	skb = txq->entries[idx].skb;

-		/* Can be called from irqs-disabled context
-		 * If skb is not NULL, it means that the whole queue is being
-		 * freed and that the queue is not empty - free the skb
-		 */
-		if (skb) {
-			iwl_op_mode_free_skb(trans->op_mode, skb);
-			txq->entries[idx].skb = NULL;
-		}
+	/* Can be called from irqs-disabled context
+	 * If skb is not NULL, it means that the whole queue is being
+	 * freed and that the queue is not empty - free the skb
+	 */
+	if (skb) {
+		iwl_op_mode_free_skb(trans->op_mode, skb);
+		txq->entries[idx].skb = NULL;
 	}
 }

@@ -841,10 +840,8 @@ void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
 			int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
 			struct sk_buff *skb = txq->entries[idx].skb;

-			if (WARN_ON_ONCE(!skb))
-				continue;
-
-			iwl_txq_free_tso_page(trans, skb);
+			if (!WARN_ON_ONCE(!skb))
+				iwl_txq_free_tso_page(trans, skb);
 		}
 		iwl_txq_gen2_free_tfd(trans, txq);
 		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);

@@ -1494,28 +1491,28 @@ void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
 	 */
 	int rd_ptr = txq->read_ptr;
 	int idx = iwl_txq_get_cmd_index(txq, rd_ptr);
+	struct sk_buff *skb;

 	lockdep_assert_held(&txq->lock);

+	if (!txq->entries)
+		return;
+
 	/* We have only q->n_window txq->entries, but we use
 	 * TFD_QUEUE_SIZE_MAX tfds
 	 */
 	iwl_txq_gen1_tfd_unmap(trans, &txq->entries[idx].meta, txq, rd_ptr);

 	/* free SKB */
-	if (txq->entries) {
-		struct sk_buff *skb;
-
-		skb = txq->entries[idx].skb;
+	skb = txq->entries[idx].skb;

-		/* Can be called from irqs-disabled context
-		 * If skb is not NULL, it means that the whole queue is being
-		 * freed and that the queue is not empty - free the skb
-		 */
-		if (skb) {
-			iwl_op_mode_free_skb(trans->op_mode, skb);
-			txq->entries[idx].skb = NULL;
-		}
+	/* Can be called from irqs-disabled context
+	 * If skb is not NULL, it means that the whole queue is being
+	 * freed and that the queue is not empty - free the skb
+	 */
+	if (skb) {
+		iwl_op_mode_free_skb(trans->op_mode, skb);
+		txq->entries[idx].skb = NULL;
 	}
 }
@@ -231,7 +231,7 @@ mt7615_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
 			int cmd, int *seq)
 {
 	struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
-	enum mt76_txq_id qid;
+	enum mt76_mcuq_id qid;

 	mt7615_mcu_fill_msg(dev, skb, cmd, seq);
 	if (test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state))
@@ -83,7 +83,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
 {
 	struct mt76_queue *q = &dev->q_rx[qid];
 	struct mt76_sdio *sdio = &dev->sdio;
-	int len = 0, err, i, order;
+	int len = 0, err, i;
 	struct page *page;
 	u8 *buf;

@@ -96,8 +96,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
 	if (len > sdio->func->cur_blksize)
 		len = roundup(len, sdio->func->cur_blksize);

-	order = get_order(len);
-	page = __dev_alloc_pages(GFP_KERNEL, order);
+	page = __dev_alloc_pages(GFP_KERNEL, get_order(len));
 	if (!page)
 		return -ENOMEM;

@@ -106,7 +105,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
 	err = sdio_readsb(sdio->func, buf, MCR_WRDR(qid), len);
 	if (err < 0) {
 		dev_err(dev->dev, "sdio read data failed:%d\n", err);
-		__free_pages(page, order);
+		put_page(page);
 		return err;
 	}

@@ -123,7 +122,7 @@ static int mt7663s_rx_run_queue(struct mt76_dev *dev, enum mt76_rxq_id qid,
 		if (q->queued + i + 1 == q->ndesc)
 			break;
 	}
-	__free_pages(page, order);
+	put_page(page);

 	spin_lock_bh(&q->lock);
 	q->head = (q->head + i) % q->ndesc;
@@ -256,7 +256,7 @@ mt7915_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
 	struct mt7915_dev *dev = container_of(mdev, struct mt7915_dev, mt76);
 	struct mt7915_mcu_txd *mcu_txd;
 	u8 seq, pkt_fmt, qidx;
-	enum mt76_txq_id txq;
+	enum mt76_mcuq_id qid;
 	__le32 *txd;
 	u32 val;

@@ -268,18 +268,18 @@ mt7915_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
 	seq = ++dev->mt76.mcu.msg_seq & 0xf;

 	if (cmd == -MCU_CMD_FW_SCATTER) {
-		txq = MT_MCUQ_FWDL;
+		qid = MT_MCUQ_FWDL;
 		goto exit;
 	}

 	mcu_txd = (struct mt7915_mcu_txd *)skb_push(skb, sizeof(*mcu_txd));

 	if (test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state)) {
-		txq = MT_MCUQ_WA;
+		qid = MT_MCUQ_WA;
 		qidx = MT_TX_MCU_PORT_RX_Q0;
 		pkt_fmt = MT_TX_TYPE_CMD;
 	} else {
-		txq = MT_MCUQ_WM;
+		qid = MT_MCUQ_WM;
 		qidx = MT_TX_MCU_PORT_RX_Q0;
 		pkt_fmt = MT_TX_TYPE_CMD;
 	}

@@ -326,7 +326,7 @@ exit:
 	if (wait_seq)
 		*wait_seq = seq;

-	return mt76_tx_queue_skb_raw(dev, mdev->q_mcu[txq], skb, 0);
+	return mt76_tx_queue_skb_raw(dev, mdev->q_mcu[qid], skb, 0);
 }

 static void
@@ -152,8 +152,7 @@ mt7601u_rx_process_entry(struct mt7601u_dev *dev, struct mt7601u_dma_buf_rx *e)

 	if (new_p) {
 		/* we have one extra ref from the allocator */
-		__free_pages(e->p, MT_RX_ORDER);
-
+		put_page(e->p);
 		e->p = new_p;
 	}
 }

@@ -310,7 +309,6 @@ static int mt7601u_dma_submit_tx(struct mt7601u_dev *dev,
 	}

 	e = &q->e[q->end];
-	e->skb = skb;
 	usb_fill_bulk_urb(e->urb, usb_dev, snd_pipe, skb->data, skb->len,
 			  mt7601u_complete_tx, q);
 	ret = usb_submit_urb(e->urb, GFP_ATOMIC);

@@ -328,6 +326,7 @@ static int mt7601u_dma_submit_tx(struct mt7601u_dev *dev,

 	q->end = (q->end + 1) % q->entries;
 	q->used++;
+	e->skb = skb;

 	if (q->used >= q->entries)
 		ieee80211_stop_queue(dev->hw, skb_get_queue_mapping(skb));
@@ -20,9 +20,9 @@ enum country_code_type_t {
 	COUNTRY_CODE_MAX
 };

-int rtw_regd_init(struct adapter *padapter,
-		  void (*reg_notifier)(struct wiphy *wiphy,
-				       struct regulatory_request *request));
+void rtw_regd_init(struct wiphy *wiphy,
+		   void (*reg_notifier)(struct wiphy *wiphy,
+					struct regulatory_request *request));
 void rtw_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request);


@@ -3211,9 +3211,6 @@ void rtw_cfg80211_init_wiphy(struct adapter *padapter)
 		rtw_cfg80211_init_ht_capab(&bands->ht_cap, NL80211_BAND_2GHZ, rf_type);
 	}

-	/* init regulary domain */
-	rtw_regd_init(padapter, rtw_reg_notifier);
-
 	/* copy mac_addr to wiphy */
 	memcpy(wiphy->perm_addr, padapter->eeprompriv.mac_addr, ETH_ALEN);

@@ -3328,6 +3325,9 @@ int rtw_wdev_alloc(struct adapter *padapter, struct device *dev)
 	*((struct adapter **)wiphy_priv(wiphy)) = padapter;
 	rtw_cfg80211_preinit_wiphy(padapter, wiphy);

+	/* init regulary domain */
+	rtw_regd_init(wiphy, rtw_reg_notifier);
+
 	ret = wiphy_register(wiphy);
 	if (ret < 0) {
 		DBG_8192C("Couldn't register wiphy device\n");

@@ -139,15 +139,11 @@ static void _rtw_regd_init_wiphy(struct rtw_regulatory *reg,
 	_rtw_reg_apply_flags(wiphy);
 }

-int rtw_regd_init(struct adapter *padapter,
-		  void (*reg_notifier)(struct wiphy *wiphy,
-				       struct regulatory_request *request))
+void rtw_regd_init(struct wiphy *wiphy,
+		   void (*reg_notifier)(struct wiphy *wiphy,
+					struct regulatory_request *request))
 {
-	struct wiphy *wiphy = padapter->rtw_wdev->wiphy;
-
 	_rtw_regd_init_wiphy(NULL, wiphy, reg_notifier);
-
-	return 0;
 }

 void rtw_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
@@ -92,6 +92,7 @@ struct lapb_cb {
 	unsigned short		n2, n2count;
 	unsigned short		t1, t2;
 	struct timer_list	t1timer, t2timer;
+	bool			t1timer_stop, t2timer_stop;

 	/* Internal control information */
 	struct sk_buff_head	write_queue;

@@ -103,6 +104,7 @@ struct lapb_cb {
 	struct lapb_frame	frmr_data;
 	unsigned char		frmr_type;

+	spinlock_t		lock;
 	refcount_t		refcnt;
 };
@@ -721,6 +721,8 @@ void *nft_set_elem_init(const struct nft_set *set,
 			const struct nft_set_ext_tmpl *tmpl,
 			const u32 *key, const u32 *key_end, const u32 *data,
 			u64 timeout, u64 expiration, gfp_t gfp);
+int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
+			    struct nft_expr *expr_array[]);
 void nft_set_elem_destroy(const struct nft_set *set, void *elem,
 			  bool destroy_expr);
@@ -630,6 +630,7 @@ static inline void tcp_clear_xmit_timers(struct sock *sk)

 unsigned int tcp_sync_mss(struct sock *sk, u32 pmtu);
 unsigned int tcp_current_mss(struct sock *sk);
+u32 tcp_clamp_probe0_to_user_timeout(const struct sock *sk, u32 when);

 /* Bound MSS / TSO packet size with the half of the window */
 static inline int tcp_bound_to_half_wnd(struct tcp_sock *tp, int pktsize)

@@ -2060,7 +2061,7 @@ void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb);
 void tcp_newreno_mark_lost(struct sock *sk, bool snd_una_advanced);
 extern s32 tcp_rack_skb_timeout(struct tcp_sock *tp, struct sk_buff *skb,
 				u32 reo_wnd);
-extern void tcp_rack_mark_lost(struct sock *sk);
+extern bool tcp_rack_mark_lost(struct sock *sk);
 extern void tcp_rack_advance(struct tcp_sock *tp, u8 sacked, u32 end_seq,
 			     u64 xmit_time);
 extern void tcp_rack_reo_timeout(struct sock *sk);
@@ -71,90 +71,4 @@ enum br_mrp_sub_tlv_header_type {
 	BR_MRP_SUB_TLV_HEADER_TEST_AUTO_MGR = 0x3,
 };

-struct br_mrp_tlv_hdr {
-	__u8 type;
-	__u8 length;
-};
-
-struct br_mrp_sub_tlv_hdr {
-	__u8 type;
-	__u8 length;
-};
-
-struct br_mrp_end_hdr {
-	struct br_mrp_tlv_hdr hdr;
-};
-
-struct br_mrp_common_hdr {
-	__be16 seq_id;
-	__u8 domain[MRP_DOMAIN_UUID_LENGTH];
-};
-
-struct br_mrp_ring_test_hdr {
-	__be16 prio;
-	__u8 sa[ETH_ALEN];
-	__be16 port_role;
-	__be16 state;
-	__be16 transitions;
-	__be32 timestamp;
-};
-
-struct br_mrp_ring_topo_hdr {
-	__be16 prio;
-	__u8 sa[ETH_ALEN];
-	__be16 interval;
-};
-
-struct br_mrp_ring_link_hdr {
-	__u8 sa[ETH_ALEN];
-	__be16 port_role;
-	__be16 interval;
-	__be16 blocked;
-};
-
-struct br_mrp_sub_opt_hdr {
-	__u8 type;
-	__u8 manufacture_data[MRP_MANUFACTURE_DATA_LENGTH];
-};
-
-struct br_mrp_test_mgr_nack_hdr {
-	__be16 prio;
-	__u8 sa[ETH_ALEN];
-	__be16 other_prio;
-	__u8 other_sa[ETH_ALEN];
-};
-
-struct br_mrp_test_prop_hdr {
-	__be16 prio;
-	__u8 sa[ETH_ALEN];
-	__be16 other_prio;
-	__u8 other_sa[ETH_ALEN];
-};
-
-struct br_mrp_oui_hdr {
-	__u8 oui[MRP_OUI_LENGTH];
-};
-
-struct br_mrp_in_test_hdr {
-	__be16 id;
-	__u8 sa[ETH_ALEN];
-	__be16 port_role;
-	__be16 state;
-	__be16 transitions;
-	__be32 timestamp;
-};
-
-struct br_mrp_in_topo_hdr {
-	__u8 sa[ETH_ALEN];
-	__be16 id;
-	__be16 interval;
-};
-
-struct br_mrp_in_link_hdr {
-	__u8 sa[ETH_ALEN];
-	__be16 port_role;
-	__be16 id;
-	__be16 interval;
-};
-
 #endif
@@ -28,10 +28,10 @@ struct ipv6_rpl_sr_hdr {
 		pad:4,
 		reserved1:16;
 #elif defined(__BIG_ENDIAN_BITFIELD)
-	__u32	reserved:20,
+	__u32	cmpri:4,
+		cmpre:4,
 		pad:4,
-		cmpri:4,
-		cmpre:4;
+		reserved:20;
 #else
 #error  "Please fix <asm/byteorder.h>"
 #endif
@@ -88,4 +88,33 @@ int br_mrp_switchdev_send_in_test(struct net_bridge *br, struct br_mrp *mrp,
 int br_mrp_ring_port_open(struct net_device *dev, u8 loc);
 int br_mrp_in_port_open(struct net_device *dev, u8 loc);

+/* MRP protocol data units */
+struct br_mrp_tlv_hdr {
+	__u8 type;
+	__u8 length;
+};
+
+struct br_mrp_common_hdr {
+	__be16 seq_id;
+	__u8 domain[MRP_DOMAIN_UUID_LENGTH];
+};
+
+struct br_mrp_ring_test_hdr {
+	__be16 prio;
+	__u8 sa[ETH_ALEN];
+	__be16 port_role;
+	__be16 state;
+	__be16 transitions;
+	__be32 timestamp;
+} __attribute__((__packed__));
+
+struct br_mrp_in_test_hdr {
+	__be16 id;
+	__u8 sa[ETH_ALEN];
+	__be16 port_role;
+	__be16 state;
+	__be16 transitions;
+	__be32 timestamp;
+} __attribute__((__packed__));
+
 #endif /* _BR_PRIVATE_MRP_H */
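The hunk above marks the MRP test headers `__attribute__((__packed__))` ("mrp: fix bad packing of MRP test packet structures" in the summary): without it, the 6-byte MAC address leaves the following `__be16`/`__be32` fields on natural alignment boundaries, so the compiler inserts padding and the struct no longer matches the on-wire frame. A standalone sketch mirroring `br_mrp_ring_test_hdr`'s field layout (illustrative types only, using `<stdint.h>` instead of the kernel's `__be16`/`__be32`):

```c
#include <stdint.h>

/* Same field sequence as br_mrp_ring_test_hdr, without packing: on a
 * typical ABI the 4-byte timestamp is padded out to a 4-byte offset,
 * inflating the struct past the 18 bytes that appear on the wire. */
struct test_hdr_padded {
	uint16_t prio;
	uint8_t  sa[6];		/* ETH_ALEN */
	uint16_t port_role;
	uint16_t state;
	uint16_t transitions;
	uint32_t timestamp;
};

/* With the attribute, the layout is exactly the 18-byte wire format. */
struct test_hdr_packed {
	uint16_t prio;
	uint8_t  sa[6];
	uint16_t port_role;
	uint16_t state;
	uint16_t transitions;
	uint32_t timestamp;
} __attribute__((__packed__));
```

Casting a received frame to the unpacked struct would read `timestamp` from the wrong offset, which is exactly the bug class the patch removes.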
@@ -1035,7 +1035,7 @@ source_ok:
 			fld.saddr = dnet_select_source(dev_out, 0,
 						       RT_SCOPE_HOST);
 			if (!fld.daddr)
-				goto out;
+				goto done;
 		}
 		fld.flowidn_oif = LOOPBACK_IFINDEX;
 		res.type = RTN_LOCAL;
@@ -2859,7 +2859,8 @@ static void tcp_identify_packet_loss(struct sock *sk, int *ack_flag)
 	} else if (tcp_is_rack(sk)) {
 		u32 prior_retrans = tp->retrans_out;

-		tcp_rack_mark_lost(sk);
+		if (tcp_rack_mark_lost(sk))
+			*ack_flag &= ~FLAG_SET_XMIT_TIMER;
 		if (prior_retrans > tp->retrans_out)
 			*ack_flag |= FLAG_LOST_RETRANS;
 	}

@@ -3392,8 +3393,8 @@ static void tcp_ack_probe(struct sock *sk)
 	} else {
 		unsigned long when = tcp_probe0_when(sk, TCP_RTO_MAX);

-		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-				     when, TCP_RTO_MAX);
+		when = tcp_clamp_probe0_to_user_timeout(sk, when);
+		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, TCP_RTO_MAX);
 	}
 }

@@ -3816,9 +3817,6 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)

 	if (tp->tlp_high_seq)
 		tcp_process_tlp_ack(sk, ack, flag);
-	/* If needed, reset TLP/RTO timer; RACK may later override this. */
-	if (flag & FLAG_SET_XMIT_TIMER)
-		tcp_set_xmit_timer(sk);

 	if (tcp_ack_is_dubious(sk, flag)) {
 		if (!(flag & (FLAG_SND_UNA_ADVANCED | FLAG_NOT_DUP))) {

@@ -3831,6 +3829,10 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 				      &rexmit);
 	}

+	/* If needed, reset TLP/RTO timer when RACK doesn't set. */
+	if (flag & FLAG_SET_XMIT_TIMER)
+		tcp_set_xmit_timer(sk);
+
 	if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP))
 		sk_dst_confirm(sk);
@@ -4099,6 +4099,8 @@ void tcp_send_probe0(struct sock *sk)
 		 */
 		timeout = TCP_RESOURCE_PROBE_INTERVAL;
 	}

+	timeout = tcp_clamp_probe0_to_user_timeout(sk, timeout);
+
 	tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, timeout, TCP_RTO_MAX);
 }
@@ -96,13 +96,13 @@ static void tcp_rack_detect_loss(struct sock *sk, u32 *reo_timeout)
 	}
 }

-void tcp_rack_mark_lost(struct sock *sk)
+bool tcp_rack_mark_lost(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	u32 timeout;

 	if (!tp->rack.advanced)
-		return;
+		return false;

 	/* Reset the advanced flag to avoid unnecessary queue scanning */
 	tp->rack.advanced = 0;

@@ -112,6 +112,7 @@ void tcp_rack_mark_lost(struct sock *sk)
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_REO_TIMEOUT,
 					  timeout, inet_csk(sk)->icsk_rto);
 	}
+	return !!timeout;
 }

 /* Record the most recently (re)sent time among the (s)acked packets
@@ -40,6 +40,24 @@ static u32 tcp_clamp_rto_to_user_timeout(const struct sock *sk)
 	return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining));
 }

+u32 tcp_clamp_probe0_to_user_timeout(const struct sock *sk, u32 when)
+{
+	struct inet_connection_sock *icsk = inet_csk(sk);
+	u32 remaining;
+	s32 elapsed;
+
+	if (!icsk->icsk_user_timeout || !icsk->icsk_probes_tstamp)
+		return when;
+
+	elapsed = tcp_jiffies32 - icsk->icsk_probes_tstamp;
+	if (unlikely(elapsed < 0))
+		elapsed = 0;
+	remaining = msecs_to_jiffies(icsk->icsk_user_timeout) - elapsed;
+	remaining = max_t(u32, remaining, TCP_TIMEOUT_MIN);
+
+	return min_t(u32, remaining, when);
+}
+
 /**
  * tcp_write_err() - close socket and save error info
  * @sk: The socket the error has appeared on.
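The new `tcp_clamp_probe0_to_user_timeout()` above is what makes `TCP_USER_TIMEOUT` accurate for zero-window probes: the next probe0 timer is never allowed to fire later than whatever remains of the user timeout, with a floor so it cannot collapse to zero. The arithmetic can be restated as a tiny userspace function; the units are simplified to abstract "ticks" and the names are hypothetical, not kernel API:

```c
#define TIMEOUT_MIN 2	/* stand-in for TCP_TIMEOUT_MIN */

/* Clamp the backoff-derived probe0 delay `when` so that the probe fires
 * no later than what is left of the user-configured timeout. */
static unsigned int clamp_probe0(unsigned int user_timeout,
				 unsigned int elapsed, unsigned int when)
{
	unsigned int remaining;

	if (!user_timeout)		/* no user timeout set: keep `when` */
		return when;
	if (elapsed > user_timeout)	/* guard against underflow */
		elapsed = user_timeout;
	remaining = user_timeout - elapsed;
	if (remaining < TIMEOUT_MIN)	/* never schedule a zero/negative delay */
		remaining = TIMEOUT_MIN;
	return remaining < when ? remaining : when;
}
```

Without the clamp, exponential probe backoff could push the next probe well past the user timeout, making the connection die much later than the application asked for.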
@@ -2902,7 +2902,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t)
 			break;
 		if (!aalg->pfkey_supported)
 			continue;
-		if (aalg_tmpl_set(t, aalg) && aalg->available)
+		if (aalg_tmpl_set(t, aalg))
 			sz += sizeof(struct sadb_comb);
 	}
 	return sz + sizeof(struct sadb_prop);

@@ -2920,7 +2920,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
 		if (!ealg->pfkey_supported)
 			continue;

-		if (!(ealg_tmpl_set(t, ealg) && ealg->available))
+		if (!(ealg_tmpl_set(t, ealg)))
 			continue;

 		for (k = 1; ; k++) {

@@ -2931,7 +2931,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
 			if (!aalg->pfkey_supported)
 				continue;

-			if (aalg_tmpl_set(t, aalg) && aalg->available)
+			if (aalg_tmpl_set(t, aalg))
 				sz += sizeof(struct sadb_comb);
 		}
 	}
@@ -122,6 +122,8 @@ static struct lapb_cb *lapb_create_cb(void)

 	timer_setup(&lapb->t1timer, NULL, 0);
 	timer_setup(&lapb->t2timer, NULL, 0);
+	lapb->t1timer_stop = true;
+	lapb->t2timer_stop = true;

 	lapb->t1 = LAPB_DEFAULT_T1;
 	lapb->t2 = LAPB_DEFAULT_T2;

@@ -129,6 +131,8 @@ static struct lapb_cb *lapb_create_cb(void)
 	lapb->mode = LAPB_DEFAULT_MODE;
 	lapb->window = LAPB_DEFAULT_WINDOW;
 	lapb->state = LAPB_STATE_0;
+
+	spin_lock_init(&lapb->lock);
 	refcount_set(&lapb->refcnt, 1);
 out:
 	return lapb;

@@ -178,11 +182,23 @@ int lapb_unregister(struct net_device *dev)
 		goto out;
 	lapb_put(lapb);

+	/* Wait for other refs to "lapb" to drop */
+	while (refcount_read(&lapb->refcnt) > 2)
+		usleep_range(1, 10);
+
+	spin_lock_bh(&lapb->lock);
+
 	lapb_stop_t1timer(lapb);
 	lapb_stop_t2timer(lapb);

 	lapb_clear_queues(lapb);
+
+	spin_unlock_bh(&lapb->lock);
+
+	/* Wait for running timers to stop */
+	del_timer_sync(&lapb->t1timer);
+	del_timer_sync(&lapb->t2timer);
+
 	__lapb_remove_cb(lapb);

 	lapb_put(lapb);

@@ -201,6 +217,8 @@ int lapb_getparms(struct net_device *dev, struct lapb_parms_struct *parms)
 	if (!lapb)
 		goto out;

+	spin_lock_bh(&lapb->lock);
+
 	parms->t1 = lapb->t1 / HZ;
 	parms->t2 = lapb->t2 / HZ;
 	parms->n2 = lapb->n2;

@@ -219,6 +237,7 @@ int lapb_getparms(struct net_device *dev, struct lapb_parms_struct *parms)
 	else
 		parms->t2timer = (lapb->t2timer.expires - jiffies) / HZ;

+	spin_unlock_bh(&lapb->lock);
 	lapb_put(lapb);
 	rc = LAPB_OK;
 out:

@@ -234,6 +253,8 @@ int lapb_setparms(struct net_device *dev, struct lapb_parms_struct *parms)
 	if (!lapb)
 		goto out;

+	spin_lock_bh(&lapb->lock);
+
 	rc = LAPB_INVALUE;
 	if (parms->t1 < 1 || parms->t2 < 1 || parms->n2 < 1)
 		goto out_put;

@@ -256,6 +277,7 @@ int lapb_setparms(struct net_device *dev, struct lapb_parms_struct *parms)

 	rc = LAPB_OK;
 out_put:
+	spin_unlock_bh(&lapb->lock);
 	lapb_put(lapb);
 out:
 	return rc;

@@ -270,6 +292,8 @@ int lapb_connect_request(struct net_device *dev)
 	if (!lapb)
 		goto out;

+	spin_lock_bh(&lapb->lock);
+
 	rc = LAPB_OK;
 	if (lapb->state == LAPB_STATE_1)
 		goto out_put;

@@ -285,24 +309,18 @@ int lapb_connect_request(struct net_device *dev)

 	rc = LAPB_OK;
 out_put:
+	spin_unlock_bh(&lapb->lock);
 	lapb_put(lapb);
 out:
 	return rc;
 }
 EXPORT_SYMBOL(lapb_connect_request);

-int lapb_disconnect_request(struct net_device *dev)
+static int __lapb_disconnect_request(struct lapb_cb *lapb)
 {
-	struct lapb_cb *lapb = lapb_devtostruct(dev);
-	int rc = LAPB_BADTOKEN;
-
-	if (!lapb)
-		goto out;
-
 	switch (lapb->state) {
 	case LAPB_STATE_0:
-		rc = LAPB_NOTCONNECTED;
-		goto out_put;
+		return LAPB_NOTCONNECTED;

 	case LAPB_STATE_1:
 		lapb_dbg(1, "(%p) S1 TX DISC(1)\n", lapb->dev);

@@ -310,12 +328,10 @@ int lapb_disconnect_request(struct net_device *dev)
 		lapb_send_control(lapb, LAPB_DISC, LAPB_POLLON, LAPB_COMMAND);
 		lapb->state = LAPB_STATE_0;
 		lapb_start_t1timer(lapb);
-		rc = LAPB_NOTCONNECTED;
-		goto out_put;
+		return LAPB_NOTCONNECTED;

 	case LAPB_STATE_2:
-		rc = LAPB_OK;
-		goto out_put;
+		return LAPB_OK;
 	}

 	lapb_clear_queues(lapb);

@@ -328,8 +344,22 @@ int lapb_disconnect_request(struct net_device *dev)
 	lapb_dbg(1, "(%p) S3 DISC(1)\n", lapb->dev);
 	lapb_dbg(0, "(%p) S3 -> S2\n", lapb->dev);

-	rc = LAPB_OK;
-out_put:
+	return LAPB_OK;
+}
+
+int lapb_disconnect_request(struct net_device *dev)
+{
+	struct lapb_cb *lapb = lapb_devtostruct(dev);
+	int rc = LAPB_BADTOKEN;
+
+	if (!lapb)
+		goto out;
+
+	spin_lock_bh(&lapb->lock);
|
||||
|
||||
rc = __lapb_disconnect_request(lapb);
|
||||
|
||||
spin_unlock_bh(&lapb->lock);
|
||||
lapb_put(lapb);
|
||||
out:
|
||||
return rc;
|
||||
|
@ -344,6 +374,8 @@ int lapb_data_request(struct net_device *dev, struct sk_buff *skb)
|
|||
if (!lapb)
|
||||
goto out;
|
||||
|
||||
spin_lock_bh(&lapb->lock);
|
||||
|
||||
rc = LAPB_NOTCONNECTED;
|
||||
if (lapb->state != LAPB_STATE_3 && lapb->state != LAPB_STATE_4)
|
||||
goto out_put;
|
||||
|
@ -352,6 +384,7 @@ int lapb_data_request(struct net_device *dev, struct sk_buff *skb)
|
|||
lapb_kick(lapb);
|
||||
rc = LAPB_OK;
|
||||
out_put:
|
||||
spin_unlock_bh(&lapb->lock);
|
||||
lapb_put(lapb);
|
||||
out:
|
||||
return rc;
|
||||
|
@ -364,7 +397,9 @@ int lapb_data_received(struct net_device *dev, struct sk_buff *skb)
|
|||
int rc = LAPB_BADTOKEN;
|
||||
|
||||
if (lapb) {
|
||||
spin_lock_bh(&lapb->lock);
|
||||
lapb_data_input(lapb, skb);
|
||||
spin_unlock_bh(&lapb->lock);
|
||||
lapb_put(lapb);
|
||||
rc = LAPB_OK;
|
||||
}
|
||||
|
@ -435,6 +470,8 @@ static int lapb_device_event(struct notifier_block *this, unsigned long event,
|
|||
if (!lapb)
|
||||
return NOTIFY_DONE;
|
||||
|
||||
spin_lock_bh(&lapb->lock);
|
||||
|
||||
switch (event) {
|
||||
case NETDEV_UP:
|
||||
lapb_dbg(0, "(%p) Interface up: %s\n", dev, dev->name);
|
||||
|
@ -454,7 +491,7 @@ static int lapb_device_event(struct notifier_block *this, unsigned long event,
|
|||
break;
|
||||
case NETDEV_GOING_DOWN:
|
||||
if (netif_carrier_ok(dev))
|
||||
lapb_disconnect_request(dev);
|
||||
__lapb_disconnect_request(lapb);
|
||||
break;
|
||||
case NETDEV_DOWN:
|
||||
lapb_dbg(0, "(%p) Interface down: %s\n", dev, dev->name);
|
||||
|
@ -489,6 +526,7 @@ static int lapb_device_event(struct notifier_block *this, unsigned long event,
|
|||
break;
|
||||
}
|
||||
|
||||
spin_unlock_bh(&lapb->lock);
|
||||
lapb_put(lapb);
|
||||
return NOTIFY_DONE;
|
||||
}
|
||||
|
|
|
--- a/net/lapb/lapb_timer.c
+++ b/net/lapb/lapb_timer.c
@@ -40,6 +40,7 @@ void lapb_start_t1timer(struct lapb_cb *lapb)
 	lapb->t1timer.function = lapb_t1timer_expiry;
 	lapb->t1timer.expires  = jiffies + lapb->t1;
 
+	lapb->t1timer_stop = false;
 	add_timer(&lapb->t1timer);
 }
 
@@ -50,16 +51,19 @@ void lapb_start_t2timer(struct lapb_cb *lapb)
 	lapb->t2timer.function = lapb_t2timer_expiry;
 	lapb->t2timer.expires  = jiffies + lapb->t2;
 
+	lapb->t2timer_stop = false;
 	add_timer(&lapb->t2timer);
 }
 
 void lapb_stop_t1timer(struct lapb_cb *lapb)
 {
+	lapb->t1timer_stop = true;
 	del_timer(&lapb->t1timer);
 }
 
 void lapb_stop_t2timer(struct lapb_cb *lapb)
 {
+	lapb->t2timer_stop = true;
 	del_timer(&lapb->t2timer);
 }
 
@@ -72,16 +76,31 @@ static void lapb_t2timer_expiry(struct timer_list *t)
 {
 	struct lapb_cb *lapb = from_timer(lapb, t, t2timer);
 
+	spin_lock_bh(&lapb->lock);
+	if (timer_pending(&lapb->t2timer)) /* A new timer has been set up */
+		goto out;
+	if (lapb->t2timer_stop) /* The timer has been stopped */
+		goto out;
+
 	if (lapb->condition & LAPB_ACK_PENDING_CONDITION) {
 		lapb->condition &= ~LAPB_ACK_PENDING_CONDITION;
 		lapb_timeout_response(lapb);
 	}
+
+out:
+	spin_unlock_bh(&lapb->lock);
 }
 
 static void lapb_t1timer_expiry(struct timer_list *t)
 {
 	struct lapb_cb *lapb = from_timer(lapb, t, t1timer);
 
+	spin_lock_bh(&lapb->lock);
+	if (timer_pending(&lapb->t1timer)) /* A new timer has been set up */
+		goto out;
+	if (lapb->t1timer_stop) /* The timer has been stopped */
+		goto out;
+
 	switch (lapb->state) {
 
 		/*
@@ -108,7 +127,7 @@ static void lapb_t1timer_expiry(struct timer_list *t)
 			lapb->state = LAPB_STATE_0;
 			lapb_disconnect_indication(lapb, LAPB_TIMEDOUT);
 			lapb_dbg(0, "(%p) S1 -> S0\n", lapb->dev);
-			return;
+			goto out;
 		} else {
 			lapb->n2count++;
 			if (lapb->mode & LAPB_EXTENDED) {
@@ -132,7 +151,7 @@ static void lapb_t1timer_expiry(struct timer_list *t)
 			lapb->state = LAPB_STATE_0;
 			lapb_disconnect_confirmation(lapb, LAPB_TIMEDOUT);
 			lapb_dbg(0, "(%p) S2 -> S0\n", lapb->dev);
-			return;
+			goto out;
 		} else {
 			lapb->n2count++;
 			lapb_dbg(1, "(%p) S2 TX DISC(1)\n", lapb->dev);
@@ -150,7 +169,7 @@ static void lapb_t1timer_expiry(struct timer_list *t)
 			lapb_stop_t2timer(lapb);
 			lapb_disconnect_indication(lapb, LAPB_TIMEDOUT);
 			lapb_dbg(0, "(%p) S3 -> S0\n", lapb->dev);
-			return;
+			goto out;
 		} else {
 			lapb->n2count++;
 			lapb_requeue_frames(lapb);
@@ -167,7 +186,7 @@ static void lapb_t1timer_expiry(struct timer_list *t)
 			lapb->state = LAPB_STATE_0;
 			lapb_disconnect_indication(lapb, LAPB_TIMEDOUT);
 			lapb_dbg(0, "(%p) S4 -> S0\n", lapb->dev);
-			return;
+			goto out;
 		} else {
 			lapb->n2count++;
 			lapb_transmit_frmr(lapb);
@@ -176,4 +195,7 @@ static void lapb_t1timer_expiry(struct timer_list *t)
 	}
 
 	lapb_start_t1timer(lapb);
+
+out:
+	spin_unlock_bh(&lapb->lock);
 }
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -1078,6 +1078,7 @@ enum queue_stop_reason {
 	IEEE80211_QUEUE_STOP_REASON_FLUSH,
 	IEEE80211_QUEUE_STOP_REASON_TDLS_TEARDOWN,
 	IEEE80211_QUEUE_STOP_REASON_RESERVE_TID,
+	IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE,
 
 	IEEE80211_QUEUE_STOP_REASONS,
 };
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -1617,6 +1617,10 @@ static int ieee80211_runtime_change_iftype(struct ieee80211_sub_if_data *sdata,
 	if (ret)
 		return ret;
 
+	ieee80211_stop_vif_queues(local, sdata,
+				  IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE);
+	synchronize_net();
+
 	ieee80211_do_stop(sdata, false);
 
 	ieee80211_teardown_sdata(sdata);
@@ -1639,6 +1643,8 @@ static int ieee80211_runtime_change_iftype(struct ieee80211_sub_if_data *sdata,
 	err = ieee80211_do_open(&sdata->wdev, false);
 	WARN(err, "type change: do_open returned %d", err);
 
+	ieee80211_wake_vif_queues(local, sdata,
+				  IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE);
 	return ret;
 }
 
--- a/net/mac80211/spectmgmt.c
+++ b/net/mac80211/spectmgmt.c
@@ -133,16 +133,20 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
 	}
 
 	if (wide_bw_chansw_ie) {
+		u8 new_seg1 = wide_bw_chansw_ie->new_center_freq_seg1;
 		struct ieee80211_vht_operation vht_oper = {
 			.chan_width =
 				wide_bw_chansw_ie->new_channel_width,
 			.center_freq_seg0_idx =
 				wide_bw_chansw_ie->new_center_freq_seg0,
-			.center_freq_seg1_idx =
-				wide_bw_chansw_ie->new_center_freq_seg1,
+			.center_freq_seg1_idx = new_seg1,
 			/* .basic_mcs_set doesn't matter */
 		};
-		struct ieee80211_ht_operation ht_oper = {};
+		struct ieee80211_ht_operation ht_oper = {
+			.operation_mode =
+				cpu_to_le16(new_seg1 <<
+					    IEEE80211_HT_OP_MODE_CCFS2_SHIFT),
+		};
 
 		/* default, for the case of IEEE80211_VHT_CHANWIDTH_USE_HT,
 		 * to the previously parsed chandef
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -5235,9 +5235,8 @@ static void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
 	kfree(elem);
 }
 
-static int nft_set_elem_expr_clone(const struct nft_ctx *ctx,
-				   struct nft_set *set,
-				   struct nft_expr *expr_array[])
+int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
+			    struct nft_expr *expr_array[])
 {
 	struct nft_expr *expr;
 	int err, i, k;
--- a/net/netfilter/nft_dynset.c
+++ b/net/netfilter/nft_dynset.c
@@ -295,6 +295,12 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
 			err = -EOPNOTSUPP;
 			goto err_expr_free;
 		}
+	} else if (set->num_exprs > 0) {
+		err = nft_set_elem_expr_clone(ctx, set, priv->expr_array);
+		if (err < 0)
+			return err;
+
+		priv->num_exprs = set->num_exprs;
 	}
 
 	nft_set_ext_prepare(&priv->tmpl);
@@ -306,8 +312,10 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
 		nft_dynset_ext_add_expr(priv);
 
 	if (set->flags & NFT_SET_TIMEOUT) {
-		if (timeout || set->timeout)
+		if (timeout || set->timeout) {
 			nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_TIMEOUT);
+			nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_EXPIRATION);
+		}
 	}
 
 	priv->timeout = timeout;
@@ -376,22 +384,25 @@ static int nft_dynset_dump(struct sk_buff *skb, const struct nft_expr *expr)
 			 nf_jiffies64_to_msecs(priv->timeout),
 			 NFTA_DYNSET_PAD))
 		goto nla_put_failure;
-	if (priv->num_exprs == 1) {
-		if (nft_expr_dump(skb, NFTA_DYNSET_EXPR, priv->expr_array[0]))
-			goto nla_put_failure;
-	} else if (priv->num_exprs > 1) {
-		struct nlattr *nest;
-
-		nest = nla_nest_start_noflag(skb, NFTA_DYNSET_EXPRESSIONS);
-		if (!nest)
-			goto nla_put_failure;
-
-		for (i = 0; i < priv->num_exprs; i++) {
-			if (nft_expr_dump(skb, NFTA_LIST_ELEM,
-					  priv->expr_array[i]))
-				goto nla_put_failure;
-		}
-		nla_nest_end(skb, nest);
-	}
+	if (priv->set->num_exprs == 0) {
+		if (priv->num_exprs == 1) {
+			if (nft_expr_dump(skb, NFTA_DYNSET_EXPR,
+					  priv->expr_array[0]))
+				goto nla_put_failure;
+		} else if (priv->num_exprs > 1) {
+			struct nlattr *nest;
+
+			nest = nla_nest_start_noflag(skb, NFTA_DYNSET_EXPRESSIONS);
+			if (!nest)
+				goto nla_put_failure;
+
+			for (i = 0; i < priv->num_exprs; i++) {
+				if (nft_expr_dump(skb, NFTA_LIST_ELEM,
+						  priv->expr_array[i]))
+					goto nla_put_failure;
+			}
+			nla_nest_end(skb, nest);
+		}
+	}
 	if (nla_put_be32(skb, NFTA_DYNSET_FLAGS, htonl(flags)))
 		goto nla_put_failure;
--- a/net/nfc/netlink.c
+++ b/net/nfc/netlink.c
@@ -852,6 +852,7 @@ static int nfc_genl_stop_poll(struct sk_buff *skb, struct genl_info *info)
 
 	if (!dev->polling) {
 		device_unlock(&dev->dev);
+		nfc_put_device(dev);
 		return -EINVAL;
 	}
 
--- a/net/nfc/rawsock.c
+++ b/net/nfc/rawsock.c
@@ -105,7 +105,7 @@ static int rawsock_connect(struct socket *sock, struct sockaddr *_addr,
 	if (addr->target_idx > dev->target_next_idx - 1 ||
 	    addr->target_idx < dev->target_next_idx - dev->n_targets) {
 		rc = -EINVAL;
-		goto error;
+		goto put_dev;
 	}
 
 	rc = nfc_activate_target(dev, addr->target_idx, addr->nfc_protocol);
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -197,6 +197,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
 	tail = b->peer_backlog_tail;
 	while (CIRC_CNT(head, tail, size) > 0) {
 		struct rxrpc_peer *peer = b->peer_backlog[tail];
+		rxrpc_put_local(peer->local);
 		kfree(peer);
 		tail = (tail + 1) & (size - 1);
 	}
--- a/net/switchdev/switchdev.c
+++ b/net/switchdev/switchdev.c
@@ -460,10 +460,11 @@ static int __switchdev_handle_port_obj_add(struct net_device *dev,
 	extack = switchdev_notifier_info_to_extack(&port_obj_info->info);
 
 	if (check_cb(dev)) {
-		/* This flag is only checked if the return value is success. */
-		port_obj_info->handled = true;
-		return add_cb(dev, port_obj_info->obj, port_obj_info->trans,
-			      extack);
+		err = add_cb(dev, port_obj_info->obj, port_obj_info->trans,
+			     extack);
+		if (err != -EOPNOTSUPP)
+			port_obj_info->handled = true;
+		return err;
 	}
 
 	/* Switch ports might be stacked under e.g. a LAG. Ignore the
@@ -515,9 +516,10 @@ static int __switchdev_handle_port_obj_del(struct net_device *dev,
 	int err = -EOPNOTSUPP;
 
 	if (check_cb(dev)) {
-		/* This flag is only checked if the return value is success. */
-		port_obj_info->handled = true;
-		return del_cb(dev, port_obj_info->obj);
+		err = del_cb(dev, port_obj_info->obj);
+		if (err != -EOPNOTSUPP)
+			port_obj_info->handled = true;
+		return err;
 	}
 
 	/* Switch ports might be stacked under e.g. a LAG. Ignore the
@@ -568,9 +570,10 @@ static int __switchdev_handle_port_attr_set(struct net_device *dev,
 	int err = -EOPNOTSUPP;
 
 	if (check_cb(dev)) {
-		port_attr_info->handled = true;
-		return set_cb(dev, port_attr_info->attr,
-			      port_attr_info->trans);
+		err = set_cb(dev, port_attr_info->attr, port_attr_info->trans);
+		if (err != -EOPNOTSUPP)
+			port_attr_info->handled = true;
+		return err;
 	}
 
 	/* Switch ports might be stacked under e.g. a LAG. Ignore the
--- a/net/wireless/wext-core.c
+++ b/net/wireless/wext-core.c
@@ -896,8 +896,9 @@ out:
 int call_commit_handler(struct net_device *dev)
 {
 #ifdef CONFIG_WIRELESS_EXT
-	if ((netif_running(dev)) &&
-	   (dev->wireless_handlers->standard[0] != NULL))
+	if (netif_running(dev) &&
+	    dev->wireless_handlers &&
+	    dev->wireless_handlers->standard[0])
 		/* Call the commit handler on the driver */
 		return dev->wireless_handlers->standard[0](dev, NULL,
 							   NULL, NULL);
--- a/net/xfrm/xfrm_input.c
+++ b/net/xfrm/xfrm_input.c
@@ -660,7 +660,7 @@ resume:
 		/* only the first xfrm gets the encap type */
 		encap_type = 0;
 
-		if (async && x->repl->recheck(x, skb, seq)) {
+		if (x->repl->recheck(x, skb, seq)) {
 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATESEQERROR);
 			goto drop_unlock;
 		}
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -793,15 +793,22 @@ static int xfrm_policy_addr_delta(const xfrm_address_t *a,
 				  const xfrm_address_t *b,
 				  u8 prefixlen, u16 family)
 {
+	u32 ma, mb, mask;
 	unsigned int pdw, pbi;
 	int delta = 0;
 
 	switch (family) {
 	case AF_INET:
-		if (sizeof(long) == 4 && prefixlen == 0)
-			return ntohl(a->a4) - ntohl(b->a4);
-		return (ntohl(a->a4) & ((~0UL << (32 - prefixlen)))) -
-		       (ntohl(b->a4) & ((~0UL << (32 - prefixlen))));
+		if (prefixlen == 0)
+			return 0;
+		mask = ~0U << (32 - prefixlen);
+		ma = ntohl(a->a4) & mask;
+		mb = ntohl(b->a4) & mask;
+		if (ma < mb)
+			delta = -1;
+		else if (ma > mb)
+			delta = 1;
+		break;
 	case AF_INET6:
 		pdw = prefixlen >> 5;
 		pbi = prefixlen & 0x1f;
@@ -812,10 +819,13 @@ static int xfrm_policy_addr_delta(const xfrm_address_t *a,
 				return delta;
 		}
 		if (pbi) {
-			u32 mask = ~0u << (32 - pbi);
-
-			delta = (ntohl(a->a6[pdw]) & mask) -
-				(ntohl(b->a6[pdw]) & mask);
+			mask = ~0U << (32 - pbi);
+			ma = ntohl(a->a6[pdw]) & mask;
+			mb = ntohl(b->a6[pdw]) & mask;
+			if (ma < mb)
+				delta = -1;
+			else if (ma > mb)
+				delta = 1;
 		}
 		break;
 	default:
@@ -3078,8 +3088,8 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net,
 		xflo.flags = flags;
 
 		/* To accelerate a bit...  */
-		if ((dst_orig->flags & DST_NOXFRM) ||
-		    !net->xfrm.policy_count[XFRM_POLICY_OUT])
+		if (!if_id && ((dst_orig->flags & DST_NOXFRM) ||
+			       !net->xfrm.policy_count[XFRM_POLICY_OUT]))
 			goto nopol;
 
 		xdst = xfrm_bundle_lookup(net, fl, family, dir, &xflo, if_id);
--- a/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
+++ b/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
@@ -203,7 +203,7 @@ multipath4_test()
 	t0_rp12=$(link_stats_tx_packets_get $rp12)
 	t0_rp13=$(link_stats_tx_packets_get $rp13)
 
-	ip vrf exec vrf-h1 $MZ -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
+	ip vrf exec vrf-h1 $MZ $h1 -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
 		-d 1msec -t udp "sp=1024,dp=0-32768"
 
 	t1_rp12=$(link_stats_tx_packets_get $rp12)
--- a/tools/testing/selftests/net/forwarding/router_multipath.sh
+++ b/tools/testing/selftests/net/forwarding/router_multipath.sh
@@ -178,7 +178,7 @@ multipath4_test()
 	t0_rp12=$(link_stats_tx_packets_get $rp12)
 	t0_rp13=$(link_stats_tx_packets_get $rp13)
 
-	ip vrf exec vrf-h1 $MZ -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
+	ip vrf exec vrf-h1 $MZ $h1 -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
 		-d 1msec -t udp "sp=1024,dp=0-32768"
 
 	t1_rp12=$(link_stats_tx_packets_get $rp12)
--- a/tools/testing/selftests/net/xfrm_policy.sh
+++ b/tools/testing/selftests/net/xfrm_policy.sh
@@ -202,7 +202,7 @@ check_xfrm() {
 	# 1: iptables -m policy rule count != 0
 	rval=$1
 	ip=$2
-	lret=0
+	local lret=0
 
 	ip netns exec ns1 ping -q -c 1 10.0.2.$ip > /dev/null
 
@@ -287,6 +287,47 @@ check_hthresh_repeat()
 	return 0
 }
 
+# insert non-overlapping policies in a random order and check that
+# all of them can be fetched using the traffic selectors.
+check_random_order()
+{
+	local ns=$1
+	local log=$2
+
+	for i in $(seq 100); do
+		ip -net $ns xfrm policy flush
+		for j in $(seq 0 16 255 | sort -R); do
+			ip -net $ns xfrm policy add dst $j.0.0.0/24 dir out priority 10 action allow
+		done
+		for j in $(seq 0 16 255); do
+			if ! ip -net $ns xfrm policy get dst $j.0.0.0/24 dir out > /dev/null; then
+				echo "FAIL: $log" 1>&2
+				return 1
+			fi
+		done
+	done
+
+	for i in $(seq 100); do
+		ip -net $ns xfrm policy flush
+		for j in $(seq 0 16 255 | sort -R); do
+			local addr=$(printf "e000:0000:%02x00::/56" $j)
+			ip -net $ns xfrm policy add dst $addr dir out priority 10 action allow
+		done
+		for j in $(seq 0 16 255); do
+			local addr=$(printf "e000:0000:%02x00::/56" $j)
+			if ! ip -net $ns xfrm policy get dst $addr dir out > /dev/null; then
+				echo "FAIL: $log" 1>&2
+				return 1
+			fi
+		done
+	done
+
+	ip -net $ns xfrm policy flush
+
+	echo "PASS: $log"
+	return 0
+}
+
 #check for needed privileges
 if [ "$(id -u)" -ne 0 ];then
 	echo "SKIP: Need root privileges"
@@ -438,6 +479,8 @@ check_exceptions "exceptions and block policies after htresh change to normal"
 
 check_hthresh_repeat "policies with repeated htresh change"
 
+check_random_order ns3 "policies inserted in random order"
+
 for i in 1 2 3 4;do ip netns del ns$i;done
 
 exit $ret