Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Merge tag 'net-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter, bpf.

  Quite a handful of old regression fixes but most of those are
  pre-5.16.

  Current release - regressions:

   - fix memory leaks in the skb free deferral scheme if upper layer
     protocols are used, i.e. in-kernel TCP readers like TLS

  Current release - new code bugs:

   - nf_tables: fix NULL check typo in _clone() functions

   - change the default to y for Vertexcom vendor Kconfig

   - a couple of fixes to incorrect uses of ref tracking

   - two fixes for constifying netdev->dev_addr

  Previous releases - regressions:

   - bpf:
      - various verifier fixes mainly around register offset handling
        when passed to helper functions
      - fix mount source displayed for bpffs (none -> bpffs)

   - bonding:
      - fix extraction of ports for connection hash calculation
      - fix bond_xmit_broadcast return value when some devices are down

   - phy: marvell: add Marvell specific PHY loopback

   - sch_api: don't skip qdisc attach on ingress, prevent ref leak

   - htb: restore minimal packet size handling in rate control

   - sfp: fix high power modules without diagnostic monitoring

   - mscc: ocelot:
      - don't let phylink re-enable TX PAUSE on the NPI port
      - don't dereference NULL pointers with shared tc filters

   - smsc95xx: correct reset handling for LAN9514

   - cpsw: avoid alignment faults by taking NET_IP_ALIGN into account

   - phy: micrel: use kszphy_suspend/_resume for irq aware devices,
     avoid races with the interrupt

  Previous releases - always broken:

   - xdp: check prog type before updating BPF link

   - smc: resolve various races around abnormal connection termination

   - sit: allow encapsulated IPv6 traffic to be delivered locally

   - axienet: fix init/reset handling, add missing barriers, read the
     right status words, stop queues correctly

   - add missing dev_put() in sock_timestamping_bind_phc()

  Misc:

   - ipv4: prevent accidentally passing RTO_ONLINK to
     ip_route_output_key_hash() by sanitizing flags

   - ipv4: avoid quadratic behavior in netns dismantle

   - stmmac: dwmac-oxnas: add support for OX810SE

   - fsl: xgmac_mdio: add workaround for erratum A-009885"

* tag 'net-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (92 commits)
  ipv4: add net_hash_mix() dispersion to fib_info_laddrhash keys
  ipv4: avoid quadratic behavior in netns dismantle
  net/fsl: xgmac_mdio: Fix incorrect iounmap when removing module
  powerpc/fsl/dts: Enable WA for erratum A-009885 on fman3l MDIO buses
  dt-bindings: net: Document fsl,erratum-a009885
  net/fsl: xgmac_mdio: Add workaround for erratum A-009885
  net: mscc: ocelot: fix using match before it is set
  net: phy: micrel: use kszphy_suspend()/kszphy_resume for irq aware devices
  net: cpsw: avoid alignment faults by taking NET_IP_ALIGN into account
  nfc: llcp: fix NULL error pointer dereference on sendmsg() after failed bind()
  net: axienet: increase default TX ring size to 128
  net: axienet: fix for TX busy handling
  net: axienet: fix number of TX ring slots for available check
  net: axienet: Fix TX ring slot available check
  net: axienet: limit minimum TX ring size
  net: axienet: add missing memory barriers
  net: axienet: reset core on initialization prior to MDIO access
  net: axienet: Wait for PhyRstCmplt after core reset
  net: axienet: increase reset timeout
  bpf, selftests: Add ringbuf memory type confusion test
  ...
Committed by Linus Torvalds on 2022-01-20 10:57:05 +02:00, commit fa2e1ba3e9.
91 changed files with 1049 additions and 422 deletions.


@@ -410,6 +410,15 @@ PROPERTIES
 	The settings and programming routines for internal/external
 	MDIO are different. Must be included for internal MDIO.
+- fsl,erratum-a009885
+	Usage: optional
+	Value type: <boolean>
+	Definition: Indicates the presence of the A009885
+	erratum describing that the contents of MDIO_DATA may
+	become corrupt unless it is read within 16 MDC cycles
+	of MDIO_CFG[BSY] being cleared, when performing an
+	MDIO read operation.
+
 - fsl,erratum-a011043
 	Usage: optional
 	Value type: <boolean>


@@ -9,6 +9,9 @@ Required properties on all platforms:
 - compatible: For the OX820 SoC, it should be :
 	- "oxsemi,ox820-dwmac" to select glue
 	- "snps,dwmac-3.512" to select IP version.
+  For the OX810SE SoC, it should be :
+	- "oxsemi,ox810se-dwmac" to select glue
+	- "snps,dwmac-3.512" to select IP version.
 - clocks: Should contain phandles to the following clocks
 - clock-names: Should contain the following:


@@ -79,6 +79,7 @@ fman0: fman@400000 {
 			#size-cells = <0>;
 			compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
 			reg = <0xfc000 0x1000>;
+			fsl,erratum-a009885;
 		};

 		xmdio0: mdio@fd000 {
@@ -86,6 +87,7 @@ fman0: fman@400000 {
 			#size-cells = <0>;
 			compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
 			reg = <0xfd000 0x1000>;
+			fsl,erratum-a009885;
 		};
 	};


@@ -178,7 +178,6 @@ static void ia_hack_tcq(IADEV *dev) {

 static u16 get_desc (IADEV *dev, struct ia_vcc *iavcc) {
    u_short desc_num, i;
-   struct sk_buff *skb;
    struct ia_vcc *iavcc_r = NULL;
    unsigned long delta;
    static unsigned long timer = 0;
@@ -202,8 +201,7 @@ static u16 get_desc (IADEV *dev, struct ia_vcc *iavcc) {
    else
       dev->ffL.tcq_rd -= 2;
    *(u_short *)(dev->seg_ram + dev->ffL.tcq_rd) = i+1;
-   if (!(skb = dev->desc_tbl[i].txskb) ||
-       !(iavcc_r = dev->desc_tbl[i].iavcc))
+   if (!dev->desc_tbl[i].txskb || !(iavcc_r = dev->desc_tbl[i].iavcc))
       printk("Fatal err, desc table vcc or skb is NULL\n");
    else
       iavcc_r->vc_desc_cnt--;


@@ -3874,8 +3874,8 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
 	    skb->l4_hash)
 		return skb->hash;

-	return __bond_xmit_hash(bond, skb, skb->head, skb->protocol,
-				skb->mac_header, skb->network_header,
+	return __bond_xmit_hash(bond, skb, skb->data, skb->protocol,
+				skb_mac_offset(skb), skb_network_offset(skb),
 				skb_headlen(skb));
 }

@@ -4884,25 +4884,39 @@ static netdev_tx_t bond_xmit_broadcast(struct sk_buff *skb,
 	struct bonding *bond = netdev_priv(bond_dev);
 	struct slave *slave = NULL;
 	struct list_head *iter;
+	bool xmit_suc = false;
+	bool skb_used = false;

 	bond_for_each_slave_rcu(bond, slave, iter) {
-		if (bond_is_last_slave(bond, slave))
-			break;
-		if (bond_slave_is_up(slave) && slave->link == BOND_LINK_UP) {
-			struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
+		struct sk_buff *skb2;

+		if (!(bond_slave_is_up(slave) && slave->link == BOND_LINK_UP))
+			continue;
+
+		if (bond_is_last_slave(bond, slave)) {
+			skb2 = skb;
+			skb_used = true;
+		} else {
+			skb2 = skb_clone(skb, GFP_ATOMIC);
 			if (!skb2) {
 				net_err_ratelimited("%s: Error: %s: skb_clone() failed\n",
 						    bond_dev->name, __func__);
 				continue;
 			}
-			bond_dev_queue_xmit(bond, skb2, slave->dev);
 		}
-	}
-	if (slave && bond_slave_is_up(slave) && slave->link == BOND_LINK_UP)
-		return bond_dev_queue_xmit(bond, skb, slave->dev);

-	return bond_tx_drop(bond_dev, skb);
+		if (bond_dev_queue_xmit(bond, skb2, slave->dev) == NETDEV_TX_OK)
+			xmit_suc = true;
+	}
+
+	if (!skb_used)
+		dev_kfree_skb_any(skb);
+
+	if (xmit_suc)
+		return NETDEV_TX_OK;
+
+	atomic_long_inc(&bond_dev->tx_dropped);
+	return NET_XMIT_DROP;
 }

 /*------------------------- Device initialization ---------------------------*/


@@ -106,9 +106,9 @@ static void emac_update_speed(struct net_device *dev)

 	/* set EMAC SPEED, depend on PHY */
 	reg_val = readl(db->membase + EMAC_MAC_SUPP_REG);
-	reg_val &= ~(0x1 << 8);
+	reg_val &= ~EMAC_MAC_SUPP_100M;
 	if (db->speed == SPEED_100)
-		reg_val |= 1 << 8;
+		reg_val |= EMAC_MAC_SUPP_100M;
 	writel(reg_val, db->membase + EMAC_MAC_SUPP_REG);
 }

@@ -264,7 +264,7 @@ static void emac_dma_done_callback(void *arg)

 	/* re enable interrupt */
 	reg_val = readl(db->membase + EMAC_INT_CTL_REG);
-	reg_val |= (0x01 << 8);
+	reg_val |= EMAC_INT_CTL_RX_EN;
 	writel(reg_val, db->membase + EMAC_INT_CTL_REG);

 	db->emacrx_completed_flag = 1;

@@ -429,7 +429,7 @@ static unsigned int emac_powerup(struct net_device *ndev)
 	/* initial EMAC */
 	/* flush RX FIFO */
 	reg_val = readl(db->membase + EMAC_RX_CTL_REG);
-	reg_val |= 0x8;
+	reg_val |= EMAC_RX_CTL_FLUSH_FIFO;
 	writel(reg_val, db->membase + EMAC_RX_CTL_REG);
 	udelay(1);

@@ -441,8 +441,8 @@ static unsigned int emac_powerup(struct net_device *ndev)

 	/* set MII clock */
 	reg_val = readl(db->membase + EMAC_MAC_MCFG_REG);
-	reg_val &= (~(0xf << 2));
-	reg_val |= (0xD << 2);
+	reg_val &= ~EMAC_MAC_MCFG_MII_CLKD_MASK;
+	reg_val |= EMAC_MAC_MCFG_MII_CLKD_72;
 	writel(reg_val, db->membase + EMAC_MAC_MCFG_REG);

 	/* clear RX counter */

@@ -506,7 +506,7 @@ static void emac_init_device(struct net_device *dev)

 	/* enable RX/TX0/RX Hlevel interrup */
 	reg_val = readl(db->membase + EMAC_INT_CTL_REG);
-	reg_val |= (0xf << 0) | (0x01 << 8);
+	reg_val |= (EMAC_INT_CTL_TX_EN | EMAC_INT_CTL_TX_ABRT_EN | EMAC_INT_CTL_RX_EN);
 	writel(reg_val, db->membase + EMAC_INT_CTL_REG);

 	spin_unlock_irqrestore(&db->lock, flags);

@@ -637,7 +637,9 @@ static void emac_rx(struct net_device *dev)
 		if (!rxcount) {
 			db->emacrx_completed_flag = 1;
 			reg_val = readl(db->membase + EMAC_INT_CTL_REG);
-			reg_val |= (0xf << 0) | (0x01 << 8);
+			reg_val |= (EMAC_INT_CTL_TX_EN |
+				    EMAC_INT_CTL_TX_ABRT_EN |
+				    EMAC_INT_CTL_RX_EN);
 			writel(reg_val, db->membase + EMAC_INT_CTL_REG);

 			/* had one stuck? */

@@ -669,7 +671,9 @@ static void emac_rx(struct net_device *dev)
 			writel(reg_val | EMAC_CTL_RX_EN,
 			       db->membase + EMAC_CTL_REG);
 			reg_val = readl(db->membase + EMAC_INT_CTL_REG);
-			reg_val |= (0xf << 0) | (0x01 << 8);
+			reg_val |= (EMAC_INT_CTL_TX_EN |
+				    EMAC_INT_CTL_TX_ABRT_EN |
+				    EMAC_INT_CTL_RX_EN);
 			writel(reg_val, db->membase + EMAC_INT_CTL_REG);

 			db->emacrx_completed_flag = 1;

@@ -783,20 +787,20 @@ static irqreturn_t emac_interrupt(int irq, void *dev_id)
 	}

 	/* Transmit Interrupt check */
-	if (int_status & (0x01 | 0x02))
+	if (int_status & EMAC_INT_STA_TX_COMPLETE)
 		emac_tx_done(dev, db, int_status);

-	if (int_status & (0x04 | 0x08))
+	if (int_status & EMAC_INT_STA_TX_ABRT)
 		netdev_info(dev, " ab : %x\n", int_status);

 	/* Re-enable interrupt mask */
 	if (db->emacrx_completed_flag == 1) {
 		reg_val = readl(db->membase + EMAC_INT_CTL_REG);
-		reg_val |= (0xf << 0) | (0x01 << 8);
+		reg_val |= (EMAC_INT_CTL_TX_EN | EMAC_INT_CTL_TX_ABRT_EN | EMAC_INT_CTL_RX_EN);
 		writel(reg_val, db->membase + EMAC_INT_CTL_REG);
 	} else {
 		reg_val = readl(db->membase + EMAC_INT_CTL_REG);
-		reg_val |= (0xf << 0);
+		reg_val |= (EMAC_INT_CTL_TX_EN | EMAC_INT_CTL_TX_ABRT_EN);
 		writel(reg_val, db->membase + EMAC_INT_CTL_REG);
 	}

@@ -1068,6 +1072,7 @@ out_clk_disable_unprepare:
 	clk_disable_unprepare(db->clk);
 out_dispose_mapping:
 	irq_dispose_mapping(ndev->irq);
+	dma_release_channel(db->rx_chan);
 out_iounmap:
 	iounmap(db->membase);
 out:


@@ -38,6 +38,7 @@
 #define EMAC_RX_CTL_REG		(0x3c)
 #define EMAC_RX_CTL_AUTO_DRQ_EN	(1 << 1)
 #define EMAC_RX_CTL_DMA_EN	(1 << 2)
+#define EMAC_RX_CTL_FLUSH_FIFO	(1 << 3)
 #define EMAC_RX_CTL_PASS_ALL_EN	(1 << 4)
 #define EMAC_RX_CTL_PASS_CTL_EN	(1 << 5)
 #define EMAC_RX_CTL_PASS_CRC_ERR_EN	(1 << 6)
@@ -61,7 +62,21 @@
 #define EMAC_RX_IO_DATA_STATUS_OK	(1 << 7)
 #define EMAC_RX_FBC_REG		(0x50)
 #define EMAC_INT_CTL_REG	(0x54)
+#define EMAC_INT_CTL_RX_EN	(1 << 8)
+#define EMAC_INT_CTL_TX0_EN	(1)
+#define EMAC_INT_CTL_TX1_EN	(1 << 1)
+#define EMAC_INT_CTL_TX_EN	(EMAC_INT_CTL_TX0_EN | EMAC_INT_CTL_TX1_EN)
+#define EMAC_INT_CTL_TX0_ABRT_EN	(0x1 << 2)
+#define EMAC_INT_CTL_TX1_ABRT_EN	(0x1 << 3)
+#define EMAC_INT_CTL_TX_ABRT_EN	(EMAC_INT_CTL_TX0_ABRT_EN | EMAC_INT_CTL_TX1_ABRT_EN)
 #define EMAC_INT_STA_REG	(0x58)
+#define EMAC_INT_STA_TX0_COMPLETE	(0x1)
+#define EMAC_INT_STA_TX1_COMPLETE	(0x1 << 1)
+#define EMAC_INT_STA_TX_COMPLETE	(EMAC_INT_STA_TX0_COMPLETE | EMAC_INT_STA_TX1_COMPLETE)
+#define EMAC_INT_STA_TX0_ABRT	(0x1 << 2)
+#define EMAC_INT_STA_TX1_ABRT	(0x1 << 3)
+#define EMAC_INT_STA_TX_ABRT	(EMAC_INT_STA_TX0_ABRT | EMAC_INT_STA_TX1_ABRT)
+#define EMAC_INT_STA_RX_COMPLETE	(0x1 << 8)
 #define EMAC_MAC_CTL0_REG	(0x5c)
 #define EMAC_MAC_CTL0_RX_FLOW_CTL_EN	(1 << 2)
 #define EMAC_MAC_CTL0_TX_FLOW_CTL_EN	(1 << 3)
@@ -87,8 +102,11 @@
 #define EMAC_MAC_CLRT_RM	(0x0f)
 #define EMAC_MAC_MAXF_REG	(0x70)
 #define EMAC_MAC_SUPP_REG	(0x74)
+#define EMAC_MAC_SUPP_100M	(0x1 << 8)
 #define EMAC_MAC_TEST_REG	(0x78)
 #define EMAC_MAC_MCFG_REG	(0x7c)
+#define EMAC_MAC_MCFG_MII_CLKD_MASK	(0xff << 2)
+#define EMAC_MAC_MCFG_MII_CLKD_72	(0x0d << 2)
 #define EMAC_MAC_A0_REG		(0x98)
 #define EMAC_MAC_A1_REG		(0x9c)
 #define EMAC_MAC_A2_REG		(0xa0)


@@ -1237,6 +1237,7 @@ static int bmac_probe(struct macio_dev *mdev, const struct of_device_id *match)
 	struct bmac_data *bp;
 	const unsigned char *prop_addr;
 	unsigned char addr[6];
+	u8 macaddr[6];
 	struct net_device *dev;
 	int is_bmac_plus = ((int)match->data) != 0;

@@ -1284,7 +1285,9 @@ static int bmac_probe(struct macio_dev *mdev, const struct of_device_id *match)
 	rev = addr[0] == 0 && addr[1] == 0xA0;
 	for (j = 0; j < 6; ++j)
-		dev->dev_addr[j] = rev ? bitrev8(addr[j]): addr[j];
+		macaddr[j] = rev ? bitrev8(addr[j]): addr[j];
+
+	eth_hw_addr_set(dev, macaddr);

 	/* Enable chip without interrupts for now */
 	bmac_enable_and_reset_chip(dev);


@@ -90,7 +90,7 @@ static void mace_set_timeout(struct net_device *dev);
 static void mace_tx_timeout(struct timer_list *t);
 static inline void dbdma_reset(volatile struct dbdma_regs __iomem *dma);
 static inline void mace_clean_rings(struct mace_data *mp);
-static void __mace_set_address(struct net_device *dev, void *addr);
+static void __mace_set_address(struct net_device *dev, const void *addr);

 /*
  * If we can't get a skbuff when we need it, we use this area for DMA.
@@ -112,6 +112,7 @@ static int mace_probe(struct macio_dev *mdev, const struct of_device_id *match)
 	struct net_device *dev;
 	struct mace_data *mp;
 	const unsigned char *addr;
+	u8 macaddr[ETH_ALEN];
 	int j, rev, rc = -EBUSY;

 	if (macio_resource_count(mdev) != 3 || macio_irq_count(mdev) != 3) {
@@ -167,8 +168,9 @@ static int mace_probe(struct macio_dev *mdev, const struct of_device_id *match)
 	rev = addr[0] == 0 && addr[1] == 0xA0;
 	for (j = 0; j < 6; ++j) {
-		dev->dev_addr[j] = rev ? bitrev8(addr[j]): addr[j];
+		macaddr[j] = rev ? bitrev8(addr[j]): addr[j];
 	}
+	eth_hw_addr_set(dev, macaddr);

 	mp->chipid = (in_8(&mp->mace->chipid_hi) << 8) |
 		     in_8(&mp->mace->chipid_lo);
@@ -369,11 +371,12 @@ static void mace_reset(struct net_device *dev)
 		out_8(&mb->plscc, PORTSEL_GPSI + ENPLSIO);
 }

-static void __mace_set_address(struct net_device *dev, void *addr)
+static void __mace_set_address(struct net_device *dev, const void *addr)
 {
 	struct mace_data *mp = netdev_priv(dev);
 	volatile struct mace __iomem *mb = mp->mace;
-	unsigned char *p = addr;
+	const unsigned char *p = addr;
+	u8 macaddr[ETH_ALEN];
 	int i;

 	/* load up the hardware address */
@@ -385,7 +388,10 @@ static void __mace_set_address(struct net_device *dev, void *addr)
 			;
 	}
 	for (i = 0; i < 6; ++i)
-		out_8(&mb->padr, dev->dev_addr[i] = p[i]);
+		out_8(&mb->padr, macaddr[i] = p[i]);
+
+	eth_hw_addr_set(dev, macaddr);
+
 	if (mp->chipid != BROKEN_ADDRCHG_REV)
 		out_8(&mb->iac, 0);
 }


@@ -4020,10 +4020,12 @@ static int bcmgenet_probe(struct platform_device *pdev)

 	/* Request the WOL interrupt and advertise suspend if available */
 	priv->wol_irq_disabled = true;
-	err = devm_request_irq(&pdev->dev, priv->wol_irq, bcmgenet_wol_isr, 0,
-			       dev->name, priv);
-	if (!err)
-		device_set_wakeup_capable(&pdev->dev, 1);
+	if (priv->wol_irq > 0) {
+		err = devm_request_irq(&pdev->dev, priv->wol_irq,
+				       bcmgenet_wol_isr, 0, dev->name, priv);
+		if (!err)
+			device_set_wakeup_capable(&pdev->dev, 1);
+	}

 	/* Set the needed headroom to account for any possible
 	 * features enabling/disabling at runtime


@@ -32,6 +32,7 @@
 #include <linux/tcp.h>
 #include <linux/ipv6.h>

+#include <net/inet_ecn.h>
 #include <net/route.h>
 #include <net/ip6_route.h>

@@ -99,7 +100,7 @@ cxgb_find_route(struct cxgb4_lld_info *lldi,
 	rt = ip_route_output_ports(&init_net, &fl4, NULL, peer_ip, local_ip,
 				   peer_port, local_port, IPPROTO_TCP,
-				   tos, 0);
+				   tos & ~INET_ECN_MASK, 0);
 	if (IS_ERR(rt))
 		return NULL;
 	n = dst_neigh_lookup(&rt->dst, &peer_ip);


@@ -51,6 +51,7 @@ struct tgec_mdio_controller {
 struct mdio_fsl_priv {
 	struct tgec_mdio_controller __iomem *mdio_base;
 	bool is_little_endian;
+	bool has_a009885;
 	bool has_a011043;
 };

@@ -186,10 +187,10 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
 {
 	struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
 	struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
+	unsigned long flags;
 	uint16_t dev_addr;
 	uint32_t mdio_stat;
 	uint32_t mdio_ctl;
-	uint16_t value;
 	int ret;
 	bool endian = priv->is_little_endian;

@@ -221,12 +222,18 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
 			return ret;
 	}

+	if (priv->has_a009885)
+		/* Once the operation completes, i.e. MDIO_STAT_BSY clears, we
+		 * must read back the data register within 16 MDC cycles.
+		 */
+		local_irq_save(flags);
+
 	/* Initiate the read */
 	xgmac_write32(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl, endian);

 	ret = xgmac_wait_until_done(&bus->dev, regs, endian);
 	if (ret)
-		return ret;
+		goto irq_restore;

 	/* Return all Fs if nothing was there */
 	if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) &&
@@ -234,13 +241,17 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
 		dev_dbg(&bus->dev,
 			"Error while reading PHY%d reg at %d.%hhu\n",
 			phy_id, dev_addr, regnum);
-		return 0xffff;
+		ret = 0xffff;
+	} else {
+		ret = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
+		dev_dbg(&bus->dev, "read %04x\n", ret);
 	}

-	value = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
-	dev_dbg(&bus->dev, "read %04x\n", value);
+irq_restore:
+	if (priv->has_a009885)
+		local_irq_restore(flags);

-	return value;
+	return ret;
 }

 static int xgmac_mdio_probe(struct platform_device *pdev)
@@ -287,6 +298,8 @@ static int xgmac_mdio_probe(struct platform_device *pdev)

 	priv->is_little_endian = device_property_read_bool(&pdev->dev,
 							   "little-endian");
+	priv->has_a009885 = device_property_read_bool(&pdev->dev,
+						      "fsl,erratum-a009885");
 	priv->has_a011043 = device_property_read_bool(&pdev->dev,
 						      "fsl,erratum-a011043");

@@ -318,9 +331,10 @@ err_ioremap:
 static int xgmac_mdio_remove(struct platform_device *pdev)
 {
 	struct mii_bus *bus = platform_get_drvdata(pdev);
+	struct mdio_fsl_priv *priv = bus->priv;

 	mdiobus_unregister(bus);
-	iounmap(bus->priv);
+	iounmap(priv->mdio_base);
 	mdiobus_free(bus);

 	return 0;


@@ -117,9 +117,10 @@ static int sni_82596_probe(struct platform_device *dev)
 	netdevice->dev_addr[5] = readb(eth_addr + 0x06);
 	iounmap(eth_addr);

-	if (!netdevice->irq) {
+	if (netdevice->irq < 0) {
 		printk(KERN_ERR "%s: IRQ not found for i82596 at 0x%lx\n",
 			__FILE__, netdevice->base_addr);
+		retval = netdevice->irq;
 		goto probe_failed;
 	}


@@ -283,7 +283,6 @@ struct prestera_router {
 	struct list_head rif_entry_list;
 	struct notifier_block inetaddr_nb;
 	struct notifier_block inetaddr_valid_nb;
-	bool aborted;
 };

 struct prestera_rxtx_params {


@@ -1831,8 +1831,8 @@ static int prestera_iface_to_msg(struct prestera_iface *iface,
 int prestera_hw_rif_create(struct prestera_switch *sw,
 			   struct prestera_iface *iif, u8 *mac, u16 *rif_id)
 {
-	struct prestera_msg_rif_req req;
 	struct prestera_msg_rif_resp resp;
+	struct prestera_msg_rif_req req;
 	int err;

 	memcpy(req.mac, mac, ETH_ALEN);

@@ -1868,9 +1868,9 @@ int prestera_hw_rif_delete(struct prestera_switch *sw, u16 rif_id,

 int prestera_hw_vr_create(struct prestera_switch *sw, u16 *vr_id)
 {
-	int err;
 	struct prestera_msg_vr_resp resp;
 	struct prestera_msg_vr_req req;
+	int err;

 	err = prestera_cmd_ret(sw, PRESTERA_CMD_TYPE_ROUTER_VR_CREATE,
 			       &req.cmd, sizeof(req), &resp.ret, sizeof(resp));


@@ -982,6 +982,7 @@ static void prestera_switch_fini(struct prestera_switch *sw)
 	prestera_event_handlers_unregister(sw);
 	prestera_rxtx_switch_fini(sw);
 	prestera_switchdev_fini(sw);
+	prestera_router_fini(sw);
 	prestera_netdev_event_handler_unregister(sw);
 	prestera_hw_switch_fini(sw);
 }


@@ -25,10 +25,10 @@ static int __prestera_inetaddr_port_event(struct net_device *port_dev,
 					  struct netlink_ext_ack *extack)
 {
 	struct prestera_port *port = netdev_priv(port_dev);
-	int err;
-	struct prestera_rif_entry *re;
 	struct prestera_rif_entry_key re_key = {};
+	struct prestera_rif_entry *re;
 	u32 kern_tb_id;
+	int err;

 	err = prestera_is_valid_mac_addr(port, port_dev->dev_addr);
 	if (err) {

@@ -45,21 +45,21 @@ static int __prestera_inetaddr_port_event(struct net_device *port_dev,
 	switch (event) {
 	case NETDEV_UP:
 		if (re) {
-			NL_SET_ERR_MSG_MOD(extack, "rif_entry already exist");
+			NL_SET_ERR_MSG_MOD(extack, "RIF already exist");
 			return -EEXIST;
 		}
 		re = prestera_rif_entry_create(port->sw, &re_key,
 					       prestera_fix_tb_id(kern_tb_id),
 					       port_dev->dev_addr);
 		if (!re) {
-			NL_SET_ERR_MSG_MOD(extack, "Can't create rif_entry");
+			NL_SET_ERR_MSG_MOD(extack, "Can't create RIF");
 			return -EINVAL;
 		}
 		dev_hold(port_dev);
 		break;
 	case NETDEV_DOWN:
 		if (!re) {
-			NL_SET_ERR_MSG_MOD(extack, "rif_entry not exist");
+			NL_SET_ERR_MSG_MOD(extack, "Can't find RIF");
 			return -EEXIST;
 		}
 		prestera_rif_entry_destroy(port->sw, re);

@@ -75,11 +75,11 @@ static int __prestera_inetaddr_event(struct prestera_switch *sw,
 				     unsigned long event,
 				     struct netlink_ext_ack *extack)
 {
-	if (prestera_netdev_check(dev) && !netif_is_bridge_port(dev) &&
-	    !netif_is_lag_port(dev) && !netif_is_ovs_port(dev))
-		return __prestera_inetaddr_port_event(dev, event, extack);
+	if (!prestera_netdev_check(dev) || netif_is_bridge_port(dev) ||
+	    netif_is_lag_port(dev) || netif_is_ovs_port(dev))
+		return 0;

-	return 0;
+	return __prestera_inetaddr_port_event(dev, event, extack);
 }

 static int __prestera_inetaddr_cb(struct notifier_block *nb,

@@ -126,6 +126,8 @@ static int __prestera_inetaddr_valid_cb(struct notifier_block *nb,
 		goto out;

 	if (ipv4_is_multicast(ivi->ivi_addr)) {
+		NL_SET_ERR_MSG_MOD(ivi->extack,
+				   "Multicast addr on RIF is not supported");
 		err = -EINVAL;
 		goto out;
 	}

@@ -166,7 +168,7 @@ int prestera_router_init(struct prestera_switch *sw)
 err_register_inetaddr_notifier:
 	unregister_inetaddr_validator_notifier(&router->inetaddr_valid_nb);
 err_register_inetaddr_validator_notifier:
-	/* prestera_router_hw_fini */
+	prestera_router_hw_fini(sw);
 err_router_lib_init:
 	kfree(sw->router);
 	return err;

@@ -176,7 +178,7 @@ void prestera_router_fini(struct prestera_switch *sw)
 {
 	unregister_inetaddr_notifier(&sw->router->inetaddr_nb);
 	unregister_inetaddr_validator_notifier(&sw->router->inetaddr_valid_nb);
-	/* router_hw_fini */
+	prestera_router_hw_fini(sw);
 	kfree(sw->router);
 	sw->router = NULL;
 }


@@ -29,6 +29,12 @@ int prestera_router_hw_init(struct prestera_switch *sw)
 	return 0;
 }
 
+void prestera_router_hw_fini(struct prestera_switch *sw)
+{
+	WARN_ON(!list_empty(&sw->router->vr_list));
+	WARN_ON(!list_empty(&sw->router->rif_entry_list));
+}
+
 static struct prestera_vr *__prestera_vr_find(struct prestera_switch *sw,
 					      u32 tb_id)
 {
@@ -47,13 +53,8 @@ static struct prestera_vr *__prestera_vr_create(struct prestera_switch *sw,
 						struct netlink_ext_ack *extack)
 {
 	struct prestera_vr *vr;
-	u16 hw_vr_id;
 	int err;
 
-	err = prestera_hw_vr_create(sw, &hw_vr_id);
-	if (err)
-		return ERR_PTR(-ENOMEM);
-
 	vr = kzalloc(sizeof(*vr), GFP_KERNEL);
 	if (!vr) {
 		err = -ENOMEM;
@@ -61,23 +62,26 @@ static struct prestera_vr *__prestera_vr_create(struct prestera_switch *sw,
 	}
 
 	vr->tb_id = tb_id;
-	vr->hw_vr_id = hw_vr_id;
+
+	err = prestera_hw_vr_create(sw, &vr->hw_vr_id);
+	if (err)
+		goto err_hw_create;
 
 	list_add(&vr->router_node, &sw->router->vr_list);
 
 	return vr;
 
-err_alloc_vr:
-	prestera_hw_vr_delete(sw, hw_vr_id);
+err_hw_create:
 	kfree(vr);
+err_alloc_vr:
 	return ERR_PTR(err);
 }
 
 static void __prestera_vr_destroy(struct prestera_switch *sw,
 				  struct prestera_vr *vr)
 {
-	prestera_hw_vr_delete(sw, vr->hw_vr_id);
 	list_del(&vr->router_node);
+	prestera_hw_vr_delete(sw, vr->hw_vr_id);
 	kfree(vr);
 }
 
@@ -87,17 +91,22 @@ static struct prestera_vr *prestera_vr_get(struct prestera_switch *sw, u32 tb_id
 	struct prestera_vr *vr;
 
 	vr = __prestera_vr_find(sw, tb_id);
-	if (!vr)
+	if (vr) {
+		refcount_inc(&vr->refcount);
+	} else {
 		vr = __prestera_vr_create(sw, tb_id, extack);
 		if (IS_ERR(vr))
 			return ERR_CAST(vr);
+		refcount_set(&vr->refcount, 1);
+	}
 
 	return vr;
 }
 
 static void prestera_vr_put(struct prestera_switch *sw, struct prestera_vr *vr)
 {
-	if (!vr->ref_cnt)
+	if (refcount_dec_and_test(&vr->refcount))
 		__prestera_vr_destroy(sw, vr);
 }
 
@@ -120,7 +129,7 @@ __prestera_rif_entry_key_copy(const struct prestera_rif_entry_key *in,
 		out->iface.vlan_id = in->iface.vlan_id;
 		break;
 	default:
-		pr_err("Unsupported iface type");
+		WARN(1, "Unsupported iface type");
 		return -EINVAL;
 	}
 
@@ -158,7 +167,6 @@ void prestera_rif_entry_destroy(struct prestera_switch *sw,
 	iface.vr_id = e->vr->hw_vr_id;
 	prestera_hw_rif_delete(sw, e->hw_id, &iface);
 
-	e->vr->ref_cnt--;
 	prestera_vr_put(sw, e->vr);
 	kfree(e);
 }
@@ -183,7 +191,6 @@ prestera_rif_entry_create(struct prestera_switch *sw,
 	if (IS_ERR(e->vr))
 		goto err_vr_get;
 
-	e->vr->ref_cnt++;
 	memcpy(&e->addr, addr, sizeof(e->addr));
 
 	/* HW */
@@ -198,7 +205,6 @@ prestera_rif_entry_create(struct prestera_switch *sw,
 	return e;
 
 err_hw_create:
-	e->vr->ref_cnt--;
 	prestera_vr_put(sw, e->vr);
 err_vr_get:
 err_key_copy:


@@ -6,7 +6,7 @@
 
 struct prestera_vr {
 	struct list_head router_node;
-	unsigned int ref_cnt;
+	refcount_t refcount;
 	u32 tb_id;			/* key (kernel fib table id) */
 	u16 hw_vr_id;			/* virtual router ID */
 	u8 __pad[2];
@@ -32,5 +32,6 @@ prestera_rif_entry_create(struct prestera_switch *sw,
 			  struct prestera_rif_entry_key *k,
 			  u32 tb_id, const unsigned char *addr);
 int prestera_router_hw_init(struct prestera_switch *sw);
+void prestera_router_hw_fini(struct prestera_switch *sw);
 
 #endif /* _PRESTERA_ROUTER_HW_H_ */


@@ -267,7 +267,7 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
 					   phylink_config);
 	struct mtk_eth *eth = mac->hw;
 	u32 mcr_cur, mcr_new, sid, i;
-	int val, ge_mode, err;
+	int val, ge_mode, err = 0;
 
 	/* MT76x8 has no hardware settings between for the MAC */
 	if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628) &&


@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /* Copyright (c) 2018 Mellanox Technologies. */
 
+#include <net/inet_ecn.h>
 #include <net/vxlan.h>
 #include <net/gre.h>
 #include <net/geneve.h>
@@ -235,7 +236,7 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
 	int err;
 
 	/* add the IP fields */
-	attr.fl.fl4.flowi4_tos = tun_key->tos;
+	attr.fl.fl4.flowi4_tos = tun_key->tos & ~INET_ECN_MASK;
 	attr.fl.fl4.daddr = tun_key->u.ipv4.dst;
 	attr.fl.fl4.saddr = tun_key->u.ipv4.src;
 	attr.ttl = tun_key->ttl;
@@ -350,7 +351,7 @@ int mlx5e_tc_tun_update_header_ipv4(struct mlx5e_priv *priv,
 	int err;
 
 	/* add the IP fields */
-	attr.fl.fl4.flowi4_tos = tun_key->tos;
+	attr.fl.fl4.flowi4_tos = tun_key->tos & ~INET_ECN_MASK;
 	attr.fl.fl4.daddr = tun_key->u.ipv4.dst;
 	attr.fl.fl4.saddr = tun_key->u.ipv4.src;
 	attr.ttl = tun_key->ttl;


@@ -771,7 +771,10 @@ void ocelot_phylink_mac_link_up(struct ocelot *ocelot, int port,
 	ocelot_write_rix(ocelot, 0, ANA_POL_FLOWC, port);
 
-	ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, tx_pause);
+	/* Don't attempt to send PAUSE frames on the NPI port, it's broken */
+	if (port != ocelot->npi)
+		ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA,
+				    tx_pause);
 
 	/* Undo the effects of ocelot_phylink_mac_link_down:
 	 * enable MAC module


@@ -559,13 +559,6 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
 			return -EOPNOTSUPP;
 		}
 
-		if (filter->block_id == VCAP_IS1 &&
-		    !is_zero_ether_addr(match.mask->dst)) {
-			NL_SET_ERR_MSG_MOD(extack,
-					   "Key type S1_NORMAL cannot match on destination MAC");
-			return -EOPNOTSUPP;
-		}
-
 		/* The hw support mac matches only for MAC_ETYPE key,
 		 * therefore if other matches(port, tcp flags, etc) are added
 		 * then just bail out
@@ -580,6 +573,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
 			return -EOPNOTSUPP;
 
 		flow_rule_match_eth_addrs(rule, &match);
+
+		if (filter->block_id == VCAP_IS1 &&
+		    !is_zero_ether_addr(match.mask->dst)) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Key type S1_NORMAL cannot match on destination MAC");
+			return -EOPNOTSUPP;
+		}
+
 		filter->key_type = OCELOT_VCAP_KEY_ETYPE;
 		ether_addr_copy(filter->key.etype.dmac.value,
 				match.key->dst);
@@ -805,13 +806,34 @@ int ocelot_cls_flower_replace(struct ocelot *ocelot, int port,
 	struct netlink_ext_ack *extack = f->common.extack;
 	struct ocelot_vcap_filter *filter;
 	int chain = f->common.chain_index;
-	int ret;
+	int block_id, ret;
 
 	if (chain && !ocelot_find_vcap_filter_that_points_at(ocelot, chain)) {
 		NL_SET_ERR_MSG_MOD(extack, "No default GOTO action points to this chain");
 		return -EOPNOTSUPP;
 	}
 
+	block_id = ocelot_chain_to_block(chain, ingress);
+	if (block_id < 0) {
+		NL_SET_ERR_MSG_MOD(extack, "Cannot offload to this chain");
+		return -EOPNOTSUPP;
+	}
+
+	filter = ocelot_vcap_block_find_filter_by_id(&ocelot->block[block_id],
+						     f->cookie, true);
+	if (filter) {
+		/* Filter already exists on other ports */
+		if (!ingress) {
+			NL_SET_ERR_MSG_MOD(extack, "VCAP ES0 does not support shared filters");
+			return -EOPNOTSUPP;
+		}
+		filter->ingress_port_mask |= BIT(port);
+		return ocelot_vcap_filter_replace(ocelot, filter);
+	}
+
+	/* Filter didn't exist, create it now */
 	filter = ocelot_vcap_filter_create(ocelot, port, ingress, f);
 	if (!filter)
 		return -ENOMEM;
@@ -874,6 +896,12 @@ int ocelot_cls_flower_destroy(struct ocelot *ocelot, int port,
 	if (filter->type == OCELOT_VCAP_FILTER_DUMMY)
 		return ocelot_vcap_dummy_filter_del(ocelot, filter);
 
+	if (ingress) {
+		filter->ingress_port_mask &= ~BIT(port);
+		if (filter->ingress_port_mask)
+			return ocelot_vcap_filter_replace(ocelot, filter);
+	}
+
 	return ocelot_vcap_filter_del(ocelot, filter);
 }
 EXPORT_SYMBOL_GPL(ocelot_cls_flower_destroy);


@@ -1187,7 +1187,7 @@ static int ocelot_netdevice_bridge_join(struct net_device *dev,
 	ocelot_port_bridge_join(ocelot, port, bridge);
 
 	err = switchdev_bridge_port_offload(brport_dev, dev, priv,
-					    &ocelot_netdevice_nb,
+					    &ocelot_switchdev_nb,
 					    &ocelot_switchdev_blocking_nb,
 					    false, extack);
 	if (err)
@@ -1201,7 +1201,7 @@ static int ocelot_netdevice_bridge_join(struct net_device *dev,
 
 err_switchdev_sync:
 	switchdev_bridge_port_unoffload(brport_dev, priv,
-					&ocelot_netdevice_nb,
+					&ocelot_switchdev_nb,
 					&ocelot_switchdev_blocking_nb);
 err_switchdev_offload:
 	ocelot_port_bridge_leave(ocelot, port, bridge);
@@ -1214,7 +1214,7 @@ static void ocelot_netdevice_pre_bridge_leave(struct net_device *dev,
 	struct ocelot_port_private *priv = netdev_priv(dev);
 
 	switchdev_bridge_port_unoffload(brport_dev, priv,
-					&ocelot_netdevice_nb,
+					&ocelot_switchdev_nb,
 					&ocelot_switchdev_blocking_nb);
 }


@@ -12,6 +12,7 @@
 #include <linux/io.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
 #include <linux/mfd/syscon.h>
@@ -48,16 +49,75 @@
 #define DWMAC_RX_VARDELAY(d)	((d) << DWMAC_RX_VARDELAY_SHIFT)
 #define DWMAC_RXN_VARDELAY(d)	((d) << DWMAC_RXN_VARDELAY_SHIFT)
 
+struct oxnas_dwmac;
+
+struct oxnas_dwmac_data {
+	int (*setup)(struct oxnas_dwmac *dwmac);
+};
+
 struct oxnas_dwmac {
 	struct device	*dev;
 	struct clk	*clk;
 	struct regmap	*regmap;
+	const struct oxnas_dwmac_data	*data;
 };
 
+static int oxnas_dwmac_setup_ox810se(struct oxnas_dwmac *dwmac)
+{
+	unsigned int value;
+	int ret;
+
+	ret = regmap_read(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, &value);
+	if (ret < 0)
+		return ret;
+
+	/* Enable GMII_GTXCLK to follow GMII_REFCLK, required for gigabit PHY */
+	value |= BIT(DWMAC_CKEN_GTX) |
+		 /* Use simple mux for 25/125 Mhz clock switching */
+		 BIT(DWMAC_SIMPLE_MUX);
+
+	regmap_write(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, value);
+
+	return 0;
+}
+
+static int oxnas_dwmac_setup_ox820(struct oxnas_dwmac *dwmac)
+{
+	unsigned int value;
+	int ret;
+
+	ret = regmap_read(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, &value);
+	if (ret < 0)
+		return ret;
+
+	/* Enable GMII_GTXCLK to follow GMII_REFCLK, required for gigabit PHY */
+	value |= BIT(DWMAC_CKEN_GTX) |
+		 /* Use simple mux for 25/125 Mhz clock switching */
+		 BIT(DWMAC_SIMPLE_MUX) |
+		 /* set auto switch tx clock source */
+		 BIT(DWMAC_AUTO_TX_SOURCE) |
+		 /* enable tx & rx vardelay */
+		 BIT(DWMAC_CKEN_TX_OUT) |
+		 BIT(DWMAC_CKEN_TXN_OUT) |
+		 BIT(DWMAC_CKEN_TX_IN) |
+		 BIT(DWMAC_CKEN_RX_OUT) |
+		 BIT(DWMAC_CKEN_RXN_OUT) |
+		 BIT(DWMAC_CKEN_RX_IN);
+	regmap_write(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, value);
+
+	/* set tx & rx vardelay */
+	value = DWMAC_TX_VARDELAY(4) |
+		DWMAC_TXN_VARDELAY(2) |
+		DWMAC_RX_VARDELAY(10) |
+		DWMAC_RXN_VARDELAY(8);
+	regmap_write(dwmac->regmap, OXNAS_DWMAC_DELAY_REGOFFSET, value);
+
+	return 0;
+}
+
 static int oxnas_dwmac_init(struct platform_device *pdev, void *priv)
 {
 	struct oxnas_dwmac *dwmac = priv;
-	unsigned int value;
 	int ret;
 
 	/* Reset HW here before changing the glue configuration */
@@ -69,35 +129,11 @@ static int oxnas_dwmac_init(struct platform_device *pdev, void *priv)
 	if (ret)
 		return ret;
 
-	ret = regmap_read(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, &value);
-	if (ret < 0) {
+	ret = dwmac->data->setup(dwmac);
+	if (ret)
 		clk_disable_unprepare(dwmac->clk);
-		return ret;
-	}
 
-	/* Enable GMII_GTXCLK to follow GMII_REFCLK, required for gigabit PHY */
-	value |= BIT(DWMAC_CKEN_GTX) |
-		 /* Use simple mux for 25/125 Mhz clock switching */
-		 BIT(DWMAC_SIMPLE_MUX) |
-		 /* set auto switch tx clock source */
-		 BIT(DWMAC_AUTO_TX_SOURCE) |
-		 /* enable tx & rx vardelay */
-		 BIT(DWMAC_CKEN_TX_OUT) |
-		 BIT(DWMAC_CKEN_TXN_OUT) |
-		 BIT(DWMAC_CKEN_TX_IN) |
-		 BIT(DWMAC_CKEN_RX_OUT) |
-		 BIT(DWMAC_CKEN_RXN_OUT) |
-		 BIT(DWMAC_CKEN_RX_IN);
-	regmap_write(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, value);
-
-	/* set tx & rx vardelay */
-	value = DWMAC_TX_VARDELAY(4) |
-		DWMAC_TXN_VARDELAY(2) |
-		DWMAC_RX_VARDELAY(10) |
-		DWMAC_RXN_VARDELAY(8);
-	regmap_write(dwmac->regmap, OXNAS_DWMAC_DELAY_REGOFFSET, value);
-
-	return 0;
+	return ret;
 }
 
 static void oxnas_dwmac_exit(struct platform_device *pdev, void *priv)
@@ -128,6 +164,12 @@ static int oxnas_dwmac_probe(struct platform_device *pdev)
 		goto err_remove_config_dt;
 	}
 
+	dwmac->data = (const struct oxnas_dwmac_data *)of_device_get_match_data(&pdev->dev);
+	if (!dwmac->data) {
+		ret = -EINVAL;
+		goto err_remove_config_dt;
+	}
+
 	dwmac->dev = &pdev->dev;
 	plat_dat->bsp_priv = dwmac;
 	plat_dat->init = oxnas_dwmac_init;
@@ -166,8 +208,23 @@ err_remove_config_dt:
 	return ret;
 }
 
+static const struct oxnas_dwmac_data ox810se_dwmac_data = {
+	.setup = oxnas_dwmac_setup_ox810se,
+};
+
+static const struct oxnas_dwmac_data ox820_dwmac_data = {
+	.setup = oxnas_dwmac_setup_ox820,
+};
+
static const struct of_device_id oxnas_dwmac_match[] = {
-	{ .compatible = "oxsemi,ox820-dwmac" },
+	{
+		.compatible = "oxsemi,ox810se-dwmac",
+		.data = &ox810se_dwmac_data,
+	},
+	{
+		.compatible = "oxsemi,ox820-dwmac",
+		.data = &ox820_dwmac_data,
+	},
 	{ }
 };
 MODULE_DEVICE_TABLE(of, oxnas_dwmac_match);


@@ -7159,7 +7159,8 @@ int stmmac_dvr_probe(struct device *device,
 	pm_runtime_get_noresume(device);
 	pm_runtime_set_active(device);
-	pm_runtime_enable(device);
+	if (!pm_runtime_enabled(device))
+		pm_runtime_enable(device);
 
 	if (priv->hw->pcs != STMMAC_PCS_TBI &&
 	    priv->hw->pcs != STMMAC_PCS_RTBI) {


@@ -349,7 +349,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 	struct cpsw_common	*cpsw = ndev_to_cpsw(xmeta->ndev);
 	int			pkt_size = cpsw->rx_packet_max;
 	int			ret = 0, port, ch = xmeta->ch;
-	int			headroom = CPSW_HEADROOM;
+	int			headroom = CPSW_HEADROOM_NA;
 	struct net_device	*ndev = xmeta->ndev;
 	struct cpsw_priv	*priv;
 	struct page_pool	*pool;
@@ -392,7 +392,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 	}
 
 	if (priv->xdp_prog) {
-		int headroom = CPSW_HEADROOM, size = len;
+		int size = len;
 
 		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
 		if (status & CPDMA_RX_VLAN_ENCAP) {
@@ -442,7 +442,7 @@ requeue:
 	xmeta->ndev = ndev;
 	xmeta->ch = ch;
 
-	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
+	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA;
 	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
 				       pkt_size, 0);
 	if (ret < 0) {


@@ -283,7 +283,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 {
 	struct page *new_page, *page = token;
 	void *pa = page_address(page);
-	int headroom = CPSW_HEADROOM;
+	int headroom = CPSW_HEADROOM_NA;
 	struct cpsw_meta_xdp *xmeta;
 	struct cpsw_common *cpsw;
 	struct net_device *ndev;
@@ -336,7 +336,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 	}
 
 	if (priv->xdp_prog) {
-		int headroom = CPSW_HEADROOM, size = len;
+		int size = len;
 
 		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
 		if (status & CPDMA_RX_VLAN_ENCAP) {
@@ -386,7 +386,7 @@ requeue:
 	xmeta->ndev = ndev;
 	xmeta->ch = ch;
 
-	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
+	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA;
 	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
 				       pkt_size, 0);
 	if (ret < 0) {


@@ -1122,7 +1122,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
 			xmeta->ndev = priv->ndev;
 			xmeta->ch = ch;
 
-			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
+			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM_NA;
 			ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch,
 							    page, dma,
 							    cpsw->rx_packet_max,


@@ -5,7 +5,7 @@
 
 config NET_VENDOR_VERTEXCOM
 	bool "Vertexcom devices"
-	default n
+	default y
 	help
 	  If you have a network (Ethernet) card belonging to this class, say Y.


@@ -41,8 +41,9 @@
 #include "xilinx_axienet.h"
 
 /* Descriptors defines for Tx and Rx DMA */
-#define TX_BD_NUM_DEFAULT		64
+#define TX_BD_NUM_DEFAULT		128
 #define RX_BD_NUM_DEFAULT		1024
+#define TX_BD_NUM_MIN			(MAX_SKB_FRAGS + 1)
 #define TX_BD_NUM_MAX			4096
 #define RX_BD_NUM_MAX			4096
 
@@ -496,7 +497,8 @@ static void axienet_setoptions(struct net_device *ndev, u32 options)
 
 static int __axienet_device_reset(struct axienet_local *lp)
 {
-	u32 timeout;
+	u32 value;
+	int ret;
 
 	/* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset
 	 * process of Axi DMA takes a while to complete as all pending
@@ -506,15 +508,23 @@ static int __axienet_device_reset(struct axienet_local *lp)
 	 * they both reset the entire DMA core, so only one needs to be used.
 	 */
 	axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, XAXIDMA_CR_RESET_MASK);
-	timeout = DELAY_OF_ONE_MILLISEC;
-	while (axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET) &
-	       XAXIDMA_CR_RESET_MASK) {
-		udelay(1);
-		if (--timeout == 0) {
-			netdev_err(lp->ndev, "%s: DMA reset timeout!\n",
-				   __func__);
-			return -ETIMEDOUT;
-		}
+	ret = read_poll_timeout(axienet_dma_in32, value,
+				!(value & XAXIDMA_CR_RESET_MASK),
+				DELAY_OF_ONE_MILLISEC, 50000, false, lp,
+				XAXIDMA_TX_CR_OFFSET);
+	if (ret) {
+		dev_err(lp->dev, "%s: DMA reset timeout!\n", __func__);
+		return ret;
+	}
+
+	/* Wait for PhyRstCmplt bit to be set, indicating the PHY reset has finished */
+	ret = read_poll_timeout(axienet_ior, value,
+				value & XAE_INT_PHYRSTCMPLT_MASK,
+				DELAY_OF_ONE_MILLISEC, 50000, false, lp,
+				XAE_IS_OFFSET);
+	if (ret) {
+		dev_err(lp->dev, "%s: timeout waiting for PhyRstCmplt\n", __func__);
+		return ret;
 	}
 
 	return 0;
@@ -623,6 +633,8 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
 		if (nr_bds == -1 && !(status & XAXIDMA_BD_STS_COMPLETE_MASK))
 			break;
 
+		/* Ensure we see complete descriptor update */
+		dma_rmb();
 		phys = desc_get_phys_addr(lp, cur_p);
 		dma_unmap_single(ndev->dev.parent, phys,
 				 (cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
@@ -631,13 +643,15 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
 		if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK))
 			dev_consume_skb_irq(cur_p->skb);
 
-		cur_p->cntrl = 0;
 		cur_p->app0 = 0;
 		cur_p->app1 = 0;
 		cur_p->app2 = 0;
 		cur_p->app4 = 0;
-		cur_p->status = 0;
 		cur_p->skb = NULL;
+		/* ensure our transmit path and device don't prematurely see status cleared */
+		wmb();
+		cur_p->cntrl = 0;
+		cur_p->status = 0;
 
 		if (sizep)
 			*sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
@@ -646,6 +660,32 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
 	return i;
 }
 
+/**
+ * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
+ * @lp:		Pointer to the axienet_local structure
+ * @num_frag:	The number of BDs to check for
+ *
+ * Return: 0, on success
+ *	    NETDEV_TX_BUSY, if any of the descriptors are not free
+ *
+ * This function is invoked before BDs are allocated and transmission starts.
+ * This function returns 0 if a BD or group of BDs can be allocated for
+ * transmission. If the BD or any of the BDs are not free the function
+ * returns a busy status. This is invoked from axienet_start_xmit.
+ */
+static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
+					    int num_frag)
+{
+	struct axidma_bd *cur_p;
+
+	/* Ensure we see all descriptor updates from device or TX IRQ path */
+	rmb();
+	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
+	if (cur_p->cntrl)
+		return NETDEV_TX_BUSY;
+	return 0;
+}
+
 /**
  * axienet_start_xmit_done - Invoked once a transmit is completed by the
  * Axi DMA Tx channel.
@@ -675,30 +715,8 @@ static void axienet_start_xmit_done(struct net_device *ndev)
 	/* Matches barrier in axienet_start_xmit */
 	smp_mb();
 
-	netif_wake_queue(ndev);
-}
-
-/**
- * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
- * @lp:		Pointer to the axienet_local structure
- * @num_frag:	The number of BDs to check for
- *
- * Return: 0, on success
- *	    NETDEV_TX_BUSY, if any of the descriptors are not free
- *
- * This function is invoked before BDs are allocated and transmission starts.
- * This function returns 0 if a BD or group of BDs can be allocated for
- * transmission. If the BD or any of the BDs are not free the function
- * returns a busy status. This is invoked from axienet_start_xmit.
- */
-static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
-					    int num_frag)
-{
-	struct axidma_bd *cur_p;
-
-	cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
-	if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK)
-		return NETDEV_TX_BUSY;
-	return 0;
+	if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
+		netif_wake_queue(ndev);
 }
 
 /**
@@ -730,20 +748,15 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	num_frag = skb_shinfo(skb)->nr_frags;
 	cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
 
-	if (axienet_check_tx_bd_space(lp, num_frag)) {
-		if (netif_queue_stopped(ndev))
-			return NETDEV_TX_BUSY;
-
+	if (axienet_check_tx_bd_space(lp, num_frag + 1)) {
+		/* Should not happen as last start_xmit call should have
+		 * checked for sufficient space and queue should only be
+		 * woken when sufficient space is available.
+		 */
 		netif_stop_queue(ndev);
-
-		/* Matches barrier in axienet_start_xmit_done */
-		smp_mb();
-
-		/* Space might have just been freed - check again */
-		if (axienet_check_tx_bd_space(lp, num_frag))
-			return NETDEV_TX_BUSY;
-
-		netif_wake_queue(ndev);
+		if (net_ratelimit())
+			netdev_warn(ndev, "TX ring unexpectedly full\n");
+		return NETDEV_TX_BUSY;
 	}
 
 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
@@ -804,6 +817,18 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	if (++lp->tx_bd_tail >= lp->tx_bd_num)
 		lp->tx_bd_tail = 0;
 
+	/* Stop queue if next transmit may not have space */
+	if (axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) {
+		netif_stop_queue(ndev);
+
+		/* Matches barrier in axienet_start_xmit_done */
+		smp_mb();
+
+		/* Space might have just been freed - check again */
+		if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
+			netif_wake_queue(ndev);
+	}
+
 	return NETDEV_TX_OK;
 }
 
@@ -834,6 +859,8 @@ static void axienet_recv(struct net_device *ndev)
 
 		tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
 
+		/* Ensure we see complete descriptor update */
+		dma_rmb();
 		phys = desc_get_phys_addr(lp, cur_p);
 		dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
 				 DMA_FROM_DEVICE);
@@ -1352,7 +1379,8 @@ axienet_ethtools_set_ringparam(struct net_device *ndev,
 	if (ering->rx_pending > RX_BD_NUM_MAX ||
 	    ering->rx_mini_pending ||
 	    ering->rx_jumbo_pending ||
-	    ering->rx_pending > TX_BD_NUM_MAX)
+	    ering->tx_pending < TX_BD_NUM_MIN ||
+	    ering->tx_pending > TX_BD_NUM_MAX)
 		return -EINVAL;
 
 	if (netif_running(ndev))
@@ -2027,6 +2055,11 @@ static int axienet_probe(struct platform_device *pdev)
 	lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
 
+	/* Reset core now that clocks are enabled, prior to accessing MDIO */
+	ret = __axienet_device_reset(lp);
+	if (ret)
+		goto cleanup_clk;
+
 	lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
 	if (lp->phy_node) {
 		ret = axienet_mdio_setup(lp);


@@ -1080,27 +1080,38 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 {
 	struct gsi *gsi;
 	u32 backlog;
+	int delta;
 
-	if (!endpoint->replenish_enabled) {
+	if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags)) {
 		if (add_one)
 			atomic_inc(&endpoint->replenish_saved);
 		return;
 	}
 
+	/* If already active, just update the backlog */
+	if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags)) {
+		if (add_one)
+			atomic_inc(&endpoint->replenish_backlog);
+		return;
+	}
+
 	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
 		if (ipa_endpoint_replenish_one(endpoint))
 			goto try_again_later;
+
+	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
+
 	if (add_one)
 		atomic_inc(&endpoint->replenish_backlog);
 
 	return;
 
 try_again_later:
-	/* The last one didn't succeed, so fix the backlog */
-	backlog = atomic_inc_return(&endpoint->replenish_backlog);
-
-	if (add_one)
-		atomic_inc(&endpoint->replenish_backlog);
+	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
+
+	/* The last one didn't succeed, so fix the backlog */
+	delta = add_one ? 2 : 1;
+	backlog = atomic_add_return(delta, &endpoint->replenish_backlog);
 
 	/* Whenever a receive buffer transaction completes we'll try to
 	 * replenish again. It's unlikely, but if we fail to supply even
@@ -1120,7 +1131,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 	u32 max_backlog;
 	u32 saved;
 
-	endpoint->replenish_enabled = true;
+	set_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
 	while ((saved = atomic_xchg(&endpoint->replenish_saved, 0)))
 		atomic_add(saved, &endpoint->replenish_backlog);
 
@@ -1134,7 +1145,7 @@ static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint)
 {
 	u32 backlog;
 
-	endpoint->replenish_enabled = false;
+	clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
 	while ((backlog = atomic_xchg(&endpoint->replenish_backlog, 0)))
 		atomic_add(backlog, &endpoint->replenish_saved);
 }
@@ -1691,7 +1702,8 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 	/* RX transactions require a single TRE, so the maximum
 	 * backlog is the same as the maximum outstanding TREs.
 	 */
-	endpoint->replenish_enabled = false;
+	clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags);
+	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
 	atomic_set(&endpoint->replenish_saved,
 		   gsi_channel_tre_max(gsi, endpoint->channel_id));
 	atomic_set(&endpoint->replenish_backlog, 0);


@@ -40,6 +40,19 @@ enum ipa_endpoint_name {
 
 #define IPA_ENDPOINT_MAX	32	/* Max supported by driver */
 
+/**
+ * enum ipa_replenish_flag:	RX buffer replenish flags
+ *
+ * @IPA_REPLENISH_ENABLED:	Whether receive buffer replenishing is enabled
+ * @IPA_REPLENISH_ACTIVE:	Whether replenishing is underway
+ * @IPA_REPLENISH_COUNT:	Number of defined replenish flags
+ */
+enum ipa_replenish_flag {
+	IPA_REPLENISH_ENABLED,
+	IPA_REPLENISH_ACTIVE,
+	IPA_REPLENISH_COUNT,	/* Number of flags (must be last) */
+};
+
 /**
  * struct ipa_endpoint - IPA endpoint information
  * @ipa:		IPA pointer
@@ -51,7 +64,7 @@ enum ipa_endpoint_name {
  * @trans_tre_max:	Maximum number of TRE descriptors per transaction
  * @evt_ring_id:	GSI event ring used by the endpoint
  * @netdev:		Network device pointer, if endpoint uses one
- * @replenish_enabled:	Whether receive buffer replenishing is enabled
+ * @replenish_flags:	Replenishing state flags
  * @replenish_ready:	Number of replenish transactions without doorbell
  * @replenish_saved:	Replenish requests held while disabled
  * @replenish_backlog:	Number of buffers needed to fill hardware queue
@@ -72,7 +85,7 @@ struct ipa_endpoint {
 	struct net_device *netdev;
 
 	/* Receive buffer replenishing for RX endpoints */
-	bool replenish_enabled;
+	DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT);
 	u32 replenish_ready;
 	atomic_t replenish_saved;
 	atomic_t replenish_backlog;


@@ -421,7 +421,7 @@ static int at803x_set_wol(struct phy_device *phydev,
 	const u8 *mac;
 	int ret, irq_enabled;
 	unsigned int i;
-	const unsigned int offsets[] = {
+	static const unsigned int offsets[] = {
 		AT803X_LOC_MAC_ADDR_32_47_OFFSET,
 		AT803X_LOC_MAC_ADDR_16_31_OFFSET,
 		AT803X_LOC_MAC_ADDR_0_15_OFFSET,


@@ -189,6 +189,8 @@
 #define MII_88E1510_GEN_CTRL_REG_1_MODE_RGMII_SGMII	0x4
 #define MII_88E1510_GEN_CTRL_REG_1_RESET	0x8000	/* Soft reset */
 
+#define MII_88E1510_MSCR_2		0x15
+
 #define MII_VCT5_TX_RX_MDI0_COUPLING	0x10
 #define MII_VCT5_TX_RX_MDI1_COUPLING	0x11
 #define MII_VCT5_TX_RX_MDI2_COUPLING	0x12
@@ -1932,6 +1934,58 @@ static void marvell_get_stats(struct phy_device *phydev,
 		data[i] = marvell_get_stat(phydev, i);
 }
 
+static int m88e1510_loopback(struct phy_device *phydev, bool enable)
+{
+	int err;
+
+	if (enable) {
+		u16 bmcr_ctl = 0, mscr2_ctl = 0;
+
+		if (phydev->speed == SPEED_1000)
+			bmcr_ctl = BMCR_SPEED1000;
+		else if (phydev->speed == SPEED_100)
+			bmcr_ctl = BMCR_SPEED100;
+
+		if (phydev->duplex == DUPLEX_FULL)
+			bmcr_ctl |= BMCR_FULLDPLX;
+
+		err = phy_write(phydev, MII_BMCR, bmcr_ctl);
+		if (err < 0)
+			return err;
+
+		if (phydev->speed == SPEED_1000)
+			mscr2_ctl = BMCR_SPEED1000;
+		else if (phydev->speed == SPEED_100)
+			mscr2_ctl = BMCR_SPEED100;
+
+		err = phy_modify_paged(phydev, MII_MARVELL_MSCR_PAGE,
+				       MII_88E1510_MSCR_2, BMCR_SPEED1000 |
+				       BMCR_SPEED100, mscr2_ctl);
+		if (err < 0)
+			return err;
+
+		/* Need soft reset to have speed configuration takes effect */
+		err = genphy_soft_reset(phydev);
+		if (err < 0)
+			return err;
+
+		/* FIXME: Based on trial and error test, it seem 1G need to have
+		 * delay between soft reset and loopback enablement.
+		 */
+		if (phydev->speed == SPEED_1000)
+			msleep(1000);
+
+		return phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK,
+				  BMCR_LOOPBACK);
+	} else {
+		err = phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, 0);
+		if (err < 0)
+			return err;
+
+		return phy_config_aneg(phydev);
+	}
+}
+
 static int marvell_vct5_wait_complete(struct phy_device *phydev)
 {
 	int i;
@@ -3078,7 +3132,7 @@ static struct phy_driver marvell_drivers[] = {
 	.get_sset_count = marvell_get_sset_count,
 	.get_strings = marvell_get_strings,
 	.get_stats = marvell_get_stats,
-	.set_loopback = genphy_loopback,
+	.set_loopback = m88e1510_loopback,
 	.get_tunable = m88e1011_get_tunable,
 	.set_tunable = m88e1011_set_tunable,
 	.cable_test_start = marvell_vct7_cable_test_start,


@@ -1726,8 +1726,8 @@ static struct phy_driver ksphy_driver[] = {
 	.config_init	= kszphy_config_init,
 	.config_intr	= kszphy_config_intr,
 	.handle_interrupt = kszphy_handle_interrupt,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_KSZ8021,
 	.phy_id_mask	= 0x00ffffff,
@@ -1741,8 +1741,8 @@ static struct phy_driver ksphy_driver[] = {
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_KSZ8031,
 	.phy_id_mask	= 0x00ffffff,
@@ -1756,8 +1756,8 @@ static struct phy_driver ksphy_driver[] = {
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_KSZ8041,
 	.phy_id_mask	= MICREL_PHY_ID_MASK,
@@ -1788,8 +1788,8 @@ static struct phy_driver ksphy_driver[] = {
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 }, {
 	.name		= "Micrel KSZ8051",
 	/* PHY_BASIC_FEATURES */
@@ -1802,8 +1802,8 @@ static struct phy_driver ksphy_driver[] = {
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
 	.match_phy_device = ksz8051_match_phy_device,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_KSZ8001,
 	.name		= "Micrel KSZ8001 or KS8721",
@@ -1817,8 +1817,8 @@ static struct phy_driver ksphy_driver[] = {
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_KSZ8081,
 	.name		= "Micrel KSZ8081 or KSZ8091",
@@ -1848,8 +1848,8 @@ static struct phy_driver ksphy_driver[] = {
 	.config_init	= ksz8061_config_init,
 	.config_intr	= kszphy_config_intr,
 	.handle_interrupt = kszphy_handle_interrupt,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_KSZ9021,
 	.phy_id_mask	= 0x000ffffe,
@@ -1864,8 +1864,8 @@ static struct phy_driver ksphy_driver[] = {
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
-	.suspend	= genphy_suspend,
-	.resume		= genphy_resume,
+	.suspend	= kszphy_suspend,
+	.resume		= kszphy_resume,
 	.read_mmd	= genphy_read_mmd_unsupported,
 	.write_mmd	= genphy_write_mmd_unsupported,
 }, {
@@ -1883,7 +1883,7 @@ static struct phy_driver ksphy_driver[] = {
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
-	.suspend	= genphy_suspend,
+	.suspend	= kszphy_suspend,
 	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_LAN8814,
@@ -1928,7 +1928,7 @@ static struct phy_driver ksphy_driver[] = {
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
 	.get_stats	= kszphy_get_stats,
-	.suspend	= genphy_suspend,
+	.suspend	= kszphy_suspend,
 	.resume		= kszphy_resume,
 }, {
 	.phy_id		= PHY_ID_KSZ8873MLL,


@@ -1641,17 +1641,20 @@ static int sfp_sm_probe_for_phy(struct sfp *sfp)
 static int sfp_module_parse_power(struct sfp *sfp)
 {
 	u32 power_mW = 1000;
+	bool supports_a2;
 
 	if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_POWER_DECL))
 		power_mW = 1500;
 	if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_HIGH_POWER_LEVEL))
 		power_mW = 2000;
 
+	supports_a2 = sfp->id.ext.sff8472_compliance !=
+				SFP_SFF8472_COMPLIANCE_NONE ||
+		      sfp->id.ext.diagmon & SFP_DIAGMON_DDM;
+
 	if (power_mW > sfp->max_power_mW) {
 		/* Module power specification exceeds the allowed maximum. */
-		if (sfp->id.ext.sff8472_compliance ==
-			SFP_SFF8472_COMPLIANCE_NONE &&
-		    !(sfp->id.ext.diagmon & SFP_DIAGMON_DDM)) {
+		if (!supports_a2) {
 			/* The module appears not to implement bus address
 			 * 0xa2, so assume that the module powers up in the
 			 * indicated mode.
@@ -1668,11 +1671,25 @@ static int sfp_module_parse_power(struct sfp *sfp)
 		}
 	}
 
+	if (power_mW <= 1000) {
+		/* Modules below 1W do not require a power change sequence */
+		sfp->module_power_mW = power_mW;
+		return 0;
+	}
+
+	if (!supports_a2) {
+		/* The module power level is below the host maximum and the
+		 * module appears not to implement bus address 0xa2, so assume
+		 * that the module powers up in the indicated mode.
+		 */
+		return 0;
+	}
+
 	/* If the module requires a higher power mode, but also requires
 	 * an address change sequence, warn the user that the module may
 	 * not be functional.
 	 */
-	if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE && power_mW > 1000) {
+	if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE) {
 		dev_warn(sfp->dev,
 			 "Address Change Sequence not supported but module requires %u.%uW, module may not be functional\n",
 			 power_mW / 1000, (power_mW / 100) % 10);


@@ -1316,6 +1316,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x19d2, 0x1426, 2)},	/* ZTE MF91 */
 	{QMI_FIXED_INTF(0x19d2, 0x1428, 2)},	/* Telewell TW-LTE 4G v2 */
 	{QMI_FIXED_INTF(0x19d2, 0x1432, 3)},	/* ZTE ME3620 */
+	{QMI_FIXED_INTF(0x19d2, 0x1485, 5)},	/* ZTE MF286D */
 	{QMI_FIXED_INTF(0x19d2, 0x2002, 4)},	/* ZTE (Vodafone) K3765-Z */
 	{QMI_FIXED_INTF(0x2001, 0x7e16, 3)},	/* D-Link DWM-221 */
 	{QMI_FIXED_INTF(0x2001, 0x7e19, 4)},	/* D-Link DWM-221 B1 */
@@ -1401,6 +1402,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x413c, 0x81e0, 0)},	/* Dell Wireless 5821e with eSIM support*/
 	{QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)},	/* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
 	{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
+	{QMI_QUIRK_SET_DTR(0x22de, 0x9051, 2)},	/* Hucom Wireless HM-211S/K */
 	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
 	{QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)},	/* SIMCom 7100E, 7230E, 7600E ++ */
 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)},	/* Quectel EC21 Mini PCIe */

View File

@@ -1962,7 +1962,8 @@ static const struct driver_info smsc95xx_info = {
 	.bind		= smsc95xx_bind,
 	.unbind		= smsc95xx_unbind,
 	.link_reset	= smsc95xx_link_reset,
-	.reset		= smsc95xx_start_phy,
+	.reset		= smsc95xx_reset,
+	.check_connect	= smsc95xx_start_phy,
 	.stop		= smsc95xx_stop,
 	.rx_fixup	= smsc95xx_rx_fixup,
 	.tx_fixup	= smsc95xx_tx_fixup,


@@ -385,13 +385,13 @@ static void mhi_net_rx_refill_work(struct work_struct *work)
 	int err;
 
 	while (!mhi_queue_is_full(mdev, DMA_FROM_DEVICE)) {
-		struct sk_buff *skb = alloc_skb(MHI_DEFAULT_MRU, GFP_KERNEL);
+		struct sk_buff *skb = alloc_skb(mbim->mru, GFP_KERNEL);
 
 		if (unlikely(!skb))
 			break;
 
 		err = mhi_queue_skb(mdev, DMA_FROM_DEVICE, skb,
-				    MHI_DEFAULT_MRU, MHI_EOT);
+				    mbim->mru, MHI_EOT);
 		if (unlikely(err)) {
 			kfree_skb(skb);
 			break;


@@ -188,7 +188,7 @@ do {								\
 static void pn544_hci_i2c_platform_init(struct pn544_i2c_phy *phy)
 {
 	int polarity, retry, ret;
-	char rset_cmd[] = { 0x05, 0xF9, 0x04, 0x00, 0xC3, 0xE5 };
+	static const char rset_cmd[] = { 0x05, 0xF9, 0x04, 0x00, 0xC3, 0xE5 };
 	int count = sizeof(rset_cmd);
 
 	nfc_info(&phy->i2c_dev->dev, "Detecting nfc_en polarity\n");


@@ -316,6 +316,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
 			return -ENOMEM;
 
 		transaction->aid_len = skb->data[1];
+
+		/* Checking if the length of the AID is valid */
+		if (transaction->aid_len > sizeof(transaction->aid))
+			return -EINVAL;
+
 		memcpy(transaction->aid, &skb->data[2],
 		       transaction->aid_len);
@@ -325,6 +330,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
 			return -EPROTO;
 
 		transaction->params_len = skb->data[transaction->aid_len + 3];
+
+		/* Total size is allocated (skb->len - 2) minus fixed array members */
+		if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction)))
+			return -EINVAL;
+
 		memcpy(transaction->params, skb->data +
 		       transaction->aid_len + 4, transaction->params_len);


@@ -316,7 +316,12 @@ enum bpf_type_flag {
 	 */
 	MEM_RDONLY		= BIT(1 + BPF_BASE_TYPE_BITS),
 
-	__BPF_TYPE_LAST_FLAG	= MEM_RDONLY,
+	/* MEM was "allocated" from a different helper, and cannot be mixed
+	 * with regular non-MEM_ALLOC'ed MEM types.
+	 */
+	MEM_ALLOC		= BIT(2 + BPF_BASE_TYPE_BITS),
+
+	__BPF_TYPE_LAST_FLAG	= MEM_ALLOC,
 };
 
 /* Max number of base types. */
@@ -400,7 +405,7 @@ enum bpf_return_type {
 	RET_PTR_TO_SOCKET_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_SOCKET,
 	RET_PTR_TO_TCP_SOCK_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK,
 	RET_PTR_TO_SOCK_COMMON_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON,
-	RET_PTR_TO_ALLOC_MEM_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_ALLOC_MEM,
+	RET_PTR_TO_ALLOC_MEM_OR_NULL	= PTR_MAYBE_NULL | MEM_ALLOC | RET_PTR_TO_ALLOC_MEM,
 	RET_PTR_TO_BTF_ID_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID,
 
 	/* This must be the last entry. Its purpose is to ensure the enum is


@@ -519,8 +519,8 @@ bpf_prog_offload_replace_insn(struct bpf_verifier_env *env, u32 off,
 void
 bpf_prog_offload_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt);
 
-int check_ctx_reg(struct bpf_verifier_env *env,
-		  const struct bpf_reg_state *reg, int regno);
+int check_ptr_off_reg(struct bpf_verifier_env *env,
+		      const struct bpf_reg_state *reg, int regno);
 int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 		  u32 regno, u32 mem_size);


@@ -117,8 +117,15 @@ int fqdir_init(struct fqdir **fqdirp, struct inet_frags *f, struct net *net);
 
 static inline void fqdir_pre_exit(struct fqdir *fqdir)
 {
-	fqdir->high_thresh = 0; /* prevent creation of new frags */
-	fqdir->dead = true;
+	/* Prevent creation of new frags.
+	 * Pairs with READ_ONCE() in inet_frag_find().
+	 */
+	WRITE_ONCE(fqdir->high_thresh, 0);
+
+	/* Pairs with READ_ONCE() in inet_frag_kill(), ip_expire()
+	 * and ip6frag_expire_frag_queue().
+	 */
+	WRITE_ONCE(fqdir->dead, true);
 }
 void fqdir_exit(struct fqdir *fqdir);


@@ -67,7 +67,8 @@ ip6frag_expire_frag_queue(struct net *net, struct frag_queue *fq)
 	struct sk_buff *head;
 
 	rcu_read_lock();
-	if (fq->q.fqdir->dead)
+	/* Paired with the WRITE_ONCE() in fqdir_pre_exit(). */
+	if (READ_ONCE(fq->q.fqdir->dead))
 		goto out_rcu_unlock;
 	spin_lock(&fq->q.lock);


@@ -218,8 +218,10 @@ static inline int tcf_exts_init(struct tcf_exts *exts, struct net *net,
 #ifdef CONFIG_NET_CLS_ACT
 	exts->type = 0;
 	exts->nr_actions = 0;
+	/* Note: we do not own yet a reference on net.
+	 * This reference might be taken later from tcf_exts_get_net().
+	 */
 	exts->net = net;
-	netns_tracker_alloc(net, &exts->ns_tracker, GFP_KERNEL);
 	exts->actions = kcalloc(TCA_ACT_MAX_PRIO, sizeof(struct tc_action *),
 				GFP_KERNEL);
 	if (!exts->actions)


@@ -1244,6 +1244,7 @@ struct psched_ratecfg {
 	u64	rate_bytes_ps; /* bytes per second */
 	u32	mult;
 	u16	overhead;
+	u16	mpu;
 	u8	linklayer;
 	u8	shift;
 };
@@ -1253,6 +1254,9 @@ static inline u64 psched_l2t_ns(const struct psched_ratecfg *r,
 {
 	len += r->overhead;
 
+	if (len < r->mpu)
+		len = r->mpu;
+
 	if (unlikely(r->linklayer == TC_LINKLAYER_ATM))
 		return ((u64)(DIV_ROUND_UP(len,48)*53) * r->mult) >> r->shift;
 
@@ -1275,6 +1279,7 @@ static inline void psched_ratecfg_getrate(struct tc_ratespec *res,
 	res->rate = min_t(u64, r->rate_bytes_ps, ~0U);
 	res->overhead = r->overhead;
+	res->mpu = r->mpu;
 	res->linklayer = (r->linklayer & TC_LINKLAYER_MASK);
 }


@@ -5686,7 +5686,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 				i, btf_type_str(t));
 			return -EINVAL;
 		}
-		if (check_ctx_reg(env, reg, regno))
+		if (check_ptr_off_reg(env, reg, regno))
 			return -EINVAL;
 	} else if (is_kfunc && (reg->type == PTR_TO_BTF_ID || reg2btf_ids[reg->type])) {
 		const struct btf_type *reg_ref_t;


@@ -648,12 +648,22 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
 	int opt;
 
 	opt = fs_parse(fc, bpf_fs_parameters, param, &result);
-	if (opt < 0)
+	if (opt < 0) {
 		/* We might like to report bad mount options here, but
 		 * traditionally we've ignored all mount options, so we'd
 		 * better continue to ignore non-existing options for bpf.
 		 */
-		return opt == -ENOPARAM ? 0 : opt;
+		if (opt == -ENOPARAM) {
+			opt = vfs_parse_fs_param_source(fc, param);
+			if (opt != -ENOPARAM)
+				return opt;
+
+			return 0;
+		}
+
+		if (opt < 0)
+			return opt;
+	}
 
 	switch (opt) {
 	case OPT_MODE:


@@ -570,6 +570,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
 
 	if (type & MEM_RDONLY)
 		strncpy(prefix, "rdonly_", 16);
+	if (type & MEM_ALLOC)
+		strncpy(prefix, "alloc_", 16);
 
 	snprintf(env->type_str_buf, TYPE_STR_BUF_LEN, "%s%s%s",
 		 prefix, str[base_type(type)], postfix);
@@ -616,7 +618,7 @@ static void mark_reg_scratched(struct bpf_verifier_env *env, u32 regno)
 
 static void mark_stack_slot_scratched(struct bpf_verifier_env *env, u32 spi)
 {
-	env->scratched_stack_slots |= 1UL << spi;
+	env->scratched_stack_slots |= 1ULL << spi;
 }
 
 static bool reg_scratched(const struct bpf_verifier_env *env, u32 regno)
@@ -637,14 +639,14 @@ static bool verifier_state_scratched(const struct bpf_verifier_env *env)
 static void mark_verifier_state_clean(struct bpf_verifier_env *env)
 {
 	env->scratched_regs = 0U;
-	env->scratched_stack_slots = 0UL;
+	env->scratched_stack_slots = 0ULL;
 }
 
 /* Used for printing the entire verifier state. */
 static void mark_verifier_state_scratched(struct bpf_verifier_env *env)
 {
 	env->scratched_regs = ~0U;
-	env->scratched_stack_slots = ~0UL;
+	env->scratched_stack_slots = ~0ULL;
 }
 
 /* The reg state of a pointer or a bounded scalar was saved when
@@ -3969,16 +3971,17 @@ static int get_callee_stack_depth(struct bpf_verifier_env *env,
 }
 #endif
 
-int check_ctx_reg(struct bpf_verifier_env *env,
-		  const struct bpf_reg_state *reg, int regno)
+static int __check_ptr_off_reg(struct bpf_verifier_env *env,
+			       const struct bpf_reg_state *reg, int regno,
+			       bool fixed_off_ok)
 {
-	/* Access to ctx or passing it to a helper is only allowed in
-	 * its original, unmodified form.
+	/* Access to this pointer-typed register or passing it to a helper
+	 * is only allowed in its original, unmodified form.
 	 */
 
-	if (reg->off) {
-		verbose(env, "dereference of modified ctx ptr R%d off=%d disallowed\n",
-			regno, reg->off);
+	if (!fixed_off_ok && reg->off) {
+		verbose(env, "dereference of modified %s ptr R%d off=%d disallowed\n",
+			reg_type_str(env, reg->type), regno, reg->off);
 		return -EACCES;
 	}
@@ -3986,13 +3989,20 @@ int check_ctx_reg(struct bpf_verifier_env *env,
 		char tn_buf[48];
 
 		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
-		verbose(env, "variable ctx access var_off=%s disallowed\n", tn_buf);
+		verbose(env, "variable %s access var_off=%s disallowed\n",
+			reg_type_str(env, reg->type), tn_buf);
 		return -EACCES;
 	}
 
 	return 0;
 }
 
+int check_ptr_off_reg(struct bpf_verifier_env *env,
+		      const struct bpf_reg_state *reg, int regno)
+{
+	return __check_ptr_off_reg(env, reg, regno, false);
+}
+
 static int __check_buffer_access(struct bpf_verifier_env *env,
 				 const char *buf_info,
 				 const struct bpf_reg_state *reg,
@@ -4437,7 +4447,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			return -EACCES;
 		}
 
-		err = check_ctx_reg(env, reg, regno);
+		err = check_ptr_off_reg(env, reg, regno);
 		if (err < 0)
 			return err;
 
@@ -5127,6 +5137,7 @@ static const struct bpf_reg_types mem_types = {
 		PTR_TO_MAP_KEY,
 		PTR_TO_MAP_VALUE,
 		PTR_TO_MEM,
+		PTR_TO_MEM | MEM_ALLOC,
 		PTR_TO_BUF,
 	},
 };
@@ -5144,7 +5155,7 @@ static const struct bpf_reg_types int_ptr_types = {
 static const struct bpf_reg_types fullsock_types = { .types = { PTR_TO_SOCKET } };
 static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
 static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
-static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM } };
+static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
 static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
 static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
 static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } };
@@ -5244,12 +5255,6 @@ found:
 				kernel_type_name(btf_vmlinux, *arg_btf_id));
 			return -EACCES;
 		}
-
-		if (!tnum_is_const(reg->var_off) || reg->var_off.value) {
-			verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
-				regno);
-			return -EACCES;
-		}
 	}
 
 	return 0;
@@ -5304,10 +5309,33 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 	if (err)
 		return err;
 
-	if (type == PTR_TO_CTX) {
-		err = check_ctx_reg(env, reg, regno);
+	switch ((u32)type) {
+	case SCALAR_VALUE:
+	/* Pointer types where reg offset is explicitly allowed: */
+	case PTR_TO_PACKET:
+	case PTR_TO_PACKET_META:
+	case PTR_TO_MAP_KEY:
+	case PTR_TO_MAP_VALUE:
+	case PTR_TO_MEM:
+	case PTR_TO_MEM | MEM_RDONLY:
+	case PTR_TO_MEM | MEM_ALLOC:
+	case PTR_TO_BUF:
+	case PTR_TO_BUF | MEM_RDONLY:
+	case PTR_TO_STACK:
+		/* Some of the argument types nevertheless require a
+		 * zero register offset.
+		 */
+		if (arg_type == ARG_PTR_TO_ALLOC_MEM)
+			goto force_off_check;
+		break;
+	/* All the rest must be rejected: */
+	default:
+force_off_check:
+		err = __check_ptr_off_reg(env, reg, regno,
+					  type == PTR_TO_BTF_ID);
 		if (err < 0)
 			return err;
+		break;
 	}
 
 skip_type_check:
@@ -9507,9 +9535,13 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		return 0;
 	}
 
-	if (insn->src_reg == BPF_PSEUDO_BTF_ID) {
-		mark_reg_known_zero(env, regs, insn->dst_reg);
+	/* All special src_reg cases are listed below. From this point onwards
+	 * we either succeed and assign a corresponding dst_reg->type after
+	 * zeroing the offset, or fail and reject the program.
+	 */
+	mark_reg_known_zero(env, regs, insn->dst_reg);
 
+	if (insn->src_reg == BPF_PSEUDO_BTF_ID) {
 		dst_reg->type = aux->btf_var.reg_type;
 		switch (base_type(dst_reg->type)) {
 		case PTR_TO_MEM:
@@ -9547,7 +9579,6 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	}
 
 	map = env->used_maps[aux->map_index];
-	mark_reg_known_zero(env, regs, insn->dst_reg);
 	dst_reg->map_ptr = map;
 
 	if (insn->src_reg == BPF_PSEUDO_MAP_VALUE ||
@@ -9651,7 +9682,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		return err;
 	}
 
-	err = check_ctx_reg(env, &regs[ctx_reg], ctx_reg);
+	err = check_ptr_off_reg(env, &regs[ctx_reg], ctx_reg);
 	if (err < 0)
 		return err;


@@ -69,9 +69,12 @@ int ref_tracker_alloc(struct ref_tracker_dir *dir,
 	unsigned long entries[REF_TRACKER_STACK_ENTRIES];
 	struct ref_tracker *tracker;
 	unsigned int nr_entries;
+	gfp_t gfp_mask = gfp;
 	unsigned long flags;
 
-	*trackerp = tracker = kzalloc(sizeof(*tracker), gfp | __GFP_NOFAIL);
+	if (gfp & __GFP_DIRECT_RECLAIM)
+		gfp_mask |= __GFP_NOFAIL;
+	*trackerp = tracker = kzalloc(sizeof(*tracker), gfp_mask);
 	if (unlikely(!tracker)) {
 		pr_err_once("memory allocation failure, unreliable refcount tracker.\n");
 		refcount_inc(&dir->untracked);


@@ -615,6 +615,7 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
 	err = dev_set_allmulti(dev, 1);
 	if (err) {
 		br_multicast_del_port(p);
+		dev_put_track(dev, &p->dev_tracker);
 		kfree(p);	/* kobject not yet init'd, manually free */
 		goto err1;
 	}
@@ -724,10 +725,10 @@ err3:
 	sysfs_remove_link(br->ifobj, p->dev->name);
 err2:
 	br_multicast_del_port(p);
+	dev_put_track(dev, &p->dev_tracker);
 	kobject_put(&p->kobj);
 	dev_set_allmulti(dev, -1);
 err1:
-	dev_put(dev);
 	return err;
 }


@@ -8981,6 +8981,12 @@ static int bpf_xdp_link_update(struct bpf_link *link, struct bpf_prog *new_prog,
 		goto out_unlock;
 	}
 	old_prog = link->prog;
+	if (old_prog->type != new_prog->type ||
+	    old_prog->expected_attach_type != new_prog->expected_attach_type) {
+		err = -EINVAL;
+		goto out_unlock;
+	}
 	if (old_prog == new_prog) {
 		/* no-op, don't disturb drivers */
 		bpf_prog_put(new_prog);


@@ -164,8 +164,10 @@ static void ops_exit_list(const struct pernet_operations *ops,
 {
 	struct net *net;
 
 	if (ops->exit) {
-		list_for_each_entry(net, net_exit_list, exit_list)
+		list_for_each_entry(net, net_exit_list, exit_list) {
 			ops->exit(net);
+			cond_resched();
+		}
 	}
 	if (ops->exit_batch)
 		ops->exit_batch(net_exit_list);


@@ -61,7 +61,7 @@ static int of_get_mac_addr_nvmem(struct device_node *np, u8 *addr)
 {
 	struct platform_device *pdev = of_find_device_by_node(np);
 	struct nvmem_cell *cell;
-	const void *buf;
+	const void *mac;
 	size_t len;
 	int ret;
@@ -78,32 +78,21 @@ static int of_get_mac_addr_nvmem(struct device_node *np, u8 *addr)
 	if (IS_ERR(cell))
 		return PTR_ERR(cell);
 
-	buf = nvmem_cell_read(cell, &len);
+	mac = nvmem_cell_read(cell, &len);
 	nvmem_cell_put(cell);
-	if (IS_ERR(buf))
-		return PTR_ERR(buf);
+	if (IS_ERR(mac))
+		return PTR_ERR(mac);
 
-	ret = 0;
-	if (len == ETH_ALEN) {
-		if (is_valid_ether_addr(buf))
-			memcpy(addr, buf, ETH_ALEN);
-		else
-			ret = -EINVAL;
-	} else if (len == 3 * ETH_ALEN - 1) {
-		u8 mac[ETH_ALEN];
-
-		if (mac_pton(buf, mac))
-			memcpy(addr, mac, ETH_ALEN);
-		else
-			ret = -EINVAL;
-	} else {
-		ret = -EINVAL;
+	if (len != ETH_ALEN || !is_valid_ether_addr(mac)) {
+		kfree(mac);
+		return -EINVAL;
 	}
-	kfree(buf);
 
-	return ret;
+	memcpy(addr, mac, ETH_ALEN);
+	kfree(mac);
+
+	return 0;
 }
 
 /**


@@ -844,6 +844,8 @@ static int sock_timestamping_bind_phc(struct sock *sk, int phc_index)
 	}
 
 	num = ethtool_get_phc_vclocks(dev, &vclock_index);
+	dev_put(dev);
+
 	for (i = 0; i < num; i++) {
 		if (*(vclock_index + i) == phc_index) {
 			match = true;
@@ -2047,6 +2049,9 @@ void sk_destruct(struct sock *sk)
 {
 	bool use_call_rcu = sock_flag(sk, SOCK_RCU_FREE);
 
+	WARN_ON_ONCE(!llist_empty(&sk->defer_list));
+	sk_defer_free_flush(sk);
+
 	if (rcu_access_pointer(sk->sk_reuseport_cb)) {
 		reuseport_detach_sock(sk);
 		use_call_rcu = true;


@@ -29,6 +29,7 @@
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/netlink.h>
+#include <linux/hash.h>
 
 #include <net/arp.h>
 #include <net/ip.h>
@@ -51,6 +52,7 @@ static DEFINE_SPINLOCK(fib_info_lock);
 static struct hlist_head *fib_info_hash;
 static struct hlist_head *fib_info_laddrhash;
 static unsigned int fib_info_hash_size;
+static unsigned int fib_info_hash_bits;
 static unsigned int fib_info_cnt;
 
 #define DEVINDEX_HASHBITS 8
@@ -249,7 +251,6 @@ void free_fib_info(struct fib_info *fi)
 		pr_warn("Freeing alive fib_info %p\n", fi);
 		return;
 	}
-	fib_info_cnt--;
 
 	call_rcu(&fi->rcu, free_fib_info_rcu);
 }
@@ -260,6 +261,10 @@ void fib_release_info(struct fib_info *fi)
 	spin_lock_bh(&fib_info_lock);
 	if (fi && refcount_dec_and_test(&fi->fib_treeref)) {
 		hlist_del(&fi->fib_hash);
+
+		/* Paired with READ_ONCE() in fib_create_info(). */
+		WRITE_ONCE(fib_info_cnt, fib_info_cnt - 1);
+
 		if (fi->fib_prefsrc)
 			hlist_del(&fi->fib_lhash);
 		if (fi->nh) {
@@ -316,11 +321,15 @@ static inline int nh_comp(struct fib_info *fi, struct fib_info *ofi)
 
 static inline unsigned int fib_devindex_hashfn(unsigned int val)
 {
-	unsigned int mask = DEVINDEX_HASHSIZE - 1;
+	return hash_32(val, DEVINDEX_HASHBITS);
+}
 
-	return (val ^
-		(val >> DEVINDEX_HASHBITS) ^
-		(val >> (DEVINDEX_HASHBITS * 2))) & mask;
+static struct hlist_head *
+fib_info_devhash_bucket(const struct net_device *dev)
+{
+	u32 val = net_hash_mix(dev_net(dev)) ^ dev->ifindex;
+
+	return &fib_info_devhash[fib_devindex_hashfn(val)];
 }
 
 static unsigned int fib_info_hashfn_1(int init_val, u8 protocol, u8 scope,
@@ -430,12 +439,11 @@ int ip_fib_check_default(__be32 gw, struct net_device *dev)
 {
 	struct hlist_head *head;
 	struct fib_nh *nh;
-	unsigned int hash;
 
 	spin_lock(&fib_info_lock);
 
-	hash = fib_devindex_hashfn(dev->ifindex);
-	head = &fib_info_devhash[hash];
+	head = fib_info_devhash_bucket(dev);
+
 	hlist_for_each_entry(nh, head, nh_hash) {
 		if (nh->fib_nh_dev == dev &&
 		    nh->fib_nh_gw4 == gw &&
@@ -1240,13 +1248,13 @@ int fib_check_nh(struct net *net, struct fib_nh *nh, u32 table, u8 scope,
 	return err;
 }
 
-static inline unsigned int fib_laddr_hashfn(__be32 val)
+static struct hlist_head *
+fib_info_laddrhash_bucket(const struct net *net, __be32 val)
 {
-	unsigned int mask = (fib_info_hash_size - 1);
+	u32 slot = hash_32(net_hash_mix(net) ^ (__force u32)val,
+			   fib_info_hash_bits);
 
-	return ((__force u32)val ^
-		((__force u32)val >> 7) ^
-		((__force u32)val >> 14)) & mask;
+	return &fib_info_laddrhash[slot];
 }
 
 static struct hlist_head *fib_info_hash_alloc(int bytes)
@@ -1282,6 +1290,7 @@ static void fib_info_hash_move(struct hlist_head *new_info_hash,
 	old_info_hash = fib_info_hash;
 	old_laddrhash = fib_info_laddrhash;
 	fib_info_hash_size = new_size;
+	fib_info_hash_bits = ilog2(new_size);
 
 	for (i = 0; i < old_size; i++) {
 		struct hlist_head *head = &fib_info_hash[i];
@@ -1299,21 +1308,20 @@ static void fib_info_hash_move(struct hlist_head *new_info_hash,
 	}
 	fib_info_hash = new_info_hash;
 
-	fib_info_laddrhash = new_laddrhash;
 	for (i = 0; i < old_size; i++) {
-		struct hlist_head *lhead = &fib_info_laddrhash[i];
+		struct hlist_head *lhead = &old_laddrhash[i];
 		struct hlist_node *n;
 		struct fib_info *fi;
 
 		hlist_for_each_entry_safe(fi, n, lhead, fib_lhash) {
 			struct hlist_head *ldest;
-			unsigned int new_hash;
 
-			new_hash = fib_laddr_hashfn(fi->fib_prefsrc);
-			ldest = &new_laddrhash[new_hash];
+			ldest = fib_info_laddrhash_bucket(fi->fib_net,
+							  fi->fib_prefsrc);
 			hlist_add_head(&fi->fib_lhash, ldest);
 		}
 	}
+	fib_info_laddrhash = new_laddrhash;
 
 	spin_unlock_bh(&fib_info_lock);
@@ -1430,7 +1438,9 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
 #endif
 
 	err = -ENOBUFS;
-	if (fib_info_cnt >= fib_info_hash_size) {
+
+	/* Paired with WRITE_ONCE() in fib_release_info() */
+	if (READ_ONCE(fib_info_cnt) >= fib_info_hash_size) {
 		unsigned int new_size = fib_info_hash_size << 1;
 		struct hlist_head *new_info_hash;
 		struct hlist_head *new_laddrhash;
@@ -1462,7 +1472,6 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
 		return ERR_PTR(err);
 	}
 
-	fib_info_cnt++;
 	fi->fib_net = net;
 	fi->fib_protocol = cfg->fc_protocol;
 	fi->fib_scope = cfg->fc_scope;
@@ -1591,12 +1600,13 @@ link_it:
 	refcount_set(&fi->fib_treeref, 1);
 	refcount_set(&fi->fib_clntref, 1);
 	spin_lock_bh(&fib_info_lock);
+	fib_info_cnt++;
 	hlist_add_head(&fi->fib_hash,
 		       &fib_info_hash[fib_info_hashfn(fi)]);
 	if (fi->fib_prefsrc) {
 		struct hlist_head *head;
 
-		head = &fib_info_laddrhash[fib_laddr_hashfn(fi->fib_prefsrc)];
+		head = fib_info_laddrhash_bucket(net, fi->fib_prefsrc);
 		hlist_add_head(&fi->fib_lhash, head);
 	}
 	if (fi->nh) {
@@ -1604,12 +1614,10 @@ link_it:
 	} else {
 		change_nexthops(fi) {
 			struct hlist_head *head;
-			unsigned int hash;
 
 			if (!nexthop_nh->fib_nh_dev)
 				continue;
-			hash = fib_devindex_hashfn(nexthop_nh->fib_nh_dev->ifindex);
-			head = &fib_info_devhash[hash];
+			head = fib_info_devhash_bucket(nexthop_nh->fib_nh_dev);
 			hlist_add_head(&nexthop_nh->nh_hash, head);
 		} endfor_nexthops(fi)
 	}
@@ -1870,16 +1878,16 @@ nla_put_failure:
  */
 int fib_sync_down_addr(struct net_device *dev, __be32 local)
 {
-	int ret = 0;
-	unsigned int hash = fib_laddr_hashfn(local);
-	struct hlist_head *head = &fib_info_laddrhash[hash];
 	int tb_id = l3mdev_fib_table(dev) ? : RT_TABLE_MAIN;
 	struct net *net = dev_net(dev);
+	struct hlist_head *head;
 	struct fib_info *fi;
+	int ret = 0;
 
 	if (!fib_info_laddrhash || local == 0)
 		return 0;
 
+	head = fib_info_laddrhash_bucket(net, local);
 	hlist_for_each_entry(fi, head, fib_lhash) {
 		if (!net_eq(fi->fib_net, net) ||
 		    fi->fib_tb_id != tb_id)
@@ -1961,8 +1969,7 @@ void fib_nhc_update_mtu(struct fib_nh_common *nhc, u32 new, u32 orig)
 
 void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
 {
-	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
-	struct hlist_head *head = &fib_info_devhash[hash];
+	struct hlist_head *head = fib_info_devhash_bucket(dev);
 	struct fib_nh *nh;
 
 	hlist_for_each_entry(nh, head, nh_hash) {
@@ -1981,12 +1988,11 @@ void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
  */
 int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force)
 {
-	int ret = 0;
-	int scope = RT_SCOPE_NOWHERE;
+	struct hlist_head *head = fib_info_devhash_bucket(dev);
 	struct fib_info *prev_fi = NULL;
-	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
-	struct hlist_head *head = &fib_info_devhash[hash];
+	int scope = RT_SCOPE_NOWHERE;
 	struct fib_nh *nh;
+	int ret = 0;
 
 	if (force)
 		scope = -1;
@@ -2131,7 +2137,6 @@ out:
 int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
 {
 	struct fib_info *prev_fi;
-	unsigned int hash;
 	struct hlist_head *head;
 	struct fib_nh *nh;
 	int ret;
@@ -2147,8 +2152,7 @@ int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
 	}
 
 	prev_fi = NULL;
-	hash = fib_devindex_hashfn(dev->ifindex);
-	head = &fib_info_devhash[hash];
+	head = fib_info_devhash_bucket(dev);
 	ret = 0;
 
 	hlist_for_each_entry(nh, head, nh_hash) {


@@ -235,9 +235,9 @@ void inet_frag_kill(struct inet_frag_queue *fq)
 		/* The RCU read lock provides a memory barrier
 		 * guaranteeing that if fqdir->dead is false then
 		 * the hash table destruction will not start until
-		 * after we unlock. Paired with inet_frags_exit_net().
+		 * after we unlock. Paired with fqdir_pre_exit().
 		 */
-		if (!fqdir->dead) {
+		if (!READ_ONCE(fqdir->dead)) {
 			rhashtable_remove_fast(&fqdir->rhashtable, &fq->node,
 					       fqdir->f->rhash_params);
 			refcount_dec(&fq->refcnt);
@@ -352,9 +352,11 @@ static struct inet_frag_queue *inet_frag_create(struct fqdir *fqdir,
 /* TODO : call from rcu_read_lock() and no longer use refcount_inc_not_zero() */
 struct inet_frag_queue *inet_frag_find(struct fqdir *fqdir, void *key)
 {
+	/* This pairs with WRITE_ONCE() in fqdir_pre_exit(). */
+	long high_thresh = READ_ONCE(fqdir->high_thresh);
 	struct inet_frag_queue *fq = NULL, *prev;
 
-	if (!fqdir->high_thresh || frag_mem_limit(fqdir) > fqdir->high_thresh)
+	if (!high_thresh || frag_mem_limit(fqdir) > high_thresh)
 		return NULL;
 
 	rcu_read_lock();


@@ -144,7 +144,8 @@ static void ip_expire(struct timer_list *t)
 
 	rcu_read_lock();
 
-	if (qp->q.fqdir->dead)
+	/* Paired with WRITE_ONCE() in fqdir_pre_exit(). */
+	if (READ_ONCE(qp->q.fqdir->dead))
 		goto out_rcu_unlock;
 
 	spin_lock(&qp->q.lock);


@@ -604,8 +604,9 @@ static int gre_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 
 	key = &info->key;
 	ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
-			    tunnel_id_to_key32(key->tun_id), key->tos, 0,
-			    skb->mark, skb_get_hash(skb));
+			    tunnel_id_to_key32(key->tun_id),
+			    key->tos & ~INET_ECN_MASK, 0, skb->mark,
+			    skb_get_hash(skb));
 	rt = ip_route_output_key(dev_net(dev), &fl4);
 	if (IS_ERR(rt))
 		return PTR_ERR(rt);


@@ -956,7 +956,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
 			dst_cache_set_ip4(&tunnel->dst_cache, &rt->dst, fl4.saddr);
 	}
 
-	if (rt->rt_type != RTN_UNICAST) {
+	if (rt->rt_type != RTN_UNICAST && rt->rt_type != RTN_LOCAL) {
 		ip_rt_put(rt);
 		dev->stats.tx_carrier_errors++;
 		goto tx_error_icmp;


@@ -285,7 +285,7 @@ static void __mctp_route_test_init(struct kunit *test,
 				   struct mctp_test_route **rtp,
 				   struct socket **sockp)
 {
-	struct sockaddr_mctp addr;
+	struct sockaddr_mctp addr = {0};
 	struct mctp_test_route *rt;
 	struct mctp_test_dev *dev;
 	struct socket *sock;


@@ -206,7 +206,7 @@ static int nft_connlimit_clone(struct nft_expr *dst, const struct nft_expr *src)
 	struct nft_connlimit *priv_src = nft_expr_priv(src);
 
 	priv_dst->list = kmalloc(sizeof(*priv_dst->list), GFP_ATOMIC);
-	if (priv_dst->list)
+	if (!priv_dst->list)
 		return -ENOMEM;
 
 	nf_conncount_list_init(priv_dst->list);


@@ -106,7 +106,7 @@ static int nft_last_clone(struct nft_expr *dst, const struct nft_expr *src)
 	struct nft_last_priv *priv_dst = nft_expr_priv(dst);
 
 	priv_dst->last = kzalloc(sizeof(*priv_dst->last), GFP_ATOMIC);
-	if (priv_dst->last)
+	if (!priv_dst->last)
 		return -ENOMEM;
 
 	return 0;


@@ -145,7 +145,7 @@ static int nft_limit_clone(struct nft_limit_priv *priv_dst,
 	priv_dst->invert = priv_src->invert;
 
 	priv_dst->limit = kmalloc(sizeof(*priv_dst->limit), GFP_ATOMIC);
-	if (priv_dst->limit)
+	if (!priv_dst->limit)
 		return -ENOMEM;
 
 	spin_lock_init(&priv_dst->limit->lock);


@@ -237,7 +237,7 @@ static int nft_quota_clone(struct nft_expr *dst, const struct nft_expr *src)
 	struct nft_quota *priv_dst = nft_expr_priv(dst);
 
 	priv_dst->consumed = kmalloc(sizeof(*priv_dst->consumed), GFP_ATOMIC);
-	if (priv_dst->consumed)
+	if (!priv_dst->consumed)
 		return -ENOMEM;
 
 	atomic64_set(priv_dst->consumed, 0);


@@ -789,6 +789,11 @@ static int llcp_sock_sendmsg(struct socket *sock, struct msghdr *msg,
 
 	lock_sock(sk);
 
+	if (!llcp_sock->local) {
+		release_sock(sk);
+		return -ENODEV;
+	}
+
 	if (sk->sk_type == SOCK_DGRAM) {
 		DECLARE_SOCKADDR(struct sockaddr_nfc_llcp *, addr,
 				 msg->msg_name);


@@ -1062,7 +1062,7 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
 
 		qdisc_offload_graft_root(dev, new, old, extack);
 
-		if (new && new->ops->attach)
+		if (new && new->ops->attach && !ingress)
 			goto skip;
 
 		for (i = 0; i < num_q; i++) {


@@ -1529,6 +1529,7 @@ void psched_ratecfg_precompute(struct psched_ratecfg *r,
 {
 	memset(r, 0, sizeof(*r));
 	r->overhead = conf->overhead;
+	r->mpu = conf->mpu;
 	r->rate_bytes_ps = max_t(u64, conf->rate, rate64);
 	r->linklayer = (conf->linklayer & TC_LINKLAYER_MASK);
 	psched_ratecfg_precompute__(r->rate_bytes_ps, &r->mult, &r->shift);


@@ -634,9 +634,13 @@ static void smc_conn_abort(struct smc_sock *smc, int local_first)
 {
 	struct smc_connection *conn = &smc->conn;
 	struct smc_link_group *lgr = conn->lgr;
+	bool lgr_valid = false;
+
+	if (smc_conn_lgr_valid(conn))
+		lgr_valid = true;
 
 	smc_conn_free(conn);
-	if (local_first)
+	if (local_first && lgr_valid)
 		smc_lgr_cleanup_early(lgr);
 }


@@ -221,6 +221,7 @@ struct smc_connection {
 						 */
 	u64			peer_token;	/* SMC-D token of peer */
 	u8			killed : 1;	/* abnormal termination */
+	u8			freed : 1;	/* normal termination */
 	u8			out_of_sync : 1; /* out of sync with peer */
 };


@@ -197,7 +197,8 @@ int smc_cdc_get_slot_and_msg_send(struct smc_connection *conn)
 {
 	int rc;
 
-	if (!conn->lgr || (conn->lgr->is_smcd && conn->lgr->peer_shutdown))
+	if (!smc_conn_lgr_valid(conn) ||
+	    (conn->lgr->is_smcd && conn->lgr->peer_shutdown))
 		return -EPIPE;
 
 	if (conn->lgr->is_smcd) {


@@ -774,7 +774,7 @@ int smc_clc_send_decline(struct smc_sock *smc, u32 peer_diag_info, u8 version)
 	dclc.os_type = version == SMC_V1 ? 0 : SMC_CLC_OS_LINUX;
 	dclc.hdr.typev2 = (peer_diag_info == SMC_CLC_DECL_SYNCERR) ?
 						SMC_FIRST_CONTACT_MASK : 0;
-	if ((!smc->conn.lgr || !smc->conn.lgr->is_smcd) &&
+	if ((!smc_conn_lgr_valid(&smc->conn) || !smc->conn.lgr->is_smcd) &&
 	    smc_ib_is_valid_local_systemid())
 		memcpy(dclc.id_for_peer, local_systemid,
 		       sizeof(local_systemid));


@@ -211,14 +211,13 @@ static void smc_lgr_unregister_conn(struct smc_connection *conn)
 {
 	struct smc_link_group *lgr = conn->lgr;
 
-	if (!lgr)
+	if (!smc_conn_lgr_valid(conn))
 		return;
 	write_lock_bh(&lgr->conns_lock);
 	if (conn->alert_token_local) {
 		__smc_lgr_unregister_conn(conn);
 	}
 	write_unlock_bh(&lgr->conns_lock);
-	conn->lgr = NULL;
 }
 
 int smc_nl_get_sys_info(struct sk_buff *skb, struct netlink_callback *cb)
@@ -749,9 +748,12 @@ int smcr_link_init(struct smc_link_group *lgr, struct smc_link *lnk,
 	}
 	get_device(&lnk->smcibdev->ibdev->dev);
 	atomic_inc(&lnk->smcibdev->lnk_cnt);
+	refcount_set(&lnk->refcnt, 1); /* link refcnt is set to 1 */
+	lnk->clearing = 0;
 	lnk->path_mtu = lnk->smcibdev->pattr[lnk->ibport - 1].active_mtu;
 	lnk->link_id = smcr_next_link_id(lgr);
 	lnk->lgr = lgr;
+	smc_lgr_hold(lgr); /* lgr_put in smcr_link_clear() */
 	lnk->link_idx = link_idx;
 	smc_ibdev_cnt_inc(lnk);
 	smcr_copy_dev_info_to_link(lnk);
@@ -806,6 +808,7 @@ out:
 	lnk->state = SMC_LNK_UNUSED;
 	if (!atomic_dec_return(&smcibdev->lnk_cnt))
 		wake_up(&smcibdev->lnks_deleted);
+	smc_lgr_put(lgr); /* lgr_hold above */
 	return rc;
 }
 
@@ -844,6 +847,7 @@ static int smc_lgr_create(struct smc_sock *smc, struct smc_init_info *ini)
 	lgr->terminating = 0;
 	lgr->freeing = 0;
 	lgr->vlan_id = ini->vlan_id;
+	refcount_set(&lgr->refcnt, 1); /* set lgr refcnt to 1 */
 	mutex_init(&lgr->sndbufs_lock);
 	mutex_init(&lgr->rmbs_lock);
 	rwlock_init(&lgr->conns_lock);
@@ -996,8 +1000,12 @@ void smc_switch_link_and_count(struct smc_connection *conn,
 			       struct smc_link *to_lnk)
 {
 	atomic_dec(&conn->lnk->conn_cnt);
+	/* link_hold in smc_conn_create() */
+	smcr_link_put(conn->lnk);
 	conn->lnk = to_lnk;
 	atomic_inc(&conn->lnk->conn_cnt);
+	/* link_put in smc_conn_free() */
+	smcr_link_hold(conn->lnk);
 }
 
 struct smc_link *smc_switch_conns(struct smc_link_group *lgr,
@@ -1130,8 +1138,19 @@ void smc_conn_free(struct smc_connection *conn)
 {
 	struct smc_link_group *lgr = conn->lgr;
 
-	if (!lgr)
+	if (!lgr || conn->freed)
+		/* Connection has never been registered in a
+		 * link group, or has already been freed.
+		 */
 		return;
+
+	conn->freed = 1;
+	if (!smc_conn_lgr_valid(conn))
+		/* Connection has already unregistered from
+		 * link group.
+		 */
+		goto lgr_put;
+
 	if (lgr->is_smcd) {
 		if (!list_empty(&lgr->list))
 			smc_ism_unset_conn(conn);
@@ -1148,6 +1167,10 @@ void smc_conn_free(struct smc_connection *conn)
 	if (!lgr->conns_num)
 		smc_lgr_schedule_free_work(lgr);
+lgr_put:
+	if (!lgr->is_smcd)
+		smcr_link_put(conn->lnk); /* link_hold in smc_conn_create() */
+	smc_lgr_put(lgr); /* lgr_hold in smc_conn_create() */
 }
 
 /* unregister a link from a buf_desc */
@@ -1203,21 +1226,11 @@ static void smcr_rtoken_clear_link(struct smc_link *lnk)
 	}
 }
 
-/* must be called under lgr->llc_conf_mutex lock */
-void smcr_link_clear(struct smc_link *lnk, bool log)
+static void __smcr_link_clear(struct smc_link *lnk)
 {
+	struct smc_link_group *lgr = lnk->lgr;
 	struct smc_ib_device *smcibdev;
 
-	if (!lnk->lgr || lnk->state == SMC_LNK_UNUSED)
-		return;
-	lnk->peer_qpn = 0;
-	smc_llc_link_clear(lnk, log);
-	smcr_buf_unmap_lgr(lnk);
-	smcr_rtoken_clear_link(lnk);
-	smc_ib_modify_qp_error(lnk);
-	smc_wr_free_link(lnk);
-	smc_ib_destroy_queue_pair(lnk);
-	smc_ib_dealloc_protection_domain(lnk);
 	smc_wr_free_link_mem(lnk);
 	smc_ibdev_cnt_dec(lnk);
 	put_device(&lnk->smcibdev->ibdev->dev);
@@ -1226,6 +1239,36 @@ void smcr_link_clear(struct smc_link *lnk, bool log)
 	lnk->state = SMC_LNK_UNUSED;
 	if (!atomic_dec_return(&smcibdev->lnk_cnt))
 		wake_up(&smcibdev->lnks_deleted);
+	smc_lgr_put(lgr); /* lgr_hold in smcr_link_init() */
+}
+
+/* must be called under lgr->llc_conf_mutex lock */
+void smcr_link_clear(struct smc_link *lnk, bool log)
+{
+	if (!lnk->lgr || lnk->clearing ||
+	    lnk->state == SMC_LNK_UNUSED)
+		return;
+
+	lnk->clearing = 1;
+	lnk->peer_qpn = 0;
+	smc_llc_link_clear(lnk, log);
+	smcr_buf_unmap_lgr(lnk);
+	smcr_rtoken_clear_link(lnk);
+	smc_ib_modify_qp_error(lnk);
+	smc_wr_free_link(lnk);
+	smc_ib_destroy_queue_pair(lnk);
+	smc_ib_dealloc_protection_domain(lnk);
+	smcr_link_put(lnk); /* theoretically last link_put */
+}
+
+void smcr_link_hold(struct smc_link *lnk)
+{
+	refcount_inc(&lnk->refcnt);
+}
+
+void smcr_link_put(struct smc_link *lnk)
+{
+	if (refcount_dec_and_test(&lnk->refcnt))
+		__smcr_link_clear(lnk);
 }
 
 static void smcr_buf_free(struct smc_link_group *lgr, bool is_rmb,
@@ -1290,6 +1333,21 @@ static void smc_lgr_free_bufs(struct smc_link_group *lgr)
 	__smc_lgr_free_bufs(lgr, true);
 }
 
+/* won't be freed until no one accesses to lgr anymore */
+static void __smc_lgr_free(struct smc_link_group *lgr)
+{
+	smc_lgr_free_bufs(lgr);
+	if (lgr->is_smcd) {
+		if (!atomic_dec_return(&lgr->smcd->lgr_cnt))
+			wake_up(&lgr->smcd->lgrs_deleted);
+	} else {
+		smc_wr_free_lgr_mem(lgr);
+		if (!atomic_dec_return(&lgr_cnt))
+			wake_up(&lgrs_deleted);
+	}
+	kfree(lgr);
+}
+
 /* remove a link group */
 static void smc_lgr_free(struct smc_link_group *lgr)
 {
@@ -1305,19 +1363,23 @@ static void smc_lgr_free(struct smc_link_group *lgr)
 		smc_llc_lgr_clear(lgr);
 	}
 
-	smc_lgr_free_bufs(lgr);
 	destroy_workqueue(lgr->tx_wq);
 	if (lgr->is_smcd) {
 		smc_ism_put_vlan(lgr->smcd, lgr->vlan_id);
 		put_device(&lgr->smcd->dev);
-		if (!atomic_dec_return(&lgr->smcd->lgr_cnt))
-			wake_up(&lgr->smcd->lgrs_deleted);
-	} else {
-		smc_wr_free_lgr_mem(lgr);
-		if (!atomic_dec_return(&lgr_cnt))
-			wake_up(&lgrs_deleted);
 	}
-	kfree(lgr);
+	smc_lgr_put(lgr); /* theoretically last lgr_put */
+}
+
+void smc_lgr_hold(struct smc_link_group *lgr)
+{
+	refcount_inc(&lgr->refcnt);
+}
+
+void smc_lgr_put(struct smc_link_group *lgr)
+{
+	if (refcount_dec_and_test(&lgr->refcnt))
+		__smc_lgr_free(lgr);
 }
 
 static void smc_sk_wake_ups(struct smc_sock *smc)
@ -1469,16 +1531,11 @@ void smc_smcd_terminate_all(struct smcd_dev *smcd)
/* Called when an SMCR device is removed or the smc module is unloaded. /* Called when an SMCR device is removed or the smc module is unloaded.
* If smcibdev is given, all SMCR link groups using this device are terminated. * If smcibdev is given, all SMCR link groups using this device are terminated.
* If smcibdev is NULL, all SMCR link groups are terminated. * If smcibdev is NULL, all SMCR link groups are terminated.
*
* We must wait here for QPs been destroyed before we destroy the CQs,
* or we won't received any CQEs and cdc_pend_tx_wr cannot reach 0 thus
* smc_sock cannot be released.
*/ */
void smc_smcr_terminate_all(struct smc_ib_device *smcibdev) void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
{ {
struct smc_link_group *lgr, *lg; struct smc_link_group *lgr, *lg;
LIST_HEAD(lgr_free_list); LIST_HEAD(lgr_free_list);
LIST_HEAD(lgr_linkdown_list);
int i; int i;
spin_lock_bh(&smc_lgr_list.lock); spin_lock_bh(&smc_lgr_list.lock);
@@ -1490,7 +1547,7 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
 		list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
 			for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
 				if (lgr->lnk[i].smcibdev == smcibdev)
-					list_move_tail(&lgr->list, &lgr_linkdown_list);
+					smcr_link_down_cond_sched(&lgr->lnk[i]);
 			}
 		}
 	}
@@ -1502,16 +1559,6 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
 		__smc_lgr_terminate(lgr, false);
 	}
 
-	list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
-		for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
-			if (lgr->lnk[i].smcibdev == smcibdev) {
-				mutex_lock(&lgr->llc_conf_mutex);
-				smcr_link_down_cond(&lgr->lnk[i]);
-				mutex_unlock(&lgr->llc_conf_mutex);
-			}
-		}
-	}
-
 	if (smcibdev) {
 		if (atomic_read(&smcibdev->lnk_cnt))
 			wait_event(smcibdev->lnks_deleted,
@@ -1856,6 +1903,10 @@ create:
 			goto out;
 		}
 	}
+	smc_lgr_hold(conn->lgr); /* lgr_put in smc_conn_free() */
+	if (!conn->lgr->is_smcd)
+		smcr_link_hold(conn->lnk); /* link_put in smc_conn_free() */
+	conn->freed = 0;
 	conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE;
 	conn->local_tx_ctrl.len = SMC_WR_TX_SIZE;
 	conn->urg_state = SMC_URG_READ;
@@ -2240,14 +2291,16 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
 
 void smc_sndbuf_sync_sg_for_cpu(struct smc_connection *conn)
 {
-	if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk))
+	if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd ||
+	    !smc_link_active(conn->lnk))
 		return;
 	smc_ib_sync_sg_for_cpu(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE);
 }
 
 void smc_sndbuf_sync_sg_for_device(struct smc_connection *conn)
 {
-	if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk))
+	if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd ||
+	    !smc_link_active(conn->lnk))
 		return;
 	smc_ib_sync_sg_for_device(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE);
 }
@@ -2256,7 +2309,7 @@ void smc_rmb_sync_sg_for_cpu(struct smc_connection *conn)
 {
 	int i;
 
-	if (!conn->lgr || conn->lgr->is_smcd)
+	if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd)
 		return;
 	for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
 		if (!smc_link_active(&conn->lgr->lnk[i]))
@@ -2270,7 +2323,7 @@ void smc_rmb_sync_sg_for_device(struct smc_connection *conn)
 {
 	int i;
 
-	if (!conn->lgr || conn->lgr->is_smcd)
+	if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd)
 		return;
 	for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
 		if (!smc_link_active(&conn->lgr->lnk[i]))


@@ -137,6 +137,8 @@ struct smc_link {
 	u8			peer_link_uid[SMC_LGR_ID_SIZE]; /* peer uid */
 	u8			link_idx;	/* index in lgr link array */
 	u8			link_is_asym;	/* is link asymmetric? */
+	u8			clearing : 1;	/* link is being cleared */
+	refcount_t		refcnt;		/* link reference count */
 	struct smc_link_group	*lgr;		/* parent link group */
 	struct work_struct	link_down_wrk;	/* wrk to bring link down */
 	char			ibname[IB_DEVICE_NAME_MAX]; /* ib device name */
@@ -249,6 +251,7 @@ struct smc_link_group {
 	u8			terminating : 1;/* lgr is terminating */
 	u8			freeing : 1;	/* lgr is being freed */
+	refcount_t		refcnt;		/* lgr reference count */
 	bool			is_smcd;	/* SMC-R or SMC-D */
 	u8			smc_version;
 	u8			negotiated_eid[SMC_MAX_EID_LEN];
@@ -409,6 +412,11 @@ static inline struct smc_connection *smc_lgr_find_conn(
 	return res;
 }
 
+static inline bool smc_conn_lgr_valid(struct smc_connection *conn)
+{
+	return conn->lgr && conn->alert_token_local;
+}
+
 /*
  * Returns true if the specified link is usable.
  *
@@ -487,6 +495,8 @@ struct smc_clc_msg_accept_confirm;
 void smc_lgr_cleanup_early(struct smc_link_group *lgr);
 void smc_lgr_terminate_sched(struct smc_link_group *lgr);
+void smc_lgr_hold(struct smc_link_group *lgr);
+void smc_lgr_put(struct smc_link_group *lgr);
 void smcr_port_add(struct smc_ib_device *smcibdev, u8 ibport);
 void smcr_port_err(struct smc_ib_device *smcibdev, u8 ibport);
 void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid,
@@ -518,6 +528,8 @@ void smc_core_exit(void);
 int smcr_link_init(struct smc_link_group *lgr, struct smc_link *lnk,
 		   u8 link_idx, struct smc_init_info *ini);
 void smcr_link_clear(struct smc_link *lnk, bool log);
+void smcr_link_hold(struct smc_link *lnk);
+void smcr_link_put(struct smc_link *lnk);
 void smc_switch_link_and_count(struct smc_connection *conn,
 			       struct smc_link *to_lnk);
 int smcr_buf_map_lgr(struct smc_link *lnk);
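The smc_lgr_hold()/smc_lgr_put() and smcr_link_hold()/smcr_link_put() pairs declared above follow the standard kernel refcounting idiom: every user takes a reference before touching the object and drops it when done, and whichever put drops the last reference frees the object. As a rough, hypothetical userspace sketch of that lifetime rule using C11 atomics (the kernel's refcount_t additionally saturates on overflow/underflow to defend against refcounting bugs, which this sketch omits):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for a refcounted link group. */
struct lgr {
	atomic_int refcnt;	/* starts at 1 for the creator's reference */
	bool freed;		/* set when the last reference is dropped */
};

static void lgr_hold(struct lgr *l)
{
	/* Taking an extra reference needs no ordering. */
	atomic_fetch_add_explicit(&l->refcnt, 1, memory_order_relaxed);
}

/* Returns true when this call dropped the last reference. */
static bool lgr_put(struct lgr *l)
{
	/* acq_rel so all prior accesses happen-before the free. */
	if (atomic_fetch_sub_explicit(&l->refcnt, 1, memory_order_acq_rel) == 1) {
		l->freed = true;	/* the kernel code would kfree() here */
		return true;
	}
	return false;
}
```

In this scheme a connection would call lgr_hold() when it registers with the group and lgr_put() in its teardown path, so the group cannot be freed while the connection still uses it.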


@@ -89,7 +89,7 @@ static int __smc_diag_dump(struct sock *sk, struct sk_buff *skb,
 	r->diag_state = sk->sk_state;
 	if (smc->use_fallback)
 		r->diag_mode = SMC_DIAG_MODE_FALLBACK_TCP;
-	else if (smc->conn.lgr && smc->conn.lgr->is_smcd)
+	else if (smc_conn_lgr_valid(&smc->conn) && smc->conn.lgr->is_smcd)
 		r->diag_mode = SMC_DIAG_MODE_SMCD;
 	else
 		r->diag_mode = SMC_DIAG_MODE_SMCR;
@@ -142,7 +142,7 @@ static int __smc_diag_dump(struct sock *sk, struct sk_buff *skb,
 			goto errout;
 	}
 
-	if (smc->conn.lgr && !smc->conn.lgr->is_smcd &&
+	if (smc_conn_lgr_valid(&smc->conn) && !smc->conn.lgr->is_smcd &&
 	    (req->diag_ext & (1 << (SMC_DIAG_LGRINFO - 1))) &&
 	    !list_empty(&smc->conn.lgr->list)) {
 		struct smc_link *link = smc->conn.lnk;
@@ -164,7 +164,7 @@ static int __smc_diag_dump(struct sock *sk, struct sk_buff *skb,
 		if (nla_put(skb, SMC_DIAG_LGRINFO, sizeof(linfo), &linfo) < 0)
 			goto errout;
 	}
-	if (smc->conn.lgr && smc->conn.lgr->is_smcd &&
+	if (smc_conn_lgr_valid(&smc->conn) && smc->conn.lgr->is_smcd &&
 	    (req->diag_ext & (1 << (SMC_DIAG_DMBINFO - 1))) &&
 	    !list_empty(&smc->conn.lgr->list)) {
 		struct smc_connection *conn = &smc->conn;


@@ -369,7 +369,8 @@ static int smc_pnet_add_eth(struct smc_pnettable *pnettable, struct net *net,
 	memcpy(new_pe->pnet_name, pnet_name, SMC_MAX_PNETID_LEN);
 	strncpy(new_pe->eth_name, eth_name, IFNAMSIZ);
 	new_pe->ndev = ndev;
-	netdev_tracker_alloc(ndev, &new_pe->dev_tracker, GFP_KERNEL);
+	if (ndev)
+		netdev_tracker_alloc(ndev, &new_pe->dev_tracker, GFP_KERNEL);
 	rc = -EEXIST;
 	new_netdev = true;
 	write_lock(&pnettable->lock);


@@ -125,10 +125,6 @@ int smc_wr_tx_v2_send(struct smc_link *link,
 int smc_wr_tx_send_wait(struct smc_link *link, struct smc_wr_tx_pend_priv *priv,
 			unsigned long timeout);
 void smc_wr_tx_cq_handler(struct ib_cq *ib_cq, void *cq_context);
-void smc_wr_tx_dismiss_slots(struct smc_link *lnk, u8 wr_rx_hdr_type,
-			     smc_wr_tx_filter filter,
-			     smc_wr_tx_dismisser dismisser,
-			     unsigned long data);
 void smc_wr_tx_wait_no_pending_sends(struct smc_link *link);
 
 int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler);


@@ -2059,6 +2059,7 @@ ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
 
 splice_read_end:
 	release_sock(sk);
+	sk_defer_free_flush(sk);
 	return copied ? : err;
 }


@@ -192,8 +192,11 @@ void wait_for_unix_gc(void)
 {
 	/* If number of inflight sockets is insane,
 	 * force a garbage collect right now.
+	 * Paired with the WRITE_ONCE() in unix_inflight(),
+	 * unix_notinflight() and gc_in_progress().
 	 */
-	if (unix_tot_inflight > UNIX_INFLIGHT_TRIGGER_GC && !gc_in_progress)
+	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
+	    !READ_ONCE(gc_in_progress))
 		unix_gc();
 	wait_event(unix_gc_wait, gc_in_progress == false);
 }
@@ -213,7 +216,9 @@ void unix_gc(void)
 	if (gc_in_progress)
 		goto out;
 
-	gc_in_progress = true;
+	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
+	WRITE_ONCE(gc_in_progress, true);
+
 	/* First, select candidates for garbage collection.  Only
 	 * in-flight sockets are considered, and from those only ones
 	 * which don't have any external reference.
@@ -299,7 +304,10 @@ void unix_gc(void)
 	/* All candidates should have been detached by now. */
 	BUG_ON(!list_empty(&gc_candidates));
-	gc_in_progress = false;
+
+	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
+	WRITE_ONCE(gc_in_progress, false);
+
 	wake_up(&unix_gc_wait);
 
  out:


@@ -60,7 +60,8 @@ void unix_inflight(struct user_struct *user, struct file *fp)
 		} else {
 			BUG_ON(list_empty(&u->link));
 		}
-		unix_tot_inflight++;
+		/* Paired with READ_ONCE() in wait_for_unix_gc() */
+		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1);
 	}
 	user->unix_inflight++;
 	spin_unlock(&unix_gc_lock);
@@ -80,7 +81,8 @@ void unix_notinflight(struct user_struct *user, struct file *fp)
 		if (atomic_long_dec_and_test(&u->inflight))
 			list_del_init(&u->link);
-		unix_tot_inflight--;
+		/* Paired with READ_ONCE() in wait_for_unix_gc() */
+		WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1);
 	}
 	user->unix_inflight--;
 	spin_unlock(&unix_gc_lock);
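The WRITE_ONCE() side above pairs with the READ_ONCE() in wait_for_unix_gc(): unix_tot_inflight is still only modified under unix_gc_lock, but it is now read locklessly, so both sides need single-copy-atomic accesses to stop the compiler from tearing, fusing, or re-reading the value. A loose userspace analogue of the same pattern, sketched with C11 relaxed atomics (all names here are made up for illustration):

```c
#include <stdatomic.h>

/* Hypothetical analogue of unix_tot_inflight: modified only under a
 * writer-side lock, but peeked at by lockless readers. */
static atomic_ulong tot_inflight;

static void inflight_add(long delta)	/* caller holds the writer lock */
{
	/* Like WRITE_ONCE(): one non-torn store. The lock, not the
	 * atomic, serializes concurrent writers. */
	unsigned long cur = atomic_load_explicit(&tot_inflight,
						 memory_order_relaxed);
	atomic_store_explicit(&tot_inflight, cur + delta,
			      memory_order_relaxed);
}

static unsigned long inflight_peek(void)	/* lockless reader */
{
	/* Like READ_ONCE(): a single, non-torn load. */
	return atomic_load_explicit(&tot_inflight, memory_order_relaxed);
}
```

Relaxed ordering suffices because the reader only uses the value as a heuristic trigger, exactly as wait_for_unix_gc() does.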


@@ -31,6 +31,7 @@
 #include <linux/if_tunnel.h>
 #include <net/dst.h>
 #include <net/flow.h>
+#include <net/inet_ecn.h>
 #include <net/xfrm.h>
 #include <net/ip.h>
 #include <net/gre.h>
@@ -3295,7 +3296,7 @@ decode_session4(struct sk_buff *skb, struct flowi *fl, bool reverse)
 	fl4->flowi4_proto = iph->protocol;
 	fl4->daddr = reverse ? iph->saddr : iph->daddr;
 	fl4->saddr = reverse ? iph->daddr : iph->saddr;
-	fl4->flowi4_tos = iph->tos;
+	fl4->flowi4_tos = iph->tos & ~INET_ECN_MASK;
 
 	if (!ip_is_fragment(iph)) {
 		switch (iph->protocol) {
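INET_ECN_MASK is 0x3: the low two bits of the IPv4 TOS byte carry ECN state, not DSCP, so they must not perturb the flow key used for policy lookup. A minimal sketch of what the one-line fix computes (the helper name is invented for illustration):

```c
/* The low two bits of the IPv4 tos byte are ECN, not DSCP. */
#define INET_ECN_MASK 3

/* Drop the ECN bits before using tos as a lookup key, mirroring
 * the fixed assignment in decode_session4(). */
static unsigned char flow_key_tos(unsigned char tos)
{
	return tos & ~INET_ECN_MASK;
}
```

Without the mask, the same flow could hash to different policy lookups depending on whether the ECN bits were set by congestion marking.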


@@ -10,6 +10,7 @@
 #include "test_d_path.skel.h"
 #include "test_d_path_check_rdonly_mem.skel.h"
+#include "test_d_path_check_types.skel.h"
 
 static int duration;
 
@@ -167,6 +168,16 @@ static void test_d_path_check_rdonly_mem(void)
 	test_d_path_check_rdonly_mem__destroy(skel);
 }
 
+static void test_d_path_check_types(void)
+{
+	struct test_d_path_check_types *skel;
+
+	skel = test_d_path_check_types__open_and_load();
+	ASSERT_ERR_PTR(skel, "unexpected_load_passing_wrong_type");
+
+	test_d_path_check_types__destroy(skel);
+}
+
 void test_d_path(void)
 {
 	if (test__start_subtest("basic"))
@@ -174,4 +185,7 @@ void test_d_path(void)
 	if (test__start_subtest("check_rdonly_mem"))
 		test_d_path_check_rdonly_mem();
+
+	if (test__start_subtest("check_alloc_mem"))
+		test_d_path_check_types();
 }


@@ -8,46 +8,47 @@
 
 void serial_test_xdp_link(void)
 {
-	__u32 duration = 0, id1, id2, id0 = 0, prog_fd1, prog_fd2, err;
 	DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts, .old_fd = -1);
 	struct test_xdp_link *skel1 = NULL, *skel2 = NULL;
+	__u32 id1, id2, id0 = 0, prog_fd1, prog_fd2;
 	struct bpf_link_info link_info;
 	struct bpf_prog_info prog_info;
 	struct bpf_link *link;
+	int err;
 	__u32 link_info_len = sizeof(link_info);
 	__u32 prog_info_len = sizeof(prog_info);
 
 	skel1 = test_xdp_link__open_and_load();
-	if (CHECK(!skel1, "skel_load", "skeleton open and load failed\n"))
+	if (!ASSERT_OK_PTR(skel1, "skel_load"))
 		goto cleanup;
 	prog_fd1 = bpf_program__fd(skel1->progs.xdp_handler);
 
 	skel2 = test_xdp_link__open_and_load();
-	if (CHECK(!skel2, "skel_load", "skeleton open and load failed\n"))
+	if (!ASSERT_OK_PTR(skel2, "skel_load"))
 		goto cleanup;
 	prog_fd2 = bpf_program__fd(skel2->progs.xdp_handler);
 
 	memset(&prog_info, 0, sizeof(prog_info));
 	err = bpf_obj_get_info_by_fd(prog_fd1, &prog_info, &prog_info_len);
-	if (CHECK(err, "fd_info1", "failed %d\n", -errno))
+	if (!ASSERT_OK(err, "fd_info1"))
 		goto cleanup;
 	id1 = prog_info.id;
 
 	memset(&prog_info, 0, sizeof(prog_info));
 	err = bpf_obj_get_info_by_fd(prog_fd2, &prog_info, &prog_info_len);
-	if (CHECK(err, "fd_info2", "failed %d\n", -errno))
+	if (!ASSERT_OK(err, "fd_info2"))
 		goto cleanup;
 	id2 = prog_info.id;
 
 	/* set initial prog attachment */
 	err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd1, XDP_FLAGS_REPLACE, &opts);
-	if (CHECK(err, "fd_attach", "initial prog attach failed: %d\n", err))
+	if (!ASSERT_OK(err, "fd_attach"))
 		goto cleanup;
 
 	/* validate prog ID */
 	err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
-	CHECK(err || id0 != id1, "id1_check",
-	      "loaded prog id %u != id1 %u, err %d", id0, id1, err);
+	if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val"))
+		goto cleanup;
 
 	/* BPF link is not allowed to replace prog attachment */
 	link = bpf_program__attach_xdp(skel1->progs.xdp_handler, IFINDEX_LO);
@@ -62,7 +63,7 @@ void serial_test_xdp_link(void)
 	/* detach BPF program */
 	opts.old_fd = prog_fd1;
 	err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, XDP_FLAGS_REPLACE, &opts);
-	if (CHECK(err, "prog_detach", "failed %d\n", err))
+	if (!ASSERT_OK(err, "prog_detach"))
 		goto cleanup;
 
 	/* now BPF link should attach successfully */
@@ -73,24 +74,23 @@ void serial_test_xdp_link(void)
 
 	/* validate prog ID */
 	err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
-	if (CHECK(err || id0 != id1, "id1_check",
-		  "loaded prog id %u != id1 %u, err %d", id0, id1, err))
+	if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val"))
 		goto cleanup;
 
 	/* BPF prog attach is not allowed to replace BPF link */
 	opts.old_fd = prog_fd1;
 	err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd2, XDP_FLAGS_REPLACE, &opts);
-	if (CHECK(!err, "prog_attach_fail", "unexpected success\n"))
+	if (!ASSERT_ERR(err, "prog_attach_fail"))
 		goto cleanup;
 
 	/* Can't force-update when BPF link is active */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd2, 0);
-	if (CHECK(!err, "prog_update_fail", "unexpected success\n"))
+	if (!ASSERT_ERR(err, "prog_update_fail"))
 		goto cleanup;
 
 	/* Can't force-detach when BPF link is active */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0);
-	if (CHECK(!err, "prog_detach_fail", "unexpected success\n"))
+	if (!ASSERT_ERR(err, "prog_detach_fail"))
 		goto cleanup;
 
 	/* BPF link is not allowed to replace another BPF link */
@@ -110,40 +110,39 @@ void serial_test_xdp_link(void)
 	skel2->links.xdp_handler = link;
 
 	err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
-	if (CHECK(err || id0 != id2, "id2_check",
-		  "loaded prog id %u != id2 %u, err %d", id0, id1, err))
+	if (!ASSERT_OK(err, "id2_check_err") || !ASSERT_EQ(id0, id2, "id2_check_val"))
 		goto cleanup;
 
 	/* updating program under active BPF link works as expected */
 	err = bpf_link__update_program(link, skel1->progs.xdp_handler);
-	if (CHECK(err, "link_upd", "failed: %d\n", err))
+	if (!ASSERT_OK(err, "link_upd"))
 		goto cleanup;
 
 	memset(&link_info, 0, sizeof(link_info));
 	err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &link_info, &link_info_len);
-	if (CHECK(err, "link_info", "failed: %d\n", err))
+	if (!ASSERT_OK(err, "link_info"))
 		goto cleanup;
 
-	CHECK(link_info.type != BPF_LINK_TYPE_XDP, "link_type",
-	      "got %u != exp %u\n", link_info.type, BPF_LINK_TYPE_XDP);
-	CHECK(link_info.prog_id != id1, "link_prog_id",
-	      "got %u != exp %u\n", link_info.prog_id, id1);
-	CHECK(link_info.xdp.ifindex != IFINDEX_LO, "link_ifindex",
-	      "got %u != exp %u\n", link_info.xdp.ifindex, IFINDEX_LO);
+	ASSERT_EQ(link_info.type, BPF_LINK_TYPE_XDP, "link_type");
+	ASSERT_EQ(link_info.prog_id, id1, "link_prog_id");
+	ASSERT_EQ(link_info.xdp.ifindex, IFINDEX_LO, "link_ifindex");
+
+	/* updating program under active BPF link with different type fails */
+	err = bpf_link__update_program(link, skel1->progs.tc_handler);
+	if (!ASSERT_ERR(err, "link_upd_invalid"))
+		goto cleanup;
 
 	err = bpf_link__detach(link);
-	if (CHECK(err, "link_detach", "failed %d\n", err))
+	if (!ASSERT_OK(err, "link_detach"))
 		goto cleanup;
 
 	memset(&link_info, 0, sizeof(link_info));
 	err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &link_info, &link_info_len);
-	if (CHECK(err, "link_info", "failed: %d\n", err))
-		goto cleanup;
-	CHECK(link_info.prog_id != id1, "link_prog_id",
-	      "got %u != exp %u\n", link_info.prog_id, id1);
+
+	ASSERT_OK(err, "link_info");
+	ASSERT_EQ(link_info.prog_id, id1, "link_prog_id");
 	/* ifindex should be zeroed out */
-	CHECK(link_info.xdp.ifindex != 0, "link_ifindex",
-	      "got %u != exp %u\n", link_info.xdp.ifindex, 0);
+	ASSERT_EQ(link_info.xdp.ifindex, 0, "link_ifindex");
 
 cleanup:
 	test_xdp_link__destroy(skel1);


@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+extern const int bpf_prog_active __ksym;
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 1 << 12);
+} ringbuf SEC(".maps");
+
+SEC("fentry/security_inode_getattr")
+int BPF_PROG(d_path_check_rdonly_mem, struct path *path, struct kstat *stat,
+	     __u32 request_mask, unsigned int query_flags)
+{
+	void *active;
+	u32 cpu;
+
+	cpu = bpf_get_smp_processor_id();
+	active = (void *)bpf_per_cpu_ptr(&bpf_prog_active, cpu);
+	if (active) {
+		/* FAIL here! 'active' points to 'regular' memory. It
+		 * cannot be submitted to ring buffer.
+		 */
+		bpf_ringbuf_submit(active, 0);
+	}
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";


@@ -10,3 +10,9 @@ int xdp_handler(struct xdp_md *xdp)
 {
 	return 0;
 }
+
+SEC("tc")
+int tc_handler(struct __sk_buff *skb)
+{
+	return 0;
+}


@@ -0,0 +1,95 @@
+{
+	"ringbuf: invalid reservation offset 1",
+	.insns = {
+	/* reserve 8 byte ringbuf memory */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	/* store a pointer to the reserved memory in R6 */
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
+	/* check whether the reservation was successful */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	/* spill R6(mem) into the stack */
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
+	/* fill it back in R7 */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8),
+	/* should be able to access *(R7) = 0 */
+	BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0),
+	/* submit the reserved ringbuf memory */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	/* add invalid offset to reserved ringbuf memory */
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xcafe),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 1 },
+	.result = REJECT,
+	.errstr = "dereference of modified alloc_mem ptr R1",
+},
+{
+	"ringbuf: invalid reservation offset 2",
+	.insns = {
+	/* reserve 8 byte ringbuf memory */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	/* store a pointer to the reserved memory in R6 */
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
+	/* check whether the reservation was successful */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	/* spill R6(mem) into the stack */
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
+	/* fill it back in R7 */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8),
+	/* add invalid offset to reserved ringbuf memory */
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, 0xcafe),
+	/* should be able to access *(R7) = 0 */
+	BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0),
+	/* submit the reserved ringbuf memory */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 1 },
+	.result = REJECT,
+	.errstr = "R7 min value is outside of the allowed memory range",
+},
+{
+	"ringbuf: check passing rb mem to helpers",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	/* reserve 8 byte ringbuf memory */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	/* check whether the reservation was successful */
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_EXIT_INSN(),
+	/* pass allocated ring buffer memory to fib lookup */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+	BPF_MOV64_IMM(BPF_REG_3, 8),
+	BPF_MOV64_IMM(BPF_REG_4, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_fib_lookup),
+	/* submit the ringbuf memory */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 2 },
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.result = ACCEPT,
+},


@@ -84,7 +84,7 @@
 	},
 	.fixup_map_ringbuf = { 1 },
 	.result = REJECT,
-	.errstr = "R0 pointer arithmetic on mem_or_null prohibited",
+	.errstr = "R0 pointer arithmetic on alloc_mem_or_null prohibited",
 },
 {
 	"check corrupted spill/fill",


@@ -4059,6 +4059,9 @@ usage: ${0##*/} OPTS
 	-p          Pause on fail
 	-P          Pause after each test
 	-v          Be verbose
+
+Tests:
+	$TESTS_IPV4 $TESTS_IPV6 $TESTS_OTHER
 
 EOF
 }


@@ -1 +1 @@
-timeout=300
+timeout=1500