Merge tag 'net-6.1-rc3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf. The net-memcg fix stands out, the rest is
  very run-of-the-mill. Maybe I'm biased.

  Current release - regressions:

   - eth: fman: re-expose location of the MAC address to userspace,
     apparently some udev scripts depended on the exact value

  Current release - new code bugs:

   - bpf:
      - wait for busy refill_work when destroying bpf memory allocator
      - allow bpf_user_ringbuf_drain() callbacks to return 1
      - fix dispatcher patchable function entry to 5 bytes nop

  Previous releases - regressions:

   - net-memcg: avoid stalls when under memory pressure

   - tcp: fix indefinite deferral of RTO with SACK reneging

   - tipc: fix a null-ptr-deref in tipc_topsrv_accept

   - eth: macb: specify PHY PM management done by MAC

   - tcp: fix a signed-integer-overflow bug in tcp_add_backlog()

  Previous releases - always broken:

   - eth: amd-xgbe: SFP fixes and compatibility improvements

  Misc:

   - docs: netdev: offer performance feedback to contributors"

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

* tag 'net-6.1-rc3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (37 commits)
  net-memcg: avoid stalls when under memory pressure
  tcp: fix indefinite deferral of RTO with SACK reneging
  tcp: fix a signed-integer-overflow bug in tcp_add_backlog()
  net: lantiq_etop: don't free skb when returning NETDEV_TX_BUSY
  net: fix UAF issue in nfqnl_nf_hook_drop() when ops_init() failed
  docs: netdev: offer performance feedback to contributors
  kcm: annotate data-races around kcm->rx_wait
  kcm: annotate data-races around kcm->rx_psock
  net: fman: Use physical address for userspace interfaces
  net/mlx5e: Cleanup MACsec uninitialization routine
  atlantic: fix deadlock at aq_nic_stop
  nfp: only clean `sp_indiff` when application firmware is unloaded
  amd-xgbe: add the bit rate quirk for Molex cables
  amd-xgbe: fix the SFP compliance codes check for DAC cables
  amd-xgbe: enable PLL_CTL for fixed PHY modes only
  amd-xgbe: use enums for mailbox cmd and sub_cmds
  amd-xgbe: Yellow carp devices do not need rrc
  bpf: Use __llist_del_all() whenever possible during memory draining
  bpf: Wait for busy refill_work when destroying bpf memory allocator
  MAINTAINERS: add keyword match on PTP
  ...
commit 337a0a0b63
@@ -319,3 +319,13 @@ unpatched tree to confirm infrastructure didn't mangle it.
 Finally, go back and read
 :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
 to be sure you are not repeating some common mistake documented there.
+
+My company uses peer feedback in employee performance reviews. Can I ask netdev maintainers for feedback?
+---------------------------------------------------------------------------------------------------------
+
+Yes, especially if you spend significant amount of time reviewing code
+and go out of your way to improve shared infrastructure.
+
+The feedback must be requested by you, the contributor, and will always
+be shared with you (even if you request for it to be submitted to your
+manager).
@@ -16675,6 +16675,7 @@ F: Documentation/driver-api/ptp.rst
 F: drivers/net/phy/dp83640*
 F: drivers/ptp/*
 F: include/linux/ptp_cl*
+K: (?:\b|_)ptp(?:\b|_)
 
 PTP VIRTUAL CLOCK SUPPORT
 M: Yangbo Lu <yangbo.lu@nxp.com>
@@ -11,6 +11,7 @@
 #include <linux/bpf.h>
 #include <linux/memory.h>
 #include <linux/sort.h>
+#include <linux/init.h>
 #include <asm/extable.h>
 #include <asm/set_memory.h>
 #include <asm/nospec-branch.h>
@@ -388,6 +389,18 @@ out:
        return ret;
 }
 
+int __init bpf_arch_init_dispatcher_early(void *ip)
+{
+       const u8 *nop_insn = x86_nops[5];
+
+       if (is_endbr(*(u32 *)ip))
+               ip += ENDBR_INSN_SIZE;
+
+       if (memcmp(ip, nop_insn, X86_PATCH_SIZE))
+               text_poke_early(ip, nop_insn, X86_PATCH_SIZE);
+       return 0;
+}
+
 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
                       void *old_addr, void *new_addr)
 {
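Note: the dispatcher fix above works because the patchable function entry must hold exactly the canonical 5-byte x86-64 NOP, which bpf_arch_text_poke() can later swap for a 5-byte relative call. A minimal userspace sketch of the same check-then-patch idea (illustrative only; the buffer and helper names are hypothetical, not kernel API):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define PATCH_SIZE 5

    /* Canonical 5-byte x86-64 NOP: nopl 0x0(%rax,%rax,1) */
    static const uint8_t nop5[PATCH_SIZE] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };

    /* Rewrite the entry only when it is not already the expected NOP
     * (stand-in for the text_poke_early() call in the hunk above).
     */
    static void patch_entry_early(uint8_t *ip)
    {
        if (memcmp(ip, nop5, PATCH_SIZE))
            memcpy(ip, nop5, PATCH_SIZE);
    }

    int main(void)
    {
        uint8_t entry[PATCH_SIZE] = { 0x90, 0x90, 0x90, 0x90, 0x90 };

        patch_entry_early(entry);
        printf("entry[0] = 0x%02x\n", entry[0]); /* prints 0x0f */
        return 0;
    }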
@@ -285,6 +285,9 @@ static int xgbe_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
                /* Yellow Carp devices do not need cdr workaround */
                pdata->vdata->an_cdr_workaround = 0;
+
+               /* Yellow Carp devices do not need rrc */
+               pdata->vdata->enable_rrc = 0;
        } else {
                pdata->xpcs_window_def_reg = PCS_V2_WINDOW_DEF;
                pdata->xpcs_window_sel_reg = PCS_V2_WINDOW_SELECT;
@@ -483,6 +486,7 @@ static struct xgbe_version_data xgbe_v2a = {
        .tx_desc_prefetch = 5,
        .rx_desc_prefetch = 5,
        .an_cdr_workaround = 1,
+       .enable_rrc = 1,
 };
 
 static struct xgbe_version_data xgbe_v2b = {
@@ -498,6 +502,7 @@ static struct xgbe_version_data xgbe_v2b = {
        .tx_desc_prefetch = 5,
        .rx_desc_prefetch = 5,
        .an_cdr_workaround = 1,
+       .enable_rrc = 1,
 };
 
 static const struct pci_device_id xgbe_pci_table[] = {
@@ -239,6 +239,7 @@ enum xgbe_sfp_speed {
 #define XGBE_SFP_BASE_BR_1GBE_MAX 0x0d
 #define XGBE_SFP_BASE_BR_10GBE_MIN 0x64
 #define XGBE_SFP_BASE_BR_10GBE_MAX 0x68
+#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX 0x78
 
 #define XGBE_SFP_BASE_CU_CABLE_LEN 18
 
@@ -284,6 +285,8 @@ struct xgbe_sfp_eeprom {
 #define XGBE_BEL_FUSE_VENDOR "BEL-FUSE        "
 #define XGBE_BEL_FUSE_PARTNO "1GBT-SFP06      "
 
+#define XGBE_MOLEX_VENDOR "Molex Inc.      "
+
 struct xgbe_sfp_ascii {
        union {
                char vendor[XGBE_SFP_BASE_VENDOR_NAME_LEN + 1];
@@ -834,6 +837,10 @@ static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
                break;
        case XGBE_SFP_SPEED_10000:
                min = XGBE_SFP_BASE_BR_10GBE_MIN;
+               if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
+                          XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
+                       max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
+               else
                        max = XGBE_SFP_BASE_BR_10GBE_MAX;
                break;
        default:
@@ -1151,7 +1158,10 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
        }
 
        /* Determine the type of SFP */
-       if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
+       if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
+           xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
+               phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
+       else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
                phy_data->sfp_base = XGBE_SFP_BASE_10000_SR;
        else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_LR)
                phy_data->sfp_base = XGBE_SFP_BASE_10000_LR;
@@ -1167,9 +1177,6 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
                phy_data->sfp_base = XGBE_SFP_BASE_1000_CX;
        else if (sfp_base[XGBE_SFP_BASE_1GBE_CC] & XGBE_SFP_BASE_1GBE_CC_T)
                phy_data->sfp_base = XGBE_SFP_BASE_1000_T;
-       else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) &&
-                xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
-               phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
 
        switch (phy_data->sfp_base) {
        case XGBE_SFP_BASE_1000_T:
@@ -1979,6 +1986,10 @@ static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)
 
 static void xgbe_phy_pll_ctrl(struct xgbe_prv_data *pdata, bool enable)
 {
+       /* PLL_CTRL feature needs to be enabled for fixed PHY modes (Non-Autoneg) only */
+       if (pdata->phy.autoneg != AUTONEG_DISABLE)
+               return;
+
        XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_MISC_CTRL0,
                         XGBE_PMA_PLL_CTRL_MASK,
                         enable ? XGBE_PMA_PLL_CTRL_ENABLE
@@ -1989,7 +2000,7 @@ static void xgbe_phy_pll_ctrl(struct xgbe_prv_data *pdata, bool enable)
 }
 
 static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
-                                       unsigned int cmd, unsigned int sub_cmd)
+                                       enum xgbe_mb_cmd cmd, enum xgbe_mb_subcmd sub_cmd)
 {
        unsigned int s0 = 0;
        unsigned int wait;
@@ -2029,14 +2040,16 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
                xgbe_phy_rx_reset(pdata);
 
 reenable_pll:
-       /* Enable PLL re-initialization */
+       /* Enable PLL re-initialization, not needed for PHY Power Off and RRC cmds */
+       if (cmd != XGBE_MB_CMD_POWER_OFF &&
+           cmd != XGBE_MB_CMD_RRC)
                xgbe_phy_pll_ctrl(pdata, true);
 }
 
 static void xgbe_phy_rrc(struct xgbe_prv_data *pdata)
 {
        /* Receiver Reset Cycle */
-       xgbe_phy_perform_ratechange(pdata, 5, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_RRC, XGBE_MB_SUBCMD_NONE);
 
        netif_dbg(pdata, link, pdata->netdev, "receiver reset complete\n");
 }
@@ -2046,7 +2059,7 @@ static void xgbe_phy_power_off(struct xgbe_prv_data *pdata)
        struct xgbe_phy_data *phy_data = pdata->phy_data;
 
        /* Power off */
-       xgbe_phy_perform_ratechange(pdata, 0, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_POWER_OFF, XGBE_MB_SUBCMD_NONE);
 
        phy_data->cur_mode = XGBE_MODE_UNKNOWN;
 
@@ -2061,14 +2074,17 @@ static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata)
 
        /* 10G/SFI */
        if (phy_data->sfp_cable != XGBE_SFP_CABLE_PASSIVE) {
-               xgbe_phy_perform_ratechange(pdata, 3, 0);
+               xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, XGBE_MB_SUBCMD_ACTIVE);
        } else {
                if (phy_data->sfp_cable_len <= 1)
-                       xgbe_phy_perform_ratechange(pdata, 3, 1);
+                       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+                                                   XGBE_MB_SUBCMD_PASSIVE_1M);
                else if (phy_data->sfp_cable_len <= 3)
-                       xgbe_phy_perform_ratechange(pdata, 3, 2);
+                       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+                                                   XGBE_MB_SUBCMD_PASSIVE_3M);
                else
-                       xgbe_phy_perform_ratechange(pdata, 3, 3);
+                       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+                                                   XGBE_MB_SUBCMD_PASSIVE_OTHER);
        }
 
        phy_data->cur_mode = XGBE_MODE_SFI;
@@ -2083,7 +2099,7 @@ static void xgbe_phy_x_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 1G/X */
-       xgbe_phy_perform_ratechange(pdata, 1, 3);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX);
 
        phy_data->cur_mode = XGBE_MODE_X;
 
@@ -2097,7 +2113,7 @@ static void xgbe_phy_sgmii_1000_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 1G/SGMII */
-       xgbe_phy_perform_ratechange(pdata, 1, 2);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_SGMII);
 
        phy_data->cur_mode = XGBE_MODE_SGMII_1000;
 
@@ -2111,7 +2127,7 @@ static void xgbe_phy_sgmii_100_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 100M/SGMII */
-       xgbe_phy_perform_ratechange(pdata, 1, 1);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_100MBITS);
 
        phy_data->cur_mode = XGBE_MODE_SGMII_100;
 
@@ -2125,7 +2141,7 @@ static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 10G/KR */
-       xgbe_phy_perform_ratechange(pdata, 4, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, XGBE_MB_SUBCMD_NONE);
 
        phy_data->cur_mode = XGBE_MODE_KR;
 
@@ -2139,7 +2155,7 @@ static void xgbe_phy_kx_2500_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 2.5G/KX */
-       xgbe_phy_perform_ratechange(pdata, 2, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_2_5G, XGBE_MB_SUBCMD_NONE);
 
        phy_data->cur_mode = XGBE_MODE_KX_2500;
 
@@ -2153,7 +2169,7 @@ static void xgbe_phy_kx_1000_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 1G/KX */
-       xgbe_phy_perform_ratechange(pdata, 1, 3);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX);
 
        phy_data->cur_mode = XGBE_MODE_KX_1000;
 
@@ -2640,7 +2656,7 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
        }
 
        /* No link, attempt a receiver reset cycle */
-       if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
+       if (pdata->vdata->enable_rrc && phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
                phy_data->rrc_count = 0;
                xgbe_phy_rrc(pdata);
        }
@@ -611,6 +611,31 @@ enum xgbe_mdio_mode {
        XGBE_MDIO_MODE_CL45,
 };
 
+enum xgbe_mb_cmd {
+       XGBE_MB_CMD_POWER_OFF = 0,
+       XGBE_MB_CMD_SET_1G,
+       XGBE_MB_CMD_SET_2_5G,
+       XGBE_MB_CMD_SET_10G_SFI,
+       XGBE_MB_CMD_SET_10G_KR,
+       XGBE_MB_CMD_RRC
+};
+
+enum xgbe_mb_subcmd {
+       XGBE_MB_SUBCMD_NONE = 0,
+
+       /* 10GbE SFP subcommands */
+       XGBE_MB_SUBCMD_ACTIVE = 0,
+       XGBE_MB_SUBCMD_PASSIVE_1M,
+       XGBE_MB_SUBCMD_PASSIVE_3M,
+       XGBE_MB_SUBCMD_PASSIVE_OTHER,
+
+       /* 1GbE Mode subcommands */
+       XGBE_MB_SUBCMD_10MBITS = 0,
+       XGBE_MB_SUBCMD_100MBITS,
+       XGBE_MB_SUBCMD_1G_SGMII,
+       XGBE_MB_SUBCMD_1G_KX
+};
+
 struct xgbe_phy {
        struct ethtool_link_ksettings lks;
 
@@ -1013,6 +1038,7 @@ struct xgbe_version_data {
        unsigned int tx_desc_prefetch;
        unsigned int rx_desc_prefetch;
        unsigned int an_cdr_workaround;
+       unsigned int enable_rrc;
 };
 
 struct xgbe_prv_data {
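Note: the Molex quirk above widens the accepted nominal bit-rate window for 10G DAC modules. In SFF-8472 the nominal bit-rate byte is expressed in units of 100 MBd, so the stock window 0x64 to 0x68 corresponds to 10.0 to 10.4 GBd, while 0x78 admits Molex parts that report up to 12.0 GBd. A tiny decode sketch (illustrative, not driver code):

    #include <stdio.h>
    #include <stdint.h>

    /* SFF-8472 nominal bit rate: value in units of 100 MBd */
    static unsigned int sfp_br_mbd(uint8_t br_nominal)
    {
        return br_nominal * 100u;
    }

    int main(void)
    {
        printf("0x64 -> %u MBd\n", sfp_br_mbd(0x64)); /* 10000 */
        printf("0x68 -> %u MBd\n", sfp_br_mbd(0x68)); /* 10400 */
        printf("0x78 -> %u MBd\n", sfp_br_mbd(0x78)); /* 12000, Molex window */
        return 0;
    }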
@@ -1394,26 +1394,57 @@ static void aq_check_txsa_expiration(struct aq_nic_s *nic)
                                                egress_sa_threshold_expired);
 }
 
+#define AQ_LOCKED_MDO_DEF(mdo)                                 \
+static int aq_locked_mdo_##mdo(struct macsec_context *ctx)     \
+{                                                              \
+       struct aq_nic_s *nic = netdev_priv(ctx->netdev);        \
+       int ret;                                                \
+       mutex_lock(&nic->macsec_mutex);                         \
+       ret = aq_mdo_##mdo(ctx);                                \
+       mutex_unlock(&nic->macsec_mutex);                       \
+       return ret;                                             \
+}
+
+AQ_LOCKED_MDO_DEF(dev_open)
+AQ_LOCKED_MDO_DEF(dev_stop)
+AQ_LOCKED_MDO_DEF(add_secy)
+AQ_LOCKED_MDO_DEF(upd_secy)
+AQ_LOCKED_MDO_DEF(del_secy)
+AQ_LOCKED_MDO_DEF(add_rxsc)
+AQ_LOCKED_MDO_DEF(upd_rxsc)
+AQ_LOCKED_MDO_DEF(del_rxsc)
+AQ_LOCKED_MDO_DEF(add_rxsa)
+AQ_LOCKED_MDO_DEF(upd_rxsa)
+AQ_LOCKED_MDO_DEF(del_rxsa)
+AQ_LOCKED_MDO_DEF(add_txsa)
+AQ_LOCKED_MDO_DEF(upd_txsa)
+AQ_LOCKED_MDO_DEF(del_txsa)
+AQ_LOCKED_MDO_DEF(get_dev_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sa_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sa_stats)
+
 const struct macsec_ops aq_macsec_ops = {
-       .mdo_dev_open = aq_mdo_dev_open,
+       .mdo_dev_open = aq_locked_mdo_dev_open,
-       .mdo_dev_stop = aq_mdo_dev_stop,
+       .mdo_dev_stop = aq_locked_mdo_dev_stop,
-       .mdo_add_secy = aq_mdo_add_secy,
+       .mdo_add_secy = aq_locked_mdo_add_secy,
-       .mdo_upd_secy = aq_mdo_upd_secy,
+       .mdo_upd_secy = aq_locked_mdo_upd_secy,
-       .mdo_del_secy = aq_mdo_del_secy,
+       .mdo_del_secy = aq_locked_mdo_del_secy,
-       .mdo_add_rxsc = aq_mdo_add_rxsc,
+       .mdo_add_rxsc = aq_locked_mdo_add_rxsc,
-       .mdo_upd_rxsc = aq_mdo_upd_rxsc,
+       .mdo_upd_rxsc = aq_locked_mdo_upd_rxsc,
-       .mdo_del_rxsc = aq_mdo_del_rxsc,
+       .mdo_del_rxsc = aq_locked_mdo_del_rxsc,
-       .mdo_add_rxsa = aq_mdo_add_rxsa,
+       .mdo_add_rxsa = aq_locked_mdo_add_rxsa,
-       .mdo_upd_rxsa = aq_mdo_upd_rxsa,
+       .mdo_upd_rxsa = aq_locked_mdo_upd_rxsa,
-       .mdo_del_rxsa = aq_mdo_del_rxsa,
+       .mdo_del_rxsa = aq_locked_mdo_del_rxsa,
-       .mdo_add_txsa = aq_mdo_add_txsa,
+       .mdo_add_txsa = aq_locked_mdo_add_txsa,
-       .mdo_upd_txsa = aq_mdo_upd_txsa,
+       .mdo_upd_txsa = aq_locked_mdo_upd_txsa,
-       .mdo_del_txsa = aq_mdo_del_txsa,
+       .mdo_del_txsa = aq_locked_mdo_del_txsa,
-       .mdo_get_dev_stats = aq_mdo_get_dev_stats,
+       .mdo_get_dev_stats = aq_locked_mdo_get_dev_stats,
-       .mdo_get_tx_sc_stats = aq_mdo_get_tx_sc_stats,
+       .mdo_get_tx_sc_stats = aq_locked_mdo_get_tx_sc_stats,
-       .mdo_get_tx_sa_stats = aq_mdo_get_tx_sa_stats,
+       .mdo_get_tx_sa_stats = aq_locked_mdo_get_tx_sa_stats,
-       .mdo_get_rx_sc_stats = aq_mdo_get_rx_sc_stats,
+       .mdo_get_rx_sc_stats = aq_locked_mdo_get_rx_sc_stats,
-       .mdo_get_rx_sa_stats = aq_mdo_get_rx_sa_stats,
+       .mdo_get_rx_sa_stats = aq_locked_mdo_get_rx_sa_stats,
 };
 
 int aq_macsec_init(struct aq_nic_s *nic)
@@ -1435,6 +1466,7 @@ int aq_macsec_init(struct aq_nic_s *nic)
 
        nic->ndev->features |= NETIF_F_HW_MACSEC;
        nic->ndev->macsec_ops = &aq_macsec_ops;
+       mutex_init(&nic->macsec_mutex);
 
        return 0;
 }
@@ -1458,7 +1490,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
        if (!nic->macsec_cfg)
                return 0;
 
-       rtnl_lock();
+       mutex_lock(&nic->macsec_mutex);
 
        if (nic->aq_fw_ops->send_macsec_req) {
                struct macsec_cfg_request cfg = { 0 };
@@ -1507,7 +1539,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
        ret = aq_apply_macsec_cfg(nic);
 
 unlock:
-       rtnl_unlock();
+       mutex_unlock(&nic->macsec_mutex);
        return ret;
 }
 
@@ -1519,9 +1551,9 @@ void aq_macsec_work(struct aq_nic_s *nic)
        if (!netif_carrier_ok(nic->ndev))
                return;
 
-       rtnl_lock();
+       mutex_lock(&nic->macsec_mutex);
        aq_check_txsa_expiration(nic);
-       rtnl_unlock();
+       mutex_unlock(&nic->macsec_mutex);
 }
 
 int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
@@ -1532,21 +1564,30 @@ int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
        if (!cfg)
                return 0;
 
+       mutex_lock(&nic->macsec_mutex);
+
        for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
                if (!test_bit(i, &cfg->rxsc_idx_busy))
                        continue;
                cnt += hweight_long(cfg->aq_rxsc[i].rx_sa_idx_busy);
        }
 
+       mutex_unlock(&nic->macsec_mutex);
        return cnt;
 }
 
 int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic)
 {
+       int cnt;
+
        if (!nic->macsec_cfg)
                return 0;
 
-       return hweight_long(nic->macsec_cfg->txsc_idx_busy);
+       mutex_lock(&nic->macsec_mutex);
+       cnt = hweight_long(nic->macsec_cfg->txsc_idx_busy);
+       mutex_unlock(&nic->macsec_mutex);
+
+       return cnt;
 }
 
 int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
@@ -1557,12 +1598,15 @@ int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
        if (!cfg)
                return 0;
 
+       mutex_lock(&nic->macsec_mutex);
+
        for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
                if (!test_bit(i, &cfg->txsc_idx_busy))
                        continue;
                cnt += hweight_long(cfg->aq_txsc[i].tx_sa_idx_busy);
        }
 
+       mutex_unlock(&nic->macsec_mutex);
        return cnt;
 }
 
@@ -1634,6 +1678,8 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
        if (!cfg)
                return data;
 
+       mutex_lock(&nic->macsec_mutex);
+
        aq_macsec_update_stats(nic);
 
        common_stats = &cfg->stats;
@@ -1716,5 +1762,7 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
 
        data += i;
 
+       mutex_unlock(&nic->macsec_mutex);
+
        return data;
 }
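Note: the atlantic deadlock fix replaces RTNL with a driver-private mutex and generates one locked wrapper per MACsec offload callback via a macro. The wrapper-generation pattern itself can be sketched in plain userspace C (pthread analog; all names here are illustrative, not the driver's):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t cfg_mutex = PTHREAD_MUTEX_INITIALIZER;

    static int do_add_secy(int arg) { return arg + 1; } /* stand-in op */
    static int do_del_secy(int arg) { return arg - 1; } /* stand-in op */

    /* Generate a locked wrapper around an existing op, in the spirit
     * of AQ_LOCKED_MDO_DEF() above.
     */
    #define LOCKED_OP_DEF(op)                       \
    static int locked_##op(int arg)                 \
    {                                               \
        int ret;                                    \
        pthread_mutex_lock(&cfg_mutex);             \
        ret = do_##op(arg);                         \
        pthread_mutex_unlock(&cfg_mutex);           \
        return ret;                                 \
    }

    LOCKED_OP_DEF(add_secy)
    LOCKED_OP_DEF(del_secy)

    int main(void)
    {
        printf("%d %d\n", locked_add_secy(1), locked_del_secy(1));
        return 0;
    }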
@@ -157,6 +157,8 @@ struct aq_nic_s {
        struct mutex fwreq_mutex;
 #if IS_ENABLED(CONFIG_MACSEC)
        struct aq_macsec_cfg *macsec_cfg;
+       /* mutex to protect data in macsec_cfg */
+       struct mutex macsec_mutex;
 #endif
        /* PTP support */
        struct aq_ptp_s *aq_ptp;
@@ -806,6 +806,7 @@ static int macb_mii_probe(struct net_device *dev)
 
        bp->phylink_config.dev = &dev->dev;
        bp->phylink_config.type = PHYLINK_NETDEV;
+       bp->phylink_config.mac_managed_pm = true;
 
        if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
                bp->phylink_config.poll_fixed_state = true;
@@ -221,8 +221,8 @@ static int dpaa_netdev_init(struct net_device *net_dev,
        net_dev->netdev_ops = dpaa_ops;
        mac_addr = mac_dev->addr;
 
-       net_dev->mem_start = (unsigned long)mac_dev->vaddr;
-       net_dev->mem_end = (unsigned long)mac_dev->vaddr_end;
+       net_dev->mem_start = (unsigned long)priv->mac_dev->res->start;
+       net_dev->mem_end = (unsigned long)priv->mac_dev->res->end;
 
        net_dev->min_mtu = ETH_MIN_MTU;
        net_dev->max_mtu = dpaa_get_max_mtu();
@@ -18,7 +18,7 @@ static ssize_t dpaa_eth_show_addr(struct device *dev,
 
        if (mac_dev)
                return sprintf(buf, "%llx",
-                              (unsigned long long)mac_dev->vaddr);
+                              (unsigned long long)mac_dev->res->start);
        else
                return sprintf(buf, "none");
 }
@@ -279,7 +279,6 @@ static int mac_probe(struct platform_device *_of_dev)
        struct device_node      *mac_node, *dev_node;
        struct mac_device       *mac_dev;
        struct platform_device  *of_dev;
-       struct resource         *res;
        struct mac_priv_s       *priv;
        struct fman_mac_params  params;
        u32                      val;
@@ -338,24 +337,25 @@ static int mac_probe(struct platform_device *_of_dev)
        of_node_put(dev_node);
 
        /* Get the address of the memory mapped registers */
-       res = platform_get_mem_or_io(_of_dev, 0);
-       if (!res) {
+       mac_dev->res = platform_get_mem_or_io(_of_dev, 0);
+       if (!mac_dev->res) {
                dev_err(dev, "could not get registers\n");
                return -EINVAL;
        }
 
-       err = devm_request_resource(dev, fman_get_mem_region(priv->fman), res);
+       err = devm_request_resource(dev, fman_get_mem_region(priv->fman),
+                                   mac_dev->res);
        if (err) {
                dev_err_probe(dev, err, "could not request resource\n");
                return err;
        }
 
-       mac_dev->vaddr = devm_ioremap(dev, res->start, resource_size(res));
+       mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start,
+                                     resource_size(mac_dev->res));
        if (!mac_dev->vaddr) {
                dev_err(dev, "devm_ioremap() failed\n");
                return -EIO;
        }
-       mac_dev->vaddr_end = mac_dev->vaddr + resource_size(res);
 
        if (!of_device_is_available(mac_node))
                return -ENODEV;
@@ -20,8 +20,8 @@ struct mac_priv_s;
 
 struct mac_device {
        void __iomem            *vaddr;
-       void __iomem            *vaddr_end;
        struct device           *dev;
+       struct resource         *res;
        u8                       addr[ETH_ALEN];
        struct fman_port        *port[2];
        u32                      if_support;
@@ -85,6 +85,7 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)
        struct tag_sml_funcfg_tbl *funcfg_table_elem;
        struct hinic_cmd_lt_rd *read_data;
        u16 out_size = sizeof(*read_data);
+       int ret = ~0;
        int err;
 
        read_data = kzalloc(sizeof(*read_data), GFP_KERNEL);
@@ -111,20 +112,25 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)
 
        switch (idx) {
        case VALID:
-               return funcfg_table_elem->dw0.bs.valid;
+               ret = funcfg_table_elem->dw0.bs.valid;
+               break;
        case RX_MODE:
-               return funcfg_table_elem->dw0.bs.nic_rx_mode;
+               ret = funcfg_table_elem->dw0.bs.nic_rx_mode;
+               break;
        case MTU:
-               return funcfg_table_elem->dw1.bs.mtu;
+               ret = funcfg_table_elem->dw1.bs.mtu;
+               break;
        case RQ_DEPTH:
-               return funcfg_table_elem->dw13.bs.cfg_rq_depth;
+               ret = funcfg_table_elem->dw13.bs.cfg_rq_depth;
+               break;
        case QUEUE_NUM:
-               return funcfg_table_elem->dw13.bs.cfg_q_num;
+               ret = funcfg_table_elem->dw13.bs.cfg_q_num;
+               break;
        }
 
        kfree(read_data);
 
-       return ~0;
+       return ret;
 }
 
 static ssize_t hinic_dbg_cmd_read(struct file *filp, char __user *buffer, size_t count,
@@ -924,7 +924,7 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
 
 err_set_cmdq_depth:
        hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
-
+       free_cmdq(&cmdqs->cmdq[HINIC_CMDQ_SYNC]);
 err_cmdq_ctxt:
        hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
                            HINIC_MAX_CMDQ_TYPES);
@@ -877,7 +877,7 @@ int hinic_set_interrupt_cfg(struct hinic_hwdev *hwdev,
        if (err)
                return -EINVAL;
 
-       interrupt_info->lli_credit_cnt = temp_info.lli_timer_cnt;
+       interrupt_info->lli_credit_cnt = temp_info.lli_credit_cnt;
        interrupt_info->lli_timer_cnt = temp_info.lli_timer_cnt;
 
        err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
@@ -1174,7 +1174,6 @@ int hinic_vf_func_init(struct hinic_hwdev *hwdev)
                        dev_err(&hwdev->hwif->pdev->dev,
                                "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
                                err, register_info.status, out_size);
-                       hinic_unregister_vf_mbox_cb(hwdev, HINIC_MOD_L2NIC);
                        return -EIO;
                }
        } else {
@@ -485,7 +485,6 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
        len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
 
        if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
-               dev_kfree_skb_any(skb);
                netdev_err(dev, "tx ring full\n");
                netif_tx_stop_queue(txq);
                return NETDEV_TX_BUSY;
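Note: the lantiq_etop change follows the ndo_start_xmit contract: when a driver returns NETDEV_TX_BUSY, the core requeues the very same skb, so the driver must not have freed it; freeing and then requeueing is a use-after-free. A sketch of the expected ownership flow (userspace analog with illustrative names):

    #include <stdlib.h>

    enum netdev_tx { NETDEV_TX_OK = 0x00, NETDEV_TX_BUSY = 0x10 };

    struct sk_buff { int len; };

    /* On a full ring, return BUSY *without* freeing the skb so the
     * stack may legally retransmit the same buffer later.
     */
    static enum netdev_tx xmit(struct sk_buff *skb, int ring_full)
    {
        if (ring_full)
            return NETDEV_TX_BUSY; /* skb still owned by the stack */
        free(skb);                 /* consumed only on success */
        return NETDEV_TX_OK;
    }

    int main(void)
    {
        struct sk_buff *skb = malloc(sizeof(*skb));

        if (xmit(skb, 1) == NETDEV_TX_BUSY)
            xmit(skb, 0);          /* safe requeue: skb was not freed */
        return 0;
    }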
@@ -1846,25 +1846,16 @@ err_hash:
 void mlx5e_macsec_cleanup(struct mlx5e_priv *priv)
 {
        struct mlx5e_macsec *macsec = priv->macsec;
-       struct mlx5_core_dev *mdev = macsec->mdev;
+       struct mlx5_core_dev *mdev = priv->mdev;
 
        if (!macsec)
                return;
 
        mlx5_notifier_unregister(mdev, &macsec->nb);
-
        mlx5e_macsec_fs_cleanup(macsec->macsec_fs);
-
-       /* Cleanup workqueue */
        destroy_workqueue(macsec->wq);
-
        mlx5e_macsec_aso_cleanup(&macsec->aso, mdev);
-
-       priv->macsec = NULL;
-
        rhashtable_destroy(&macsec->sci_hash);
-
        mutex_destroy(&macsec->lock);
-
        kfree(macsec);
 }
@@ -656,7 +656,15 @@ void lan966x_stats_get(struct net_device *dev,
        stats->rx_dropped = dev->stats.rx_dropped +
                            lan966x->stats[idx + SYS_COUNT_RX_LONG] +
                            lan966x->stats[idx + SYS_COUNT_DR_LOCAL] +
-                           lan966x->stats[idx + SYS_COUNT_DR_TAIL];
+                           lan966x->stats[idx + SYS_COUNT_DR_TAIL] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_0] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_1] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_2] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_3] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_4] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_5] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_6] +
+                           lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_7];
 
        for (i = 0; i < LAN966X_NUM_TC; i++) {
                stats->rx_dropped +=
@@ -716,16 +716,26 @@ static u64 nfp_net_pf_get_app_cap(struct nfp_pf *pf)
        return val;
 }
 
-static int nfp_pf_cfg_hwinfo(struct nfp_pf *pf, bool sp_indiff)
+static void nfp_pf_cfg_hwinfo(struct nfp_pf *pf)
 {
        struct nfp_nsp *nsp;
        char hwinfo[32];
+       bool sp_indiff;
        int err;
 
        nsp = nfp_nsp_open(pf->cpp);
        if (IS_ERR(nsp))
-               return PTR_ERR(nsp);
+               return;
+
+       if (!nfp_nsp_has_hwinfo_set(nsp))
+               goto end;
 
+       sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) ||
+                   (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF);
+
+       /* No need to clean `sp_indiff` in driver, management firmware
+        * will do it when application firmware is unloaded.
+        */
        snprintf(hwinfo, sizeof(hwinfo), "sp_indiff=%d", sp_indiff);
        err = nfp_nsp_hwinfo_set(nsp, hwinfo, sizeof(hwinfo));
        /* Not a fatal error, no need to return error to stop driver from loading */
@@ -739,21 +749,8 @@ static int nfp_pf_cfg_hwinfo(struct nfp_pf *pf, bool sp_indiff)
                pf->eth_tbl = __nfp_eth_read_ports(pf->cpp, nsp);
        }
 
+end:
        nfp_nsp_close(nsp);
-       return 0;
-}
-
-static int nfp_pf_nsp_cfg(struct nfp_pf *pf)
-{
-       bool sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) ||
-                        (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF);
-
-       return nfp_pf_cfg_hwinfo(pf, sp_indiff);
-}
-
-static void nfp_pf_nsp_clean(struct nfp_pf *pf)
-{
-       nfp_pf_cfg_hwinfo(pf, false);
 }
 
 static int nfp_pci_probe(struct pci_dev *pdev,
@@ -856,13 +853,11 @@ static int nfp_pci_probe(struct pci_dev *pdev,
                goto err_fw_unload;
        }
 
-       err = nfp_pf_nsp_cfg(pf);
-       if (err)
-               goto err_fw_unload;
+       nfp_pf_cfg_hwinfo(pf);
 
        err = nfp_net_pci_probe(pf);
        if (err)
-               goto err_nsp_clean;
+               goto err_fw_unload;
 
        err = nfp_hwmon_register(pf);
        if (err) {
@@ -874,8 +869,6 @@ static int nfp_pci_probe(struct pci_dev *pdev,
 
 err_net_remove:
        nfp_net_pci_remove(pf);
-err_nsp_clean:
-       nfp_pf_nsp_clean(pf);
 err_fw_unload:
        kfree(pf->rtbl);
        nfp_mip_close(pf->mip);
@@ -915,7 +908,6 @@ static void __nfp_pci_shutdown(struct pci_dev *pdev, bool unload_fw)
 
        nfp_net_pci_remove(pf);
 
-       nfp_pf_nsp_clean(pf);
        vfree(pf->dumpspec);
        kfree(pf->rtbl);
        nfp_mip_close(pf->mip);
@@ -1961,11 +1961,13 @@ static int netsec_register_mdio(struct netsec_priv *priv, u32 phy_addr)
                ret = PTR_ERR(priv->phydev);
                dev_err(priv->dev, "get_phy_device err(%d)\n", ret);
                priv->phydev = NULL;
+               mdiobus_unregister(bus);
                return -ENODEV;
        }
 
        ret = phy_device_register(priv->phydev);
        if (ret) {
+               phy_device_free(priv->phydev);
                mdiobus_unregister(bus);
                dev_err(priv->dev,
                        "phy_device_register err(%d)\n", ret);
@@ -54,16 +54,19 @@ static int virtual_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
        mutex_lock(&nci_mutex);
        if (state != virtual_ncidev_enabled) {
                mutex_unlock(&nci_mutex);
+               kfree_skb(skb);
                return 0;
        }
 
        if (send_buff) {
                mutex_unlock(&nci_mutex);
+               kfree_skb(skb);
                return -1;
        }
        send_buff = skb_copy(skb, GFP_KERNEL);
        mutex_unlock(&nci_mutex);
        wake_up_interruptible(&wq);
+       consume_skb(skb);
 
        return 0;
 }
@@ -27,6 +27,7 @@
 #include <linux/bpfptr.h>
 #include <linux/btf.h>
 #include <linux/rcupdate_trace.h>
+#include <linux/init.h>
 
 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -970,6 +971,8 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
                                          struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
 int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
+int __init bpf_arch_init_dispatcher_early(void *ip);
+
 #define BPF_DISPATCHER_INIT(_name) {                           \
        .mutex = __MUTEX_INITIALIZER(_name.mutex),              \
        .func = &_name##_func,                                  \
@@ -983,6 +986,13 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
        },                                                      \
 }
 
+#define BPF_DISPATCHER_INIT_CALL(_name)                                \
+       static int __init _name##_init(void)                    \
+       {                                                       \
+               return bpf_arch_init_dispatcher_early(_name##_func); \
+       }                                                       \
+       early_initcall(_name##_init)
+
 #ifdef CONFIG_X86_64
 #define BPF_DISPATCHER_ATTRIBUTES __attribute__((patchable_function_entry(5)))
 #else
@@ -1000,7 +1010,9 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
        }                                                       \
        EXPORT_SYMBOL(bpf_dispatcher_##name##_func);            \
        struct bpf_dispatcher bpf_dispatcher_##name =           \
-               BPF_DISPATCHER_INIT(bpf_dispatcher_##name);
+               BPF_DISPATCHER_INIT(bpf_dispatcher_##name);     \
+       BPF_DISPATCHER_INIT_CALL(bpf_dispatcher_##name);
 
 #define DECLARE_BPF_DISPATCHER(name)                           \
        unsigned int bpf_dispatcher_##name##_func(              \
                const void *ctx,                                \
@@ -2585,7 +2585,7 @@ static inline gfp_t gfp_any(void)
 
 static inline gfp_t gfp_memcg_charge(void)
 {
-       return in_softirq() ? GFP_NOWAIT : GFP_KERNEL;
+       return in_softirq() ? GFP_ATOMIC : GFP_KERNEL;
 }
 
 static inline long sock_rcvtimeo(const struct sock *sk, bool noblock)
@@ -4436,6 +4436,11 @@ static int btf_func_proto_check(struct btf_verifier_env *env,
                return -EINVAL;
        }
 
+       if (btf_type_is_resolve_source_only(ret_type)) {
+               btf_verifier_log_type(env, t, "Invalid return type");
+               return -EINVAL;
+       }
+
        if (btf_type_needs_resolve(ret_type) &&
            !env_type_is_resolved(env, ret_type_id)) {
                err = btf_resolve(env, ret_type, ret_type_id);
@@ -4,6 +4,7 @@
 #include <linux/hash.h>
 #include <linux/bpf.h>
 #include <linux/filter.h>
+#include <linux/init.h>
 
 /* The BPF dispatcher is a multiway branch code generator. The
  * dispatcher is a mechanism to avoid the performance penalty of an
@@ -90,6 +91,11 @@ int __weak arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int n
        return -ENOTSUPP;
 }
 
+int __weak __init bpf_arch_init_dispatcher_early(void *ip)
+{
+       return -ENOTSUPP;
+}
+
 static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image, void *buf)
 {
        s64 ips[BPF_DISPATCHER_MAX] = {}, *ipsp = &ips[0];
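Note: dispatcher.c installs a __weak fallback so architectures without a patchable dispatcher entry simply report -ENOTSUPP, while x86 links in the strong override shown earlier. The linker-level idea can be sketched in plain C (illustrative; userspace errno differs from the kernel's ENOTSUPP):

    #include <stdio.h>
    #include <errno.h>

    /* Weak generic fallback: replaced at link time whenever a strong
     * definition of the same symbol is provided elsewhere.
     */
    __attribute__((weak)) int arch_init_dispatcher_early(void *ip)
    {
        (void)ip;
        return -EOPNOTSUPP; /* userspace stand-in for kernel ENOTSUPP */
    }

    int main(void)
    {
        /* With no strong override linked in, the weak stub runs. */
        printf("init -> %d\n", arch_init_dispatcher_early(NULL));
        return 0;
    }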
@@ -418,14 +418,17 @@ static void drain_mem_cache(struct bpf_mem_cache *c)
        /* No progs are using this bpf_mem_cache, but htab_map_free() called
         * bpf_mem_cache_free() for all remaining elements and they can be in
         * free_by_rcu or in waiting_for_gp lists, so drain those lists now.
+        *
+        * Except for waiting_for_gp list, there are no concurrent operations
+        * on these lists, so it is safe to use __llist_del_all().
         */
        llist_for_each_safe(llnode, t, __llist_del_all(&c->free_by_rcu))
                free_one(c, llnode);
        llist_for_each_safe(llnode, t, llist_del_all(&c->waiting_for_gp))
                free_one(c, llnode);
-       llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist))
+       llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist))
                free_one(c, llnode);
-       llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist_extra))
+       llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist_extra))
                free_one(c, llnode);
 }
 
@@ -493,6 +496,16 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
                rcu_in_progress = 0;
                for_each_possible_cpu(cpu) {
                        c = per_cpu_ptr(ma->cache, cpu);
+                       /*
+                        * refill_work may be unfinished for PREEMPT_RT kernel
+                        * in which irq work is invoked in a per-CPU RT thread.
+                        * It is also possible for kernel with
+                        * arch_irq_work_has_interrupt() being false and irq
+                        * work is invoked in timer interrupt. So waiting for
+                        * the completion of irq work to ease the handling of
+                        * concurrency.
+                        */
+                       irq_work_sync(&c->refill_work);
                        drain_mem_cache(c);
                        rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
                }
@@ -507,6 +520,7 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
                        cc = per_cpu_ptr(ma->caches, cpu);
                        for (i = 0; i < NUM_CACHES; i++) {
                                c = &cc->cache[i];
+                               irq_work_sync(&c->refill_work);
                                drain_mem_cache(c);
                                rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
                        }
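Note: the allocator fix waits for any in-flight refill worker before draining the per-CPU caches, because on PREEMPT_RT (or where IRQ work runs from a timer interrupt) the worker may still be mid-refill when destroy runs. A userspace analog of the "sync with the worker, then drain" ordering (pthread sketch, illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static int cache[64];
    static int cache_cnt;
    static pthread_t refill_thread;

    static void *refill_work(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 64; i++)
            cache[cache_cnt++] = i; /* concurrent refill */
        return NULL;
    }

    int main(void)
    {
        pthread_create(&refill_thread, NULL, refill_work, NULL);

        /* Destroy path: first wait for the busy refill work to finish
         * (the analog of irq_work_sync()), only then drain the cache.
         * Draining first would race with the refill still in progress.
         */
        pthread_join(refill_thread, NULL);
        cache_cnt = 0; /* drain_mem_cache() analog */

        printf("drained safely\n");
        return 0;
    }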
@@ -6946,6 +6946,7 @@ static int set_user_ringbuf_callback_state(struct bpf_verifier_env *env,
        __mark_reg_not_init(env, &callee->regs[BPF_REG_5]);
 
        callee->in_callback_fn = true;
+       callee->callback_ret_range = tnum_range(0, 1);
        return 0;
 }
 
@@ -117,6 +117,7 @@ static int net_assign_generic(struct net *net, unsigned int id, void *data)
 
 static int ops_init(const struct pernet_operations *ops, struct net *net)
 {
+       struct net_generic *ng;
        int err = -ENOMEM;
        void *data = NULL;
 
@@ -135,7 +136,13 @@ static int ops_init(const struct pernet_operations *ops, struct net *net)
        if (!err)
                return 0;
 
+       if (ops->id && ops->size) {
 cleanup:
+               ng = rcu_dereference_protected(net->gen,
+                                              lockdep_is_held(&pernet_ops_rwsem));
+               ng->ptr[*ops->id] = NULL;
+       }
+
        kfree(data);
 
 out:
@@ -64,7 +64,7 @@ static int pse_prepare_data(const struct ethnl_req_info *req_base,
        if (ret < 0)
                return ret;
 
-       ret = pse_get_pse_attributes(dev, info->extack, data);
+       ret = pse_get_pse_attributes(dev, info ? info->extack : NULL, data);
 
        ethnl_ops_complete(dev);
 
@@ -2192,7 +2192,8 @@ void tcp_enter_loss(struct sock *sk)
  */
 static bool tcp_check_sack_reneging(struct sock *sk, int flag)
 {
-       if (flag & FLAG_SACK_RENEGING) {
+       if (flag & FLAG_SACK_RENEGING &&
+           flag & FLAG_SND_UNA_ADVANCED) {
                struct tcp_sock *tp = tcp_sk(sk);
                unsigned long delay = max(usecs_to_jiffies(tp->srtt_us >> 4),
                                          msecs_to_jiffies(10));
@@ -1874,11 +1874,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
        __skb_push(skb, hdrlen);
 
 no_coalesce:
+       limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+
        /* Only socket owner can try to collapse/prune rx queues
         * to reduce memory overhead, so add a little headroom here.
         * Few sockets backlog are possibly concurrently non empty.
         */
-       limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+       limit += 64 * 1024;
 
        if (unlikely(sk_add_backlog(sk, skb, limit))) {
                bh_unlock_sock(sk);
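Note: the tcp_add_backlog() change avoids signed overflow. sk_rcvbuf and sk_sndbuf are ints that can each sit near INT_MAX, so summing them as ints is undefined behavior; casting to u32 (and halving sndbuf) keeps the arithmetic well defined. A small demonstration of the difference (illustrative):

    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    int main(void)
    {
        int rcvbuf = INT_MAX - 10;
        int sndbuf = INT_MAX - 10;

        /* Well-defined: unsigned arithmetic wraps instead of overflowing. */
        uint32_t limit = (uint32_t)rcvbuf + (uint32_t)(sndbuf >> 1);
        limit += 64 * 1024;

        printf("limit = %u\n", limit);

        /* By contrast, computing `rcvbuf + sndbuf` as ints would be
         * signed overflow: undefined behavior, which UBSAN flagged in
         * the old code.
         */
        return 0;
    }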
@ -162,7 +162,8 @@ static void kcm_rcv_ready(struct kcm_sock *kcm)
|
||||||
/* Buffer limit is okay now, add to ready list */
|
/* Buffer limit is okay now, add to ready list */
|
||||||
list_add_tail(&kcm->wait_rx_list,
|
list_add_tail(&kcm->wait_rx_list,
|
||||||
&kcm->mux->kcm_rx_waiters);
|
&kcm->mux->kcm_rx_waiters);
|
||||||
kcm->rx_wait = true;
|
/* paired with lockless reads in kcm_rfree() */
|
||||||
|
WRITE_ONCE(kcm->rx_wait, true);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void kcm_rfree(struct sk_buff *skb)
|
static void kcm_rfree(struct sk_buff *skb)
|
||||||
|
@ -178,7 +179,7 @@ static void kcm_rfree(struct sk_buff *skb)
|
||||||
/* For reading rx_wait and rx_psock without holding lock */
|
/* For reading rx_wait and rx_psock without holding lock */
|
||||||
smp_mb__after_atomic();
|
smp_mb__after_atomic();
|
||||||
|
|
||||||
if (!kcm->rx_wait && !kcm->rx_psock &&
|
if (!READ_ONCE(kcm->rx_wait) && !READ_ONCE(kcm->rx_psock) &&
|
||||||
sk_rmem_alloc_get(sk) < sk->sk_rcvlowat) {
|
sk_rmem_alloc_get(sk) < sk->sk_rcvlowat) {
|
||||||
spin_lock_bh(&mux->rx_lock);
|
spin_lock_bh(&mux->rx_lock);
|
||||||
kcm_rcv_ready(kcm);
|
kcm_rcv_ready(kcm);
|
||||||
|
@@ -237,7 +238,8 @@ try_again:
 	if (kcm_queue_rcv_skb(&kcm->sk, skb)) {
 		/* Should mean socket buffer full */
 		list_del(&kcm->wait_rx_list);
-		kcm->rx_wait = false;
+		/* paired with lockless reads in kcm_rfree() */
+		WRITE_ONCE(kcm->rx_wait, false);
 
 		/* Commit rx_wait to read in kcm_free */
 		smp_wmb();

@@ -280,10 +282,12 @@ static struct kcm_sock *reserve_rx_kcm(struct kcm_psock *psock,
 	kcm = list_first_entry(&mux->kcm_rx_waiters,
 			       struct kcm_sock, wait_rx_list);
 	list_del(&kcm->wait_rx_list);
-	kcm->rx_wait = false;
+	/* paired with lockless reads in kcm_rfree() */
+	WRITE_ONCE(kcm->rx_wait, false);
 
 	psock->rx_kcm = kcm;
-	kcm->rx_psock = psock;
+	/* paired with lockless reads in kcm_rfree() */
+	WRITE_ONCE(kcm->rx_psock, psock);
 
 	spin_unlock_bh(&mux->rx_lock);
 

@@ -310,7 +314,8 @@ static void unreserve_rx_kcm(struct kcm_psock *psock,
 	spin_lock_bh(&mux->rx_lock);
 
 	psock->rx_kcm = NULL;
-	kcm->rx_psock = NULL;
+	/* paired with lockless reads in kcm_rfree() */
+	WRITE_ONCE(kcm->rx_psock, NULL);
 
 	/* Commit kcm->rx_psock before sk_rmem_alloc_get to sync with
 	 * kcm_rfree

@@ -1240,7 +1245,8 @@ static void kcm_recv_disable(struct kcm_sock *kcm)
 	if (!kcm->rx_psock) {
 		if (kcm->rx_wait) {
 			list_del(&kcm->wait_rx_list);
-			kcm->rx_wait = false;
+			/* paired with lockless reads in kcm_rfree() */
+			WRITE_ONCE(kcm->rx_wait, false);
 		}
 
 		requeue_rx_msgs(mux, &kcm->sk.sk_receive_queue);

@@ -1793,7 +1799,8 @@ static void kcm_done(struct kcm_sock *kcm)
 
 	if (kcm->rx_wait) {
 		list_del(&kcm->wait_rx_list);
-		kcm->rx_wait = false;
+		/* paired with lockless reads in kcm_rfree() */
+		WRITE_ONCE(kcm->rx_wait, false);
 	}
 	/* Move any pending receive messages to other kcm sockets */
 	requeue_rx_msgs(mux, &sk->sk_receive_queue);

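All of the kcm hunks apply one pattern: rx_wait and rx_psock are written under mux->rx_lock but read locklessly in kcm_rfree(), so every access on either side is annotated with WRITE_ONCE()/READ_ONCE() to rule out torn or compiler-cached accesses and to document the intentional race (for KCSAN among others). A minimal userspace sketch of the pattern, with simplified macro definitions standing in for the kernel's richer versions:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel macros: force a single volatile
 * access so the compiler can neither cache nor split it. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)	   (*(volatile const __typeof__(x) *)&(x))

static pthread_mutex_t rx_lock = PTHREAD_MUTEX_INITIALIZER;
static bool rx_wait;

static void *writer(void *arg)
{
	pthread_mutex_lock(&rx_lock);	/* writes stay lock-protected */
	WRITE_ONCE(rx_wait, true);	/* paired with the lockless read */
	pthread_mutex_unlock(&rx_lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, writer, NULL);
	while (!READ_ONCE(rx_wait))	/* lockless, like kcm_rfree() */
		;			/* spin until the flag is observed */
	pthread_join(t, NULL);
	puts("flag observed");
	return 0;
}

Build with -pthread; the point is the annotation discipline, not the spin loop.
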
@@ -450,12 +450,19 @@ static void tipc_conn_data_ready(struct sock *sk)
 static void tipc_topsrv_accept(struct work_struct *work)
 {
 	struct tipc_topsrv *srv = container_of(work, struct tipc_topsrv, awork);
-	struct socket *lsock = srv->listener;
-	struct socket *newsock;
+	struct socket *newsock, *lsock;
 	struct tipc_conn *con;
 	struct sock *newsk;
 	int ret;
 
+	spin_lock_bh(&srv->idr_lock);
+	if (!srv->listener) {
+		spin_unlock_bh(&srv->idr_lock);
+		return;
+	}
+	lsock = srv->listener;
+	spin_unlock_bh(&srv->idr_lock);
+
 	while (1) {
 		ret = kernel_accept(lsock, &newsock, O_NONBLOCK);
 		if (ret < 0)

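The null-ptr-deref came from sampling srv->listener in the initializer, before any locking: tipc_topsrv_stop() can clear the pointer concurrently. The fix re-checks and snapshots it under idr_lock before entering the accept loop. A toy model of the check-then-snapshot pattern; the names are illustrative, not the tipc code:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t idr_lock = PTHREAD_MUTEX_INITIALIZER;
static const char *listener = "lsock";	/* stands in for srv->listener */

static void accept_work(void)
{
	const char *lsock;

	pthread_mutex_lock(&idr_lock);
	if (!listener) {			/* teardown already ran */
		pthread_mutex_unlock(&idr_lock);
		return;
	}
	lsock = listener;			/* private snapshot */
	pthread_mutex_unlock(&idr_lock);

	printf("accepting on %s\n", lsock);	/* safe against teardown */
}

int main(void)
{
	accept_work();				/* normal path */

	pthread_mutex_lock(&idr_lock);		/* teardown, as in _stop() */
	listener = NULL;
	pthread_mutex_unlock(&idr_lock);

	accept_work();				/* now returns early */
	return 0;
}
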
@@ -489,7 +496,7 @@ static void tipc_topsrv_listener_data_ready(struct sock *sk)
 
 	read_lock_bh(&sk->sk_callback_lock);
 	srv = sk->sk_user_data;
-	if (srv->listener)
+	if (srv)
 		queue_work(srv->rcv_wq, &srv->awork);
 	read_unlock_bh(&sk->sk_callback_lock);
 }

@@ -699,8 +706,9 @@ static void tipc_topsrv_stop(struct net *net)
 		__module_get(lsock->sk->sk_prot_creator->owner);
 		srv->listener = NULL;
 		spin_unlock_bh(&srv->idr_lock);
-		sock_release(lsock);
+
 		tipc_topsrv_work_stop(srv);
+		sock_release(lsock);
 		idr_destroy(&srv->conn_idr);
 		kfree(srv);
 	}

@@ -3935,6 +3935,19 @@ static struct btf_raw_test raw_tests[] = {
 	.btf_load_err = true,
 	.err_str = "Invalid type_id",
 },
+{
+	.descr = "decl_tag test #16, func proto, return type",
+	.raw_types = {
+		BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),	/* [1] */
+		BTF_VAR_ENC(NAME_TBD, 1, 0),			/* [2] */
+		BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DECL_TAG, 0, 0), 2), (-1), /* [3] */
+		BTF_FUNC_PROTO_ENC(3, 0),			/* [4] */
+		BTF_END_RAW,
+	},
+	BTF_STR_SEC("\0local\0tag1"),
+	.btf_load_err = true,
+	.err_str = "Invalid return type",
+},
 {
 	.descr = "type_tag test #1",
 	.raw_types = {

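The new raw BTF test encodes a BTF_KIND_DECL_TAG type ([3]) and then wires it up as the return type of a func_proto ([4]). A decl_tag is not a valid return type, so BTF loading must be rejected cleanly with "Invalid return type" rather than the kernel stumbling over the malformed type graph later.
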
@@ -47,14 +47,14 @@ record_sample(struct bpf_dynptr *dynptr, void *context)
 		if (status) {
 			bpf_printk("bpf_dynptr_read() failed: %d\n", status);
 			err = 1;
-			return 0;
+			return 1;
 		}
 	} else {
 		sample = bpf_dynptr_data(dynptr, 0, sizeof(*sample));
 		if (!sample) {
 			bpf_printk("Unexpectedly failed to get sample\n");
 			err = 2;
-			return 0;
+			return 1;
 		}
 		stack_sample = *sample;
 	}

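The selftest change leans on the verifier now accepting a return value of 1 from a bpf_user_ringbuf_drain() callback (previously only 0 was permitted); returning 1 stops the drain after the current sample, which is the natural reaction to an error. A hedged sketch of a minimal program using that contract; the map name, section, and attach point are illustrative, and helper availability depends on the program type:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_USER_RINGBUF);
	__uint(max_entries, 4096);
} user_rb SEC(".maps");

/* Callback contract: return 0 to keep draining, 1 to stop early. */
static long drain_one_cb(struct bpf_dynptr *dynptr, void *ctx)
{
	return 1;	/* consume this sample, then stop */
}

SEC("tracepoint/syscalls/sys_enter_getpid")
int drain_one(void *ctx)
{
	bpf_user_ringbuf_drain(&user_rb, drain_one_cb, NULL, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
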
@@ -7,6 +7,8 @@ TEST_PROGS := \
 	bond-lladdr-target.sh \
 	dev_addr_lists.sh
 
-TEST_FILES := lag_lib.sh
+TEST_FILES := \
+	lag_lib.sh \
+	net_forwarding_lib.sh
 
 include ../../../lib.mk

@@ -14,7 +14,7 @@ ALL_TESTS="
 REQUIRE_MZ=no
 NUM_NETIFS=0
 lib_dir=$(dirname "$0")
-source "$lib_dir"/../../../net/forwarding/lib.sh
+source "$lib_dir"/net_forwarding_lib.sh
 
 source "$lib_dir"/lag_lib.sh
 

@@ -0,0 +1 @@
+../../../net/forwarding/lib.sh

@@ -18,8 +18,8 @@ NUM_NETIFS=1
 REQUIRE_JQ="no"
 REQUIRE_MZ="no"
 NETIF_CREATE="no"
-lib_dir=$(dirname $0)/../../../net/forwarding
-source $lib_dir/lib.sh
+lib_dir=$(dirname "$0")
+source "$lib_dir"/lib.sh
 
 cleanup() {
 	echo "Cleaning up"

@@ -3,4 +3,8 @@
 
 TEST_PROGS := dev_addr_lists.sh
 
+TEST_FILES := \
+	lag_lib.sh \
+	net_forwarding_lib.sh
+
 include ../../../lib.mk

@@ -11,14 +11,14 @@ ALL_TESTS="
 REQUIRE_MZ=no
 NUM_NETIFS=0
 lib_dir=$(dirname "$0")
-source "$lib_dir"/../../../net/forwarding/lib.sh
+source "$lib_dir"/net_forwarding_lib.sh
 
-source "$lib_dir"/../bonding/lag_lib.sh
+source "$lib_dir"/lag_lib.sh
 
 
 destroy()
 {
-	local ifnames=(dummy0 dummy1 team0 mv0)
+	local ifnames=(dummy1 dummy2 team0 mv0)
 	local ifname
 
 	for ifname in "${ifnames[@]}"; do

@@ -0,0 +1 @@
+../bonding/lag_lib.sh

@@ -0,0 +1 @@
+../../../net/forwarding/lib.sh

@@ -70,7 +70,7 @@ endef
 run_tests: all
 ifdef building_out_of_srctree
 	@if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)" != "X" ]; then \
-		rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
+		rsync -aLq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
 	fi
 	@if [ "X$(TEST_PROGS)" != "X" ]; then \
 		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \

@@ -84,7 +84,7 @@ endif
 
 define INSTALL_SINGLE_RULE
 	$(if $(INSTALL_LIST),@mkdir -p $(INSTALL_PATH))
-	$(if $(INSTALL_LIST),rsync -a $(INSTALL_LIST) $(INSTALL_PATH)/)
+	$(if $(INSTALL_LIST),rsync -aL $(INSTALL_LIST) $(INSTALL_PATH)/)
 endef
 
 define INSTALL_RULE

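Both lib.mk hunks swap rsync's -a for -aL. The bonding and team selftests now ship their helper scripts (net_forwarding_lib.sh, lag_lib.sh) as symlinks into other test directories, and -L (--copy-links) makes rsync install the referenced file itself rather than a symlink that would dangle in the out-of-tree build or install directory.
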