Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Revert CHECKSUM_COMPLETE optimization in pskb_trim_rcsum(), I can't
    figure out why it breaks things.

 2) Fix comparison in netfilter ipset's hash_netnet4_data_equal(), it
    was basically doing "x == x", from Dave Jones.

 3) Freescale FEC driver was DMA mapping the wrong number of bytes, from
    Sebastian Siewior.

 4) Blackhole and prohibit routes in ipv6 were not doing the right thing
    because their ->input and ->output methods were not being assigned
    correctly.  Now they behave properly like their ipv4 counterparts.
    From Kamala R.

 5) Several drivers advertise the NETIF_F_FRAGLIST capability, but
    really do not support this feature and will send garbage packets if
    fed fraglist SKBs.  From Eric Dumazet.

 6) Fix long standing user triggerable BUG_ON over loopback in RDS
    protocol stack, from Venkat Venkatsubra.

 7) Several not so common code paths can potentially try to invoke
    packet scheduler actions that might be NULL without checking.  Shore
    things up by either 1) defining a method as mandatory and erroring
    on registration if that method is NULL, or 2) defining a method as
    optional and having the registration function hook up a default
    implementation when NULL is seen.  From Jamal Hadi Salim.

 8) Fix fragment detection in xen-netback driver, from Paul Durrant.

 9) Kill dangling enter_memory_pressure method in cg_proto ops, from
    Eric W Biederman.

10) SKBs that traverse namespaces should have their local_df cleared,
    from Hannes Frederic Sowa.

11) IOCB file position is not being updated by macvtap_aio_read() and
    tun_chr_aio_read().  From Zhi Yong Wu.

12) Don't free virtio_net netdev before releasing all of the NAPI
    instances.  From Andrey Vagin.

13) Procfs entry leak in xt_hashlimit, from Sergey Popovich.

14) IPv6 routes that are not cached routes should not count against the
    garbage collection limits.  We had this almost right, but were
    missing handling addrconf generated routes properly.  From Hannes
    Frederic Sowa.
15) fib{4,6}_rule_suppress() have to consider potentially seeing NULL
    route info when they are called, from Stefan Tomanek.

16) TUN and MACVTAP have had truncated packet signalling for some time,
    fix from Jason Wang.

17) Fix use after free in __udp4_lib_rcv(), from Eric Dumazet.

18) xen-netback does not interpret the NAPI budget properly for TX work,
    fix from Paul Durrant.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (132 commits)
  igb: Fix for issue where values could be too high for udelay function.
  i40e: fix null dereference
  xen-netback: fix gso_prefix check
  net: make neigh_priv_len in struct net_device 16bit instead of 8bit
  drivers: net: cpsw: fix for cpsw crash when build as modules
  xen-netback: napi: don't prematurely request a tx event
  xen-netback: napi: fix abuse of budget
  sch_tbf: use do_div() for 64-bit divide
  udp: ipv4: must add synchronization in udp_sk_rx_dst_set()
  net:fec: remove duplicate lines in comment about errata ERR006358
  Revert "8390 : Replace ei_debug with msg_enable/NETIF_MSG_* feature"
  8390 : Replace ei_debug with msg_enable/NETIF_MSG_* feature
  xen-netback: make sure skb linear area covers checksum field
  net: smc91x: Fix device tree based configuration so it's usable
  udp: ipv4: fix potential use after free in udp_v4_early_demux()
  macvtap: signal truncated packets
  tun: unbreak truncated packet signalling
  net: sched: htb: fix the calculation of quantum
  net: sched: tbf: fix the calculation of max_size
  micrel: add support for KSZ8041RNLI
  ...
This commit is contained in:
commit 4a251dd29c
@@ -4,7 +4,7 @@ This file provides information, what the device node
 for the davinci_emac interface contains.

 Required properties:
-- compatible: "ti,davinci-dm6467-emac";
+- compatible: "ti,davinci-dm6467-emac" or "ti,am3517-emac"
 - reg: Offset and length of the register set for the device
 - ti,davinci-ctrl-reg-offset: offset to control register
 - ti,davinci-ctrl-mod-reg-offset: offset to control module register

@@ -8,3 +8,7 @@ Required properties:
 Optional properties:
 - phy-device : phandle to Ethernet phy
 - local-mac-address : Ethernet mac address to use
+- reg-io-width : Mask of sizes (in bytes) of the IO accesses that
+  are supported on the device.  Valid value for SMSC LAN91c111 are
+  1, 2 or 4.  If it's omitted or invalid, the size would be 2 meaning
+  16-bit access only.

@@ -123,6 +123,16 @@ Transmission process is similar to capture as shown below.
 [shutdown]  close() --------> destruction of the transmission socket and
                               deallocation of all associated resources.

 Socket creation and destruction is also straight forward, and is done
 the same way as in capturing described in the previous paragraph:

+    int fd = socket(PF_PACKET, mode, 0);
+
+The protocol can optionally be 0 in case we only want to transmit
+via this socket, which avoids an expensive call to packet_rcv().
+In this case, you also need to bind(2) the TX_RING with sll_protocol = 0
+set. Otherwise, htons(ETH_P_ALL) or any other protocol, for example.
+
+Binding the socket to your network interface is mandatory (with zero copy) to
+know the header size of frames used in the circular buffer.
+

@@ -4466,10 +4466,8 @@ M:	Bruce Allan <bruce.w.allan@intel.com>
 M:	Carolyn Wyborny <carolyn.wyborny@intel.com>
 M:	Don Skidmore <donald.c.skidmore@intel.com>
 M:	Greg Rose <gregory.v.rose@intel.com>
 M:	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
 M:	Alex Duyck <alexander.h.duyck@intel.com>
 M:	John Ronciak <john.ronciak@intel.com>
 M:	Tushar Dave <tushar.n.dave@intel.com>
 L:	e1000-devel@lists.sourceforge.net
-W:	http://www.intel.com/support/feedback.htm
-W:	http://e1000.sourceforge.net/

@@ -4199,9 +4199,9 @@ static int bond_check_params(struct bond_params *params)
 		     (arp_ip_count < BOND_MAX_ARP_TARGETS) && arp_ip_target[i]; i++) {
-			/* not complete check, but should be good enough to
-			   catch mistakes */
-			__be32 ip = in_aton(arp_ip_target[i]);
-			if (!isdigit(arp_ip_target[i][0]) || ip == 0 ||
-			    ip == htonl(INADDR_BROADCAST)) {
+			__be32 ip;
+
+			if (!in4_pton(arp_ip_target[i], -1, (u8 *)&ip, -1, NULL) ||
+			    IS_IP_TARGET_UNUSABLE_ADDRESS(ip)) {
 				pr_warning("Warning: bad arp_ip_target module parameter (%s), ARP monitoring will not be performed\n",
 					   arp_ip_target[i]);
 				arp_interval = 0;

@@ -1635,12 +1635,12 @@ static ssize_t bonding_show_packets_per_slave(struct device *d,
 					      char *buf)
 {
 	struct bonding *bond = to_bond(d);
-	int packets_per_slave = bond->params.packets_per_slave;
+	unsigned int packets_per_slave = bond->params.packets_per_slave;

 	if (packets_per_slave > 1)
 		packets_per_slave = reciprocal_value(packets_per_slave);

-	return sprintf(buf, "%d\n", packets_per_slave);
+	return sprintf(buf, "%u\n", packets_per_slave);
 }

 static ssize_t bonding_store_packets_per_slave(struct device *d,
@@ -717,8 +717,7 @@ static int emac_open(struct net_device *dev)
 	if (netif_msg_ifup(db))
 		dev_dbg(db->dev, "enabling %s\n", dev->name);

-	if (devm_request_irq(db->dev, dev->irq, &emac_interrupt,
-			     0, dev->name, dev))
+	if (request_irq(dev->irq, &emac_interrupt, 0, dev->name, dev))
 		return -EAGAIN;

 	/* Initialize EMAC board */
@@ -774,6 +773,8 @@ static int emac_stop(struct net_device *ndev)

 	emac_shutdown(ndev);

+	free_irq(ndev->irq, ndev);
+
 	return 0;
 }

@@ -3114,6 +3114,11 @@ int bnx2x_sriov_configure(struct pci_dev *dev, int num_vfs_param)
 {
 	struct bnx2x *bp = netdev_priv(pci_get_drvdata(dev));

+	if (!IS_SRIOV(bp)) {
+		BNX2X_ERR("failed to configure SR-IOV since vfdb was not allocated. Check dmesg for errors in probe stage\n");
+		return -EINVAL;
+	}
+
 	DP(BNX2X_MSG_IOV, "bnx2x_sriov_configure called with %d, BNX2X_NR_VIRTFN(bp) was %d\n",
 	   num_vfs_param, BNX2X_NR_VIRTFN(bp));

@@ -8932,6 +8932,9 @@ static int tg3_chip_reset(struct tg3 *tp)
 	void (*write_op)(struct tg3 *, u32, u32);
 	int i, err;

+	if (!pci_device_is_present(tp->pdev))
+		return -ENODEV;
+
 	tg3_nvram_lock(tp);

 	tg3_ape_lock(tp, TG3_APE_LOCK_GRC);
@@ -11581,10 +11584,11 @@ static int tg3_close(struct net_device *dev)
 	memset(&tp->net_stats_prev, 0, sizeof(tp->net_stats_prev));
 	memset(&tp->estats_prev, 0, sizeof(tp->estats_prev));

-	tg3_power_down_prepare(tp);
-
-	tg3_carrier_off(tp);
+	if (pci_device_is_present(tp->pdev)) {
+		tg3_power_down_prepare(tp);
+
+		tg3_carrier_off(tp);
+	}
 	return 0;
 }

@@ -16499,6 +16503,9 @@ static int tg3_get_invariants(struct tg3 *tp, const struct pci_device_id *ent)
 	/* Clear this out for sanity. */
 	tw32(TG3PCI_MEM_WIN_BASE_ADDR, 0);

+	/* Clear TG3PCI_REG_BASE_ADDR to prevent hangs. */
+	tw32(TG3PCI_REG_BASE_ADDR, 0);
+
 	pci_read_config_dword(tp->pdev, TG3PCI_PCISTATE,
 			      &pci_state_reg);
 	if ((pci_state_reg & PCISTATE_CONV_PCI_MODE) == 0 &&
@@ -17726,10 +17733,12 @@ static int tg3_suspend(struct device *device)
 	struct pci_dev *pdev = to_pci_dev(device);
 	struct net_device *dev = pci_get_drvdata(pdev);
 	struct tg3 *tp = netdev_priv(dev);
-	int err;
+	int err = 0;
+
+	rtnl_lock();

 	if (!netif_running(dev))
-		return 0;
+		goto unlock;

 	tg3_reset_task_cancel(tp);
 	tg3_phy_stop(tp);
@@ -17771,6 +17780,8 @@ out:
 		tg3_phy_start(tp);
 	}

+unlock:
+	rtnl_unlock();
 	return err;
 }

@@ -17779,10 +17790,12 @@ static int tg3_resume(struct device *device)
 	struct pci_dev *pdev = to_pci_dev(device);
 	struct net_device *dev = pci_get_drvdata(pdev);
 	struct tg3 *tp = netdev_priv(dev);
-	int err;
+	int err = 0;
+
+	rtnl_lock();

 	if (!netif_running(dev))
-		return 0;
+		goto unlock;

 	netif_device_attach(dev);

@@ -17806,6 +17819,8 @@ out:
 	if (!err)
 		tg3_phy_start(tp);

+unlock:
+	rtnl_unlock();
 	return err;
 }
 #endif /* CONFIG_PM_SLEEP */

@@ -49,13 +49,15 @@
 #include <asm/io.h>
 #include "cxgb4_uld.h"

-#define FW_VERSION_MAJOR 1
-#define FW_VERSION_MINOR 4
-#define FW_VERSION_MICRO 0
+#define T4FW_VERSION_MAJOR 0x01
+#define T4FW_VERSION_MINOR 0x06
+#define T4FW_VERSION_MICRO 0x18
+#define T4FW_VERSION_BUILD 0x00

-#define FW_VERSION_MAJOR_T5 0
-#define FW_VERSION_MINOR_T5 0
-#define FW_VERSION_MICRO_T5 0
+#define T5FW_VERSION_MAJOR 0x01
+#define T5FW_VERSION_MINOR 0x08
+#define T5FW_VERSION_MICRO 0x1C
+#define T5FW_VERSION_BUILD 0x00

 #define CH_WARN(adap, fmt, ...) dev_warn(adap->pdev_dev, fmt, ## __VA_ARGS__)

@@ -240,6 +242,26 @@ struct pci_params {
 	unsigned char width;
 };

+#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
+#define CHELSIO_CHIP_FPGA          0x100
+#define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf)
+#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
+
+#define CHELSIO_T4		0x4
+#define CHELSIO_T5		0x5
+
+enum chip_type {
+	T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
+	T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
+	T4_FIRST_REV	= T4_A1,
+	T4_LAST_REV	= T4_A2,
+
+	T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
+	T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1),
+	T5_FIRST_REV	= T5_A0,
+	T5_LAST_REV	= T5_A1,
+};
+
 struct adapter_params {
 	struct tp_params  tp;
 	struct vpd_params vpd;
@@ -259,7 +281,7 @@ struct adapter_params {

 	unsigned char nports;		/* # of ethernet ports */
 	unsigned char portvec;
-	unsigned char rev;		/* chip revision */
+	enum chip_type chip;		/* chip code */
 	unsigned char offload;

 	unsigned char bypass;

@@ -267,6 +289,23 @@ struct adapter_params {
 	unsigned int ofldq_wr_cred;
 };

+#include "t4fw_api.h"
+
+#define FW_VERSION(chip) ( \
+		FW_HDR_FW_VER_MAJOR_GET(chip##FW_VERSION_MAJOR) | \
+		FW_HDR_FW_VER_MINOR_GET(chip##FW_VERSION_MINOR) | \
+		FW_HDR_FW_VER_MICRO_GET(chip##FW_VERSION_MICRO) | \
+		FW_HDR_FW_VER_BUILD_GET(chip##FW_VERSION_BUILD))
+#define FW_INTFVER(chip, intf) (FW_HDR_INTFVER_##intf)
+
+struct fw_info {
+	u8 chip;
+	char *fs_name;
+	char *fw_mod_name;
+	struct fw_hdr fw_hdr;
+};
+
+
 struct trace_params {
 	u32 data[TRACE_LEN / 4];
 	u32 mask[TRACE_LEN / 4];
@@ -512,25 +551,6 @@ struct sge {

 struct l2t_data;

-#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
-#define CHELSIO_CHIP_VERSION(code) ((code) >> 4)
-#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
-
-#define CHELSIO_T4		0x4
-#define CHELSIO_T5		0x5
-
-enum chip_type {
-	T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0),
-	T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
-	T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
-	T4_FIRST_REV	= T4_A1,
-	T4_LAST_REV	= T4_A3,
-
-	T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
-	T5_FIRST_REV	= T5_A1,
-	T5_LAST_REV	= T5_A1,
-};
-
 #ifdef CONFIG_PCI_IOV

 /* T4 supports SRIOV on PF0-3 and T5 on PF0-7. However, the Serial

@@ -715,12 +735,12 @@ enum {

 static inline int is_t5(enum chip_type chip)
 {
-	return (chip >= T5_FIRST_REV && chip <= T5_LAST_REV);
+	return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T5;
 }

 static inline int is_t4(enum chip_type chip)
 {
-	return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV);
+	return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4;
 }

 static inline u32 t4_read_reg(struct adapter *adap, u32 reg_addr)
@@ -900,7 +920,11 @@ int get_vpd_params(struct adapter *adapter, struct vpd_params *p);
 int t4_load_fw(struct adapter *adapter, const u8 *fw_data, unsigned int size);
 unsigned int t4_flash_cfg_addr(struct adapter *adapter);
 int t4_load_cfg(struct adapter *adapter, const u8 *cfg_data, unsigned int size);
-int t4_check_fw_version(struct adapter *adapter);
+int t4_get_fw_version(struct adapter *adapter, u32 *vers);
+int t4_get_tp_version(struct adapter *adapter, u32 *vers);
+int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+	       const u8 *fw_data, unsigned int fw_size,
+	       struct fw_hdr *card_fw, enum dev_state state, int *reset);
 int t4_prep_adapter(struct adapter *adapter);
 int t4_port_init(struct adapter *adap, int mbox, int pf, int vf);
 void t4_fatal_err(struct adapter *adapter);

@@ -276,9 +276,9 @@ static DEFINE_PCI_DEVICE_TABLE(cxgb4_pci_tbl) = {
 	{ 0, }
 };

-#define FW_FNAME "cxgb4/t4fw.bin"
+#define FW4_FNAME "cxgb4/t4fw.bin"
 #define FW5_FNAME "cxgb4/t5fw.bin"
-#define FW_CFNAME "cxgb4/t4-config.txt"
+#define FW4_CFNAME "cxgb4/t4-config.txt"
 #define FW5_CFNAME "cxgb4/t5-config.txt"

 MODULE_DESCRIPTION(DRV_DESC);
@@ -286,7 +286,7 @@ MODULE_AUTHOR("Chelsio Communications");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_VERSION(DRV_VERSION);
 MODULE_DEVICE_TABLE(pci, cxgb4_pci_tbl);
-MODULE_FIRMWARE(FW_FNAME);
+MODULE_FIRMWARE(FW4_FNAME);
 MODULE_FIRMWARE(FW5_FNAME);

 /*

@@ -1070,72 +1070,6 @@ freeout:	t4_free_sge_resources(adap);
 	return 0;
 }

-/*
- * Returns 0 if new FW was successfully loaded, a positive errno if a load was
- * started but failed, and a negative errno if flash load couldn't start.
- */
-static int upgrade_fw(struct adapter *adap)
-{
-	int ret;
-	u32 vers, exp_major;
-	const struct fw_hdr *hdr;
-	const struct firmware *fw;
-	struct device *dev = adap->pdev_dev;
-	char *fw_file_name;
-
-	switch (CHELSIO_CHIP_VERSION(adap->chip)) {
-	case CHELSIO_T4:
-		fw_file_name = FW_FNAME;
-		exp_major = FW_VERSION_MAJOR;
-		break;
-	case CHELSIO_T5:
-		fw_file_name = FW5_FNAME;
-		exp_major = FW_VERSION_MAJOR_T5;
-		break;
-	default:
-		dev_err(dev, "Unsupported chip type, %x\n", adap->chip);
-		return -EINVAL;
-	}
-
-	ret = request_firmware(&fw, fw_file_name, dev);
-	if (ret < 0) {
-		dev_err(dev, "unable to load firmware image %s, error %d\n",
-			fw_file_name, ret);
-		return ret;
-	}
-
-	hdr = (const struct fw_hdr *)fw->data;
-	vers = ntohl(hdr->fw_ver);
-	if (FW_HDR_FW_VER_MAJOR_GET(vers) != exp_major) {
-		ret = -EINVAL;	/* wrong major version, won't do */
-		goto out;
-	}
-
-	/*
-	 * If the flash FW is unusable or we found something newer, load it.
-	 */
-	if (FW_HDR_FW_VER_MAJOR_GET(adap->params.fw_vers) != exp_major ||
-	    vers > adap->params.fw_vers) {
-		dev_info(dev, "upgrading firmware ...\n");
-		ret = t4_fw_upgrade(adap, adap->mbox, fw->data, fw->size,
-				    /*force=*/false);
-		if (!ret)
-			dev_info(dev,
-				 "firmware upgraded to version %pI4 from %s\n",
-				 &hdr->fw_ver, fw_file_name);
-		else
-			dev_err(dev, "firmware upgrade failed! err=%d\n", -ret);
-	} else {
-		/*
-		 * Tell our caller that we didn't upgrade the firmware.
-		 */
-		ret = -EINVAL;
-	}
-
-out:	release_firmware(fw);
-	return ret;
-}
-
 /*
  * Allocate a chunk of memory using kmalloc or, if that fails, vmalloc.
  * The allocated memory is cleared.
@@ -1415,7 +1349,7 @@ static int get_sset_count(struct net_device *dev, int sset)
 static int get_regs_len(struct net_device *dev)
 {
 	struct adapter *adap = netdev2adap(dev);
-	if (is_t4(adap->chip))
+	if (is_t4(adap->params.chip))
 		return T4_REGMAP_SIZE;
 	else
 		return T5_REGMAP_SIZE;
@@ -1499,7 +1433,7 @@ static void get_stats(struct net_device *dev, struct ethtool_stats *stats,
 	data += sizeof(struct port_stats) / sizeof(u64);
 	collect_sge_port_stats(adapter, pi, (struct queue_port_stats *)data);
 	data += sizeof(struct queue_port_stats) / sizeof(u64);
-	if (!is_t4(adapter->chip)) {
+	if (!is_t4(adapter->params.chip)) {
 		t4_write_reg(adapter, SGE_STAT_CFG, STATSOURCE_T5(7));
 		val1 = t4_read_reg(adapter, SGE_STAT_TOTAL);
 		val2 = t4_read_reg(adapter, SGE_STAT_MATCH);
@@ -1521,8 +1455,8 @@ static void get_stats(struct net_device *dev, struct ethtool_stats *stats,
  */
 static inline unsigned int mk_adap_vers(const struct adapter *ap)
 {
-	return CHELSIO_CHIP_VERSION(ap->chip) |
-		(CHELSIO_CHIP_RELEASE(ap->chip) << 10) | (1 << 16);
+	return CHELSIO_CHIP_VERSION(ap->params.chip) |
+		(CHELSIO_CHIP_RELEASE(ap->params.chip) << 10) | (1 << 16);
 }

 static void reg_block_dump(struct adapter *ap, void *buf, unsigned int start,
@@ -2189,7 +2123,7 @@ static void get_regs(struct net_device *dev, struct ethtool_regs *regs,
 	static const unsigned int *reg_ranges;
 	int arr_size = 0, buf_size = 0;

-	if (is_t4(ap->chip)) {
+	if (is_t4(ap->params.chip)) {
 		reg_ranges = &t4_reg_ranges[0];
 		arr_size = ARRAY_SIZE(t4_reg_ranges);
 		buf_size = T4_REGMAP_SIZE;
@@ -2967,7 +2901,7 @@ static int setup_debugfs(struct adapter *adap)
 		size = t4_read_reg(adap, MA_EDRAM1_BAR);
 		add_debugfs_mem(adap, "edc1", MEM_EDC1, EDRAM_SIZE_GET(size));
 	}
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		size = t4_read_reg(adap, MA_EXT_MEMORY_BAR);
 		if (i & EXT_MEM_ENABLE)
 			add_debugfs_mem(adap, "mc", MEM_MC,
@@ -3419,7 +3353,7 @@ unsigned int cxgb4_dbfifo_count(const struct net_device *dev, int lpfifo)

 	v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
 	v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		lp_count = G_LP_COUNT(v1);
 		hp_count = G_HP_COUNT(v1);
 	} else {
@@ -3588,7 +3522,7 @@ static void drain_db_fifo(struct adapter *adap, int usecs)
 	do {
 		v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
 		v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
-		if (is_t4(adap->chip)) {
+		if (is_t4(adap->params.chip)) {
 			lp_count = G_LP_COUNT(v1);
 			hp_count = G_HP_COUNT(v1);
 		} else {
@@ -3708,7 +3642,7 @@ static void process_db_drop(struct work_struct *work)

 	adap = container_of(work, struct adapter, db_drop_task);

-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		disable_dbs(adap);
 		notify_rdma_uld(adap, CXGB4_CONTROL_DB_DROP);
 		drain_db_fifo(adap, 1);
@@ -3753,7 +3687,7 @@ static void process_db_drop(struct work_struct *work)

 void t4_db_full(struct adapter *adap)
 {
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		t4_set_reg_field(adap, SGE_INT_ENABLE3,
 				 DBFIFO_HP_INT | DBFIFO_LP_INT, 0);
 		queue_work(workq, &adap->db_full_task);
@@ -3762,7 +3696,7 @@ void t4_db_full(struct adapter *adap)

 void t4_db_dropped(struct adapter *adap)
 {
-	if (is_t4(adap->chip))
+	if (is_t4(adap->params.chip))
 		queue_work(workq, &adap->db_drop_task);
 }

@@ -3789,7 +3723,7 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
 	lli.nchan = adap->params.nports;
 	lli.nports = adap->params.nports;
 	lli.wr_cred = adap->params.ofldq_wr_cred;
-	lli.adapter_type = adap->params.rev;
+	lli.adapter_type = adap->params.chip;
 	lli.iscsi_iolen = MAXRXDATA_GET(t4_read_reg(adap, TP_PARA_REG2));
 	lli.udb_density = 1 << QUEUESPERPAGEPF0_GET(
 			t4_read_reg(adap, SGE_EGRESS_QUEUES_PER_PAGE_PF) >>
@@ -4483,7 +4417,7 @@ static void setup_memwin(struct adapter *adap)
 	u32 bar0, mem_win0_base, mem_win1_base, mem_win2_base;

 	bar0 = pci_resource_start(adap->pdev, 0);  /* truncation intentional */
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		mem_win0_base = bar0 + MEMWIN0_BASE;
 		mem_win1_base = bar0 + MEMWIN1_BASE;
 		mem_win2_base = bar0 + MEMWIN2_BASE;

@@ -4668,8 +4602,10 @@ static int adap_init0_config(struct adapter *adapter, int reset)
 	const struct firmware *cf;
 	unsigned long mtype = 0, maddr = 0;
 	u32 finiver, finicsum, cfcsum;
-	int ret, using_flash;
+	int ret;
+	int config_issued = 0;
 	char *fw_config_file, fw_config_file_path[256];
+	char *config_name = NULL;

 	/*
 	 * Reset device if necessary.
@@ -4686,9 +4622,9 @@ static int adap_init0_config(struct adapter *adapter, int reset)
 	 * then use that.  Otherwise, use the configuration file stored
 	 * in the adapter flash ...
 	 */
-	switch (CHELSIO_CHIP_VERSION(adapter->chip)) {
+	switch (CHELSIO_CHIP_VERSION(adapter->params.chip)) {
 	case CHELSIO_T4:
-		fw_config_file = FW_CFNAME;
+		fw_config_file = FW4_CFNAME;
 		break;
 	case CHELSIO_T5:
 		fw_config_file = FW5_CFNAME;
@@ -4702,13 +4638,16 @@ static int adap_init0_config(struct adapter *adapter, int reset)

 	ret = request_firmware(&cf, fw_config_file, adapter->pdev_dev);
 	if (ret < 0) {
-		using_flash = 1;
+		config_name = "On FLASH";
 		mtype = FW_MEMTYPE_CF_FLASH;
 		maddr = t4_flash_cfg_addr(adapter);
 	} else {
 		u32 params[7], val[7];

-		using_flash = 0;
+		sprintf(fw_config_file_path,
+			"/lib/firmware/%s", fw_config_file);
+		config_name = fw_config_file_path;
+
 		if (cf->size >= FLASH_CFG_MAX_SIZE)
 			ret = -ENOMEM;
 		else {
@@ -4776,6 +4715,26 @@ static int adap_init0_config(struct adapter *adapter, int reset)
 		      FW_LEN16(caps_cmd));
 	ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd),
 			 &caps_cmd);
+
+	/* If the CAPS_CONFIG failed with an ENOENT (for a Firmware
+	 * Configuration File in FLASH), our last gasp effort is to use the
+	 * Firmware Configuration File which is embedded in the firmware.  A
+	 * very few early versions of the firmware didn't have one embedded
+	 * but we can ignore those.
+	 */
+	if (ret == -ENOENT) {
+		memset(&caps_cmd, 0, sizeof(caps_cmd));
+		caps_cmd.op_to_write =
+			htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
+					FW_CMD_REQUEST |
+					FW_CMD_READ);
+		caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
+		ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd,
+				 sizeof(caps_cmd), &caps_cmd);
+		config_name = "Firmware Default";
+	}
+
+	config_issued = 1;
 	if (ret < 0)
 		goto bye;

@@ -4816,7 +4775,6 @@ static int adap_init0_config(struct adapter *adapter, int reset)
 	if (ret < 0)
 		goto bye;

-	sprintf(fw_config_file_path, "/lib/firmware/%s", fw_config_file);
 	/*
 	 * Return successfully and note that we're operating with parameters
 	 * not supplied by the driver, rather than from hard-wired
@@ -4824,11 +4782,8 @@ static int adap_init0_config(struct adapter *adapter, int reset)
 	 */
 	adapter->flags |= USING_SOFT_PARAMS;
 	dev_info(adapter->pdev_dev, "Successfully configured using Firmware "\
-		 "Configuration File %s, version %#x, computed checksum %#x\n",
-		 (using_flash
-		  ? "in device FLASH"
-		  : fw_config_file_path),
-		 finiver, cfcsum);
+		 "Configuration File \"%s\", version %#x, computed checksum %#x\n",
+		 config_name, finiver, cfcsum);
 	return 0;

 	/*
@@ -4837,9 +4792,9 @@ static int adap_init0_config(struct adapter *adapter, int reset)
 	 * want to issue a warning since this is fairly common.)
 	 */
 bye:
-	if (ret != -ENOENT)
-		dev_warn(adapter->pdev_dev, "Configuration file error %d\n",
-			 -ret);
+	if (config_issued && ret != -ENOENT)
+		dev_warn(adapter->pdev_dev, "\"%s\" configuration file error %d\n",
+			 config_name, -ret);
 	return ret;
 }

@@ -5086,6 +5041,47 @@ bye:
 	return ret;
 }

+static struct fw_info fw_info_array[] = {
+	{
+		.chip = CHELSIO_T4,
+		.fs_name = FW4_CFNAME,
+		.fw_mod_name = FW4_FNAME,
+		.fw_hdr = {
+			.chip = FW_HDR_CHIP_T4,
+			.fw_ver = __cpu_to_be32(FW_VERSION(T4)),
+			.intfver_nic = FW_INTFVER(T4, NIC),
+			.intfver_vnic = FW_INTFVER(T4, VNIC),
+			.intfver_ri = FW_INTFVER(T4, RI),
+			.intfver_iscsi = FW_INTFVER(T4, ISCSI),
+			.intfver_fcoe = FW_INTFVER(T4, FCOE),
+		},
+	}, {
+		.chip = CHELSIO_T5,
+		.fs_name = FW5_CFNAME,
+		.fw_mod_name = FW5_FNAME,
+		.fw_hdr = {
+			.chip = FW_HDR_CHIP_T5,
+			.fw_ver = __cpu_to_be32(FW_VERSION(T5)),
+			.intfver_nic = FW_INTFVER(T5, NIC),
+			.intfver_vnic = FW_INTFVER(T5, VNIC),
+			.intfver_ri = FW_INTFVER(T5, RI),
+			.intfver_iscsi = FW_INTFVER(T5, ISCSI),
+			.intfver_fcoe = FW_INTFVER(T5, FCOE),
+		},
+	}
+};
+
+static struct fw_info *find_fw_info(int chip)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(fw_info_array); i++) {
+		if (fw_info_array[i].chip == chip)
+			return &fw_info_array[i];
+	}
+	return NULL;
+}
+
 /*
  * Phase 0 of initialization: contact FW, obtain config, perform basic init.
  */

@@ -5123,44 +5119,54 @@ static int adap_init0(struct adapter *adap)
 	 * later reporting and B. to warn if the currently loaded firmware
 	 * is excessively mismatched relative to the driver.)
 	 */
-	ret = t4_check_fw_version(adap);
-
-	/* The error code -EFAULT is returned by t4_check_fw_version() if
-	 * firmware on adapter < supported firmware. If firmware on adapter
-	 * is too old (not supported by driver) and we're the MASTER_PF set
-	 * adapter state to DEV_STATE_UNINIT to force firmware upgrade
-	 * and reinitialization.
-	 */
-	if ((adap->flags & MASTER_PF) && ret == -EFAULT)
-		state = DEV_STATE_UNINIT;
+	t4_get_fw_version(adap, &adap->params.fw_vers);
+	t4_get_tp_version(adap, &adap->params.tp_vers);
 	if ((adap->flags & MASTER_PF) && state != DEV_STATE_INIT) {
-		if (ret == -EINVAL || ret == -EFAULT || ret > 0) {
-			if (upgrade_fw(adap) >= 0) {
-				/*
-				 * Note that the chip was reset as part of the
-				 * firmware upgrade so we don't reset it again
-				 * below and grab the new firmware version.
-				 */
-				reset = 0;
-				ret = t4_check_fw_version(adap);
-			} else
-				if (ret == -EFAULT) {
-					/*
-					 * Firmware is old but still might
-					 * work if we force reinitialization
-					 * of the adapter. Ignoring FW upgrade
-					 * failure.
-					 */
-					dev_warn(adap->pdev_dev,
-						 "Ignoring firmware upgrade "
-						 "failure, and forcing driver "
-						 "to reinitialize the "
-						 "adapter.\n");
-					ret = 0;
-				}
+		struct fw_info *fw_info;
+		struct fw_hdr *card_fw;
+		const struct firmware *fw;
+		const u8 *fw_data = NULL;
+		unsigned int fw_size = 0;
+
+		/* This is the firmware whose headers the driver was compiled
+		 * against
+		 */
+		fw_info = find_fw_info(CHELSIO_CHIP_VERSION(adap->params.chip));
+		if (fw_info == NULL) {
+			dev_err(adap->pdev_dev,
+				"unable to get firmware info for chip %d.\n",
+				CHELSIO_CHIP_VERSION(adap->params.chip));
+			return -EINVAL;
+		}
+
+		/* allocate memory to read the header of the firmware on the
+		 * card
+		 */
+		card_fw = t4_alloc_mem(sizeof(*card_fw));
+
+		/* Get FW from from /lib/firmware/ */
+		ret = request_firmware(&fw, fw_info->fw_mod_name,
+				       adap->pdev_dev);
+		if (ret < 0) {
+			dev_err(adap->pdev_dev,
+				"unable to load firmware image %s, error %d\n",
+				fw_info->fw_mod_name, ret);
+		} else {
+			fw_data = fw->data;
+			fw_size = fw->size;
+		}
+
+		/* upgrade FW logic */
+		ret = t4_prep_fw(adap, fw_info, fw_data, fw_size, card_fw,
+				 state, &reset);
+
+		/* Cleaning up */
+		if (fw != NULL)
+			release_firmware(fw);
+		t4_free_mem(card_fw);
+
 		if (ret < 0)
-			return ret;
+			goto bye;
 	}

 	/*

@@ -5245,7 +5251,7 @@ static int adap_init0(struct adapter *adap)
 		if (ret == -ENOENT) {
 			dev_info(adap->pdev_dev,
 				 "No Configuration File present "
 				 "on adapter. Using hard-wired "
 				 "configuration parameters.\n");
 			ret = adap_init0_no_config(adap, reset);
 		}

@@ -5787,7 +5793,7 @@ static void print_port_info(const struct net_device *dev)

 	netdev_info(dev, "Chelsio %s rev %d %s %sNIC PCIe x%d%s%s\n",
 		    adap->params.vpd.id,
-		    CHELSIO_CHIP_RELEASE(adap->params.rev), buf,
+		    CHELSIO_CHIP_RELEASE(adap->params.chip), buf,
 		    is_offload(adap) ? "R" : "", adap->params.pci.width, spd,
 		    (adap->flags & USING_MSIX) ? " MSI-X" :
 		    (adap->flags & USING_MSI) ? " MSI" : "");
@@ -5910,7 +5916,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (err)
 		goto out_unmap_bar0;

-	if (!is_t4(adapter->chip)) {
+	if (!is_t4(adapter->params.chip)) {
 		s_qpp = QUEUESPERPAGEPF1 * adapter->fn;
 		qpp = 1 << QUEUESPERPAGEPF0_GET(t4_read_reg(adapter,
 		      SGE_EGRESS_QUEUES_PER_PAGE_PF) >> s_qpp);
@@ -6064,7 +6070,7 @@ sriov:
  out_free_dev:
 	free_some_resources(adapter);
  out_unmap_bar:
-	if (!is_t4(adapter->chip))
+	if (!is_t4(adapter->params.chip))
 		iounmap(adapter->bar2);
  out_unmap_bar0:
 	iounmap(adapter->regs);
@@ -6116,7 +6122,7 @@ static void remove_one(struct pci_dev *pdev)

 		free_some_resources(adapter);
 		iounmap(adapter->regs);
-		if (!is_t4(adapter->chip))
+		if (!is_t4(adapter->params.chip))
 			iounmap(adapter->bar2);
 		kfree(adapter);
 		pci_disable_pcie_error_reporting(pdev);

@@ -509,7 +509,7 @@ static inline void ring_fl_db(struct adapter *adap, struct sge_fl *q)
 	u32 val;
 	if (q->pend_cred >= 8) {
 		val = PIDX(q->pend_cred / 8);
-		if (!is_t4(adap->chip))
+		if (!is_t4(adap->params.chip))
 			val |= DBTYPE(1);
 		wmb();
 		t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO(1) |
@@ -847,7 +847,7 @@ static inline void ring_tx_db(struct adapter *adap, struct sge_txq *q, int n)
 	wmb();            /* write descriptors before telling HW */
 	spin_lock(&q->db_lock);
 	if (!q->db_disabled) {
-		if (is_t4(adap->chip)) {
+		if (is_t4(adap->params.chip)) {
 			t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
 				     QID(q->cntxt_id) | PIDX(n));
 		} else {
@@ -1596,7 +1596,7 @@ static noinline int handle_trace_pkt(struct adapter *adap,
 		return 0;
 	}
 
-	if (is_t4(adap->chip))
+	if (is_t4(adap->params.chip))
 		__skb_pull(skb, sizeof(struct cpl_trace_pkt));
 	else
 		__skb_pull(skb, sizeof(struct cpl_t5_trace_pkt));
@@ -1661,7 +1661,7 @@ int t4_ethrx_handler(struct sge_rspq *q, const __be64 *rsp,
 	const struct cpl_rx_pkt *pkt;
 	struct sge_eth_rxq *rxq = container_of(q, struct sge_eth_rxq, rspq);
 	struct sge *s = &q->adap->sge;
-	int cpl_trace_pkt = is_t4(q->adap->chip) ?
+	int cpl_trace_pkt = is_t4(q->adap->params.chip) ?
 			    CPL_TRACE_PKT : CPL_TRACE_PKT_T5;
 
 	if (unlikely(*(u8 *)rsp == cpl_trace_pkt))
@@ -2182,7 +2182,7 @@ err:
 static void init_txq(struct adapter *adap, struct sge_txq *q, unsigned int id)
 {
 	q->cntxt_id = id;
-	if (!is_t4(adap->chip)) {
+	if (!is_t4(adap->params.chip)) {
 		unsigned int s_qpp;
 		unsigned short udb_density;
 		unsigned long qpshift;
@@ -2641,7 +2641,7 @@ static int t4_sge_init_hard(struct adapter *adap)
 	 * Set up to drop DOORBELL writes when the DOORBELL FIFO overflows
 	 * and generate an interrupt when this occurs so we can recover.
 	 */
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		t4_set_reg_field(adap, A_SGE_DBFIFO_STATUS,
 				 V_HP_INT_THRESH(M_HP_INT_THRESH) |
 				 V_LP_INT_THRESH(M_LP_INT_THRESH),

@@ -296,7 +296,7 @@ int t4_mc_read(struct adapter *adap, int idx, u32 addr, __be32 *data, u64 *ecc)
 	u32 mc_bist_cmd, mc_bist_cmd_addr, mc_bist_cmd_len;
 	u32 mc_bist_status_rdata, mc_bist_data_pattern;
 
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		mc_bist_cmd = MC_BIST_CMD;
 		mc_bist_cmd_addr = MC_BIST_CMD_ADDR;
 		mc_bist_cmd_len = MC_BIST_CMD_LEN;
@@ -349,7 +349,7 @@ int t4_edc_read(struct adapter *adap, int idx, u32 addr, __be32 *data, u64 *ecc)
 	u32 edc_bist_cmd, edc_bist_cmd_addr, edc_bist_cmd_len;
 	u32 edc_bist_cmd_data_pattern, edc_bist_status_rdata;
 
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		edc_bist_cmd = EDC_REG(EDC_BIST_CMD, idx);
 		edc_bist_cmd_addr = EDC_REG(EDC_BIST_CMD_ADDR, idx);
 		edc_bist_cmd_len = EDC_REG(EDC_BIST_CMD_LEN, idx);
@@ -402,7 +402,7 @@ int t4_edc_read(struct adapter *adap, int idx, u32 addr, __be32 *data, u64 *ecc)
 static int t4_mem_win_rw(struct adapter *adap, u32 addr, __be32 *data, int dir)
 {
 	int i;
-	u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn);
+	u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn);
 
 	/*
 	 * Setup offset into PCIE memory window.  Address must be a
@@ -863,104 +863,169 @@ unlock:
 }
 
 /**
- *	get_fw_version - read the firmware version
+ *	t4_get_fw_version - read the firmware version
 *	@adapter: the adapter
 *	@vers: where to place the version
 *
 *	Reads the FW version from flash.
 */
-static int get_fw_version(struct adapter *adapter, u32 *vers)
+int t4_get_fw_version(struct adapter *adapter, u32 *vers)
 {
-	return t4_read_flash(adapter, adapter->params.sf_fw_start +
-			     offsetof(struct fw_hdr, fw_ver), 1, vers, 0);
+	return t4_read_flash(adapter, FLASH_FW_START +
+			     offsetof(struct fw_hdr, fw_ver), 1,
+			     vers, 0);
 }
 
 /**
- *	get_tp_version - read the TP microcode version
+ *	t4_get_tp_version - read the TP microcode version
 *	@adapter: the adapter
 *	@vers: where to place the version
 *
 *	Reads the TP microcode version from flash.
 */
-static int get_tp_version(struct adapter *adapter, u32 *vers)
+int t4_get_tp_version(struct adapter *adapter, u32 *vers)
 {
-	return t4_read_flash(adapter, adapter->params.sf_fw_start +
+	return t4_read_flash(adapter, FLASH_FW_START +
 			     offsetof(struct fw_hdr, tp_microcode_ver),
 			     1, vers, 0);
 }
 
-/**
- *	t4_check_fw_version - check if the FW is compatible with this driver
- *	@adapter: the adapter
- *
- *	Checks if an adapter's FW is compatible with the driver.  Returns 0
- *	if there's exact match, a negative error if the version could not be
- *	read or there's a major version mismatch, and a positive value if the
- *	expected major version is found but there's a minor version mismatch.
+/* Is the given firmware API compatible with the one the driver was compiled
+ * with?
 */
-int t4_check_fw_version(struct adapter *adapter)
+static int fw_compatible(const struct fw_hdr *hdr1, const struct fw_hdr *hdr2)
 {
-	u32 api_vers[2];
-	int ret, major, minor, micro;
-	int exp_major, exp_minor, exp_micro;
-
-	ret = get_fw_version(adapter, &adapter->params.fw_vers);
-	if (!ret)
-		ret = get_tp_version(adapter, &adapter->params.tp_vers);
-	if (!ret)
-		ret = t4_read_flash(adapter, adapter->params.sf_fw_start +
-				    offsetof(struct fw_hdr, intfver_nic),
-				    2, api_vers, 1);
-	if (ret)
-		return ret;
+	/* short circuit if it's the exact same firmware version */
+	if (hdr1->chip == hdr2->chip && hdr1->fw_ver == hdr2->fw_ver)
+		return 1;
 
-	major = FW_HDR_FW_VER_MAJOR_GET(adapter->params.fw_vers);
-	minor = FW_HDR_FW_VER_MINOR_GET(adapter->params.fw_vers);
-	micro = FW_HDR_FW_VER_MICRO_GET(adapter->params.fw_vers);
+#define SAME_INTF(x) (hdr1->intfver_##x == hdr2->intfver_##x)
+	if (hdr1->chip == hdr2->chip && SAME_INTF(nic) && SAME_INTF(vnic) &&
+	    SAME_INTF(ri) && SAME_INTF(iscsi) && SAME_INTF(fcoe))
+		return 1;
+#undef SAME_INTF
 
-	switch (CHELSIO_CHIP_VERSION(adapter->chip)) {
-	case CHELSIO_T4:
-		exp_major = FW_VERSION_MAJOR;
-		exp_minor = FW_VERSION_MINOR;
-		exp_micro = FW_VERSION_MICRO;
-		break;
-	case CHELSIO_T5:
-		exp_major = FW_VERSION_MAJOR_T5;
-		exp_minor = FW_VERSION_MINOR_T5;
-		exp_micro = FW_VERSION_MICRO_T5;
-		break;
-	default:
-		dev_err(adapter->pdev_dev, "Unsupported chip type, %x\n",
-			adapter->chip);
-		return -EINVAL;
-	}
+	return 0;
+}
 
-	memcpy(adapter->params.api_vers, api_vers,
-	       sizeof(adapter->params.api_vers));
-
-	if (major < exp_major || (major == exp_major && minor < exp_minor) ||
-	    (major == exp_major && minor == exp_minor && micro < exp_micro)) {
-		dev_err(adapter->pdev_dev,
-			"Card has firmware version %u.%u.%u, minimum "
-			"supported firmware is %u.%u.%u.\n", major, minor,
-			micro, exp_major, exp_minor, exp_micro);
-		return -EFAULT;
-	}
+/* The firmware in the filesystem is usable, but should it be installed?
+ * This routine explains itself in detail if it indicates the filesystem
+ * firmware should be installed.
+ */
+static int should_install_fs_fw(struct adapter *adap, int card_fw_usable,
+				int k, int c)
+{
+	const char *reason;
+
+	if (!card_fw_usable) {
+		reason = "incompatible or unusable";
+		goto install;
+	}
+
+	if (k > c) {
+		reason = "older than the version supported with this driver";
+		goto install;
+	}
+
+	return 0;
 
-	if (major != exp_major) {	/* major mismatch - fail */
-		dev_err(adapter->pdev_dev,
-			"card FW has major version %u, driver wants %u\n",
-			major, exp_major);
-		return -EINVAL;
-	}
-
-	if (minor == exp_minor && micro == exp_micro)
-		return 0;                   /* perfect match */
-
-	/* Minor/micro version mismatch.  Report it but often it's OK. */
+install:
+	dev_err(adap->pdev_dev, "firmware on card (%u.%u.%u.%u) is %s, "
+		"installing firmware %u.%u.%u.%u on card.\n",
+		FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c),
+		FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c), reason,
+		FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k),
+		FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k));
+
 	return 1;
 }
+
+int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+	       const u8 *fw_data, unsigned int fw_size,
+	       struct fw_hdr *card_fw, enum dev_state state,
+	       int *reset)
+{
+	int ret, card_fw_usable, fs_fw_usable;
+	const struct fw_hdr *fs_fw;
+	const struct fw_hdr *drv_fw;
+
+	drv_fw = &fw_info->fw_hdr;
+
+	/* Read the header of the firmware on the card */
+	ret = -t4_read_flash(adap, FLASH_FW_START,
+			    sizeof(*card_fw) / sizeof(uint32_t),
+			    (uint32_t *)card_fw, 1);
+	if (ret == 0) {
+		card_fw_usable = fw_compatible(drv_fw, (const void *)card_fw);
+	} else {
+		dev_err(adap->pdev_dev,
+			"Unable to read card's firmware header: %d\n", ret);
+		card_fw_usable = 0;
+	}
+
+	if (fw_data != NULL) {
+		fs_fw = (const void *)fw_data;
+		fs_fw_usable = fw_compatible(drv_fw, fs_fw);
+	} else {
+		fs_fw = NULL;
+		fs_fw_usable = 0;
+	}
+
+	if (card_fw_usable && card_fw->fw_ver == drv_fw->fw_ver &&
+	    (!fs_fw_usable || fs_fw->fw_ver == drv_fw->fw_ver)) {
+		/* Common case: the firmware on the card is an exact match and
+		 * the filesystem one is an exact match too, or the filesystem
+		 * one is absent/incompatible.
+		 */
+	} else if (fs_fw_usable && state == DEV_STATE_UNINIT &&
+		   should_install_fs_fw(adap, card_fw_usable,
+					be32_to_cpu(fs_fw->fw_ver),
+					be32_to_cpu(card_fw->fw_ver))) {
+		ret = -t4_fw_upgrade(adap, adap->mbox, fw_data,
+				     fw_size, 0);
+		if (ret != 0) {
+			dev_err(adap->pdev_dev,
+				"failed to install firmware: %d\n", ret);
+			goto bye;
+		}
+
+		/* Installed successfully, update the cached header too. */
+		memcpy(card_fw, fs_fw, sizeof(*card_fw));
+		card_fw_usable = 1;
+		*reset = 0;	/* already reset as part of load_fw */
+	}
+
+	if (!card_fw_usable) {
+		uint32_t d, c, k;
+
+		d = be32_to_cpu(drv_fw->fw_ver);
+		c = be32_to_cpu(card_fw->fw_ver);
+		k = fs_fw ? be32_to_cpu(fs_fw->fw_ver) : 0;
+
+		dev_err(adap->pdev_dev, "Cannot find a usable firmware: "
+			"chip state %d, "
+			"driver compiled with %d.%d.%d.%d, "
+			"card has %d.%d.%d.%d, filesystem has %d.%d.%d.%d\n",
+			state,
+			FW_HDR_FW_VER_MAJOR_GET(d), FW_HDR_FW_VER_MINOR_GET(d),
+			FW_HDR_FW_VER_MICRO_GET(d), FW_HDR_FW_VER_BUILD_GET(d),
+			FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c),
+			FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c),
+			FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k),
+			FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k));
+		ret = EINVAL;
+		goto bye;
+	}
+
+	/* We're using whatever's on the card and it's known to be good. */
+	adap->params.fw_vers = be32_to_cpu(card_fw->fw_ver);
+	adap->params.tp_vers = be32_to_cpu(card_fw->tp_microcode_ver);
+
+bye:
+	return ret;
+}
 
 /**
 *	t4_flash_erase_sectors - erase a range of flash sectors
 *	@adapter: the adapter
@@ -1368,7 +1433,7 @@ static void pcie_intr_handler(struct adapter *adapter)
 				    PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
 				    pcie_port_intr_info) +
 	      t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
-				    is_t4(adapter->chip) ?
+				    is_t4(adapter->params.chip) ?
 				    pcie_intr_info : t5_pcie_intr_info);
 
 	if (fat)
@@ -1782,7 +1847,7 @@ static void xgmac_intr_handler(struct adapter *adap, int port)
 {
 	u32 v, int_cause_reg;
 
-	if (is_t4(adap->chip))
+	if (is_t4(adap->params.chip))
 		int_cause_reg = PORT_REG(port, XGMAC_PORT_INT_CAUSE);
 	else
 		int_cause_reg = T5_PORT_REG(port, MAC_PORT_INT_CAUSE);
@@ -2250,7 +2315,7 @@ void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p)
 
 #define GET_STAT(name) \
 	t4_read_reg64(adap, \
-	(is_t4(adap->chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \
+	(is_t4(adap->params.chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \
 	T5_PORT_REG(idx, MPS_PORT_STAT_##name##_L)))
 #define GET_STAT_COM(name) t4_read_reg64(adap, MPS_STAT_##name##_L)
 
@@ -2332,7 +2397,7 @@ void t4_wol_magic_enable(struct adapter *adap, unsigned int port,
 {
 	u32 mag_id_reg_l, mag_id_reg_h, port_cfg_reg;
 
-	if (is_t4(adap->chip)) {
+	if (is_t4(adap->params.chip)) {
 		mag_id_reg_l = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_LO);
 		mag_id_reg_h = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_HI);
 		port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
@@ -2374,7 +2439,7 @@ int t4_wol_pat_enable(struct adapter *adap, unsigned int port, unsigned int map,
 	int i;
 	u32 port_cfg_reg;
 
-	if (is_t4(adap->chip))
+	if (is_t4(adap->params.chip))
 		port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
 	else
 		port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2);
@@ -2387,7 +2452,7 @@ int t4_wol_pat_enable(struct adapter *adap, unsigned int port, unsigned int map,
 		return -EINVAL;
 
 #define EPIO_REG(name) \
-	(is_t4(adap->chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \
+	(is_t4(adap->params.chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \
 	T5_PORT_REG(port, MAC_PORT_EPIO_##name))
 
 	t4_write_reg(adap, EPIO_REG(DATA1), mask0 >> 32);
@@ -2474,7 +2539,7 @@ int t4_fwaddrspace_write(struct adapter *adap, unsigned int mbox,
 int t4_mem_win_read_len(struct adapter *adap, u32 addr, __be32 *data, int len)
 {
 	int i, off;
-	u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn);
+	u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn);
 
 	/* Align on a 2KB boundary.
 	 */
@@ -3306,7 +3371,7 @@ int t4_alloc_mac_filt(struct adapter *adap, unsigned int mbox,
 	int i, ret;
 	struct fw_vi_mac_cmd c;
 	struct fw_vi_mac_exact *p;
-	unsigned int max_naddr = is_t4(adap->chip) ?
+	unsigned int max_naddr = is_t4(adap->params.chip) ?
				 NUM_MPS_CLS_SRAM_L_INSTANCES :
				 NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
 
@@ -3368,7 +3433,7 @@ int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid,
 	int ret, mode;
 	struct fw_vi_mac_cmd c;
 	struct fw_vi_mac_exact *p = c.u.exact;
-	unsigned int max_mac_addr = is_t4(adap->chip) ?
+	unsigned int max_mac_addr = is_t4(adap->params.chip) ?
				    NUM_MPS_CLS_SRAM_L_INSTANCES :
				    NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
 
@@ -3699,13 +3764,14 @@ int t4_prep_adapter(struct adapter *adapter)
 {
 	int ret, ver;
 	uint16_t device_id;
+	u32 pl_rev;
 
 	ret = t4_wait_dev_ready(adapter);
 	if (ret < 0)
 		return ret;
 
 	get_pci_mode(adapter, &adapter->params.pci);
-	adapter->params.rev = t4_read_reg(adapter, PL_REV);
+	pl_rev = G_REV(t4_read_reg(adapter, PL_REV));
 
 	ret = get_flash_params(adapter);
 	if (ret < 0) {
@@ -3717,14 +3783,13 @@ int t4_prep_adapter(struct adapter *adapter)
 	 */
 	pci_read_config_word(adapter->pdev, PCI_DEVICE_ID, &device_id);
 	ver = device_id >> 12;
+	adapter->params.chip = 0;
 	switch (ver) {
 	case CHELSIO_T4:
-		adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4,
-						  adapter->params.rev);
+		adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T4, pl_rev);
 		break;
 	case CHELSIO_T5:
-		adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5,
-						  adapter->params.rev);
+		adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, pl_rev);
 		break;
 	default:
 		dev_err(adapter->pdev_dev, "Device %d is not supported\n",
@@ -3732,9 +3797,6 @@ int t4_prep_adapter(struct adapter *adapter)
 		return -EINVAL;
 	}
 
-	/* Reassign the updated revision field */
-	adapter->params.rev = adapter->chip;
-
 	init_cong_ctrl(adapter->params.a_wnd, adapter->params.b_wnd);
 
 	/*

@@ -1092,6 +1092,11 @@
 
 #define PL_REV 0x1943c
 
+#define S_REV    0
+#define M_REV    0xfU
+#define V_REV(x) ((x) << S_REV)
+#define G_REV(x) (((x) >> S_REV) & M_REV)
+
 #define LE_DB_CONFIG 0x19c04
 #define  HASHEN 0x00100000U
 
@@ -1199,4 +1204,13 @@
 #define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR)
 #define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx)
 
+#define A_PL_VF_REV 0x4
+#define A_PL_VF_WHOAMI 0x0
+#define A_PL_VF_REVISION 0x8
+
+#define S_CHIPID    4
+#define M_CHIPID    0xfU
+#define V_CHIPID(x) ((x) << S_CHIPID)
+#define G_CHIPID(x) (((x) >> S_CHIPID) & M_CHIPID)
+
 #endif /* __T4_REGS_H */

@@ -2157,7 +2157,7 @@ struct fw_debug_cmd {
 
 struct fw_hdr {
 	u8 ver;
-	u8 reserved1;
+	u8 chip;			/* terminator chip type */
 	__be16	len512;			/* bin length in units of 512-bytes */
 	__be32	fw_ver;			/* firmware version */
 	__be32	tp_microcode_ver;
@@ -2176,6 +2176,11 @@ struct fw_hdr {
 	__be32  reserved6[23];
 };
 
+enum fw_hdr_chip {
+	FW_HDR_CHIP_T4,
+	FW_HDR_CHIP_T5
+};
+
 #define FW_HDR_FW_VER_MAJOR_GET(x) (((x) >> 24) & 0xff)
 #define FW_HDR_FW_VER_MINOR_GET(x) (((x) >> 16) & 0xff)
 #define FW_HDR_FW_VER_MICRO_GET(x) (((x) >> 8) & 0xff)

@@ -344,7 +344,6 @@ struct adapter {
 	unsigned long registered_device_map;
 	unsigned long open_device_map;
 	unsigned long flags;
-	enum chip_type chip;
 	struct adapter_params params;
 
 	/* queue and interrupt resources */

@@ -1064,7 +1064,7 @@ static inline unsigned int mk_adap_vers(const struct adapter *adapter)
 	/*
 	 * Chip version 4, revision 0x3f (cxgb4vf).
 	 */
-	return CHELSIO_CHIP_VERSION(adapter->chip) | (0x3f << 10);
+	return CHELSIO_CHIP_VERSION(adapter->params.chip) | (0x3f << 10);
 }
 
 /*
@@ -1551,9 +1551,13 @@ static void cxgb4vf_get_regs(struct net_device *dev,
 	reg_block_dump(adapter, regbuf,
 		       T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_FIRST,
 		       T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_LAST);
+
+	/* T5 adds new registers in the PL Register map.
+	 */
 	reg_block_dump(adapter, regbuf,
 		       T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_FIRST,
-		       T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_LAST);
+		       T4VF_PL_BASE_ADDR + (is_t4(adapter->params.chip)
+		       ? A_PL_VF_WHOAMI : A_PL_VF_REVISION));
 	reg_block_dump(adapter, regbuf,
 		       T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_FIRST,
 		       T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_LAST);
@@ -2087,6 +2091,7 @@ static int adap_init0(struct adapter *adapter)
 	unsigned int ethqsets;
 	int err;
 	u32 param, val = 0;
+	unsigned int chipid;
 
 	/*
 	 * Wait for the device to become ready before proceeding ...
@@ -2114,12 +2119,14 @@ static int adap_init0(struct adapter *adapter)
 		return err;
 	}
 
+	adapter->params.chip = 0;
 	switch (adapter->pdev->device >> 12) {
 	case CHELSIO_T4:
-		adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0);
+		adapter->params.chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0);
 		break;
 	case CHELSIO_T5:
-		adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5, 0);
+		chipid = G_REV(t4_read_reg(adapter, A_PL_VF_REV));
+		adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, chipid);
 		break;
 	}
 

@@ -537,7 +537,7 @@ static inline void ring_fl_db(struct adapter *adapter, struct sge_fl *fl)
 	 */
 	if (fl->pend_cred >= FL_PER_EQ_UNIT) {
 		val = PIDX(fl->pend_cred / FL_PER_EQ_UNIT);
-		if (!is_t4(adapter->chip))
+		if (!is_t4(adapter->params.chip))
 			val |= DBTYPE(1);
 		wmb();
 		t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,

@@ -39,21 +39,28 @@
 #include "../cxgb4/t4fw_api.h"
 
 #define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
-#define CHELSIO_CHIP_VERSION(code) ((code) >> 4)
+#define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf)
 #define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
 
+/* All T4 and later chips have their PCI-E Device IDs encoded as 0xVFPP where:
+ *
+ *   V  = "4" for T4; "5" for T5, etc. or
+ *      = "a" for T4 FPGA; "b" for T4 FPGA, etc.
+ *   F  = "0" for PF 0..3; "4".."7" for PF4..7; and "8" for VFs
+ *   PP = adapter product designation
+ */
 #define CHELSIO_T4 0x4
 #define CHELSIO_T5 0x5
 
 enum chip_type {
-	T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0),
-	T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
-	T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
+	T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
+	T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
 	T4_FIRST_REV	= T4_A1,
-	T4_LAST_REV	= T4_A3,
+	T4_LAST_REV	= T4_A2,
 
-	T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
-	T5_FIRST_REV	= T5_A1,
+	T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
+	T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1),
+	T5_FIRST_REV	= T5_A0,
 	T5_LAST_REV	= T5_A1,
 };
 
@@ -203,6 +210,7 @@ struct adapter_params {
 	struct vpd_params vpd;		/* Vital Product Data */
 	struct rss_params rss;		/* Receive Side Scaling */
 	struct vf_resources vfres;	/* Virtual Function Resource limits */
+	enum chip_type chip;		/* chip code */
 	u8 nports;			/* # of Ethernet "ports" */
 };
 
@@ -253,7 +261,7 @@ static inline int t4vf_wr_mbox_ns(struct adapter *adapter, const void *cmd,
 
 static inline int is_t4(enum chip_type chip)
 {
-	return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV);
+	return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4;
 }
 
 int t4vf_wait_dev_ready(struct adapter *);
 
@@ -1027,7 +1027,7 @@ int t4vf_alloc_mac_filt(struct adapter *adapter, unsigned int viid, bool free,
 	unsigned nfilters = 0;
 	unsigned int rem = naddr;
 	struct fw_vi_mac_cmd cmd, rpl;
-	unsigned int max_naddr = is_t4(adapter->chip) ?
+	unsigned int max_naddr = is_t4(adapter->params.chip) ?
				 NUM_MPS_CLS_SRAM_L_INSTANCES :
				 NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
 
@@ -1121,7 +1121,7 @@ int t4vf_change_mac(struct adapter *adapter, unsigned int viid,
 	struct fw_vi_mac_exact *p = &cmd.u.exact[0];
 	size_t len16 = DIV_ROUND_UP(offsetof(struct fw_vi_mac_cmd,
 					     u.exact[1]), 16);
-	unsigned int max_naddr = is_t4(adapter->chip) ?
+	unsigned int max_naddr = is_t4(adapter->params.chip) ?
				 NUM_MPS_CLS_SRAM_L_INSTANCES :
				 NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
 

@@ -64,6 +64,9 @@
 #define SLIPORT_ERROR_NO_RESOURCE1	0x2
 #define SLIPORT_ERROR_NO_RESOURCE2	0x9
 
+#define SLIPORT_ERROR_FW_RESET1	0x2
+#define SLIPORT_ERROR_FW_RESET2	0x0
+
 /********* Memory BAR register ************/
 #define PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET	0xfc
 /* Host Interrupt Enable, if set interrupts are enabled although "PCI Interrupt

@@ -2464,8 +2464,16 @@ void be_detect_error(struct be_adapter *adapter)
 	 */
 	if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
 		adapter->hw_error = true;
-		dev_err(&adapter->pdev->dev,
-			"Error detected in the card\n");
+		/* Do not log error messages if its a FW reset */
+		if (sliport_err1 == SLIPORT_ERROR_FW_RESET1 &&
+		    sliport_err2 == SLIPORT_ERROR_FW_RESET2) {
+			dev_info(&adapter->pdev->dev,
+				 "Firmware update in progress\n");
+			return;
+		} else {
+			dev_err(&adapter->pdev->dev,
+				"Error detected in the card\n");
+		}
 	}
 
 	if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
@@ -2932,28 +2940,35 @@ static void be_cancel_worker(struct be_adapter *adapter)
 	}
 }
 
-static int be_clear(struct be_adapter *adapter)
+static void be_mac_clear(struct be_adapter *adapter)
 {
 	int i;
 
+	if (adapter->pmac_id) {
+		for (i = 0; i < (adapter->uc_macs + 1); i++)
+			be_cmd_pmac_del(adapter, adapter->if_handle,
+					adapter->pmac_id[i], 0);
+		adapter->uc_macs = 0;
+
+		kfree(adapter->pmac_id);
+		adapter->pmac_id = NULL;
+	}
+}
+
+static int be_clear(struct be_adapter *adapter)
+{
 	be_cancel_worker(adapter);
 
 	if (sriov_enabled(adapter))
 		be_vf_clear(adapter);
 
 	/* delete the primary mac along with the uc-mac list */
-	for (i = 0; i < (adapter->uc_macs + 1); i++)
-		be_cmd_pmac_del(adapter, adapter->if_handle,
-				adapter->pmac_id[i], 0);
-	adapter->uc_macs = 0;
+	be_mac_clear(adapter);
 
 	be_cmd_if_destroy(adapter, adapter->if_handle,  0);
 
 	be_clear_queues(adapter);
 
-	kfree(adapter->pmac_id);
-	adapter->pmac_id = NULL;
-
 	be_msix_disable(adapter);
 	return 0;
 }
@@ -3812,6 +3827,8 @@ static int lancer_fw_download(struct be_adapter *adapter,
 	}
 
 	if (change_status == LANCER_FW_RESET_NEEDED) {
+		dev_info(&adapter->pdev->dev,
+			 "Resetting adapter to activate new FW\n");
 		status = lancer_physdev_ctrl(adapter,
 					     PHYSDEV_CONTROL_FW_RESET_MASK);
 		if (status) {
@@ -4363,13 +4380,13 @@ static int lancer_recover_func(struct be_adapter *adapter)
 		goto err;
 	}
 
-	dev_err(dev, "Error recovery successful\n");
+	dev_err(dev, "Adapter recovery successful\n");
 	return 0;
 err:
 	if (status == -EAGAIN)
 		dev_err(dev, "Waiting for resource provisioning\n");
 	else
-		dev_err(dev, "Error recovery failed\n");
+		dev_err(dev, "Adapter recovery failed\n");
 
 	return status;
 }

@@ -98,10 +98,6 @@ static void set_multicast_list(struct net_device *ndev);
 * detected as not set during a prior frame transmission, then the
 * ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs
 * were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in
- * If the ready bit in the transmit buffer descriptor (TxBD[R]) is previously
- * detected as not set during a prior frame transmission, then the
- * ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs
- * were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in
 * frames not being transmitted until there is a 0-to-1 transition on
 * ENET_TDAR[TDAR].
 */
@@ -385,7 +381,7 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	 * data.
 	 */
 	bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, bufaddr,
-			FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
+			skb->len, DMA_TO_DEVICE);
 	if (dma_mapping_error(&fep->pdev->dev, bdp->cbd_bufaddr)) {
 		bdp->cbd_bufaddr = 0;
 		fep->tx_skbuff[index] = NULL;
@@ -779,11 +775,10 @@ fec_enet_tx(struct net_device *ndev)
 		else
 			index = bdp - fep->tx_bd_base;
 
-		dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
-				FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
-		bdp->cbd_bufaddr = 0;
-
 		skb = fep->tx_skbuff[index];
+		dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, skb->len,
+				DMA_TO_DEVICE);
+		bdp->cbd_bufaddr = 0;
 
 		/* Check for errors. */
 		if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |

@@ -3033,7 +3033,7 @@ static struct ehea_port *ehea_setup_single_port(struct ehea_adapter *adapter,
 
 	dev->hw_features = NETIF_F_SG | NETIF_F_TSO |
 		      NETIF_F_IP_CSUM | NETIF_F_HW_VLAN_CTAG_TX;
-	dev->features = NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_TSO |
+	dev->features = NETIF_F_SG | NETIF_F_TSO |
 		      NETIF_F_HIGHDMA | NETIF_F_IP_CSUM |
 		      NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX |
 		      NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM;

@@ -354,6 +354,9 @@ static struct rtnl_link_stats64 *i40e_get_netdev_stats_struct(
 	struct rtnl_link_stats64 *vsi_stats = i40e_get_vsi_stats_struct(vsi);
 	int i;
 
+	if (!vsi->tx_rings)
+		return stats;
+
 	rcu_read_lock();
 	for (i = 0; i < vsi->num_queue_pairs; i++) {
 		struct i40e_ring *tx_ring, *rx_ring;

@@ -1728,7 +1728,10 @@ s32 igb_phy_has_link(struct e1000_hw *hw, u32 iterations,
 			 * ownership of the resources, wait and try again to
 			 * see if they have relinquished the resources yet.
 			 */
-			udelay(usec_interval);
+			if (usec_interval >= 1000)
+				mdelay(usec_interval/1000);
+			else
+				udelay(usec_interval);
 		}
 		ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, &phy_status);
 		if (ret_val)

@@ -1378,7 +1378,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 
 		dev_kfree_skb_any(skb);
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
-				 rx_desc->data_size, DMA_FROM_DEVICE);
+				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
 	}
 
 	if (rx_done)
@@ -1424,7 +1424,7 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
 		}
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
-				 rx_desc->data_size, DMA_FROM_DEVICE);
+				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
 
 		rx_bytes = rx_desc->data_size -
 			(ETH_FCS_LEN + MVNETA_MH_SIZE);

@@ -2635,6 +2635,8 @@ static int __init mlx4_init(void)
 		return -ENOMEM;
 
 	ret = pci_register_driver(&mlx4_driver);
+	if (ret < 0)
+		destroy_workqueue(mlx4_wq);
 	return ret < 0 ? ret : 0;
 }
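The mlx4 hunk above adds the classic cleanup-on-failure step: a workqueue created before driver registration must be destroyed again when registration fails, or it leaks. A toy model of the pattern (all names below are stand-ins, not the real kernel API):

```c
#include <assert.h>
#include <stdbool.h>

static bool wq_alive;

static void fake_create_workqueue(void)  { wq_alive = true; }
static void fake_destroy_workqueue(void) { wq_alive = false; }

/* register_rc plays the role of pci_register_driver()'s return value. */
static int fake_module_init(int register_rc)
{
	fake_create_workqueue();

	if (register_rc < 0)
		fake_destroy_workqueue();	/* the line the patch adds */
	return register_rc < 0 ? register_rc : 0;
}
```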
@@ -5150,8 +5150,10 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64
 {
 	struct fe_priv *np = netdev_priv(dev);
 	u8 __iomem *base = get_hwbase(dev);
-	int result;
-	memset(buffer, 0, nv_get_sset_count(dev, ETH_SS_TEST)*sizeof(u64));
+	int result, count;
+
+	count = nv_get_sset_count(dev, ETH_SS_TEST);
+	memset(buffer, 0, count * sizeof(u64));
 
 	if (!nv_link_test(dev)) {
 		test->flags |= ETH_TEST_FL_FAILED;

@@ -5195,7 +5197,7 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64
 		return;
 	}
 
-	if (!nv_loopback_test(dev)) {
+	if (count > NV_TEST_COUNT_BASE && !nv_loopback_test(dev)) {
 		test->flags |= ETH_TEST_FL_FAILED;
 		buffer[3] = 1;
 	}
@@ -18,7 +18,7 @@
  */
 #define DRV_NAME	"qlge"
 #define DRV_STRING	"QLogic 10 Gigabit PCI-E Ethernet Driver "
-#define DRV_VERSION	"1.00.00.33"
+#define DRV_VERSION	"1.00.00.34"
 
 #define WQ_ADDR_ALIGN	0x3	/* 4 byte alignment */

@@ -181,6 +181,7 @@ static const char ql_gstrings_test[][ETH_GSTRING_LEN] = {
 };
 #define QLGE_TEST_LEN (sizeof(ql_gstrings_test) / ETH_GSTRING_LEN)
 #define QLGE_STATS_LEN ARRAY_SIZE(ql_gstrings_stats)
+#define QLGE_RCV_MAC_ERR_STATS 7
 
 static int ql_update_ring_coalescing(struct ql_adapter *qdev)
 {

@@ -280,6 +281,9 @@ static void ql_update_stats(struct ql_adapter *qdev)
 		iter++;
 	}
 
+	/* Update receive mac error statistics */
+	iter += QLGE_RCV_MAC_ERR_STATS;
+
 	/*
 	 * Get Per-priority TX pause frame counter statistics.
 	 */
@@ -2376,14 +2376,6 @@ static netdev_features_t qlge_fix_features(struct net_device *ndev,
 	netdev_features_t features)
 {
-	int err;
-
 	/*
 	 * Since there is no support for separate rx/tx vlan accel
 	 * enable/disable make sure tx flag is always in same state as rx.
 	 */
 	if (features & NETIF_F_HW_VLAN_CTAG_RX)
 		features |= NETIF_F_HW_VLAN_CTAG_TX;
 	else
 		features &= ~NETIF_F_HW_VLAN_CTAG_TX;
 
-	/* Update the behavior of vlan accel in the adapter */
-	err = qlge_update_hw_vlan_features(ndev, features);
@@ -585,7 +585,7 @@ static void efx_start_datapath(struct efx_nic *efx)
 			   EFX_MAX_FRAME_LEN(efx->net_dev->mtu) +
 			   efx->type->rx_buffer_padding);
 	rx_buf_len = (sizeof(struct efx_rx_page_state) +
-		      NET_IP_ALIGN + efx->rx_dma_len);
+		      efx->rx_ip_align + efx->rx_dma_len);
 	if (rx_buf_len <= PAGE_SIZE) {
 		efx->rx_scatter = efx->type->always_rx_scatter;
 		efx->rx_buffer_order = 0;

@@ -645,6 +645,8 @@ static void efx_start_datapath(struct efx_nic *efx)
 		WARN_ON(channel->rx_pkt_n_frags);
 	}
 
+	efx_ptp_start_datapath(efx);
+
 	if (netif_device_present(efx->net_dev))
 		netif_tx_wake_all_queues(efx->net_dev);
 }

@@ -659,6 +661,8 @@ static void efx_stop_datapath(struct efx_nic *efx)
 	EFX_ASSERT_RESET_SERIALISED(efx);
 	BUG_ON(efx->port_enabled);
 
+	efx_ptp_stop_datapath(efx);
+
 	/* Stop RX refill */
 	efx_for_each_channel(channel, efx) {
 		efx_for_each_channel_rx_queue(rx_queue, channel)

@@ -2540,6 +2544,8 @@ static int efx_init_struct(struct efx_nic *efx,
 
 	efx->net_dev = net_dev;
 	efx->rx_prefix_size = efx->type->rx_prefix_size;
+	efx->rx_ip_align =
+		NET_IP_ALIGN ? (efx->rx_prefix_size + NET_IP_ALIGN) % 4 : 0;
 	efx->rx_packet_hash_offset =
 		efx->type->rx_hash_offset - efx->type->rx_prefix_size;
 	spin_lock_init(&efx->stats_lock);
@@ -50,6 +50,7 @@ struct efx_mcdi_async_param {
 static void efx_mcdi_timeout_async(unsigned long context);
 static int efx_mcdi_drv_attach(struct efx_nic *efx, bool driver_operating,
 			       bool *was_attached_out);
+static bool efx_mcdi_poll_once(struct efx_nic *efx);
 
 static inline struct efx_mcdi_iface *efx_mcdi(struct efx_nic *efx)
 {

@@ -237,6 +238,21 @@ static void efx_mcdi_read_response_header(struct efx_nic *efx)
 	}
 }
 
+static bool efx_mcdi_poll_once(struct efx_nic *efx)
+{
+	struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
+
+	rmb();
+	if (!efx->type->mcdi_poll_response(efx))
+		return false;
+
+	spin_lock_bh(&mcdi->iface_lock);
+	efx_mcdi_read_response_header(efx);
+	spin_unlock_bh(&mcdi->iface_lock);
+
+	return true;
+}
+
 static int efx_mcdi_poll(struct efx_nic *efx)
 {
 	struct efx_mcdi_iface *mcdi = efx_mcdi(efx);

@@ -272,18 +288,13 @@ static int efx_mcdi_poll(struct efx_nic *efx)
 
 		time = jiffies;
 
-		rmb();
-		if (efx->type->mcdi_poll_response(efx))
+		if (efx_mcdi_poll_once(efx))
 			break;
 
 		if (time_after(time, finish))
 			return -ETIMEDOUT;
 	}
 
-	spin_lock_bh(&mcdi->iface_lock);
-	efx_mcdi_read_response_header(efx);
-	spin_unlock_bh(&mcdi->iface_lock);
-
 	/* Return rc=0 like wait_event_timeout() */
 	return 0;
 }
@@ -619,6 +630,16 @@ int efx_mcdi_rpc_finish(struct efx_nic *efx, unsigned cmd, size_t inlen,
 		rc = efx_mcdi_await_completion(efx);
 
 		if (rc != 0) {
+			netif_err(efx, hw, efx->net_dev,
+				  "MC command 0x%x inlen %d mode %d timed out\n",
+				  cmd, (int)inlen, mcdi->mode);
+
+			if (mcdi->mode == MCDI_MODE_EVENTS && efx_mcdi_poll_once(efx)) {
+				netif_err(efx, hw, efx->net_dev,
+					  "MCDI request was completed without an event\n");
+				rc = 0;
+			}
+
 			/* Close the race with efx_mcdi_ev_cpl() executing just too late
 			 * and completing a request we've just cancelled, by ensuring
 			 * that the seqno check therein fails.

@@ -627,11 +648,9 @@ int efx_mcdi_rpc_finish(struct efx_nic *efx, unsigned cmd, size_t inlen,
 			++mcdi->seqno;
 			++mcdi->credits;
 			spin_unlock_bh(&mcdi->iface_lock);
 		}
 
-		netif_err(efx, hw, efx->net_dev,
-			  "MC command 0x%x inlen %d mode %d timed out\n",
-			  cmd, (int)inlen, mcdi->mode);
-	} else {
+	if (rc == 0) {
 		size_t hdr_len, data_len;
 
 		/* At the very least we need a memory barrier here to ensure
@@ -683,6 +683,8 @@ struct vfdi_status;
  * @n_channels: Number of channels in use
  * @n_rx_channels: Number of channels used for RX (= number of RX queues)
  * @n_tx_channels: Number of channels used for TX
+ * @rx_ip_align: RX DMA address offset to have IP header aligned in
+ *	in accordance with NET_IP_ALIGN
  * @rx_dma_len: Current maximum RX DMA length
  * @rx_buffer_order: Order (log2) of number of pages for each RX buffer
  * @rx_buffer_truesize: Amortised allocation size of an RX buffer,

@@ -816,6 +818,7 @@ struct efx_nic {
 	unsigned rss_spread;
 	unsigned tx_channel_offset;
 	unsigned n_tx_channels;
+	unsigned int rx_ip_align;
 	unsigned int rx_dma_len;
 	unsigned int rx_buffer_order;
 	unsigned int rx_buffer_truesize;
@@ -560,6 +560,8 @@ void efx_ptp_get_ts_info(struct efx_nic *efx, struct ethtool_ts_info *ts_info);
 bool efx_ptp_is_ptp_tx(struct efx_nic *efx, struct sk_buff *skb);
 int efx_ptp_tx(struct efx_nic *efx, struct sk_buff *skb);
 void efx_ptp_event(struct efx_nic *efx, efx_qword_t *ev);
+void efx_ptp_start_datapath(struct efx_nic *efx);
+void efx_ptp_stop_datapath(struct efx_nic *efx);
 
 extern const struct efx_nic_type falcon_a1_nic_type;
 extern const struct efx_nic_type falcon_b0_nic_type;
@@ -220,6 +220,7 @@ struct efx_ptp_timeset {
  * @evt_list: List of MC receive events awaiting packets
  * @evt_free_list: List of free events
  * @evt_lock: Lock for manipulating evt_list and evt_free_list
+ * @evt_overflow: Boolean indicating that event list has overflowed
  * @rx_evts: Instantiated events (on evt_list and evt_free_list)
  * @workwq: Work queue for processing pending PTP operations
  * @work: Work task

@@ -270,6 +271,7 @@ struct efx_ptp_data {
 	struct list_head evt_list;
 	struct list_head evt_free_list;
 	spinlock_t evt_lock;
+	bool evt_overflow;
 	struct efx_ptp_event_rx rx_evts[MAX_RECEIVE_EVENTS];
 	struct workqueue_struct *workwq;
 	struct work_struct work;

@@ -635,6 +637,11 @@ static void efx_ptp_drop_time_expired_events(struct efx_nic *efx)
 			}
 		}
 	}
+	/* If the event overflow flag is set and the event list is now empty
+	 * clear the flag to re-enable the overflow warning message.
+	 */
+	if (ptp->evt_overflow && list_empty(&ptp->evt_list))
+		ptp->evt_overflow = false;
 	spin_unlock_bh(&ptp->evt_lock);
 }

@@ -676,6 +683,11 @@ static enum ptp_packet_state efx_ptp_match_rx(struct efx_nic *efx,
 			break;
 		}
 	}
+	/* If the event overflow flag is set and the event list is now empty
+	 * clear the flag to re-enable the overflow warning message.
+	 */
+	if (ptp->evt_overflow && list_empty(&ptp->evt_list))
+		ptp->evt_overflow = false;
 	spin_unlock_bh(&ptp->evt_lock);
 
 	return rc;
@@ -705,8 +717,9 @@ static bool efx_ptp_process_events(struct efx_nic *efx, struct sk_buff_head *q)
 			__skb_queue_tail(q, skb);
 		} else if (time_after(jiffies, match->expiry)) {
 			match->state = PTP_PACKET_STATE_TIMED_OUT;
-			netif_warn(efx, rx_err, efx->net_dev,
-				   "PTP packet - no timestamp seen\n");
+			if (net_ratelimit())
+				netif_warn(efx, rx_err, efx->net_dev,
+					   "PTP packet - no timestamp seen\n");
 			__skb_queue_tail(q, skb);
 		} else {
 			/* Replace unprocessed entry and stop */

@@ -788,9 +801,14 @@ fail:
 static int efx_ptp_stop(struct efx_nic *efx)
 {
 	struct efx_ptp_data *ptp = efx->ptp_data;
-	int rc = efx_ptp_disable(efx);
 	struct list_head *cursor;
 	struct list_head *next;
+	int rc;
+
+	if (ptp == NULL)
+		return 0;
+
+	rc = efx_ptp_disable(efx);
 
 	if (ptp->rxfilter_installed) {
 		efx_filter_remove_id_safe(efx, EFX_FILTER_PRI_REQUIRED,

@@ -809,11 +827,19 @@ static int efx_ptp_stop(struct efx_nic *efx)
 	list_for_each_safe(cursor, next, &efx->ptp_data->evt_list) {
 		list_move(cursor, &efx->ptp_data->evt_free_list);
 	}
+	ptp->evt_overflow = false;
 	spin_unlock_bh(&efx->ptp_data->evt_lock);
 
 	return rc;
 }
 
+static int efx_ptp_restart(struct efx_nic *efx)
+{
+	if (efx->ptp_data && efx->ptp_data->enabled)
+		return efx_ptp_start(efx);
+	return 0;
+}
+
 static void efx_ptp_pps_worker(struct work_struct *work)
 {
 	struct efx_ptp_data *ptp =
@@ -901,6 +927,7 @@ static int efx_ptp_probe_channel(struct efx_channel *channel)
 	spin_lock_init(&ptp->evt_lock);
 	for (pos = 0; pos < MAX_RECEIVE_EVENTS; pos++)
 		list_add(&ptp->rx_evts[pos].link, &ptp->evt_free_list);
+	ptp->evt_overflow = false;
 
 	ptp->phc_clock_info.owner = THIS_MODULE;
 	snprintf(ptp->phc_clock_info.name,

@@ -989,7 +1016,11 @@ bool efx_ptp_is_ptp_tx(struct efx_nic *efx, struct sk_buff *skb)
 	       skb->len >= PTP_MIN_LENGTH &&
 	       skb->len <= MC_CMD_PTP_IN_TRANSMIT_PACKET_MAXNUM &&
 	       likely(skb->protocol == htons(ETH_P_IP)) &&
+	       skb_transport_header_was_set(skb) &&
+	       skb_network_header_len(skb) >= sizeof(struct iphdr) &&
 	       ip_hdr(skb)->protocol == IPPROTO_UDP &&
+	       skb_headlen(skb) >=
+	       skb_transport_offset(skb) + sizeof(struct udphdr) &&
 	       udp_hdr(skb)->dest == htons(PTP_EVENT_PORT);
 }

@@ -1106,7 +1137,7 @@ static int efx_ptp_change_mode(struct efx_nic *efx, bool enable_wanted,
 {
 	if ((enable_wanted != efx->ptp_data->enabled) ||
 	    (enable_wanted && (efx->ptp_data->mode != new_mode))) {
-		int rc;
+		int rc = 0;
 
 		if (enable_wanted) {
 			/* Change of mode requires disable */

@@ -1123,7 +1154,8 @@ static int efx_ptp_change_mode(struct efx_nic *efx, bool enable_wanted,
 			 * succeed.
 			 */
 			efx->ptp_data->mode = new_mode;
-			rc = efx_ptp_start(efx);
+			if (netif_running(efx->net_dev))
+				rc = efx_ptp_start(efx);
 			if (rc == 0) {
 				rc = efx_ptp_synchronize(efx,
 							 PTP_SYNC_ATTEMPTS * 2);
@@ -1295,8 +1327,13 @@ static void ptp_event_rx(struct efx_nic *efx, struct efx_ptp_data *ptp)
 		list_add_tail(&evt->link, &ptp->evt_list);
 
 		queue_work(ptp->workwq, &ptp->work);
-	} else {
-		netif_err(efx, rx_err, efx->net_dev, "No free PTP event");
+	} else if (!ptp->evt_overflow) {
+		/* Log a warning message and set the event overflow flag.
+		 * The message won't be logged again until the event queue
+		 * becomes empty.
+		 */
+		netif_err(efx, rx_err, efx->net_dev, "PTP event queue overflow\n");
+		ptp->evt_overflow = true;
 	}
 	spin_unlock_bh(&ptp->evt_lock);
 }

@@ -1389,7 +1426,7 @@ static int efx_phc_adjfreq(struct ptp_clock_info *ptp, s32 delta)
 	if (rc != 0)
 		return rc;
 
-	ptp_data->current_adjfreq = delta;
+	ptp_data->current_adjfreq = adjustment_ns;
 	return 0;
 }

@@ -1404,7 +1441,7 @@ static int efx_phc_adjtime(struct ptp_clock_info *ptp, s64 delta)
 
 	MCDI_SET_DWORD(inbuf, PTP_IN_OP, MC_CMD_PTP_OP_ADJUST);
 	MCDI_SET_DWORD(inbuf, PTP_IN_PERIPH_ID, 0);
-	MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, 0);
+	MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, ptp_data->current_adjfreq);
 	MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_SECONDS, (u32)delta_ts.tv_sec);
 	MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_NANOSECONDS, (u32)delta_ts.tv_nsec);
 	return efx_mcdi_rpc(efx, MC_CMD_PTP, inbuf, sizeof(inbuf),

@@ -1491,3 +1528,14 @@ void efx_ptp_probe(struct efx_nic *efx)
 		efx->extra_channel_type[EFX_EXTRA_CHANNEL_PTP] =
 			&efx_ptp_channel_type;
 }
+
+void efx_ptp_start_datapath(struct efx_nic *efx)
+{
+	if (efx_ptp_restart(efx))
+		netif_err(efx, drv, efx->net_dev, "Failed to restart PTP.\n");
+}
+
+void efx_ptp_stop_datapath(struct efx_nic *efx)
+{
+	efx_ptp_stop(efx);
+}
@@ -94,7 +94,7 @@ static inline void efx_sync_rx_buffer(struct efx_nic *efx,
 
 void efx_rx_config_page_split(struct efx_nic *efx)
 {
-	efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + NET_IP_ALIGN,
+	efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + efx->rx_ip_align,
 				      EFX_RX_BUF_ALIGNMENT);
 	efx->rx_bufs_per_page = efx->rx_buffer_order ? 1 :
 		((PAGE_SIZE - sizeof(struct efx_rx_page_state)) /

@@ -189,9 +189,9 @@ static int efx_init_rx_buffers(struct efx_rx_queue *rx_queue)
 	do {
 		index = rx_queue->added_count & rx_queue->ptr_mask;
 		rx_buf = efx_rx_buffer(rx_queue, index);
-		rx_buf->dma_addr = dma_addr + NET_IP_ALIGN;
+		rx_buf->dma_addr = dma_addr + efx->rx_ip_align;
 		rx_buf->page = page;
-		rx_buf->page_offset = page_offset + NET_IP_ALIGN;
+		rx_buf->page_offset = page_offset + efx->rx_ip_align;
 		rx_buf->len = efx->rx_dma_len;
 		rx_buf->flags = 0;
 		++rx_queue->added_count;
@@ -82,6 +82,7 @@ static const char version[] =
 #include <linux/mii.h>
 #include <linux/workqueue.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>

@@ -2184,6 +2185,15 @@ static void smc_release_datacs(struct platform_device *pdev, struct net_device *
 	}
 }
 
+#if IS_BUILTIN(CONFIG_OF)
+static const struct of_device_id smc91x_match[] = {
+	{ .compatible = "smsc,lan91c94", },
+	{ .compatible = "smsc,lan91c111", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, smc91x_match);
+#endif
+
 /*
  * smc_init(void)
  *   Input parameters:
@@ -2198,6 +2208,7 @@ static void smc_release_datacs(struct platform_device *pdev, struct net_device *
 static int smc_drv_probe(struct platform_device *pdev)
 {
 	struct smc91x_platdata *pd = dev_get_platdata(&pdev->dev);
+	const struct of_device_id *match = NULL;
 	struct smc_local *lp;
 	struct net_device *ndev;
 	struct resource *res, *ires;

@@ -2217,11 +2228,34 @@ static int smc_drv_probe(struct platform_device *pdev)
 	 */
 
 	lp = netdev_priv(ndev);
+	lp->cfg.flags = 0;
 
 	if (pd) {
 		memcpy(&lp->cfg, pd, sizeof(lp->cfg));
 		lp->io_shift = SMC91X_IO_SHIFT(lp->cfg.flags);
-	} else {
+	}
+
+#if IS_BUILTIN(CONFIG_OF)
+	match = of_match_device(of_match_ptr(smc91x_match), &pdev->dev);
+	if (match) {
+		struct device_node *np = pdev->dev.of_node;
+		u32 val;
+
+		/* Combination of IO widths supported, default to 16-bit */
+		if (!of_property_read_u32(np, "reg-io-width", &val)) {
+			if (val & 1)
+				lp->cfg.flags |= SMC91X_USE_8BIT;
+			if ((val == 0) || (val & 2))
+				lp->cfg.flags |= SMC91X_USE_16BIT;
+			if (val & 4)
+				lp->cfg.flags |= SMC91X_USE_32BIT;
+		} else {
+			lp->cfg.flags |= SMC91X_USE_16BIT;
+		}
+	}
+#endif
+
+	if (!pd && !match) {
 		lp->cfg.flags |= (SMC_CAN_USE_8BIT)  ? SMC91X_USE_8BIT  : 0;
 		lp->cfg.flags |= (SMC_CAN_USE_16BIT) ? SMC91X_USE_16BIT : 0;
 		lp->cfg.flags |= (SMC_CAN_USE_32BIT) ? SMC91X_USE_32BIT : 0;
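The smc91x hunk above decodes a "reg-io-width" devicetree property into bus-width flags: bit 0 selects 8-bit, bit 1 (or a value of 0, the default) selects 16-bit, bit 2 selects 32-bit accesses. A stand-alone sketch of that decoding (the flag values here are illustrative, not the driver's SMC91X_USE_* constants):

```c
#include <assert.h>
#include <stdint.h>

#define USE_8BIT  0x1
#define USE_16BIT 0x2
#define USE_32BIT 0x4

/* Mirrors the "reg-io-width" handling added to smc_drv_probe(). */
static unsigned int decode_reg_io_width(uint32_t val)
{
	unsigned int flags = 0;

	if (val & 1)
		flags |= USE_8BIT;
	if ((val == 0) || (val & 2))	/* 0 means "default to 16-bit" */
		flags |= USE_16BIT;
	if (val & 4)
		flags |= USE_32BIT;
	return flags;
}
```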
@@ -2370,15 +2404,6 @@ static int smc_drv_resume(struct device *dev)
 	return 0;
 }
 
-#ifdef CONFIG_OF
-static const struct of_device_id smc91x_match[] = {
-	{ .compatible = "smsc,lan91c94", },
-	{ .compatible = "smsc,lan91c111", },
-	{},
-};
-MODULE_DEVICE_TABLE(of, smc91x_match);
-#endif
-
 static struct dev_pm_ops smc_drv_pm_ops = {
 	.suspend	= smc_drv_suspend,
 	.resume		= smc_drv_resume,
@@ -2019,7 +2019,6 @@ bdx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	ndev->features = NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO
 	    | NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX |
 	    NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM
-	    /*| NETIF_F_FRAGLIST */
 	    ;
 	ndev->hw_features = NETIF_F_IP_CSUM | NETIF_F_SG |
 		NETIF_F_TSO | NETIF_F_HW_VLAN_CTAG_TX;
@@ -1151,6 +1151,12 @@ static int cpsw_ndo_open(struct net_device *ndev)
 		 * receive descs
 		 */
 		cpsw_info(priv, ifup, "submitted %d rx descriptors\n", i);
+
+		if (cpts_register(&priv->pdev->dev, priv->cpts,
+				  priv->data.cpts_clock_mult,
+				  priv->data.cpts_clock_shift))
+			dev_err(priv->dev, "error registering cpts device\n");
+
 	}
 
 	/* Enable Interrupt pacing if configured */

@@ -1197,6 +1203,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
 	netif_carrier_off(priv->ndev);
 
 	if (cpsw_common_res_usage_state(priv) <= 1) {
+		cpts_unregister(priv->cpts);
 		cpsw_intr_disable(priv);
 		cpdma_ctlr_int_ctrl(priv->dma, false);
 		cpdma_ctlr_stop(priv->dma);

@@ -1816,6 +1823,8 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
 		}
 
 		i++;
+		if (i == data->slaves)
+			break;
 	}
 
 	return 0;

@@ -1983,9 +1992,15 @@ static int cpsw_probe(struct platform_device *pdev)
 		goto clean_runtime_disable_ret;
 	}
 	priv->regs = ss_regs;
-	priv->version = __raw_readl(&priv->regs->id_ver);
 	priv->host_port = HOST_PORT_NUM;
 
+	/* Need to enable clocks with runtime PM api to access module
+	 * registers
+	 */
+	pm_runtime_get_sync(&pdev->dev);
+	priv->version = readl(&priv->regs->id_ver);
+	pm_runtime_put_sync(&pdev->dev);
+
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	priv->wr_regs = devm_ioremap_resource(&pdev->dev, res);
 	if (IS_ERR(priv->wr_regs)) {

@@ -2155,8 +2170,6 @@ static int cpsw_remove(struct platform_device *pdev)
 		unregister_netdev(cpsw_get_slave_ndev(priv, 1));
 	unregister_netdev(ndev);
 
-	cpts_unregister(priv->cpts);
-
 	cpsw_ale_destroy(priv->ale);
 	cpdma_chan_destroy(priv->txch);
 	cpdma_chan_destroy(priv->rxch);
@@ -61,6 +61,7 @@
 #include <linux/davinci_emac.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
+#include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/of_net.h>

@@ -1752,10 +1753,14 @@ static const struct net_device_ops emac_netdev_ops = {
 #endif
 };
 
+static const struct of_device_id davinci_emac_of_match[];
+
 static struct emac_platform_data *
 davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv)
 {
 	struct device_node *np;
+	const struct of_device_id *match;
+	const struct emac_platform_data *auxdata;
 	struct emac_platform_data *pdata = NULL;
 	const u8 *mac_addr;

@@ -1793,7 +1798,20 @@ davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv)
 
 	priv->phy_node = of_parse_phandle(np, "phy-handle", 0);
 	if (!priv->phy_node)
-		pdata->phy_id = "";
+		pdata->phy_id = NULL;
+
+	auxdata = pdev->dev.platform_data;
+	if (auxdata) {
+		pdata->interrupt_enable = auxdata->interrupt_enable;
+		pdata->interrupt_disable = auxdata->interrupt_disable;
+	}
+
+	match = of_match_device(davinci_emac_of_match, &pdev->dev);
+	if (match && match->data) {
+		auxdata = match->data;
+		pdata->version = auxdata->version;
+		pdata->hw_ram_addr = auxdata->hw_ram_addr;
+	}
 
 	pdev->dev.platform_data = pdata;

@@ -2020,8 +2038,14 @@ static const struct dev_pm_ops davinci_emac_pm_ops = {
 };
 
 #if IS_ENABLED(CONFIG_OF)
+static const struct emac_platform_data am3517_emac_data = {
+	.version		= EMAC_VERSION_2,
+	.hw_ram_addr		= 0x01e20000,
+};
+
 static const struct of_device_id davinci_emac_of_match[] = {
 	{.compatible = "ti,davinci-dm6467-emac", },
+	{.compatible = "ti,am3517-emac", .data = &am3517_emac_data, },
 	{},
 };
 MODULE_DEVICE_TABLE(of, davinci_emac_of_match);
@@ -1017,7 +1017,7 @@ static int temac_of_probe(struct platform_device *op)
 	platform_set_drvdata(op, ndev);
 	SET_NETDEV_DEV(ndev, &op->dev);
 	ndev->flags &= ~IFF_MULTICAST;	/* clear multicast */
-	ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST;
+	ndev->features = NETIF_F_SG;
 	ndev->netdev_ops = &temac_netdev_ops;
 	ndev->ethtool_ops = &temac_ethtool_ops;
 #if 0

@@ -1486,7 +1486,7 @@ static int axienet_of_probe(struct platform_device *op)
 
 	SET_NETDEV_DEV(ndev, &op->dev);
 	ndev->flags &= ~IFF_MULTICAST;	/* clear multicast */
-	ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST;
+	ndev->features = NETIF_F_SG;
 	ndev->netdev_ops = &axienet_netdev_ops;
 	ndev->ethtool_ops = &axienet_ethtool_ops;
@@ -163,26 +163,9 @@ static void xemaclite_enable_interrupts(struct net_local *drvdata)
 	__raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
 		     drvdata->base_addr + XEL_TSR_OFFSET);
 
-	/* Enable the Tx interrupts for the second Buffer if
-	 * configured in HW */
-	if (drvdata->tx_ping_pong != 0) {
-		reg_data = __raw_readl(drvdata->base_addr +
-				   XEL_BUFFER_OFFSET + XEL_TSR_OFFSET);
-		__raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
-			     drvdata->base_addr + XEL_BUFFER_OFFSET +
-			     XEL_TSR_OFFSET);
-	}
-
 	/* Enable the Rx interrupts for the first buffer */
 	__raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr + XEL_RSR_OFFSET);
 
-	/* Enable the Rx interrupts for the second Buffer if
-	 * configured in HW */
-	if (drvdata->rx_ping_pong != 0) {
-		__raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr +
-			     XEL_BUFFER_OFFSET + XEL_RSR_OFFSET);
-	}
-
 	/* Enable the Global Interrupt Enable */
 	__raw_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET);
 }

@@ -206,31 +189,10 @@ static void xemaclite_disable_interrupts(struct net_local *drvdata)
 	__raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
 		     drvdata->base_addr + XEL_TSR_OFFSET);
 
-	/* Disable the Tx interrupts for the second Buffer
-	 * if configured in HW */
-	if (drvdata->tx_ping_pong != 0) {
-		reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET +
-				   XEL_TSR_OFFSET);
-		__raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
-			     drvdata->base_addr + XEL_BUFFER_OFFSET +
-			     XEL_TSR_OFFSET);
-	}
-
 	/* Disable the Rx interrupts for the first buffer */
 	reg_data = __raw_readl(drvdata->base_addr + XEL_RSR_OFFSET);
 	__raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
 		     drvdata->base_addr + XEL_RSR_OFFSET);
-
-	/* Disable the Rx interrupts for the second buffer
-	 * if configured in HW */
-	if (drvdata->rx_ping_pong != 0) {
-
-		reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET +
-				   XEL_RSR_OFFSET);
-		__raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
-			     drvdata->base_addr + XEL_BUFFER_OFFSET +
-			     XEL_RSR_OFFSET);
-	}
 }
 
 /**
@@ -258,6 +220,13 @@ static void xemaclite_aligned_write(void *src_ptr, u32 *dest_ptr,
 		*to_u16_ptr++ = *from_u16_ptr++;
 		*to_u16_ptr++ = *from_u16_ptr++;
 
+		/* This barrier resolves occasional issues seen around
+		 * cases where the data is not properly flushed out
+		 * from the processor store buffers to the destination
+		 * memory locations.
+		 */
+		wmb();
+
 		/* Output a word */
 		*to_u32_ptr++ = align_buffer;
 	}

@@ -273,6 +242,12 @@ static void xemaclite_aligned_write(void *src_ptr, u32 *dest_ptr,
 		for (; length > 0; length--)
 			*to_u8_ptr++ = *from_u8_ptr++;
 
+		/* This barrier resolves occasional issues seen around
+		 * cases where the data is not properly flushed out
+		 * from the processor store buffers to the destination
+		 * memory locations.
+		 */
+		wmb();
 		*to_u32_ptr = align_buffer;
 	}
 }
@@ -770,7 +770,7 @@ static ssize_t macvtap_put_user(struct macvtap_queue *q,
 	int ret;
 	int vnet_hdr_len = 0;
 	int vlan_offset = 0;
-	int copied;
+	int copied, total;
 
 	if (q->flags & IFF_VNET_HDR) {
 		struct virtio_net_hdr vnet_hdr;

@@ -785,7 +785,8 @@ static ssize_t macvtap_put_user(struct macvtap_queue *q,
 		if (memcpy_toiovecend(iv, (void *)&vnet_hdr, 0, sizeof(vnet_hdr)))
 			return -EFAULT;
 	}
-	copied = vnet_hdr_len;
+	total = copied = vnet_hdr_len;
+	total += skb->len;
 
 	if (!vlan_tx_tag_present(skb))
 		len = min_t(int, skb->len, len);

@@ -800,6 +801,7 @@ static ssize_t macvtap_put_user(struct macvtap_queue *q,
 
 		vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
 		len = min_t(int, skb->len + VLAN_HLEN, len);
+		total += VLAN_HLEN;
 
 		copy = min_t(int, vlan_offset, len);
 		ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy);

@@ -817,10 +819,9 @@ static ssize_t macvtap_put_user(struct macvtap_queue *q,
 	}
 
 	ret = skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len);
-	copied += len;
 
 done:
-	return ret ? ret : copied;
+	return ret ? ret : total;
 }
 
 static ssize_t macvtap_do_read(struct macvtap_queue *q, struct kiocb *iocb,

@@ -875,7 +876,9 @@ static ssize_t macvtap_aio_read(struct kiocb *iocb, const struct iovec *iv,
 	}
 
 	ret = macvtap_do_read(q, iocb, iv, len, file->f_flags & O_NONBLOCK);
-	ret = min_t(ssize_t, ret, len); /* XXX copied from tun.c. Why? */
+	ret = min_t(ssize_t, ret, len);
+	if (ret > 0)
+		iocb->ki_pos = ret;
 out:
 	return ret;
 }
@@ -335,6 +335,21 @@ static struct phy_driver ksphy_driver[] = {
 	.suspend	= genphy_suspend,
 	.resume		= genphy_resume,
 	.driver		= { .owner = THIS_MODULE,},
+}, {
+	.phy_id		= PHY_ID_KSZ8041RNLI,
+	.phy_id_mask	= 0x00fffff0,
+	.name		= "Micrel KSZ8041RNLI",
+	.features	= PHY_BASIC_FEATURES |
+			  SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+	.flags		= PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
+	.config_init	= kszphy_config_init,
+	.config_aneg	= genphy_config_aneg,
+	.read_status	= genphy_read_status,
+	.ack_interrupt	= kszphy_ack_interrupt,
+	.config_intr	= kszphy_config_intr,
+	.suspend	= genphy_suspend,
+	.resume		= genphy_resume,
+	.driver		= { .owner = THIS_MODULE,},
 }, {
 	.phy_id		= PHY_ID_KSZ8051,
 	.phy_id_mask	= 0x00fffff0,
@@ -1184,7 +1184,7 @@ static ssize_t tun_put_user(struct tun_struct *tun,
 {
 	struct tun_pi pi = { 0, skb->protocol };
 	ssize_t total = 0;
-	int vlan_offset = 0;
+	int vlan_offset = 0, copied;
 
 	if (!(tun->flags & TUN_NO_PI)) {
 		if ((len -= sizeof(pi)) < 0)

@@ -1248,6 +1248,8 @@ static ssize_t tun_put_user(struct tun_struct *tun,
 		total += tun->vnet_hdr_sz;
 	}
 
+	copied = total;
+	total += skb->len;
 	if (!vlan_tx_tag_present(skb)) {
 		len = min_t(int, skb->len, len);
 	} else {

@@ -1262,24 +1264,24 @@ static ssize_t tun_put_user(struct tun_struct *tun,
 
 		vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
 		len = min_t(int, skb->len + VLAN_HLEN, len);
+		total += VLAN_HLEN;
 
 		copy = min_t(int, vlan_offset, len);
-		ret = skb_copy_datagram_const_iovec(skb, 0, iv, total, copy);
+		ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy);
 		len -= copy;
-		total += copy;
+		copied += copy;
 		if (ret || !len)
 			goto done;
 
 		copy = min_t(int, sizeof(veth), len);
-		ret = memcpy_toiovecend(iv, (void *)&veth, total, copy);
+		ret = memcpy_toiovecend(iv, (void *)&veth, copied, copy);
 		len -= copy;
-		total += copy;
+		copied += copy;
 		if (ret || !len)
 			goto done;
 	}
 
-	skb_copy_datagram_const_iovec(skb, vlan_offset, iv, total, len);
-	total += len;
+	skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len);
 
 done:
 	tun->dev->stats.tx_packets++;

@@ -1356,6 +1358,8 @@ static ssize_t tun_chr_aio_read(struct kiocb *iocb, const struct iovec *iv,
 	ret = tun_do_read(tun, tfile, iocb, iv, len,
 			  file->f_flags & O_NONBLOCK);
 	ret = min_t(ssize_t, ret, len);
+	if (ret > 0)
+		iocb->ki_pos = ret;
 out:
 	tun_put(tun);
 	return ret;
@@ -426,10 +426,10 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 	if (unlikely(len < sizeof(struct virtio_net_hdr) + ETH_HLEN)) {
 		pr_debug("%s: short packet %i\n", dev->name, len);
 		dev->stats.rx_length_errors++;
-		if (vi->big_packets)
-			give_pages(rq, buf);
-		else if (vi->mergeable_rx_bufs)
+		if (vi->mergeable_rx_bufs)
 			put_page(virt_to_head_page(buf));
+		else if (vi->big_packets)
+			give_pages(rq, buf);
 		else
 			dev_kfree_skb(buf);
 		return;
@@ -1367,6 +1367,11 @@ static void virtnet_config_changed(struct virtio_device *vdev)
 
 static void virtnet_free_queues(struct virtnet_info *vi)
 {
+	int i;
+
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		netif_napi_del(&vi->rq[i].napi);
+
 	kfree(vi->rq);
 	kfree(vi->sq);
 }
@@ -1396,10 +1401,10 @@ static void free_unused_bufs(struct virtnet_info *vi)
 		struct virtqueue *vq = vi->rq[i].vq;
 
 		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
-			if (vi->big_packets)
-				give_pages(&vi->rq[i], buf);
-			else if (vi->mergeable_rx_bufs)
+			if (vi->mergeable_rx_bufs)
 				put_page(virt_to_head_page(buf));
+			else if (vi->big_packets)
+				give_pages(&vi->rq[i], buf);
 			else
 				dev_kfree_skb(buf);
 			--vi->rq[i].num;

@@ -1668,7 +1668,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
 			netdev_dbg(dev, "circular route to %pI4\n",
 				   &dst->sin.sin_addr.s_addr);
 			dev->stats.collisions++;
-			goto tx_error;
+			goto rt_tx_error;
 		}
 
 		/* Bypass encapsulation if the destination is local */

@@ -3984,18 +3984,20 @@ static void ar9003_hw_quick_drop_apply(struct ath_hw *ah, u16 freq)
 	int quick_drop;
 	s32 t[3], f[3] = {5180, 5500, 5785};
 
-	if (!(pBase->miscConfiguration & BIT(1)))
+	if (!(pBase->miscConfiguration & BIT(4)))
 		return;
 
-	if (freq < 4000)
-		quick_drop = eep->modalHeader2G.quick_drop;
-	else {
-		t[0] = eep->base_ext1.quick_drop_low;
-		t[1] = eep->modalHeader5G.quick_drop;
-		t[2] = eep->base_ext1.quick_drop_high;
-		quick_drop = ar9003_hw_power_interpolate(freq, f, t, 3);
+	if (AR_SREV_9300(ah) || AR_SREV_9580(ah) || AR_SREV_9340(ah)) {
+		if (freq < 4000) {
+			quick_drop = eep->modalHeader2G.quick_drop;
+		} else {
+			t[0] = eep->base_ext1.quick_drop_low;
+			t[1] = eep->modalHeader5G.quick_drop;
+			t[2] = eep->base_ext1.quick_drop_high;
+			quick_drop = ar9003_hw_power_interpolate(freq, f, t, 3);
+		}
+		REG_RMW_FIELD(ah, AR_PHY_AGC, AR_PHY_AGC_QUICK_DROP, quick_drop);
 	}
-	REG_RMW_FIELD(ah, AR_PHY_AGC, AR_PHY_AGC_QUICK_DROP, quick_drop);
 }
 
 static void ar9003_hw_txend_to_xpa_off_apply(struct ath_hw *ah, bool is2ghz)

@@ -4035,7 +4037,7 @@ static void ar9003_hw_xlna_bias_strength_apply(struct ath_hw *ah, bool is2ghz)
 	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
 	u8 bias;
 
-	if (!(eep->baseEepHeader.featureEnable & 0x40))
+	if (!(eep->baseEepHeader.miscConfiguration & 0x40))
 		return;
 
 	if (!AR_SREV_9300(ah))

@@ -146,10 +146,9 @@ static void ath9k_hw_set_clockrate(struct ath_hw *ah)
 	else
 		clockrate = ATH9K_CLOCK_RATE_5GHZ_OFDM;
 
-	if (IS_CHAN_HT40(chan))
-		clockrate *= 2;
-
-	if (ah->curchan) {
+	if (chan) {
+		if (IS_CHAN_HT40(chan))
+			clockrate *= 2;
 		if (IS_CHAN_HALF_RATE(chan))
 			clockrate /= 2;
 		if (IS_CHAN_QUARTER_RATE(chan))

@@ -1276,6 +1276,10 @@ static void ath_tx_fill_desc(struct ath_softc *sc, struct ath_buf *bf,
 		if (!rts_thresh || (len > rts_thresh))
 			rts = true;
 	}
+
+	if (!aggr)
+		len = fi->framelen;
+
 	ath_buf_set_rate(sc, bf, &info, len, rts);
 }
 

@@ -2041,13 +2041,20 @@ static void wcn36xx_smd_rsp_process(struct wcn36xx *wcn, void *buf, size_t len)
 	case WCN36XX_HAL_DELETE_STA_CONTEXT_IND:
 		mutex_lock(&wcn->hal_ind_mutex);
 		msg_ind = kmalloc(sizeof(*msg_ind), GFP_KERNEL);
-		msg_ind->msg_len = len;
-		msg_ind->msg = kmalloc(len, GFP_KERNEL);
-		memcpy(msg_ind->msg, buf, len);
-		list_add_tail(&msg_ind->list, &wcn->hal_ind_queue);
-		queue_work(wcn->hal_ind_wq, &wcn->hal_ind_work);
-		wcn36xx_dbg(WCN36XX_DBG_HAL, "indication arrived\n");
+		if (msg_ind) {
+			msg_ind->msg_len = len;
+			msg_ind->msg = kmalloc(len, GFP_KERNEL);
+			memcpy(msg_ind->msg, buf, len);
+			list_add_tail(&msg_ind->list, &wcn->hal_ind_queue);
+			queue_work(wcn->hal_ind_wq, &wcn->hal_ind_work);
+			wcn36xx_dbg(WCN36XX_DBG_HAL, "indication arrived\n");
+		}
 		mutex_unlock(&wcn->hal_ind_mutex);
+		if (msg_ind)
+			break;
+		/* FIXME: Do something smarter then just printing an error. */
+		wcn36xx_err("Run out of memory while handling SMD_EVENT (%d)\n",
+			    msg_header->msg_type);
 		break;
 	default:
 		wcn36xx_err("SMD_EVENT (%d) not supported\n",

@@ -5,6 +5,8 @@ config BRCMSMAC
 	tristate "Broadcom IEEE802.11n PCIe SoftMAC WLAN driver"
 	depends on MAC80211
 	depends on BCMA
+	select NEW_LEDS if BCMA_DRIVER_GPIO
+	select LEDS_CLASS if BCMA_DRIVER_GPIO
 	select BRCMUTIL
 	select FW_LOADER
 	select CRC_CCITT

@@ -109,6 +109,8 @@ static inline int brcmf_sdioh_f0_write_byte(struct brcmf_sdio_dev *sdiodev,
 				brcmf_err("Disable F2 failed:%d\n",
 					  err_ret);
 			}
+		} else {
+			err_ret = -ENOENT;
 		}
 	} else if ((regaddr == SDIO_CCCR_ABORT) ||
 		   (regaddr == SDIO_CCCR_IENx)) {

@@ -67,8 +67,8 @@
 #include "iwl-agn-hw.h"
 
 /* Highest firmware API version supported */
-#define IWL7260_UCODE_API_MAX	7
-#define IWL3160_UCODE_API_MAX	7
+#define IWL7260_UCODE_API_MAX	8
+#define IWL3160_UCODE_API_MAX	8
 
 /* Oldest version we won't warn about */
 #define IWL7260_UCODE_API_OK	7
@@ -130,6 +130,7 @@ const struct iwl_cfg iwl7260_2ac_cfg = {
 	.ht_params = &iwl7000_ht_params,
 	.nvm_ver = IWL7260_NVM_VERSION,
 	.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+	.host_interrupt_operation_mode = true,
 };
 
 const struct iwl_cfg iwl7260_2ac_cfg_high_temp = {
@@ -140,6 +141,7 @@ const struct iwl_cfg iwl7260_2ac_cfg_high_temp = {
 	.nvm_ver = IWL7260_NVM_VERSION,
 	.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
 	.high_temp = true,
+	.host_interrupt_operation_mode = true,
 };
 
 const struct iwl_cfg iwl7260_2n_cfg = {
@@ -149,6 +151,7 @@ const struct iwl_cfg iwl7260_2n_cfg = {
 	.ht_params = &iwl7000_ht_params,
 	.nvm_ver = IWL7260_NVM_VERSION,
 	.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+	.host_interrupt_operation_mode = true,
 };
 
 const struct iwl_cfg iwl7260_n_cfg = {
@@ -158,6 +161,7 @@ const struct iwl_cfg iwl7260_n_cfg = {
 	.ht_params = &iwl7000_ht_params,
 	.nvm_ver = IWL7260_NVM_VERSION,
 	.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+	.host_interrupt_operation_mode = true,
 };
 
 const struct iwl_cfg iwl3160_2ac_cfg = {
@@ -167,6 +171,7 @@ const struct iwl_cfg iwl3160_2ac_cfg = {
 	.ht_params = &iwl7000_ht_params,
 	.nvm_ver = IWL3160_NVM_VERSION,
 	.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+	.host_interrupt_operation_mode = true,
 };
 
 const struct iwl_cfg iwl3160_2n_cfg = {
@@ -176,6 +181,7 @@ const struct iwl_cfg iwl3160_2n_cfg = {
 	.ht_params = &iwl7000_ht_params,
 	.nvm_ver = IWL3160_NVM_VERSION,
 	.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+	.host_interrupt_operation_mode = true,
 };
 
 const struct iwl_cfg iwl3160_n_cfg = {
@@ -185,6 +191,7 @@ const struct iwl_cfg iwl3160_n_cfg = {
 	.ht_params = &iwl7000_ht_params,
 	.nvm_ver = IWL3160_NVM_VERSION,
 	.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+	.host_interrupt_operation_mode = true,
 };
 
 const struct iwl_cfg iwl7265_2ac_cfg = {
@@ -196,5 +203,23 @@ const struct iwl_cfg iwl7265_2ac_cfg = {
 	.nvm_calib_ver = IWL7265_TX_POWER_VERSION,
 };
 
+const struct iwl_cfg iwl7265_2n_cfg = {
+	.name = "Intel(R) Dual Band Wireless N 7265",
+	.fw_name_pre = IWL7265_FW_PRE,
+	IWL_DEVICE_7000,
+	.ht_params = &iwl7000_ht_params,
+	.nvm_ver = IWL7265_NVM_VERSION,
+	.nvm_calib_ver = IWL7265_TX_POWER_VERSION,
+};
+
+const struct iwl_cfg iwl7265_n_cfg = {
+	.name = "Intel(R) Wireless N 7265",
+	.fw_name_pre = IWL7265_FW_PRE,
+	IWL_DEVICE_7000,
+	.ht_params = &iwl7000_ht_params,
+	.nvm_ver = IWL7265_NVM_VERSION,
+	.nvm_calib_ver = IWL7265_TX_POWER_VERSION,
+};
+
 MODULE_FIRMWARE(IWL7260_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
 MODULE_FIRMWARE(IWL3160_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));

@@ -207,6 +207,8 @@ struct iwl_eeprom_params {
  * @rx_with_siso_diversity: 1x1 device with rx antenna diversity
  * @internal_wimax_coex: internal wifi/wimax combo device
  * @high_temp: Is this NIC is designated to be in high temperature.
+ * @host_interrupt_operation_mode: device needs host interrupt operation
+ *	mode set
  *
  * We enable the driver to be backward compatible wrt. hardware features.
  * API differences in uCode shouldn't be handled here but through TLVs
@@ -235,6 +237,7 @@ struct iwl_cfg {
 	enum iwl_led_mode led_mode;
 	const bool rx_with_siso_diversity;
 	const bool internal_wimax_coex;
+	const bool host_interrupt_operation_mode;
 	bool high_temp;
 };
 
@@ -294,6 +297,8 @@ extern const struct iwl_cfg iwl3160_2ac_cfg;
 extern const struct iwl_cfg iwl3160_2n_cfg;
 extern const struct iwl_cfg iwl3160_n_cfg;
 extern const struct iwl_cfg iwl7265_2ac_cfg;
+extern const struct iwl_cfg iwl7265_2n_cfg;
+extern const struct iwl_cfg iwl7265_n_cfg;
 #endif /* CONFIG_IWLMVM */
 
 #endif /* __IWL_CONFIG_H__ */

@@ -495,14 +495,11 @@ enum secure_load_status_reg {
  * the CSR_INT_COALESCING is an 8 bit register in 32-usec unit
  *
  * default interrupt coalescing timer is 64 x 32 = 2048 usecs
- * default interrupt coalescing calibration timer is 16 x 32 = 512 usecs
  */
 #define IWL_HOST_INT_TIMEOUT_MAX	(0xFF)
 #define IWL_HOST_INT_TIMEOUT_DEF	(0x40)
 #define IWL_HOST_INT_TIMEOUT_MIN	(0x0)
-#define IWL_HOST_INT_CALIB_TIMEOUT_MAX	(0xFF)
-#define IWL_HOST_INT_CALIB_TIMEOUT_DEF	(0x10)
-#define IWL_HOST_INT_CALIB_TIMEOUT_MIN	(0x0)
+#define IWL_HOST_INT_OPER_MODE		BIT(31)
 
 /*****************************************************************************
  *                        7000/3000 series SHR DTS addresses                 *

@@ -391,7 +391,6 @@ int iwl_send_bt_init_conf(struct iwl_mvm *mvm)
 					  BT_VALID_LUT |
 					  BT_VALID_WIFI_RX_SW_PRIO_BOOST |
-					  BT_VALID_WIFI_TX_SW_PRIO_BOOST |
 					  BT_VALID_MULTI_PRIO_LUT |
 					  BT_VALID_CORUN_LUT_20 |
 					  BT_VALID_CORUN_LUT_40 |
 					  BT_VALID_ANT_ISOLATION |
@@ -842,6 +841,11 @@ static void iwl_mvm_bt_rssi_iterator(void *_data, u8 *mac,
 
 	sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id],
 					lockdep_is_held(&mvm->mutex));
+
+	/* This can happen if the station has been removed right now */
+	if (IS_ERR_OR_NULL(sta))
+		return;
+
 	mvmsta = (void *)sta->drv_priv;
 
 	data->num_bss_ifaces++;

@@ -895,7 +895,7 @@ static int iwl_mvm_get_last_nonqos_seq(struct iwl_mvm *mvm,
 		/* new API returns next, not last-used seqno */
 		if (mvm->fw->ucode_capa.flags &
 				IWL_UCODE_TLV_FLAGS_D3_CONTINUITY_API)
-			err -= 0x10;
+			err = (u16) (err - 0x10);
 	}
 
 	iwl_free_resp(&cmd);
@@ -1549,7 +1549,7 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
 	if (gtkdata.unhandled_cipher)
 		return false;
 	if (!gtkdata.num_keys)
-		return true;
+		goto out;
 	if (!gtkdata.last_gtk)
 		return false;
 
@@ -1600,6 +1600,7 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
 				      (void *)&replay_ctr, GFP_KERNEL);
 	}
 
+out:
 	mvmvif->seqno_valid = true;
 	/* +0x10 because the set API expects next-to-use, not last-used */
 	mvmvif->seqno = le16_to_cpu(status->non_qos_seq_ctr) + 0x10;

@@ -119,6 +119,10 @@ static ssize_t iwl_dbgfs_sta_drain_write(struct file *file,
 
 	if (sscanf(buf, "%d %d", &sta_id, &drain) != 2)
 		return -EINVAL;
+	if (sta_id < 0 || sta_id >= IWL_MVM_STATION_COUNT)
+		return -EINVAL;
+	if (drain < 0 || drain > 1)
+		return -EINVAL;
 
 	mutex_lock(&mvm->mutex);
 

@@ -176,8 +176,11 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm,
 	 * P2P Device discoveribility, while there are other higher priority
 	 * events in the system).
 	 */
-	if (WARN_ONCE(!le32_to_cpu(notif->status),
-		      "Failed to schedule time event\n")) {
+	if (!le32_to_cpu(notif->status)) {
+		bool start = le32_to_cpu(notif->action) &
+				TE_V2_NOTIF_HOST_EVENT_START;
+		IWL_WARN(mvm, "Time Event %s notification failure\n",
+			 start ? "start" : "end");
 		if (iwl_mvm_te_check_disconnect(mvm, te_data->vif, NULL)) {
 			iwl_mvm_te_clear_data(mvm, te_data);
 			return;

@@ -353,6 +353,27 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
 
 /* 7265 Series */
 	{IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5012, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x500A, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5200, iwl7265_2n_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5002, iwl7265_n_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5202, iwl7265_n_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x9010, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5020, iwl7265_2n_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x502A, iwl7265_2n_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5420, iwl7265_2n_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5090, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5290, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5490, iwl7265_2ac_cfg)},
 #endif /* CONFIG_IWLMVM */
 
 	{0}

@@ -477,4 +477,12 @@ static inline bool iwl_is_rfkill_set(struct iwl_trans *trans)
 		CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW);
 }
 
+static inline void iwl_nic_error(struct iwl_trans *trans)
+{
+	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+
+	set_bit(STATUS_FW_ERROR, &trans_pcie->status);
+	iwl_op_mode_nic_error(trans->op_mode);
+}
+
 #endif /* __iwl_trans_int_pcie_h__ */

@@ -489,6 +489,10 @@ static void iwl_pcie_rx_hw_init(struct iwl_trans *trans, struct iwl_rxq *rxq)
 
 	/* Set interrupt coalescing timer to default (2048 usecs) */
 	iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_TIMEOUT_DEF);
+
+	/* W/A for interrupt coalescing bug in 7260 and 3160 */
+	if (trans->cfg->host_interrupt_operation_mode)
+		iwl_set_bit(trans, CSR_INT_COALESCING, IWL_HOST_INT_OPER_MODE);
 }
 
 static void iwl_pcie_rx_init_rxb_lists(struct iwl_rxq *rxq)

@@ -796,12 +800,13 @@ static void iwl_pcie_irq_handle_error(struct iwl_trans *trans)
 	iwl_pcie_dump_csr(trans);
 	iwl_dump_fh(trans, NULL);
 
 	/* set the ERROR bit before we wake up the caller */
 	set_bit(STATUS_FW_ERROR, &trans_pcie->status);
 	clear_bit(STATUS_HCMD_ACTIVE, &trans_pcie->status);
 	wake_up(&trans_pcie->wait_command_queue);
 
 	local_bh_disable();
-	iwl_op_mode_nic_error(trans->op_mode);
+	iwl_nic_error(trans);
 	local_bh_enable();
 }
 

@@ -279,9 +279,6 @@ static int iwl_pcie_nic_init(struct iwl_trans *trans)
 	spin_lock_irqsave(&trans_pcie->irq_lock, flags);
 	iwl_pcie_apm_init(trans);
 
-	/* Set interrupt coalescing calibration timer to default (512 usecs) */
-	iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_CALIB_TIMEOUT_DEF);
-
 	spin_unlock_irqrestore(&trans_pcie->irq_lock, flags);
 
 	iwl_pcie_set_pwr(trans, false);

@@ -207,7 +207,7 @@ static void iwl_pcie_txq_stuck_timer(unsigned long data)
 		IWL_ERR(trans, "scratch %d = 0x%08x\n", i,
 			le32_to_cpu(txq->scratchbufs[i].scratch));
 
-	iwl_op_mode_nic_error(trans->op_mode);
+	iwl_nic_error(trans);
 }
 
 /*
@@ -1023,7 +1023,7 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
 		if (nfreed++ > 0) {
 			IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n",
 				idx, q->write_ptr, q->read_ptr);
-			iwl_op_mode_nic_error(trans->op_mode);
+			iwl_nic_error(trans);
 		}
 	}
 
@@ -1562,7 +1562,7 @@ static int iwl_pcie_send_hcmd_sync(struct iwl_trans *trans,
 			       get_cmd_string(trans_pcie, cmd->id));
 		ret = -ETIMEDOUT;
 
-		iwl_op_mode_nic_error(trans->op_mode);
+		iwl_nic_error(trans);
 
 		goto cancel;
 	}

@@ -383,6 +383,14 @@ struct hwsim_radiotap_hdr {
 	__le16 rt_chbitmask;
 } __packed;
 
+struct hwsim_radiotap_ack_hdr {
+	struct ieee80211_radiotap_header hdr;
+	u8 rt_flags;
+	u8 pad;
+	__le16 rt_channel;
+	__le16 rt_chbitmask;
+} __packed;
+
 /* MAC80211_HWSIM netlinf family */
 static struct genl_family hwsim_genl_family = {
 	.id = GENL_ID_GENERATE,
@@ -500,7 +508,7 @@ static void mac80211_hwsim_monitor_ack(struct ieee80211_channel *chan,
 				       const u8 *addr)
 {
 	struct sk_buff *skb;
-	struct hwsim_radiotap_hdr *hdr;
+	struct hwsim_radiotap_ack_hdr *hdr;
 	u16 flags;
 	struct ieee80211_hdr *hdr11;
 
@@ -511,14 +519,14 @@ static void mac80211_hwsim_monitor_ack(struct ieee80211_channel *chan,
 	if (skb == NULL)
 		return;
 
-	hdr = (struct hwsim_radiotap_hdr *) skb_put(skb, sizeof(*hdr));
+	hdr = (struct hwsim_radiotap_ack_hdr *) skb_put(skb, sizeof(*hdr));
 	hdr->hdr.it_version = PKTHDR_RADIOTAP_VERSION;
 	hdr->hdr.it_pad = 0;
 	hdr->hdr.it_len = cpu_to_le16(sizeof(*hdr));
 	hdr->hdr.it_present = cpu_to_le32((1 << IEEE80211_RADIOTAP_FLAGS) |
 					  (1 << IEEE80211_RADIOTAP_CHANNEL));
 	hdr->rt_flags = 0;
-	hdr->rt_rate = 0;
+	hdr->pad = 0;
 	hdr->rt_channel = cpu_to_le16(chan->center_freq);
 	flags = IEEE80211_CHAN_2GHZ;
 	hdr->rt_chbitmask = cpu_to_le16(flags);

@@ -1230,7 +1238,7 @@ static void mac80211_hwsim_bss_info_changed(struct ieee80211_hw *hw,
 			       HRTIMER_MODE_REL);
 	} else if (!info->enable_beacon) {
 		unsigned int count = 0;
-		ieee80211_iterate_active_interfaces(
+		ieee80211_iterate_active_interfaces_atomic(
 			data->hw, IEEE80211_IFACE_ITER_NORMAL,
 			mac80211_hwsim_bcn_en_iter, &count);
 		wiphy_debug(hw->wiphy, " beaconing vifs remaining: %u",

@@ -319,8 +319,8 @@ int mwifiex_bss_start(struct mwifiex_private *priv, struct cfg80211_bss *bss,
 		if (bss_desc && bss_desc->ssid.ssid_len &&
 		    (!mwifiex_ssid_cmp(&priv->curr_bss_params.bss_descriptor.
 				       ssid, &bss_desc->ssid))) {
-			kfree(bss_desc);
-			return 0;
+			ret = 0;
+			goto done;
 		}
 
 		/* Exit Adhoc mode first */

@@ -368,11 +368,11 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
+	struct task_struct *task;
 	int err = -ENOMEM;
 
-	/* Already connected through? */
-	if (vif->tx_irq)
-		return 0;
+	BUG_ON(vif->tx_irq);
+	BUG_ON(vif->task);
 
 	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
@@ -411,14 +411,16 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	}
 
 	init_waitqueue_head(&vif->wq);
-	vif->task = kthread_create(xenvif_kthread,
-				   (void *)vif, "%s", vif->dev->name);
-	if (IS_ERR(vif->task)) {
+	task = kthread_create(xenvif_kthread,
+			      (void *)vif, "%s", vif->dev->name);
+	if (IS_ERR(task)) {
 		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
-		err = PTR_ERR(vif->task);
+		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
+	vif->task = task;
+
 	rtnl_lock();
 	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
 		dev_set_mtu(vif->dev, ETH_DATA_LEN);

@@ -461,8 +463,10 @@ void xenvif_disconnect(struct xenvif *vif)
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task)
+	if (vif->task) {
 		kthread_stop(vif->task);
+		vif->task = NULL;
+	}
 
 	if (vif->tx_irq) {
 		if (vif->tx_irq == vif->rx_irq)

@@ -452,7 +452,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	}
 
 	/* Set up a GSO prefix descriptor, if necessary */
-	if ((1 << skb_shinfo(skb)->gso_type) & vif->gso_prefix_mask) {
+	if ((1 << gso_type) & vif->gso_prefix_mask) {
 		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;

@@ -1149,75 +1149,92 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
 	return 0;
 }
 
-static inline void maybe_pull_tail(struct sk_buff *skb, unsigned int len)
+static inline int maybe_pull_tail(struct sk_buff *skb, unsigned int len,
+				  unsigned int max)
 {
-	if (skb_is_nonlinear(skb) && skb_headlen(skb) < len) {
-		/* If we need to pullup then pullup to the max, so we
-		 * won't need to do it again.
-		 */
-		int target = min_t(int, skb->len, MAX_TCP_HEADER);
-		__pskb_pull_tail(skb, target - skb_headlen(skb));
-	}
+	if (skb_headlen(skb) >= len)
+		return 0;
+
+	/* If we need to pullup then pullup to the max, so we
+	 * won't need to do it again.
+	 */
+	if (max > skb->len)
+		max = skb->len;
+
+	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
+		return -ENOMEM;
+
+	if (skb_headlen(skb) < len)
+		return -EPROTO;
+
+	return 0;
 }
 
+/* This value should be large enough to cover a tagged ethernet header plus
+ * maximally sized IP and TCP or UDP headers.
+ */
+#define MAX_IP_HDR_LEN 128
+
 static int checksum_setup_ip(struct xenvif *vif, struct sk_buff *skb,
 			     int recalculate_partial_csum)
 {
 	struct iphdr *iph = (void *)skb->data;
-	unsigned int header_size;
 	unsigned int off;
-	int err = -EPROTO;
+	bool fragment;
+	int err;
 
-	off = sizeof(struct iphdr);
+	fragment = false;
 
-	header_size = skb->network_header + off + MAX_IPOPTLEN;
-	maybe_pull_tail(skb, header_size);
+	err = maybe_pull_tail(skb,
+			      sizeof(struct iphdr),
+			      MAX_IP_HDR_LEN);
+	if (err < 0)
+		goto out;
 
-	off = iph->ihl * 4;
+	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
+		fragment = true;
 
-	switch (iph->protocol) {
+	off = ip_hdrlen(skb);
+
+	err = -EPROTO;
+
+	switch (ip_hdr(skb)->protocol) {
 	case IPPROTO_TCP:
+		err = maybe_pull_tail(skb,
+				      off + sizeof(struct tcphdr),
+				      MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
 		if (!skb_partial_csum_set(skb, off,
 					  offsetof(struct tcphdr, check)))
 			goto out;
 
-		if (recalculate_partial_csum) {
-			struct tcphdr *tcph = tcp_hdr(skb);
-
-			header_size = skb->network_header +
-				off +
-				sizeof(struct tcphdr);
-			maybe_pull_tail(skb, header_size);
-
-			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - off,
-							 IPPROTO_TCP, 0);
-		}
+		if (recalculate_partial_csum)
+			tcp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_TCP, 0);
 		break;
 	case IPPROTO_UDP:
+		err = maybe_pull_tail(skb,
+				      off + sizeof(struct udphdr),
+				      MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
 		if (!skb_partial_csum_set(skb, off,
 					  offsetof(struct udphdr, check)))
 			goto out;
 
-		if (recalculate_partial_csum) {
-			struct udphdr *udph = udp_hdr(skb);
-
-			header_size = skb->network_header +
-				off +
-				sizeof(struct udphdr);
-			maybe_pull_tail(skb, header_size);
-
-			udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - off,
-							 IPPROTO_UDP, 0);
-		}
+		if (recalculate_partial_csum)
+			udp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_UDP, 0);
 		break;
 	default:
 		if (net_ratelimit())
 			netdev_err(vif->dev,
 				   "Attempting to checksum a non-TCP/UDP packet, "
 				   "dropping a protocol %d packet\n",
 				   iph->protocol);
		goto out;
 	}
 
@@ -1227,121 +1244,138 @@ out:
 	return err;
 }
 
+/* This value should be large enough to cover a tagged ethernet header plus
+ * an IPv6 header, all options, and a maximal TCP or UDP header.
+ */
+#define MAX_IPV6_HDR_LEN 256
+
+#define OPT_HDR(type, skb, off) \
+	(type *)(skb_network_header(skb) + (off))
+
 static int checksum_setup_ipv6(struct xenvif *vif, struct sk_buff *skb,
 			       int recalculate_partial_csum)
 {
-	int err = -EPROTO;
-	struct ipv6hdr *ipv6h = (void *)skb->data;
+	int err;
 	u8 nexthdr;
-	unsigned int header_size;
 	unsigned int off;
+	unsigned int len;
 	bool fragment;
 	bool done;
 
 	fragment = false;
 	done = false;
 
 	off = sizeof(struct ipv6hdr);
 
-	header_size = skb->network_header + off;
-	maybe_pull_tail(skb, header_size);
+	err = maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
+	if (err < 0)
+		goto out;
 
-	nexthdr = ipv6h->nexthdr;
+	nexthdr = ipv6_hdr(skb)->nexthdr;
 
-	while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len)) &&
-	       !done) {
+	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
+	while (off <= len && !done) {
 		switch (nexthdr) {
 		case IPPROTO_DSTOPTS:
 		case IPPROTO_HOPOPTS:
 		case IPPROTO_ROUTING: {
-			struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
+			struct ipv6_opt_hdr *hp;
 
-			header_size = skb->network_header +
-				off +
-				sizeof(struct ipv6_opt_hdr);
-			maybe_pull_tail(skb, header_size);
+			err = maybe_pull_tail(skb,
+					      off +
+					      sizeof(struct ipv6_opt_hdr),
+					      MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
 
+			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
 			nexthdr = hp->nexthdr;
 			off += ipv6_optlen(hp);
 			break;
 		}
 		case IPPROTO_AH: {
-			struct ip_auth_hdr *hp = (void *)(skb->data + off);
+			struct ip_auth_hdr *hp;
 
-			header_size = skb->network_header +
-				off +
-				sizeof(struct ip_auth_hdr);
-			maybe_pull_tail(skb, header_size);
+			err = maybe_pull_tail(skb,
+					      off +
+					      sizeof(struct ip_auth_hdr),
+					      MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
 
+			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
 			nexthdr = hp->nexthdr;
-			off += (hp->hdrlen+2)<<2;
+			off += ipv6_authlen(hp);
 			break;
 		}
-		case IPPROTO_FRAGMENT:
-			fragment = true;
-			/* fall through */
+		case IPPROTO_FRAGMENT: {
+			struct frag_hdr *hp;
+
+			err = maybe_pull_tail(skb,
+					      off +
+					      sizeof(struct frag_hdr),
+					      MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct frag_hdr, skb, off);
+
+			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
+				fragment = true;
+
+			nexthdr = hp->nexthdr;
+			off += sizeof(struct frag_hdr);
+			break;
+		}
 		default:
 			done = true;
 			break;
 		}
 	}
 
-	if (!done) {
-		if (net_ratelimit())
-			netdev_err(vif->dev, "Failed to parse packet header\n");
-		goto out;
-	}
+	err = -EPROTO;
 
-	if (fragment) {
-		if (net_ratelimit())
-			netdev_err(vif->dev, "Packet is a fragment!\n");
+	if (!done || fragment)
 		goto out;
-	}
 
 	switch (nexthdr) {
 	case IPPROTO_TCP:
+		err = maybe_pull_tail(skb,
+				      off + sizeof(struct tcphdr),
+				      MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
 		if (!skb_partial_csum_set(skb, off,
 					  offsetof(struct tcphdr, check)))
 			goto out;
 
-		if (recalculate_partial_csum) {
-			struct tcphdr *tcph = tcp_hdr(skb);
-
-			header_size = skb->network_header +
-				off +
-				sizeof(struct tcphdr);
-			maybe_pull_tail(skb, header_size);
-
-			tcph->check = ~csum_ipv6_magic(&ipv6h->saddr,
-						       &ipv6h->daddr,
-						       skb->len - off,
-						       IPPROTO_TCP, 0);
-		}
+		if (recalculate_partial_csum)
+			tcp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_TCP, 0);
 		break;
 	case IPPROTO_UDP:
+		err = maybe_pull_tail(skb,
+				      off + sizeof(struct udphdr),
+				      MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
 		if (!skb_partial_csum_set(skb, off,
 					  offsetof(struct udphdr, check)))
 			goto out;
 
-		if (recalculate_partial_csum) {
-			struct udphdr *udph = udp_hdr(skb);
-
-			header_size = skb->network_header +
-				off +
-				sizeof(struct udphdr);
-			maybe_pull_tail(skb, header_size);
-
-			udph->check = ~csum_ipv6_magic(&ipv6h->saddr,
-						       &ipv6h->daddr,
-						       skb->len - off,
-						       IPPROTO_UDP, 0);
-		}
+		if (recalculate_partial_csum)
+			udp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_UDP, 0);
 		break;
 	default:
 		if (net_ratelimit())
 			netdev_err(vif->dev,
 				   "Attempting to checksum a non-TCP/UDP packet, "
 				   "dropping a protocol %d packet\n",
 				   nexthdr);
 		goto out;
 	}
 
@@ -1411,14 +1445,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif)
+static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 {
 	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
 	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
-		< MAX_PENDING_REQS)) {
+		< MAX_PENDING_REQS) &&
+	       (skb_queue_len(&vif->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
 
@@ -1440,7 +1475,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif)
 			continue;
 		}
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
 		if (!work_to_do)
 			break;
 
@@ -1580,14 +1615,13 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif)
 }
 
-static int xenvif_tx_submit(struct xenvif *vif, int budget)
+static int xenvif_tx_submit(struct xenvif *vif)
 {
 	struct gnttab_copy *gop = vif->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while (work_done < budget &&
-	       (skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
@@ -1662,14 +1696,14 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
 	if (unlikely(!tx_work_todo(vif)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif);
+	nr_gops = xenvif_tx_build_gops(vif, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
 	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif, nr_gops);
+	work_done = xenvif_tx_submit(vif);
 
 	return work_done;
 }
@@ -4165,6 +4165,14 @@ int pci_set_vga_state(struct pci_dev *dev, bool decode,
 	return 0;
 }
 
+bool pci_device_is_present(struct pci_dev *pdev)
+{
+	u32 v;
+
+	return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0);
+}
+EXPORT_SYMBOL_GPL(pci_device_is_present);
+
 #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
 static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
 static DEFINE_SPINLOCK(resource_alignment_lock);
@@ -4,6 +4,7 @@
 #include <uapi/linux/ipv6.h>
 
 #define ipv6_optlen(p) (((p)->hdrlen+1) << 3)
+#define ipv6_authlen(p) (((p)->hdrlen+2) << 2)
 /*
  * This structure contains configuration options per IPv6 link.
  */

@@ -22,6 +22,8 @@
 #define PHY_ID_KSZ8021 0x00221555
 #define PHY_ID_KSZ8031 0x00221556
 #define PHY_ID_KSZ8041 0x00221510
+/* undocumented */
+#define PHY_ID_KSZ8041RNLI 0x00221537
 #define PHY_ID_KSZ8051 0x00221550
 /* same id: ks8001 Rev. A/B, and ks8721 Rev 3. */
 #define PHY_ID_KSZ8001 0x0022161A
@@ -181,7 +181,7 @@ struct proto_ops {
 				      int offset, size_t size, int flags);
 	ssize_t		(*splice_read)(struct socket *sock, loff_t *ppos,
 				       struct pipe_inode_info *pipe, size_t len, unsigned int flags);
-	void		(*set_peek_off)(struct sock *sk, int val);
+	int		(*set_peek_off)(struct sock *sk, int val);
 };
 
 #define DECLARE_SOCKADDR(type, dst, src)	\

@@ -1255,7 +1255,7 @@ struct net_device {
 	unsigned char		perm_addr[MAX_ADDR_LEN]; /* permanent hw address */
 	unsigned char		addr_assign_type; /* hw address assignment type */
 	unsigned char		addr_len;	/* hardware address length */
-	unsigned char		neigh_priv_len;
+	unsigned short		neigh_priv_len;
 	unsigned short          dev_id;		/* Used to differentiate devices
 						 * that share the same link
 						 * layer address

@@ -960,6 +960,7 @@ void pci_update_resource(struct pci_dev *dev, int resno);
 int __must_check pci_assign_resource(struct pci_dev *dev, int i);
 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
 int pci_select_bars(struct pci_dev *dev, unsigned long flags);
+bool pci_device_is_present(struct pci_dev *pdev);
 
 /* ROM control related routines */
 int pci_enable_rom(struct pci_dev *pdev);
@@ -2263,6 +2263,24 @@ static inline void skb_postpull_rcsum(struct sk_buff *skb,
 
 unsigned char *skb_pull_rcsum(struct sk_buff *skb, unsigned int len);
 
+/**
+ *	pskb_trim_rcsum - trim received skb and update checksum
+ *	@skb: buffer to trim
+ *	@len: new length
+ *
+ *	This is exactly the same as pskb_trim except that it ensures the
+ *	checksum of received packets are still valid after the operation.
+ */
+
+static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
+{
+	if (likely(len >= skb->len))
+		return 0;
+	if (skb->ip_summed == CHECKSUM_COMPLETE)
+		skb->ip_summed = CHECKSUM_NONE;
+	return __pskb_trim(skb, len);
+}
+
 #define skb_queue_walk(queue, skb) \
 		for (skb = (queue)->next;					\
 		     skb != (struct sk_buff *)(queue);				\

@@ -2360,27 +2378,6 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len,
 __wsum skb_checksum(const struct sk_buff *skb, int offset, int len,
 		    __wsum csum);
 
-/**
- *	pskb_trim_rcsum - trim received skb and update checksum
- *	@skb: buffer to trim
- *	@len: new length
- *
- *	This is exactly the same as pskb_trim except that it ensures the
- *	checksum of received packets are still valid after the operation.
- */
-
-static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
-{
-	if (likely(len >= skb->len))
-		return 0;
-	if (skb->ip_summed == CHECKSUM_COMPLETE) {
-		__wsum adj = skb_checksum(skb, len, skb->len - len, 0);
-
-		skb->csum = csum_sub(skb->csum, adj);
-	}
-	return __pskb_trim(skb, len);
-}
-
 static inline void *skb_header_pointer(const struct sk_buff *skb, int offset,
 				       int len, void *buffer)
 {
@@ -110,7 +110,8 @@ struct frag_hdr {
 	__be32	identification;
 };
 
-#define	IP6_MF	0x0001
+#define	IP6_MF		0x0001
+#define	IP6_OFFSET	0xFFF8
 
 #include <net/sock.h>
@@ -1726,12 +1726,6 @@ struct sctp_association {
 	/* How many duplicated TSNs have we seen?  */
 	int numduptsns;
 
-	/* Number of seconds of idle time before an association is closed.
-	 * In the association context, this is really used as a boolean
-	 * since the real timeout is stored in the timeouts array
-	 */
-	__u32 autoclose;
-
 	/* These are to support
 	 * "SCTP Extensions for Dynamic Reconfiguration of IP Addresses
 	 *  and Enforcement of Flow and Message Limits"
@@ -1035,7 +1035,6 @@ enum cg_proto_flags {
 };
 
 struct cg_proto {
-	void			(*enter_memory_pressure)(struct sock *sk);
 	struct res_counter	memory_allocated;	/* Current allocated memory. */
 	struct percpu_counter	sockets_allocated;	/* Current number of sockets. */
 	int			memory_pressure;

@@ -1155,8 +1154,7 @@ static inline void sk_leave_memory_pressure(struct sock *sk)
 		struct proto *prot = sk->sk_prot;
 
 		for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto))
-			if (cg_proto->memory_pressure)
-				cg_proto->memory_pressure = 0;
+			cg_proto->memory_pressure = 0;
 	}
 
 }

@@ -1171,7 +1169,7 @@ static inline void sk_enter_memory_pressure(struct sock *sk)
 		struct proto *prot = sk->sk_prot;
 
 		for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto))
-			cg_proto->enter_memory_pressure(sk);
+			cg_proto->memory_pressure = 1;
 	}
 
 	sk->sk_prot->enter_memory_pressure(sk);
@@ -426,6 +426,16 @@ netdev_features_t br_features_recompute(struct net_bridge *br,
 int br_handle_frame_finish(struct sk_buff *skb);
 rx_handler_result_t br_handle_frame(struct sk_buff **pskb);
 
+static inline bool br_rx_handler_check_rcu(const struct net_device *dev)
+{
+	return rcu_dereference(dev->rx_handler) == br_handle_frame;
+}
+
+static inline struct net_bridge_port *br_port_get_check_rcu(const struct net_device *dev)
+{
+	return br_rx_handler_check_rcu(dev) ? br_port_get_rcu(dev) : NULL;
+}
+
 /* br_ioctl.c */
 int br_dev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
 int br_ioctl_deviceless_stub(struct net *net, unsigned int cmd,

@@ -153,7 +153,7 @@ void br_stp_rcv(const struct stp_proto *proto, struct sk_buff *skb,
 	if (buf[0] != 0 || buf[1] != 0 || buf[2] != 0)
 		goto err;
 
-	p = br_port_get_rcu(dev);
+	p = br_port_get_check_rcu(dev);
 	if (!p)
 		goto err;
@@ -64,7 +64,6 @@ static struct genl_family net_drop_monitor_family = {
 	.hdrsize        = 0,
 	.name           = "NET_DM",
 	.version        = 2,
-	.maxattr        = NET_DM_CMD_MAX,
 };
 
 static DEFINE_PER_CPU(struct per_cpu_dm_data, dm_cpu_data);

@@ -3584,6 +3584,7 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
 	skb->tstamp.tv64 = 0;
 	skb->pkt_type = PACKET_HOST;
 	skb->skb_iif = 0;
+	skb->local_df = 0;
 	skb_dst_drop(skb);
 	skb->mark = 0;
 	secpath_reset(skb);

@@ -882,7 +882,7 @@ set_rcvbuf:
 
 	case SO_PEEK_OFF:
 		if (sock->ops->set_peek_off)
-			sock->ops->set_peek_off(sk, val);
+			ret = sock->ops->set_peek_off(sk, val);
 		else
 			ret = -EOPNOTSUPP;
 		break;
@@ -851,7 +851,6 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
 		flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
 		if (flowlabel == NULL)
 			return -EINVAL;
 		usin->sin6_addr = flowlabel->dst;
 		fl6_sock_release(flowlabel);
 	}
 }

@@ -104,7 +104,10 @@ errout:
 static bool fib4_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg)
 {
 	struct fib_result *result = (struct fib_result *) arg->result;
-	struct net_device *dev = result->fi->fib_dev;
+	struct net_device *dev = NULL;
+
+	if (result->fi)
+		dev = result->fi->fib_dev;
 
 	/* do not accept result if the route does
 	 * not meet the required prefix length
@@ -6,13 +6,6 @@
 #include <linux/memcontrol.h>
 #include <linux/module.h>
 
-static void memcg_tcp_enter_memory_pressure(struct sock *sk)
-{
-	if (sk->sk_cgrp->memory_pressure)
-		sk->sk_cgrp->memory_pressure = 1;
-}
-EXPORT_SYMBOL(memcg_tcp_enter_memory_pressure);
-
 int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
 {
 	/*
@@ -560,15 +560,11 @@ static inline struct sock *__udp4_lib_lookup_skb(struct sk_buff *skb,
 						 __be16 sport, __be16 dport,
 						 struct udp_table *udptable)
 {
-	struct sock *sk;
 	const struct iphdr *iph = ip_hdr(skb);
 
-	if (unlikely(sk = skb_steal_sock(skb)))
-		return sk;
-	else
-		return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport,
-					 iph->daddr, dport, inet_iif(skb),
-					 udptable);
+	return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport,
+				 iph->daddr, dport, inet_iif(skb),
+				 udptable);
 }
 
 struct sock *udp4_lib_lookup(struct net *net, __be32 saddr, __be16 sport,

@@ -1603,12 +1599,21 @@ static void flush_stack(struct sock **stack, unsigned int count,
 		kfree_skb(skb1);
 }
 
-static void udp_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+/* For TCP sockets, sk_rx_dst is protected by socket lock
+ * For UDP, we use sk_dst_lock to guard against concurrent changes.
+ */
+static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
 {
-	struct dst_entry *dst = skb_dst(skb);
+	struct dst_entry *old;
 
-	dst_hold(dst);
-	sk->sk_rx_dst = dst;
+	spin_lock(&sk->sk_dst_lock);
+	old = sk->sk_rx_dst;
+	if (likely(old != dst)) {
+		dst_hold(dst);
+		sk->sk_rx_dst = dst;
+		dst_release(old);
+	}
+	spin_unlock(&sk->sk_dst_lock);
 }
 
 /*

@@ -1739,15 +1744,16 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	if (udp4_csum_init(skb, uh, proto))
 		goto csum_error;
 
-	if (skb->sk) {
+	sk = skb_steal_sock(skb);
+	if (sk) {
+		struct dst_entry *dst = skb_dst(skb);
 		int ret;
-		sk = skb->sk;
 
-		if (unlikely(sk->sk_rx_dst == NULL))
-			udp_sk_rx_dst_set(sk, skb);
+		if (unlikely(sk->sk_rx_dst != dst))
+			udp_sk_rx_dst_set(sk, dst);
 
 		ret = udp_queue_rcv_skb(sk, skb);
-
+		sock_put(sk);
 		/* a return value > 0 means to resubmit the input, but
 		 * it wants the return to be -protocol, or 0
 		 */

@@ -1913,17 +1919,20 @@ static struct sock *__udp4_lib_demux_lookup(struct net *net,
 
 void udp_v4_early_demux(struct sk_buff *skb)
 {
-	const struct iphdr *iph = ip_hdr(skb);
-	const struct udphdr *uh = udp_hdr(skb);
-	struct net *net = dev_net(skb->dev);
+	const struct iphdr *iph;
+	const struct udphdr *uh;
 	struct sock *sk;
 	struct dst_entry *dst;
+	struct net *net = dev_net(skb->dev);
 	int dif = skb->dev->ifindex;
 
+	/* validate the packet */
+	if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr)))
+		return;
+
+	iph = ip_hdr(skb);
+	uh = udp_hdr(skb);
+
 	if (skb->pkt_type == PACKET_BROADCAST ||
 	    skb->pkt_type == PACKET_MULTICAST)
 		sk = __udp4_lib_mcast_demux_lookup(net, uh->dest, iph->daddr,
@@ -2613,7 +2613,7 @@ static void init_loopback(struct net_device *dev)
 		if (sp_ifa->rt)
 			continue;
 
-		sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0);
+		sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, false);
 
 		/* Failure cases are ignored */
 		if (!IS_ERR(sp_rt)) {
@@ -73,7 +73,6 @@ int ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 		flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
 		if (flowlabel == NULL)
 			return -EINVAL;
 		usin->sin6_addr = flowlabel->dst;
 	}
 }
@@ -122,7 +122,11 @@ out:
 static bool fib6_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg)
 {
 	struct rt6_info *rt = (struct rt6_info *) arg->result;
-	struct net_device *dev = rt->rt6i_idev->dev;
+	struct net_device *dev = NULL;
+
+	if (rt->rt6i_idev)
+		dev = rt->rt6i_idev->dev;
 
 	/* do not accept result if the route does
 	 * not meet the required prefix length
 	 */
@@ -1277,6 +1277,9 @@ skip_linkparms:
 			    ri->prefix_len == 0)
 				continue;
 #endif
+			if (ri->prefix_len == 0 &&
+			    !in6_dev->cnf.accept_ra_defrtr)
+				continue;
 			if (ri->prefix_len > in6_dev->cnf.accept_ra_rt_info_max_plen)
 				continue;
 			rt6_route_rcv(skb->dev, (u8*)p, (p->nd_opt_len) << 3,
@@ -792,7 +792,6 @@ static int rawv6_sendmsg(struct kiocb *iocb, struct sock *sk,
 		flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
 		if (flowlabel == NULL)
 			return -EINVAL;
 		daddr = &flowlabel->dst;
 	}
 }
@@ -84,6 +84,8 @@ static int ip6_dst_gc(struct dst_ops *ops);
 
 static int ip6_pkt_discard(struct sk_buff *skb);
 static int ip6_pkt_discard_out(struct sk_buff *skb);
+static int ip6_pkt_prohibit(struct sk_buff *skb);
+static int ip6_pkt_prohibit_out(struct sk_buff *skb);
 static void ip6_link_failure(struct sk_buff *skb);
 static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
 			       struct sk_buff *skb, u32 mtu);

@@ -234,9 +236,6 @@ static const struct rt6_info ip6_null_entry_template = {
 
 #ifdef CONFIG_IPV6_MULTIPLE_TABLES
 
-static int ip6_pkt_prohibit(struct sk_buff *skb);
-static int ip6_pkt_prohibit_out(struct sk_buff *skb);
-
 static const struct rt6_info ip6_prohibit_entry_template = {
 	.dst = {
 		.__refcnt	= ATOMIC_INIT(1),

@@ -1565,21 +1564,24 @@ int ip6_route_add(struct fib6_config *cfg)
 				goto out;
 			}
 		}
-		rt->dst.output = ip6_pkt_discard_out;
-		rt->dst.input = ip6_pkt_discard;
 		rt->rt6i_flags = RTF_REJECT|RTF_NONEXTHOP;
 		switch (cfg->fc_type) {
 		case RTN_BLACKHOLE:
 			rt->dst.error = -EINVAL;
+			rt->dst.output = dst_discard;
+			rt->dst.input = dst_discard;
 			break;
 		case RTN_PROHIBIT:
 			rt->dst.error = -EACCES;
+			rt->dst.output = ip6_pkt_prohibit_out;
+			rt->dst.input = ip6_pkt_prohibit;
 			break;
 		case RTN_THROW:
-			rt->dst.error = -EAGAIN;
-			break;
 		default:
-			rt->dst.error = -ENETUNREACH;
+			rt->dst.error = (cfg->fc_type == RTN_THROW) ? -EAGAIN
+					: -ENETUNREACH;
+			rt->dst.output = ip6_pkt_discard_out;
+			rt->dst.input = ip6_pkt_discard;
 			break;
 		}
 		goto install_route;
@@ -2144,8 +2146,6 @@ static int ip6_pkt_discard_out(struct sk_buff *skb)
 	return ip6_pkt_drop(skb, ICMPV6_NOROUTE, IPSTATS_MIB_OUTNOROUTES);
 }
 
-#ifdef CONFIG_IPV6_MULTIPLE_TABLES
-
 static int ip6_pkt_prohibit(struct sk_buff *skb)
 {
 	return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_INNOROUTES);

@@ -2157,8 +2157,6 @@ static int ip6_pkt_prohibit_out(struct sk_buff *skb)
 	return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_OUTNOROUTES);
 }
 
-#endif
-
 /*
  *	Allocate a dst for local (unicast / anycast) address.
  */

@@ -2168,12 +2166,10 @@ struct rt6_info *addrconf_dst_alloc(struct inet6_dev *idev,
 				    bool anycast)
 {
 	struct net *net = dev_net(idev->dev);
-	struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev, 0, NULL);
-
-	if (!rt) {
-		net_warn_ratelimited("Maximum number of routes reached, consider increasing route/max_size\n");
+	struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev,
+					    DST_NOCOUNT, NULL);
+	if (!rt)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	in6_dev_hold(idev);
@@ -156,7 +156,6 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
 		flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
 		if (flowlabel == NULL)
 			return -EINVAL;
 		usin->sin6_addr = flowlabel->dst;
 		fl6_sock_release(flowlabel);
 	}
 }

@@ -1140,7 +1140,6 @@ do_udp_sendmsg:
 		flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
 		if (flowlabel == NULL)
 			return -EINVAL;
 		daddr = &flowlabel->dst;
 	}
 }

@@ -528,7 +528,6 @@ static int l2tp_ip6_sendmsg(struct kiocb *iocb, struct sock *sk,
 		flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
 		if (flowlabel == NULL)
 			return -EINVAL;
 		daddr = &flowlabel->dst;
 	}
 }