Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
 "Lots of fixes, mostly drivers as is usually the case.

  1) Don't treat zero DMA address as invalid in vmxnet3, from Alexey Khoroshilov.
  2) Fix element timeouts in netfilter's nft_dynset, from Anders K. Pedersen.
  3) Don't put aead_req crypto struct on the stack in mac80211, from Ard Biesheuvel.
  4) Several uninitialized variable warning fixes from Arnd Bergmann.
  5) Fix memory leak in cxgb4, from Colin Ian King.
  6) Fix bpf handling of VLAN header push/pop, from Daniel Borkmann.
  7) Several VRF semantic fixes from David Ahern.
  8) Set skb->protocol properly in ip6_tnl_xmit(), from Eli Cooper.
  9) Socket needs to be locked in udp_disconnect(), from Eric Dumazet.
 10) Div-by-zero on 32-bit fix in mlx4 driver, from Eugenia Emantayev.
 11) Fix stale link state during failover in NCSI driver, from Gavin Shan.
 12) Fix netdev lower adjacency list traversal, from Ido Schimmel.
 13) Provide proper handle when emitting notifications of filter deletes, from Jamal Hadi Salim.
 14) Memory leaks and big-endian issues in rtl8xxxu, from Jes Sorensen.
 15) Fix DESYNC_FACTOR handling in ipv6, from Jiri Bohac.
 16) Several routing offload fixes in mlxsw driver, from Jiri Pirko.
 17) Fix broadcast sync problem in TIPC, from Jon Paul Maloy.
 18) Validate chunk len before using it in SCTP, from Marcelo Ricardo Leitner.
 19) Revert a netns locking change that causes regressions, from Paul Moore.
 20) Add recursion limit to GRO handling, from Sabrina Dubroca.
 21) GFP_KERNEL in irq context fix in ibmvnic, from Thomas Falcon.
 22) Avoid accessing stale vxlan/geneve socket in data path, from Pravin Shelar"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (189 commits)
  geneve: avoid using stale geneve socket.
  vxlan: avoid using stale vxlan socket.
  qede: Fix out-of-bound fastpath memory access
  net: phy: dp83848: add dp83822 PHY support
  enic: fix rq disable
  tipc: fix broadcast link synchronization problem
  ibmvnic: Fix missing brackets in init_sub_crq_irqs
  ibmvnic: Fix releasing of sub-CRQ IRQs in interrupt context
  Revert "ibmvnic: Fix releasing of sub-CRQ IRQs in interrupt context"
  arch/powerpc: Update parameters for csum_tcpudp_magic & csum_tcpudp_nofold
  net/mlx4_en: Save slave ethtool stats command
  net/mlx4_en: Fix potential deadlock in port statistics flow
  net/mlx4: Fix firmware command timeout during interrupt test
  net/mlx4_core: Do not access comm channel if it has not yet been initialized
  net/mlx4_en: Fix panic during reboot
  net/mlx4_en: Process all completions in RX rings after port goes up
  net/mlx4_en: Resolve dividing by zero in 32-bit system
  net/mlx4_core: Change the default value of enable_qos
  net/mlx4_core: Avoid setting ports to auto when only one port type is supported
  net/mlx4_core: Fix the resource-type enum in res tracker to conform to FW spec
  ...
commit 2a26d99b25
@@ -49,6 +49,7 @@ Optional port properties:
and

- phy-handle: See ethernet.txt file in the same directory.
- phy-mode: See ethernet.txt file in the same directory.

or
@@ -29,8 +29,8 @@ A: There are always two trees (git repositories) in play. Both are driven
Linus, and net-next is where the new code goes for the future release.
You can find the trees here:

http://git.kernel.org/?p=linux/kernel/git/davem/net.git
http://git.kernel.org/?p=linux/kernel/git/davem/net-next.git
https://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git

Q: How often do changes from these trees make it to the mainline Linus tree?

@@ -76,7 +76,7 @@ Q: So where are we now in this cycle?
A: Load the mainline (Linus) page here:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

and note the top of the "tags" section. If it is rc1, it is early
in the dev cycle. If it was tagged rc7 a week ago, then a release

@@ -123,7 +123,7 @@ A: Normally Greg Kroah-Hartman collects stable commits himself, but
It contains the patches which Dave has selected, but not yet handed
off to Greg. If Greg already has the patch, then it will be here:
http://git.kernel.org/cgit/linux/kernel/git/stable/stable-queue.git
https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git

A quick way to find whether the patch is in this stable-queue is
to simply clone the repo, and then git grep the mainline commit ID, e.g.
@@ -33,24 +33,6 @@ nf_conntrack_events - BOOLEAN
If this option is enabled, the connection tracking code will
provide userspace with connection tracking events via ctnetlink.

nf_conntrack_events_retry_timeout - INTEGER (seconds)
default 15

This option is only relevant when "reliable connection tracking
events" are used. Normally, ctnetlink is "lossy", that is,
events are normally dropped when userspace listeners can't keep up.

Userspace can request "reliable event mode". When this mode is
active, the conntrack will only be destroyed after the event was
delivered. If event delivery fails, the kernel periodically
re-tries to send the event to userspace.

This is the maximum interval the kernel should use when re-trying
to deliver the destroy event.

A higher number means there will be fewer delivery retries and it
will take longer for a backlog to be processed.

nf_conntrack_expect_max - INTEGER
Maximum size of expectation table. Default value is
nf_conntrack_buckets / 256. Minimum is 1.
MAINTAINERS (41 lines changed)

@@ -2552,15 +2552,18 @@ S: Supported
F: drivers/net/ethernet/broadcom/genet/

BROADCOM BNX2 GIGABIT ETHERNET DRIVER
M: Sony Chacko <sony.chacko@qlogic.com>
M: Dept-HSGLinuxNICDev@qlogic.com
M: Rasesh Mody <rasesh.mody@cavium.com>
M: Harish Patil <harish.patil@cavium.com>
M: Dept-GELinuxNICDev@cavium.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/broadcom/bnx2.*
F: drivers/net/ethernet/broadcom/bnx2_*

BROADCOM BNX2X 10 GIGABIT ETHERNET DRIVER
M: Ariel Elior <ariel.elior@qlogic.com>
M: Yuval Mintz <Yuval.Mintz@cavium.com>
M: Ariel Elior <ariel.elior@cavium.com>
M: everest-linux-l2@cavium.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/broadcom/bnx2x/

@@ -2767,7 +2770,9 @@ S: Supported
F: drivers/scsi/bfa/

BROCADE BNA 10 GIGABIT ETHERNET DRIVER
M: Rasesh Mody <rasesh.mody@qlogic.com>
M: Rasesh Mody <rasesh.mody@cavium.com>
M: Sudarsana Kalluru <sudarsana.kalluru@cavium.com>
M: Dept-GELinuxNICDev@cavium.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/brocade/bna/

@@ -8517,11 +8522,10 @@ F: Documentation/devicetree/bindings/net/wireless/
F: drivers/net/wireless/

NETXEN (1/10) GbE SUPPORT
M: Manish Chopra <manish.chopra@qlogic.com>
M: Sony Chacko <sony.chacko@qlogic.com>
M: Rajesh Borundia <rajesh.borundia@qlogic.com>
M: Manish Chopra <manish.chopra@cavium.com>
M: Rahul Verma <rahul.verma@cavium.com>
M: Dept-GELinuxNICDev@cavium.com
L: netdev@vger.kernel.org
W: http://www.qlogic.com
S: Supported
F: drivers/net/ethernet/qlogic/netxen/

@@ -9897,33 +9901,32 @@ F: Documentation/scsi/LICENSE.qla4xxx
F: drivers/scsi/qla4xxx/

QLOGIC QLA3XXX NETWORK DRIVER
M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
M: Ron Mercer <ron.mercer@qlogic.com>
M: linux-driver@qlogic.com
M: Dept-GELinuxNICDev@cavium.com
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/LICENSE.qla3xxx
F: drivers/net/ethernet/qlogic/qla3xxx.*

QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER
M: Dept-GELinuxNICDev@qlogic.com
M: Harish Patil <harish.patil@cavium.com>
M: Manish Chopra <manish.chopra@cavium.com>
M: Dept-GELinuxNICDev@cavium.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/qlogic/qlcnic/

QLOGIC QLGE 10Gb ETHERNET DRIVER
M: Harish Patil <harish.patil@qlogic.com>
M: Sudarsana Kalluru <sudarsana.kalluru@qlogic.com>
M: Dept-GELinuxNICDev@qlogic.com
M: linux-driver@qlogic.com
M: Harish Patil <harish.patil@cavium.com>
M: Manish Chopra <manish.chopra@cavium.com>
M: Dept-GELinuxNICDev@cavium.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/qlogic/qlge/

QLOGIC QL4xxx ETHERNET DRIVER
M: Yuval Mintz <Yuval.Mintz@qlogic.com>
M: Ariel Elior <Ariel.Elior@qlogic.com>
M: everest-linux-l2@qlogic.com
M: Yuval Mintz <Yuval.Mintz@cavium.com>
M: Ariel Elior <Ariel.Elior@cavium.com>
M: everest-linux-l2@cavium.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/qlogic/qed/
@@ -53,10 +53,8 @@ static inline __sum16 csum_fold(__wsum sum)
return (__force __sum16)(~((__force u32)sum + tmp) >> 16);
}
static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
unsigned short len,
unsigned short proto,
__wsum sum)
static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr, __u32 len,
__u8 proto, __wsum sum)
{
#ifdef __powerpc64__
unsigned long s = (__force u32)sum;

@@ -83,10 +81,8 @@ static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
* computes the checksum of the TCP/UDP pseudo-header
* returns a 16-bit checksum, already complemented
*/
static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
unsigned short len,
unsigned short proto,
__wsum sum)
static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr, __u32 len,
__u8 proto, __wsum sum)
{
return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
}
@@ -310,7 +310,7 @@ static int bt_ti_probe(struct platform_device *pdev)
BT_DBG("HCI device registered (hdev %p)", hdev);
dev_set_drvdata(&pdev->dev, hst);
return err;
return 0;
}
static int bt_ti_remove(struct platform_device *pdev)
@@ -643,6 +643,14 @@ static const struct dmi_system_id bcm_wrong_irq_dmi_table[] = {
},
.driver_data = &acpi_active_low,
},
{ /* Handle ThinkPad 8 tablets with BCM2E55 chipset ACPI ID */
.ident = "Lenovo ThinkPad 8",
.matches = {
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "ThinkPad 8"),
},
.driver_data = &acpi_active_low,
},
{ }
};
@@ -1019,7 +1019,7 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
resp.qp_tab_size = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp);
if (mlx5_core_is_pf(dev->mdev) && MLX5_CAP_GEN(dev->mdev, bf))
resp.bf_reg_size = 1 << MLX5_CAP_GEN(dev->mdev, log_bf_reg_size);
resp.cache_line_size = L1_CACHE_BYTES;
resp.cache_line_size = cache_line_size();
resp.max_sq_desc_sz = MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq);
resp.max_rq_desc_sz = MLX5_CAP_GEN(dev->mdev, max_wqe_sz_rq);
resp.max_send_wqebb = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz);
@@ -52,7 +52,6 @@ enum {
enum {
MLX5_IB_SQ_STRIDE = 6,
MLX5_IB_CACHE_LINE_SIZE = 64,
};
static const u32 mlx5_ib_opcode[] = {
@@ -2,6 +2,7 @@ config INFINIBAND_QEDR
tristate "QLogic RoCE driver"
depends on 64BIT && QEDE
select QED_LL2
select QED_RDMA
---help---
This driver provides low-level InfiniBand over Ethernet
support for QLogic QED host channel adapters (HCAs).
@@ -63,6 +63,8 @@ enum ipoib_flush_level {
enum {
IPOIB_ENCAP_LEN = 4,
IPOIB_PSEUDO_LEN = 20,
IPOIB_HARD_LEN = IPOIB_ENCAP_LEN + IPOIB_PSEUDO_LEN,
IPOIB_UD_HEAD_SIZE = IB_GRH_BYTES + IPOIB_ENCAP_LEN,
IPOIB_UD_RX_SG = 2, /* max buffer needed for 4K mtu */

@@ -134,15 +136,21 @@ struct ipoib_header {
u16 reserved;
};
struct ipoib_cb {
struct qdisc_skb_cb qdisc_cb;
u8 hwaddr[INFINIBAND_ALEN];
struct ipoib_pseudo_header {
u8 hwaddr[INFINIBAND_ALEN];
};
static inline struct ipoib_cb *ipoib_skb_cb(const struct sk_buff *skb)
static inline void skb_add_pseudo_hdr(struct sk_buff *skb)
{
BUILD_BUG_ON(sizeof(skb->cb) < sizeof(struct ipoib_cb));
return (struct ipoib_cb *)skb->cb;
char *data = skb_push(skb, IPOIB_PSEUDO_LEN);
/*
* only the ipoib header is present now, make room for a dummy
* pseudo header and set skb field accordingly
*/
memset(data, 0, IPOIB_PSEUDO_LEN);
skb_reset_mac_header(skb);
skb_pull(skb, IPOIB_HARD_LEN);
}
/* Used for all multicast joins (broadcast, IPv4 mcast and IPv6 mcast) */
@@ -63,6 +63,8 @@ MODULE_PARM_DESC(cm_data_debug_level,
#define IPOIB_CM_RX_DELAY (3 * 256 * HZ)
#define IPOIB_CM_RX_UPDATE_MASK (0x3)
#define IPOIB_CM_RX_RESERVE (ALIGN(IPOIB_HARD_LEN, 16) - IPOIB_ENCAP_LEN)
static struct ib_qp_attr ipoib_cm_err_attr = {
.qp_state = IB_QPS_ERR
};

@@ -146,15 +148,15 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
struct sk_buff *skb;
int i;
skb = dev_alloc_skb(IPOIB_CM_HEAD_SIZE + 12);
skb = dev_alloc_skb(ALIGN(IPOIB_CM_HEAD_SIZE + IPOIB_PSEUDO_LEN, 16));
if (unlikely(!skb))
return NULL;
/*
* IPoIB adds a 4 byte header. So we need 12 more bytes to align the
* IPoIB adds a IPOIB_ENCAP_LEN byte header, this will align the
* IP header to a multiple of 16.
*/
skb_reserve(skb, 12);
skb_reserve(skb, IPOIB_CM_RX_RESERVE);
mapping[0] = ib_dma_map_single(priv->ca, skb->data, IPOIB_CM_HEAD_SIZE,
DMA_FROM_DEVICE);

@@ -624,9 +626,9 @@ void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
if (wc->byte_len < IPOIB_CM_COPYBREAK) {
int dlen = wc->byte_len;
small_skb = dev_alloc_skb(dlen + 12);
small_skb = dev_alloc_skb(dlen + IPOIB_CM_RX_RESERVE);
if (small_skb) {
skb_reserve(small_skb, 12);
skb_reserve(small_skb, IPOIB_CM_RX_RESERVE);
ib_dma_sync_single_for_cpu(priv->ca, rx_ring[wr_id].mapping[0],
dlen, DMA_FROM_DEVICE);
skb_copy_from_linear_data(skb, small_skb->data, dlen);

@@ -663,8 +665,7 @@ void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
copied:
skb->protocol = ((struct ipoib_header *) skb->data)->proto;
skb_reset_mac_header(skb);
skb_pull(skb, IPOIB_ENCAP_LEN);
skb_add_pseudo_hdr(skb);
++dev->stats.rx_packets;
dev->stats.rx_bytes += skb->len;
@@ -128,16 +128,15 @@ static struct sk_buff *ipoib_alloc_rx_skb(struct net_device *dev, int id)
buf_size = IPOIB_UD_BUF_SIZE(priv->max_ib_mtu);
skb = dev_alloc_skb(buf_size + IPOIB_ENCAP_LEN);
skb = dev_alloc_skb(buf_size + IPOIB_HARD_LEN);
if (unlikely(!skb))
return NULL;
/*
* IB will leave a 40 byte gap for a GRH and IPoIB adds a 4 byte
* header. So we need 4 more bytes to get to 48 and align the
* IP header to a multiple of 16.
* the IP header will be at IPOIP_HARD_LEN + IB_GRH_BYTES, that is
* 64 bytes aligned
*/
skb_reserve(skb, 4);
skb_reserve(skb, sizeof(struct ipoib_pseudo_header));
mapping = priv->rx_ring[id].mapping;
mapping[0] = ib_dma_map_single(priv->ca, skb->data, buf_size,

@@ -253,8 +252,7 @@ static void ipoib_ib_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
skb_pull(skb, IB_GRH_BYTES);
skb->protocol = ((struct ipoib_header *) skb->data)->proto;
skb_reset_mac_header(skb);
skb_pull(skb, IPOIB_ENCAP_LEN);
skb_add_pseudo_hdr(skb);
++dev->stats.rx_packets;
dev->stats.rx_bytes += skb->len;
@@ -925,9 +925,12 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr,
ipoib_neigh_free(neigh);
goto err_drop;
}
if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE)
if (skb_queue_len(&neigh->queue) <
IPOIB_MAX_PATH_REC_QUEUE) {
/* put pseudoheader back on for next time */
skb_push(skb, IPOIB_PSEUDO_LEN);
__skb_queue_tail(&neigh->queue, skb);
else {
} else {
ipoib_warn(priv, "queue length limit %d. Packet drop.\n",
skb_queue_len(&neigh->queue));
goto err_drop;

@@ -964,7 +967,7 @@ err_drop:
}
static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
struct ipoib_cb *cb)
struct ipoib_pseudo_header *phdr)
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ipoib_path *path;

@@ -972,16 +975,18 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
spin_lock_irqsave(&priv->lock, flags);
path = __path_find(dev, cb->hwaddr + 4);
path = __path_find(dev, phdr->hwaddr + 4);
if (!path || !path->valid) {
int new_path = 0;
if (!path) {
path = path_rec_create(dev, cb->hwaddr + 4);
path = path_rec_create(dev, phdr->hwaddr + 4);
new_path = 1;
}
if (path) {
if (skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
/* put pseudoheader back on for next time */
skb_push(skb, IPOIB_PSEUDO_LEN);
__skb_queue_tail(&path->queue, skb);
} else {
++dev->stats.tx_dropped;

@@ -1009,10 +1014,12 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
be16_to_cpu(path->pathrec.dlid));
spin_unlock_irqrestore(&priv->lock, flags);
ipoib_send(dev, skb, path->ah, IPOIB_QPN(cb->hwaddr));
ipoib_send(dev, skb, path->ah, IPOIB_QPN(phdr->hwaddr));
return;
} else if ((path->query || !path_rec_start(dev, path)) &&
skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
/* put pseudoheader back on for next time */
skb_push(skb, IPOIB_PSEUDO_LEN);
__skb_queue_tail(&path->queue, skb);
} else {
++dev->stats.tx_dropped;

@@ -1026,13 +1033,15 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ipoib_neigh *neigh;
struct ipoib_cb *cb = ipoib_skb_cb(skb);
struct ipoib_pseudo_header *phdr;
struct ipoib_header *header;
unsigned long flags;
phdr = (struct ipoib_pseudo_header *) skb->data;
skb_pull(skb, sizeof(*phdr));
header = (struct ipoib_header *) skb->data;
if (unlikely(cb->hwaddr[4] == 0xff)) {
if (unlikely(phdr->hwaddr[4] == 0xff)) {
/* multicast, arrange "if" according to probability */
if ((header->proto != htons(ETH_P_IP)) &&
(header->proto != htons(ETH_P_IPV6)) &&

@@ -1045,13 +1054,13 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
}
/* Add in the P_Key for multicast*/
cb->hwaddr[8] = (priv->pkey >> 8) & 0xff;
cb->hwaddr[9] = priv->pkey & 0xff;
phdr->hwaddr[8] = (priv->pkey >> 8) & 0xff;
phdr->hwaddr[9] = priv->pkey & 0xff;
neigh = ipoib_neigh_get(dev, cb->hwaddr);
neigh = ipoib_neigh_get(dev, phdr->hwaddr);
if (likely(neigh))
goto send_using_neigh;
ipoib_mcast_send(dev, cb->hwaddr, skb);
ipoib_mcast_send(dev, phdr->hwaddr, skb);
return NETDEV_TX_OK;
}

@@ -1060,16 +1069,16 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
case htons(ETH_P_IP):
case htons(ETH_P_IPV6):
case htons(ETH_P_TIPC):
neigh = ipoib_neigh_get(dev, cb->hwaddr);
neigh = ipoib_neigh_get(dev, phdr->hwaddr);
if (unlikely(!neigh)) {
neigh_add_path(skb, cb->hwaddr, dev);
neigh_add_path(skb, phdr->hwaddr, dev);
return NETDEV_TX_OK;
}
break;
case htons(ETH_P_ARP):
case htons(ETH_P_RARP):
/* for unicast ARP and RARP should always perform path find */
unicast_arp_send(skb, dev, cb);
unicast_arp_send(skb, dev, phdr);
return NETDEV_TX_OK;
default:
/* ethertype not supported by IPoIB */

@@ -1086,11 +1095,13 @@ send_using_neigh:
goto unref;
}
} else if (neigh->ah) {
ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(cb->hwaddr));
ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(phdr->hwaddr));
goto unref;
}
if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
/* put pseudoheader back on for next time */
skb_push(skb, sizeof(*phdr));
spin_lock_irqsave(&priv->lock, flags);
__skb_queue_tail(&neigh->queue, skb);
spin_unlock_irqrestore(&priv->lock, flags);

@@ -1122,8 +1133,8 @@ static int ipoib_hard_header(struct sk_buff *skb,
unsigned short type,
const void *daddr, const void *saddr, unsigned len)
{
struct ipoib_pseudo_header *phdr;
struct ipoib_header *header;
struct ipoib_cb *cb = ipoib_skb_cb(skb);
header = (struct ipoib_header *) skb_push(skb, sizeof *header);

@@ -1132,12 +1143,13 @@ static int ipoib_hard_header(struct sk_buff *skb,
/*
* we don't rely on dst_entry structure, always stuff the
* destination address into skb->cb so we can figure out where
* destination address into skb hard header so we can figure out where
* to send the packet later.
*/
memcpy(cb->hwaddr, daddr, INFINIBAND_ALEN);
phdr = (struct ipoib_pseudo_header *) skb_push(skb, sizeof(*phdr));
memcpy(phdr->hwaddr, daddr, INFINIBAND_ALEN);
return sizeof *header;
return IPOIB_HARD_LEN;
}
static void ipoib_set_mcast_list(struct net_device *dev)

@@ -1759,7 +1771,7 @@ void ipoib_setup(struct net_device *dev)
dev->flags |= IFF_BROADCAST | IFF_MULTICAST;
dev->hard_header_len = IPOIB_ENCAP_LEN;
dev->hard_header_len = IPOIB_HARD_LEN;
dev->addr_len = INFINIBAND_ALEN;
dev->type = ARPHRD_INFINIBAND;
dev->tx_queue_len = ipoib_sendq_size * 2;
@@ -796,9 +796,11 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb)
__ipoib_mcast_add(dev, mcast);
list_add_tail(&mcast->list, &priv->multicast_list);
}
if (skb_queue_len(&mcast->pkt_queue) < IPOIB_MAX_MCAST_QUEUE)
if (skb_queue_len(&mcast->pkt_queue) < IPOIB_MAX_MCAST_QUEUE) {
/* put pseudoheader back on for next time */
skb_push(skb, sizeof(struct ipoib_pseudo_header));
skb_queue_tail(&mcast->pkt_queue, skb);
else {
} else {
++dev->stats.tx_dropped;
dev_kfree_skb_any(skb);
}
@@ -256,6 +256,7 @@ static const struct of_device_id b53_mmap_of_table[] = {
{ .compatible = "brcm,bcm63xx-switch" },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, b53_mmap_of_table);
static struct platform_driver b53_mmap_driver = {
.probe = b53_mmap_probe,
@@ -1133,6 +1133,20 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev)
return 0;
}
static void bcm_sf2_sw_shutdown(struct platform_device *pdev)
{
struct bcm_sf2_priv *priv = platform_get_drvdata(pdev);
/* For a kernel about to be kexec'd we want to keep the GPHY on for a
* successful MDIO bus scan to occur. If we did turn off the GPHY
* before (e.g: port_disable), this will also power it back on.
*
* Do not rely on kexec_in_progress, just power the PHY on.
*/
if (priv->hw_params.num_gphy == 1)
bcm_sf2_gphy_enable_set(priv->dev->ds, true);
}
#ifdef CONFIG_PM_SLEEP
static int bcm_sf2_suspend(struct device *dev)
{

@@ -1158,10 +1172,12 @@ static const struct of_device_id bcm_sf2_of_match[] = {
{ .compatible = "brcm,bcm7445-switch-v4.0" },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, bcm_sf2_of_match);
static struct platform_driver bcm_sf2_driver = {
.probe = bcm_sf2_sw_probe,
.remove = bcm_sf2_sw_remove,
.shutdown = bcm_sf2_sw_shutdown,
.driver = {
.name = "brcm-sf2",
.of_match_table = bcm_sf2_of_match,
@@ -1358,6 +1358,7 @@ static const struct of_device_id nb8800_dt_ids[] = {
},
{ }
};
MODULE_DEVICE_TABLE(of, nb8800_dt_ids);
static int nb8800_probe(struct platform_device *pdev)
{
@@ -1126,7 +1126,8 @@ out_freeirq:
free_irq(dev->irq, dev);
out_phy_disconnect:
phy_disconnect(phydev);
if (priv->has_phy)
phy_disconnect(phydev);
return ret;
}
@@ -1449,7 +1449,7 @@ static int bgmac_phy_connect(struct bgmac *bgmac)
phy_dev = phy_connect(bgmac->net_dev, bus_id, &bgmac_adjust_link,
PHY_INTERFACE_MODE_MII);
if (IS_ERR(phy_dev)) {
dev_err(bgmac->dev, "PHY connecton failed\n");
dev_err(bgmac->dev, "PHY connection failed\n");
return PTR_ERR(phy_dev);
}
@@ -271,22 +271,25 @@ static inline u32 bnx2_tx_avail(struct bnx2 *bp, struct bnx2_tx_ring_info *txr)
static u32
bnx2_reg_rd_ind(struct bnx2 *bp, u32 offset)
{
unsigned long flags;
u32 val;
spin_lock_bh(&bp->indirect_lock);
spin_lock_irqsave(&bp->indirect_lock, flags);
BNX2_WR(bp, BNX2_PCICFG_REG_WINDOW_ADDRESS, offset);
val = BNX2_RD(bp, BNX2_PCICFG_REG_WINDOW);
spin_unlock_bh(&bp->indirect_lock);
spin_unlock_irqrestore(&bp->indirect_lock, flags);
return val;
}
static void
bnx2_reg_wr_ind(struct bnx2 *bp, u32 offset, u32 val)
{
spin_lock_bh(&bp->indirect_lock);
unsigned long flags;
spin_lock_irqsave(&bp->indirect_lock, flags);
BNX2_WR(bp, BNX2_PCICFG_REG_WINDOW_ADDRESS, offset);
BNX2_WR(bp, BNX2_PCICFG_REG_WINDOW, val);
spin_unlock_bh(&bp->indirect_lock);
spin_unlock_irqrestore(&bp->indirect_lock, flags);
}
static void

@@ -304,8 +307,10 @@ bnx2_shmem_rd(struct bnx2 *bp, u32 offset)
static void
bnx2_ctx_wr(struct bnx2 *bp, u32 cid_addr, u32 offset, u32 val)
{
unsigned long flags;
offset += cid_addr;
spin_lock_bh(&bp->indirect_lock);
spin_lock_irqsave(&bp->indirect_lock, flags);
if (BNX2_CHIP(bp) == BNX2_CHIP_5709) {
int i;

@@ -322,7 +327,7 @@ bnx2_ctx_wr(struct bnx2 *bp, u32 cid_addr, u32 offset, u32 val)
BNX2_WR(bp, BNX2_CTX_DATA_ADR, offset);
BNX2_WR(bp, BNX2_CTX_DATA, val);
}
spin_unlock_bh(&bp->indirect_lock);
spin_unlock_irqrestore(&bp->indirect_lock, flags);
}
#ifdef BCM_CNIC
@@ -15241,7 +15241,7 @@ static void bnx2x_init_cyclecounter(struct bnx2x *bp)
memset(&bp->cyclecounter, 0, sizeof(bp->cyclecounter));
bp->cyclecounter.read = bnx2x_cyclecounter_read;
bp->cyclecounter.mask = CYCLECOUNTER_MASK(64);
bp->cyclecounter.shift = 1;
bp->cyclecounter.shift = 0;
bp->cyclecounter.mult = 1;
}
@@ -4057,7 +4057,7 @@ static void cfg_queues(struct adapter *adap)
* capped by the number of available cores.
*/
if (n10g) {
i = num_online_cpus();
i = min_t(int, MAX_OFLD_QSETS, num_online_cpus());
s->ofldqsets = roundup(i, adap->params.nports);
} else {
s->ofldqsets = adap->params.nports;
@@ -135,15 +135,17 @@ static int uldrx_handler(struct sge_rspq *q, const __be64 *rsp,
}
static int alloc_uld_rxqs(struct adapter *adap,
struct sge_uld_rxq_info *rxq_info,
unsigned int nq, unsigned int offset, bool lro)
struct sge_uld_rxq_info *rxq_info, bool lro)
{
struct sge *s = &adap->sge;
struct sge_ofld_rxq *q = rxq_info->uldrxq + offset;
unsigned short *ids = rxq_info->rspq_id + offset;
unsigned int per_chan = nq / adap->params.nports;
unsigned int nq = rxq_info->nrxq + rxq_info->nciq;
struct sge_ofld_rxq *q = rxq_info->uldrxq;
unsigned short *ids = rxq_info->rspq_id;
unsigned int bmap_idx = 0;
int i, err, msi_idx;
unsigned int per_chan;
int i, err, msi_idx, que_idx = 0;
per_chan = rxq_info->nrxq / adap->params.nports;
if (adap->flags & USING_MSIX)
msi_idx = 1;

@@ -151,12 +153,18 @@ static int alloc_uld_rxqs(struct adapter *adap,
msi_idx = -((int)s->intrq.abs_id + 1);
for (i = 0; i < nq; i++, q++) {
if (i == rxq_info->nrxq) {
/* start allocation of concentrator queues */
per_chan = rxq_info->nciq / adap->params.nports;
que_idx = 0;
}
if (msi_idx >= 0) {
bmap_idx = get_msix_idx_from_bmap(adap);
msi_idx = adap->msix_info_ulds[bmap_idx].idx;
}
err = t4_sge_alloc_rxq(adap, &q->rspq, false,
adap->port[i / per_chan],
adap->port[que_idx++ / per_chan],
msi_idx,
q->fl.size ? &q->fl : NULL,
uldrx_handler,

@@ -165,29 +173,19 @@ static int alloc_uld_rxqs(struct adapter *adap,
if (err)
goto freeout;
if (msi_idx >= 0)
rxq_info->msix_tbl[i + offset] = bmap_idx;
rxq_info->msix_tbl[i] = bmap_idx;
memset(&q->stats, 0, sizeof(q->stats));
if (ids)
ids[i] = q->rspq.abs_id;
}
return 0;
freeout:
q = rxq_info->uldrxq + offset;
q = rxq_info->uldrxq;
for ( ; i; i--, q++) {
if (q->rspq.desc)
free_rspq_fl(adap, &q->rspq,
q->fl.size ? &q->fl : NULL);
}
/* We need to free rxq also in case of ciq allocation failure */
if (offset) {
q = rxq_info->uldrxq + offset;
for ( ; i; i--, q++) {
if (q->rspq.desc)
free_rspq_fl(adap, &q->rspq,
q->fl.size ? &q->fl : NULL);
}
}
return err;
}

@@ -205,9 +203,7 @@ setup_sge_queues_uld(struct adapter *adap, unsigned int uld_type, bool lro)
return -ENOMEM;
}
ret = !(!alloc_uld_rxqs(adap, rxq_info, rxq_info->nrxq, 0, lro) &&
!alloc_uld_rxqs(adap, rxq_info, rxq_info->nciq,
rxq_info->nrxq, lro));
ret = !(!alloc_uld_rxqs(adap, rxq_info, lro));
/* Tell uP to route control queue completions to rdma rspq */
if (adap->flags & FULL_INIT_DONE &&
@@ -210,8 +210,10 @@ static int t4_sched_queue_bind(struct port_info *pi, struct ch_sched_queue *p)
/* Unbind queue from any existing class */
err = t4_sched_queue_unbind(pi, p);
if (err)
if (err) {
t4_free_mem(qe);
goto out;
}
/* Bind queue to specified class */
memset(qe, 0, sizeof(*qe));
@@ -169,19 +169,28 @@ int vnic_rq_disable(struct vnic_rq *rq)
{
unsigned int wait;
struct vnic_dev *vdev = rq->vdev;
int i;
iowrite32(0, &rq->ctrl->enable);
/* Due to a race condition with clearing RQ "mini-cache" in hw, we need
* to disable the RQ twice to guarantee that stale descriptors are not
* used when this RQ is re-enabled.
*/
for (i = 0; i < 2; i++) {
iowrite32(0, &rq->ctrl->enable);
/* Wait for HW to ACK disable request */
for (wait = 0; wait < 1000; wait++) {
if (!(ioread32(&rq->ctrl->running)))
return 0;
udelay(10);
/* Wait for HW to ACK disable request */
for (wait = 20000; wait > 0; wait--)
if (!ioread32(&rq->ctrl->running))
break;
if (!wait) {
vdev_neterr(vdev, "Failed to disable RQ[%d]\n",
rq->index);
return -ETIMEDOUT;
}
}
vdev_neterr(vdev, "Failed to disable RQ[%d]\n", rq->index);
return -ETIMEDOUT;
return 0;
}
void vnic_rq_clean(struct vnic_rq *rq,

@@ -212,6 +221,11 @@ void vnic_rq_clean(struct vnic_rq *rq,
[fetch_index % VNIC_RQ_BUF_BLK_ENTRIES(count)];
iowrite32(fetch_index, &rq->ctrl->posted_index);
/* Anytime we write fetch_index, we need to re-write 0 to rq->enable
* to re-sync internal VIC state.
*/
iowrite32(0, &rq->ctrl->enable);
vnic_dev_clear_desc_ring(&rq->ring);
}
@@ -669,6 +669,7 @@ static const struct of_device_id nps_enet_dt_ids[] = {
{ .compatible = "ezchip,nps-mgt-enet" },
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(of, nps_enet_dt_ids);
static struct platform_driver nps_enet_driver = {
.probe = nps_enet_probe,
@@ -1430,14 +1430,14 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
skb_put(skb, pkt_len - 4);
data = skb->data;
if (!is_copybreak && need_swap)
swap_buffer(data, pkt_len);
#if !defined(CONFIG_M5272)
if (fep->quirks & FEC_QUIRK_HAS_RACC)
data = skb_pull_inline(skb, 2);
#endif
if (!is_copybreak && need_swap)
swap_buffer(data, pkt_len);
/* Extract the enhanced buffer descriptor */
ebdp = NULL;
if (fep->bufdesc_ex)
@@ -2751,6 +2751,7 @@ static const struct of_device_id g_dsaf_match[] = {
{.compatible = "hisilicon,hns-dsaf-v2"},
{}
};
MODULE_DEVICE_TABLE(of, g_dsaf_match);
static struct platform_driver g_dsaf_driver = {
.probe = hns_dsaf_probe,
@@ -563,6 +563,7 @@ static const struct of_device_id hns_mdio_match[] = {
{.compatible = "hisilicon,hns-mdio"},
{}
};
MODULE_DEVICE_TABLE(of, hns_mdio_match);
static const struct acpi_device_id hns_mdio_acpi_match[] = {
{ "HISI0141", 0 },
@@ -1190,7 +1190,7 @@ static struct ibmvnic_sub_crq_queue *init_sub_crq_queue(struct ibmvnic_adapter
if (!scrq)
return NULL;
scrq->msgs = (union sub_crq *)__get_free_pages(GFP_KERNEL, 2);
scrq->msgs = (union sub_crq *)__get_free_pages(GFP_ATOMIC, 2);
memset(scrq->msgs, 0, 4 * PAGE_SIZE);
if (!scrq->msgs) {
dev_warn(dev, "Couldn't allocate crq queue messages page\n");

@@ -1461,14 +1461,16 @@ static int init_sub_crq_irqs(struct ibmvnic_adapter *adapter)
return rc;
req_rx_irq_failed:
for (j = 0; j < i; j++)
for (j = 0; j < i; j++) {
free_irq(adapter->rx_scrq[j]->irq, adapter->rx_scrq[j]);
irq_dispose_mapping(adapter->rx_scrq[j]->irq);
}
i = adapter->req_tx_queues;
req_tx_irq_failed:
for (j = 0; j < i; j++)
for (j = 0; j < i; j++) {
free_irq(adapter->tx_scrq[j]->irq, adapter->tx_scrq[j]);
irq_dispose_mapping(adapter->rx_scrq[j]->irq);
}
release_sub_crqs_no_irqs(adapter);
return rc;
}

@@ -3232,6 +3234,27 @@ static void ibmvnic_free_inflight(struct ibmvnic_adapter *adapter)
spin_unlock_irqrestore(&adapter->inflight_lock, flags);
}
static void ibmvnic_xport_event(struct work_struct *work)
{
struct ibmvnic_adapter *adapter = container_of(work,
struct ibmvnic_adapter,
ibmvnic_xport);
struct device *dev = &adapter->vdev->dev;
long rc;
ibmvnic_free_inflight(adapter);
release_sub_crqs(adapter);
if (adapter->migrated) {
rc = ibmvnic_reenable_crq_queue(adapter);
if (rc)
dev_err(dev, "Error after enable rc=%ld\n", rc);
adapter->migrated = false;
rc = ibmvnic_send_crq_init(adapter);
if (rc)
dev_err(dev, "Error sending init rc=%ld\n", rc);
}
}
static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
struct ibmvnic_adapter *adapter)
{

@@ -3267,15 +3290,7 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
if (gen_crq->cmd == IBMVNIC_PARTITION_MIGRATED) {
dev_info(dev, "Re-enabling adapter\n");
adapter->migrated = true;
ibmvnic_free_inflight(adapter);
release_sub_crqs(adapter);
rc = ibmvnic_reenable_crq_queue(adapter);
if (rc)
dev_err(dev, "Error after enable rc=%ld\n", rc);
adapter->migrated = false;
rc = ibmvnic_send_crq_init(adapter);
if (rc)
dev_err(dev, "Error sending init rc=%ld\n", rc);
schedule_work(&adapter->ibmvnic_xport);
} else if (gen_crq->cmd == IBMVNIC_DEVICE_FAILOVER) {
dev_info(dev, "Backing device failover detected\n");
netif_carrier_off(netdev);

@@ -3284,8 +3299,7 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
/* The adapter lost the connection */
dev_err(dev, "Virtual Adapter failed (rc=%d)\n",
gen_crq->cmd);
ibmvnic_free_inflight(adapter);
release_sub_crqs(adapter);
schedule_work(&adapter->ibmvnic_xport);
}
return;
case IBMVNIC_CRQ_CMD_RSP:

@@ -3654,6 +3668,7 @@ static void handle_crq_init_rsp(struct work_struct *work)
goto task_failed;
netdev->real_num_tx_queues = adapter->req_tx_queues;
netdev->mtu = adapter->req_mtu;
if (adapter->failover) {
adapter->failover = false;

@@ -3725,6 +3740,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
SET_NETDEV_DEV(netdev, &dev->dev);
INIT_WORK(&adapter->vnic_crq_init, handle_crq_init_rsp);
INIT_WORK(&adapter->ibmvnic_xport, ibmvnic_xport_event);
spin_lock_init(&adapter->stats_lock);

@@ -3792,6 +3808,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
}
netdev->real_num_tx_queues = adapter->req_tx_queues;
netdev->mtu = adapter->req_mtu;
rc = register_netdev(netdev);
if (rc) {
@@ -27,7 +27,7 @@
/**************************************************************************/
#define IBMVNIC_NAME "ibmvnic"
#define IBMVNIC_DRIVER_VERSION "1.0"
#define IBMVNIC_DRIVER_VERSION "1.0.1"
#define IBMVNIC_INVALID_MAP -1
#define IBMVNIC_STATS_TIMEOUT 1
/* basic structures plus 100 2k buffers */

@@ -1048,5 +1048,6 @@ struct ibmvnic_adapter {
u8 map_id;
struct work_struct vnic_crq_init;
struct work_struct ibmvnic_xport;
bool failover;
};
@@ -92,6 +92,7 @@
#define I40E_AQ_LEN 256
#define I40E_AQ_WORK_LIMIT 66 /* max number of VFs + a little */
#define I40E_MAX_USER_PRIORITY 8
#define I40E_DEFAULT_TRAFFIC_CLASS BIT(0)
#define I40E_DEFAULT_MSG_ENABLE 4
#define I40E_QUEUE_WAIT_RETRY_LIMIT 10
#define I40E_INT_NAME_STR_LEN (IFNAMSIZ + 16)
@@ -4640,29 +4640,6 @@ static u8 i40e_pf_get_num_tc(struct i40e_pf *pf)
return num_tc;
}
/**
* i40e_pf_get_default_tc - Get bitmap for first enabled TC
* @pf: PF being queried
*
* Return a bitmap for first enabled traffic class for this PF.
**/
static u8 i40e_pf_get_default_tc(struct i40e_pf *pf)
{
u8 enabled_tc = pf->hw.func_caps.enabled_tcmap;
u8 i = 0;
if (!enabled_tc)
return 0x1; /* TC0 */
/* Find the first enabled TC */
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (enabled_tc & BIT(i))
break;
}
return BIT(i);
}
/**
* i40e_pf_get_pf_tc_map - Get bitmap for enabled traffic classes
* @pf: PF being queried

@@ -4673,7 +4650,7 @@ static u8 i40e_pf_get_tc_map(struct i40e_pf *pf)
{
/* If DCB is not enabled for this PF then just return default TC */
if (!(pf->flags & I40E_FLAG_DCB_ENABLED))
return i40e_pf_get_default_tc(pf);
return I40E_DEFAULT_TRAFFIC_CLASS;
/* SFP mode we want PF to be enabled for all TCs */
if (!(pf->flags & I40E_FLAG_MFP_ENABLED))

@@ -4683,7 +4660,7 @@ static u8 i40e_pf_get_tc_map(struct i40e_pf *pf)
if (pf->hw.func_caps.iscsi)
return i40e_get_iscsi_tc_map(pf);
else
return i40e_pf_get_default_tc(pf);
return I40E_DEFAULT_TRAFFIC_CLASS;
}
/**

@@ -5029,7 +5006,7 @@ static void i40e_dcb_reconfigure(struct i40e_pf *pf)
if (v == pf->lan_vsi)
tc_map = i40e_pf_get_tc_map(pf);
else
tc_map = i40e_pf_get_default_tc(pf);
tc_map = I40E_DEFAULT_TRAFFIC_CLASS;
#ifdef I40E_FCOE
if (pf->vsi[v]->type == I40E_VSI_FCOE)
tc_map = i40e_get_fcoe_tc_map(pf);

@@ -5717,7 +5694,7 @@ static int i40e_handle_lldp_event(struct i40e_pf *pf,
u8 type;
/* Not DCB capable or capability disabled */
if (!(pf->flags & I40E_FLAG_DCB_ENABLED))
if (!(pf->flags & I40E_FLAG_DCB_CAPABLE))
return ret;
/* Ignore if event is not for Nearest Bridge */

@@ -7707,6 +7684,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
pf->flags &= ~I40E_FLAG_MSIX_ENABLED;
kfree(pf->msix_entries);
pf->msix_entries = NULL;
pci_disable_msix(pf->pdev);
return -ENODEV;
} else if (v_actual == I40E_MIN_MSIX) {

@@ -9056,7 +9034,7 @@ static int i40e_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
return 0;
return ndo_dflt_bridge_getlink(skb, pid, seq, dev, veb->bridge_mode,
nlflags, 0, 0, filter_mask, NULL);
0, 0, nlflags, filter_mask, NULL);
}
/* Hardware supports L4 tunnel length of 128B (=2^7) which includes
@@ -9135,10 +9135,14 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
goto fwd_add_err;
fwd_adapter->pool = pool;
fwd_adapter->real_adapter = adapter;
err = ixgbe_fwd_ring_up(vdev, fwd_adapter);
if (err)
goto fwd_add_err;
netif_tx_start_all_queues(vdev);
if (netif_running(pdev)) {
err = ixgbe_fwd_ring_up(vdev, fwd_adapter);
if (err)
goto fwd_add_err;
netif_tx_start_all_queues(vdev);
}
return fwd_adapter;
fwd_add_err:
/* unwind counter and free adapter struct */
@@ -2968,6 +2968,22 @@ static void set_params(struct mv643xx_eth_private *mp,
mp->txq_count = pd->tx_queue_count ? : 1;
}
static int get_phy_mode(struct mv643xx_eth_private *mp)
{
struct device *dev = mp->dev->dev.parent;
int iface = -1;
if (dev->of_node)
iface = of_get_phy_mode(dev->of_node);
/* Historical default if unspecified. We could also read/write
* the interface state in the PSC1
*/
if (iface < 0)
iface = PHY_INTERFACE_MODE_GMII;
return iface;
}
static struct phy_device *phy_scan(struct mv643xx_eth_private *mp,
int phy_addr)
{

@@ -2994,7 +3010,7 @@ static struct phy_device *phy_scan(struct mv643xx_eth_private *mp,
"orion-mdio-mii", addr);
phydev = phy_connect(mp->dev, phy_id, mv643xx_eth_adjust_link,
PHY_INTERFACE_MODE_GMII);
get_phy_mode(mp));
if (!IS_ERR(phydev)) {
phy_addr_set(mp, addr);
break;

@@ -3090,6 +3106,7 @@ static int mv643xx_eth_probe(struct platform_device *pdev)
if (!dev)
return -ENOMEM;
SET_NETDEV_DEV(dev, &pdev->dev);
mp = netdev_priv(dev);
platform_set_drvdata(pdev, mp);

@@ -3129,7 +3146,7 @@ static int mv643xx_eth_probe(struct platform_device *pdev)
if (pd->phy_node) {
mp->phy = of_phy_connect(mp->dev, pd->phy_node,
mv643xx_eth_adjust_link, 0,
PHY_INTERFACE_MODE_GMII);
get_phy_mode(mp));
if (!mp->phy)
err = -ENODEV;
else

@@ -3187,8 +3204,6 @@ static int mv643xx_eth_probe(struct platform_device *pdev)
dev->priv_flags |= IFF_UNICAST_FLT;
dev->gso_max_segs = MV643XX_MAX_TSO_SEGS;
SET_NETDEV_DEV(dev, &pdev->dev);
if (mp->shared->win_protect)
wrl(mp, WINDOW_PROTECT(mp->port_num), mp->shared->win_protect);
@@ -2469,6 +2469,7 @@ err_comm_admin:
kfree(priv->mfunc.master.slave_state);
err_comm:
iounmap(priv->mfunc.comm);
priv->mfunc.comm = NULL;
err_vhcr:
dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
priv->mfunc.vhcr,

@@ -2537,6 +2538,13 @@ void mlx4_report_internal_err_comm_event(struct mlx4_dev *dev)
int slave;
u32 slave_read;
/* If the comm channel has not yet been initialized,
* skip reporting the internal error event to all
* the communication channels.
*/
if (!priv->mfunc.comm)
return;
/* Report an internal error event to all
* communication channels.
*/

@@ -2571,6 +2579,7 @@ void mlx4_multi_func_cleanup(struct mlx4_dev *dev)
}
iounmap(priv->mfunc.comm);
priv->mfunc.comm = NULL;
}
void mlx4_cmd_cleanup(struct mlx4_dev *dev, int cleanup_mask)
@@ -245,8 +245,11 @@ static u32 freq_to_shift(u16 freq)
{
u32 freq_khz = freq * 1000;
u64 max_val_cycles = freq_khz * 1000 * MLX4_EN_WRAP_AROUND_SEC;
u64 tmp_rounded =
roundup_pow_of_two(max_val_cycles) > max_val_cycles ?
roundup_pow_of_two(max_val_cycles) - 1 : UINT_MAX;
u64 max_val_cycles_rounded = is_power_of_2(max_val_cycles + 1) ?
max_val_cycles : roundup_pow_of_two(max_val_cycles) - 1;
max_val_cycles : tmp_rounded;
/* calculate max possible multiplier in order to fit in 64bit */
u64 max_mul = div_u64(0xffffffffffffffffULL, max_val_cycles_rounded);
@@ -127,7 +127,15 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
/* For TX we use the same irq per
ring we assigned for the RX */
struct mlx4_en_cq *rx_cq;
int xdp_index;
/* The xdp tx irq must align with the rx ring that forwards to
* it, so reindex these from 0. This should only happen when
* tx_ring_num is not a multiple of rx_ring_num.
*/
xdp_index = (priv->xdp_ring_num - priv->tx_ring_num) + cq_idx;
if (xdp_index >= 0)
cq_idx = xdp_index;
cq_idx = cq_idx % priv->rx_ring_num;
rx_cq = priv->rx_cq[cq_idx];
cq->vector = rx_cq->vector;
@@ -1733,6 +1733,13 @@ int mlx4_en_start_port(struct net_device *dev)
udp_tunnel_get_rx_info(dev);
priv->port_up = true;
/* Process all completions if exist to prevent
* the queues freezing if they are full
*/
for (i = 0; i < priv->rx_ring_num; i++)
napi_schedule(&priv->rx_cq[i]->napi);
netif_tx_start_all_queues(dev);
netif_device_attach(dev);

@@ -1910,8 +1917,9 @@ static void mlx4_en_clear_stats(struct net_device *dev)
struct mlx4_en_dev *mdev = priv->mdev;
int i;
if (mlx4_en_DUMP_ETH_STATS(mdev, priv->port, 1))
en_dbg(HW, priv, "Failed dumping statistics\n");
if (!mlx4_is_slave(mdev->dev))
if (mlx4_en_DUMP_ETH_STATS(mdev, priv->port, 1))
en_dbg(HW, priv, "Failed dumping statistics\n");
memset(&priv->pstats, 0, sizeof(priv->pstats));
memset(&priv->pkstats, 0, sizeof(priv->pkstats));

@@ -2194,6 +2202,7 @@ void mlx4_en_destroy_netdev(struct net_device *dev)
if (!shutdown)
free_netdev(dev);
dev->ethtool_ops = NULL;
}
static int mlx4_en_change_mtu(struct net_device *dev, int new_mtu)
@@ -166,7 +166,7 @@ int mlx4_en_DUMP_ETH_STATS(struct mlx4_en_dev *mdev, u8 port, u8 reset)
return PTR_ERR(mailbox);
err = mlx4_cmd_box(mdev->dev, 0, mailbox->dma, in_mod, 0,
MLX4_CMD_DUMP_ETH_STATS, MLX4_CMD_TIME_CLASS_B,
MLX4_CMD_WRAPPED);
MLX4_CMD_NATIVE);
if (err)
goto out;

@@ -322,7 +322,7 @@ int mlx4_en_DUMP_ETH_STATS(struct mlx4_en_dev *mdev, u8 port, u8 reset)
err = mlx4_cmd_box(mdev->dev, 0, mailbox->dma,
in_mod | MLX4_DUMP_ETH_STATS_FLOW_CONTROL,
0, MLX4_CMD_DUMP_ETH_STATS,
MLX4_CMD_TIME_CLASS_B, MLX4_CMD_WRAPPED);
MLX4_CMD_TIME_CLASS_B, MLX4_CMD_NATIVE);
if (err)
goto out;
}
@@ -118,6 +118,29 @@ mlx4_en_test_loopback_exit:
return !loopback_ok;
}
static int mlx4_en_test_interrupts(struct mlx4_en_priv *priv)
{
struct mlx4_en_dev *mdev = priv->mdev;
int err = 0;
int i = 0;
err = mlx4_test_async(mdev->dev);
/* When not in MSI_X or slave, test only async */
if (!(mdev->dev->flags & MLX4_FLAG_MSI_X) || mlx4_is_slave(mdev->dev))
return err;
/* A loop over all completion vectors of current port,
* for each vector check whether it works by mapping command
* completions to that vector and performing a NOP command
*/
for (i = 0; i < priv->rx_ring_num; i++) {
err = mlx4_test_interrupt(mdev->dev, priv->rx_cq[i]->vector);
if (err)
break;
}
return err;
}
static int mlx4_en_test_link(struct mlx4_en_priv *priv)
{

@@ -151,7 +174,6 @@ static int mlx4_en_test_speed(struct mlx4_en_priv *priv)
void mlx4_en_ex_selftest(struct net_device *dev, u32 *flags, u64 *buf)
{
struct mlx4_en_priv *priv = netdev_priv(dev);
struct mlx4_en_dev *mdev = priv->mdev;
int i, carrier_ok;
memset(buf, 0, sizeof(u64) * MLX4_EN_NUM_SELF_TEST);

@@ -177,7 +199,7 @@ void mlx4_en_ex_selftest(struct net_device *dev, u32 *flags, u64 *buf)
netif_carrier_on(dev);
}
buf[0] = mlx4_test_interrupts(mdev->dev);
buf[0] = mlx4_en_test_interrupts(priv);
buf[1] = mlx4_en_test_link(priv);
buf[2] = mlx4_en_test_speed(priv);
@@ -1361,53 +1361,49 @@ void mlx4_cleanup_eq_table(struct mlx4_dev *dev)
kfree(priv->eq_table.uar_map);
}
/* A test that verifies that we can accept interrupts on all
* the irq vectors of the device.
/* A test that verifies that we can accept interrupts
* on the vector allocated for asynchronous events
*/
int mlx4_test_async(struct mlx4_dev *dev)
{
return mlx4_NOP(dev);
}
EXPORT_SYMBOL(mlx4_test_async);
/* A test that verifies that we can accept interrupts
* on the given irq vector of the tested port.
* Interrupts are checked using the NOP command.
*/
int mlx4_test_interrupts(struct mlx4_dev *dev)
int mlx4_test_interrupt(struct mlx4_dev *dev, int vector)
{
struct mlx4_priv *priv = mlx4_priv(dev);
int i;
int err;
err = mlx4_NOP(dev);
/* When not in MSI_X, there is only one irq to check */
if (!(dev->flags & MLX4_FLAG_MSI_X) || mlx4_is_slave(dev))
return err;
/* Temporary use polling for command completions */
mlx4_cmd_use_polling(dev);
/* A loop over all completion vectors, for each vector we will check
* whether it works by mapping command completions to that vector
* and performing a NOP command
*/
for(i = 0; !err && (i < dev->caps.num_comp_vectors); ++i) {
/* Make sure request_irq was called */
if (!priv->eq_table.eq[i].have_irq)
continue;
/* Temporary use polling for command completions */
mlx4_cmd_use_polling(dev);
/* Map the new eq to handle all asynchronous events */
err = mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
priv->eq_table.eq[i].eqn);
if (err) {
mlx4_warn(dev, "Failed mapping eq for interrupt test\n");
mlx4_cmd_use_events(dev);
break;
}
/* Go back to using events */
mlx4_cmd_use_events(dev);
err = mlx4_NOP(dev);
/* Map the new eq to handle all asynchronous events */
err = mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
priv->eq_table.eq[MLX4_CQ_TO_EQ_VECTOR(vector)].eqn);
if (err) {
mlx4_warn(dev, "Failed mapping eq for interrupt test\n");
goto out;
}
/* Go back to using events */
mlx4_cmd_use_events(dev);
err = mlx4_NOP(dev);
/* Return to default */
mlx4_cmd_use_polling(dev);
out:
mlx4_MAP_EQ(dev, get_async_ev_mask(dev), 0,
priv->eq_table.eq[MLX4_EQ_ASYNC].eqn);
mlx4_cmd_use_events(dev);
return err;
}
EXPORT_SYMBOL(mlx4_test_interrupts);
EXPORT_SYMBOL(mlx4_test_interrupt);
bool mlx4_is_eq_vector_valid(struct mlx4_dev *dev, u8 port, int vector)
{
@@ -49,9 +49,9 @@ enum {
extern void __buggy_use_of_MLX4_GET(void);
extern void __buggy_use_of_MLX4_PUT(void);
static bool enable_qos = true;
static bool enable_qos;
module_param(enable_qos, bool, 0444);
MODULE_PARM_DESC(enable_qos, "Enable Enhanced QoS support (default: on)");
MODULE_PARM_DESC(enable_qos, "Enable Enhanced QoS support (default: off)");
#define MLX4_GET(dest, source, offset) \
do { \
@@ -1102,6 +1102,14 @@ static int __set_port_type(struct mlx4_port_info *info,
int i;
int err = 0;
if ((port_type & mdev->caps.supported_type[info->port]) != port_type) {
mlx4_err(mdev,
"Requested port type for port %d is not supported on this HCA\n",
info->port);
err = -EINVAL;
goto err_sup;
}
mlx4_stop_sense(mdev);
mutex_lock(&priv->port_mutex);
info->tmp_type = port_type;

@@ -1147,7 +1155,7 @@ static int __set_port_type(struct mlx4_port_info *info,
out:
mlx4_start_sense(mdev);
mutex_unlock(&priv->port_mutex);
err_sup:
return err;
}
@@ -145,9 +145,10 @@ enum mlx4_resource {
RES_MTT,
RES_MAC,
RES_VLAN,
RES_EQ,
RES_NPORT_ID,
RES_COUNTER,
RES_FS_RULE,
RES_EQ,
MLX4_NUM_OF_RESOURCE_TYPE
};

@@ -1329,8 +1330,6 @@ int mlx4_SET_VLAN_FLTR_wrapper(struct mlx4_dev *dev, int slave,
struct mlx4_cmd_info *cmd);
int mlx4_common_set_vlan_fltr(struct mlx4_dev *dev, int function,
int port, void *buf);
int mlx4_common_dump_eth_stats(struct mlx4_dev *dev, int slave, u32 in_mod,
struct mlx4_cmd_mailbox *outbox);
int mlx4_DUMP_ETH_STATS_wrapper(struct mlx4_dev *dev, int slave,
struct mlx4_vhcr *vhcr,
struct mlx4_cmd_mailbox *inbox,
@@ -1728,24 +1728,13 @@ int mlx4_SET_VLAN_FLTR_wrapper(struct mlx4_dev *dev, int slave,
return err;
}
int mlx4_common_dump_eth_stats(struct mlx4_dev *dev, int slave,
u32 in_mod, struct mlx4_cmd_mailbox *outbox)
{
return mlx4_cmd_box(dev, 0, outbox->dma, in_mod, 0,
MLX4_CMD_DUMP_ETH_STATS, MLX4_CMD_TIME_CLASS_B,
MLX4_CMD_NATIVE);
}
int mlx4_DUMP_ETH_STATS_wrapper(struct mlx4_dev *dev, int slave,
struct mlx4_vhcr *vhcr,
struct mlx4_cmd_mailbox *inbox,
struct mlx4_cmd_mailbox *outbox,
struct mlx4_cmd_info *cmd)
{
if (slave != dev->caps.function)
return 0;
return mlx4_common_dump_eth_stats(dev, slave,
vhcr->in_modifier, outbox);
return 0;
}
int mlx4_get_slave_from_roce_gid(struct mlx4_dev *dev, int port, u8 *gid,
@@ -1605,13 +1605,14 @@ static int eq_res_start_move_to(struct mlx4_dev *dev, int slave, int index,
r->com.from_state = r->com.state;
r->com.to_state = state;
r->com.state = RES_EQ_BUSY;
if (eq)
*eq = r;
}
}

spin_unlock_irq(mlx4_tlock(dev));

if (!err && eq)
*eq = r;

return err;
}

@@ -41,6 +41,13 @@

#include "mlx5_core.h"

struct mlx5_db_pgdir {
struct list_head list;
unsigned long *bitmap;
__be32 *db_page;
dma_addr_t db_dma;
};

/* Handling for queue buffers -- we allocate a bunch of memory and
* register it in a memory region at HCA virtual address 0.
*/

@@ -102,17 +109,28 @@ EXPORT_SYMBOL_GPL(mlx5_buf_free);
static struct mlx5_db_pgdir *mlx5_alloc_db_pgdir(struct mlx5_core_dev *dev,
int node)
{
u32 db_per_page = PAGE_SIZE / cache_line_size();
struct mlx5_db_pgdir *pgdir;

pgdir = kzalloc(sizeof(*pgdir), GFP_KERNEL);
if (!pgdir)
return NULL;

bitmap_fill(pgdir->bitmap, MLX5_DB_PER_PAGE);
pgdir->bitmap = kcalloc(BITS_TO_LONGS(db_per_page),
sizeof(unsigned long),
GFP_KERNEL);

if (!pgdir->bitmap) {
kfree(pgdir);
return NULL;
}

bitmap_fill(pgdir->bitmap, db_per_page);

pgdir->db_page = mlx5_dma_zalloc_coherent_node(dev, PAGE_SIZE,
&pgdir->db_dma, node);
if (!pgdir->db_page) {
kfree(pgdir->bitmap);
kfree(pgdir);
return NULL;
}

@@ -123,18 +141,19 @@ static struct mlx5_db_pgdir *mlx5_alloc_db_pgdir(struct mlx5_core_dev *dev,
static int mlx5_alloc_db_from_pgdir(struct mlx5_db_pgdir *pgdir,
struct mlx5_db *db)
{
u32 db_per_page = PAGE_SIZE / cache_line_size();
int offset;
int i;

i = find_first_bit(pgdir->bitmap, MLX5_DB_PER_PAGE);
if (i >= MLX5_DB_PER_PAGE)
i = find_first_bit(pgdir->bitmap, db_per_page);
if (i >= db_per_page)
return -ENOMEM;

__clear_bit(i, pgdir->bitmap);

db->u.pgdir = pgdir;
db->index = i;
offset = db->index * L1_CACHE_BYTES;
offset = db->index * cache_line_size();
db->db = pgdir->db_page + offset / sizeof(*pgdir->db_page);
db->dma = pgdir->db_dma + offset;

@@ -181,14 +200,16 @@ EXPORT_SYMBOL_GPL(mlx5_db_alloc);

void mlx5_db_free(struct mlx5_core_dev *dev, struct mlx5_db *db)
{
u32 db_per_page = PAGE_SIZE / cache_line_size();
mutex_lock(&dev->priv.pgdir_mutex);

__set_bit(db->index, db->u.pgdir->bitmap);

if (bitmap_full(db->u.pgdir->bitmap, MLX5_DB_PER_PAGE)) {
if (bitmap_full(db->u.pgdir->bitmap, db_per_page)) {
dma_free_coherent(&(dev->pdev->dev), PAGE_SIZE,
db->u.pgdir->db_page, db->u.pgdir->db_dma);
list_del(&db->u.pgdir->list);
kfree(db->u.pgdir->bitmap);
kfree(db->u.pgdir);
}

@@ -85,6 +85,9 @@
#define MLX5_MPWRQ_SMALL_PACKET_THRESHOLD (128)

#define MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ (64 * 1024)
#define MLX5E_DEFAULT_LRO_TIMEOUT 32
#define MLX5E_LRO_TIMEOUT_ARR_SIZE 4

#define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC 0x10
#define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE 0x3
#define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS 0x20

@@ -221,6 +224,7 @@ struct mlx5e_params {
struct ieee_ets ets;
#endif
bool rx_am_enabled;
u32 lro_timeout;
};

struct mlx5e_tstamp {

@@ -888,5 +892,6 @@ int mlx5e_attach_netdev(struct mlx5_core_dev *mdev, struct net_device *netdev);
void mlx5e_detach_netdev(struct mlx5_core_dev *mdev, struct net_device *netdev);
struct rtnl_link_stats64 *
mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats);
u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout);

#endif /* __MLX5_EN_H__ */

@@ -1971,9 +1971,7 @@ static void mlx5e_build_tir_ctx_lro(void *tirc, struct mlx5e_priv *priv)
MLX5_SET(tirc, tirc, lro_max_ip_payload_size,
(priv->params.lro_wqe_sz -
ROUGH_MAX_L2_L3_HDR_SZ) >> 8);
MLX5_SET(tirc, tirc, lro_timeout_period_usecs,
MLX5_CAP_ETH(priv->mdev,
lro_timer_supported_periods[2]));
MLX5_SET(tirc, tirc, lro_timeout_period_usecs, priv->params.lro_timeout);
}

void mlx5e_build_tir_ctx_hash(void *tirc, struct mlx5e_priv *priv)

@@ -3401,6 +3399,18 @@ static void mlx5e_query_min_inline(struct mlx5_core_dev *mdev,
}
}

u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout)
{
int i;

/* The supported periods are organized in ascending order */
for (i = 0; i < MLX5E_LRO_TIMEOUT_ARR_SIZE - 1; i++)
if (MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]) >= wanted_timeout)
break;

return MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]);
}

static void mlx5e_build_nic_netdev_priv(struct mlx5_core_dev *mdev,
struct net_device *netdev,
const struct mlx5e_profile *profile,

@@ -3419,6 +3429,9 @@ static void mlx5e_build_nic_netdev_priv(struct mlx5_core_dev *mdev,
priv->profile = profile;
priv->ppriv = ppriv;

priv->params.lro_timeout =
mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT);

priv->params.log_sq_size = MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE;

/* set CQE compression */

@@ -4035,7 +4048,6 @@ void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv)
const struct mlx5e_profile *profile = priv->profile;
struct net_device *netdev = priv->netdev;

unregister_netdev(netdev);
destroy_workqueue(priv->wq);
if (profile->cleanup)
profile->cleanup(priv);

@@ -4052,6 +4064,7 @@ static void mlx5e_remove(struct mlx5_core_dev *mdev, void *vpriv)
for (vport = 1; vport < total_vfs; vport++)
mlx5_eswitch_unregister_vport_rep(esw, vport);

unregister_netdev(priv->netdev);
mlx5e_detach(mdev, vpriv);
mlx5e_destroy_netdev(mdev, priv);
}

@@ -457,6 +457,7 @@ void mlx5e_vport_rep_unload(struct mlx5_eswitch *esw,
struct mlx5e_priv *priv = rep->priv_data;
struct net_device *netdev = priv->netdev;

unregister_netdev(netdev);
mlx5e_detach_netdev(esw->dev, netdev);
mlx5e_destroy_netdev(esw->dev, priv);
}

@@ -931,8 +931,8 @@ static void esw_vport_change_handler(struct work_struct *work)
mutex_unlock(&esw->state_lock);
}

static void esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
static int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
struct mlx5_flow_group *vlan_grp = NULL;

@@ -949,9 +949,11 @@ static void esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
int table_size = 2;
int err = 0;

if (!MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support) ||
!IS_ERR_OR_NULL(vport->egress.acl))
return;
if (!MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support))
return -EOPNOTSUPP;

if (!IS_ERR_OR_NULL(vport->egress.acl))
return 0;

esw_debug(dev, "Create vport[%d] egress ACL log_max_size(%d)\n",
vport->vport, MLX5_CAP_ESW_EGRESS_ACL(dev, log_max_ft_size));

@@ -959,12 +961,12 @@ static void esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_EGRESS);
if (!root_ns) {
esw_warn(dev, "Failed to get E-Switch egress flow namespace\n");
return;
return -EIO;
}

flow_group_in = mlx5_vzalloc(inlen);
if (!flow_group_in)
return;
return -ENOMEM;

acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
if (IS_ERR(acl)) {

@@ -1009,6 +1011,7 @@ out:
mlx5_destroy_flow_group(vlan_grp);
if (err && !IS_ERR_OR_NULL(acl))
mlx5_destroy_flow_table(acl);
return err;
}

static void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,

@@ -1041,8 +1044,8 @@ static void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
vport->egress.acl = NULL;
}

static void esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
static int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
struct mlx5_core_dev *dev = esw->dev;

@@ -1063,9 +1066,11 @@ static void esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
int table_size = 4;
int err = 0;

if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support) ||
!IS_ERR_OR_NULL(vport->ingress.acl))
return;
if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support))
return -EOPNOTSUPP;

if (!IS_ERR_OR_NULL(vport->ingress.acl))
return 0;

esw_debug(dev, "Create vport[%d] ingress ACL log_max_size(%d)\n",
vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));

@@ -1073,12 +1078,12 @@ static void esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS);
if (!root_ns) {
esw_warn(dev, "Failed to get E-Switch ingress flow namespace\n");
return;
return -EIO;
}

flow_group_in = mlx5_vzalloc(inlen);
if (!flow_group_in)
return;
return -ENOMEM;

acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
if (IS_ERR(acl)) {

@@ -1167,6 +1172,7 @@ out:
}

kvfree(flow_group_in);
return err;
}

static void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,

@@ -1225,7 +1231,13 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
return 0;
}

esw_vport_enable_ingress_acl(esw, vport);
err = esw_vport_enable_ingress_acl(esw, vport);
if (err) {
mlx5_core_warn(esw->dev,
"failed to enable ingress acl (%d) on vport[%d]\n",
err, vport->vport);
return err;
}

esw_debug(esw->dev,
"vport[%d] configure ingress rules, vlan(%d) qos(%d)\n",

@@ -1299,7 +1311,13 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
return 0;
}

esw_vport_enable_egress_acl(esw, vport);
err = esw_vport_enable_egress_acl(esw, vport);
if (err) {
mlx5_core_warn(esw->dev,
"failed to enable egress acl (%d) on vport[%d]\n",
err, vport->vport);
return err;
}

esw_debug(esw->dev,
"vport[%d] configure egress rules, vlan(%d) qos(%d)\n",

@ -436,6 +436,9 @@ static void del_flow_group(struct fs_node *node)
|
|||
fs_get_obj(ft, fg->node.parent);
|
||||
dev = get_dev(&ft->node);
|
||||
|
||||
if (ft->autogroup.active)
|
||||
ft->autogroup.num_groups--;
|
||||
|
||||
if (mlx5_cmd_destroy_flow_group(dev, ft, fg->id))
|
||||
mlx5_core_warn(dev, "flow steering can't destroy fg %d of ft %d\n",
|
||||
fg->id, ft->id);
|
||||
|
@ -879,7 +882,7 @@ static struct mlx5_flow_group *create_flow_group_common(struct mlx5_flow_table *
|
|||
tree_init_node(&fg->node, !is_auto_fg, del_flow_group);
|
||||
tree_add_node(&fg->node, &ft->node);
|
||||
/* Add node to group list */
|
||||
list_add(&fg->node.list, ft->node.children.prev);
|
||||
list_add(&fg->node.list, prev_fg);
|
||||
|
||||
return fg;
|
||||
}
|
||||
|
@ -893,7 +896,7 @@ struct mlx5_flow_group *mlx5_create_flow_group(struct mlx5_flow_table *ft,
|
|||
return ERR_PTR(-EPERM);
|
||||
|
||||
lock_ref_node(&ft->node);
|
||||
fg = create_flow_group_common(ft, fg_in, &ft->node.children, false);
|
||||
fg = create_flow_group_common(ft, fg_in, ft->node.children.prev, false);
|
||||
unlock_ref_node(&ft->node);
|
||||
|
||||
return fg;
|
||||
|
@ -1012,7 +1015,7 @@ static struct mlx5_flow_group *create_autogroup(struct mlx5_flow_table *ft,
|
|||
u32 *match_criteria)
|
||||
{
|
||||
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
|
||||
struct list_head *prev = &ft->node.children;
|
||||
struct list_head *prev = ft->node.children.prev;
|
||||
unsigned int candidate_index = 0;
|
||||
struct mlx5_flow_group *fg;
|
||||
void *match_criteria_addr;
|
||||
|
|
|
@ -218,6 +218,7 @@ struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging)
|
|||
goto err_out;
|
||||
|
||||
if (aging) {
|
||||
counter->cache.lastuse = jiffies;
|
||||
counter->aging = true;
|
||||
|
||||
spin_lock(&fc_stats->addlist_lock);
|
||||
|
|
|
@ -61,10 +61,15 @@ enum {
|
|||
enum {
|
||||
MLX5_NIC_IFC_FULL = 0,
|
||||
MLX5_NIC_IFC_DISABLED = 1,
|
||||
MLX5_NIC_IFC_NO_DRAM_NIC = 2
|
||||
MLX5_NIC_IFC_NO_DRAM_NIC = 2,
|
||||
MLX5_NIC_IFC_INVALID = 3
|
||||
};
|
||||
|
||||
static u8 get_nic_interface(struct mlx5_core_dev *dev)
|
||||
enum {
|
||||
MLX5_DROP_NEW_HEALTH_WORK,
|
||||
};
|
||||
|
||||
static u8 get_nic_state(struct mlx5_core_dev *dev)
|
||||
{
|
||||
return (ioread32be(&dev->iseg->cmdq_addr_l_sz) >> 8) & 3;
|
||||
}
|
||||
|
@ -97,7 +102,7 @@ static int in_fatal(struct mlx5_core_dev *dev)
|
|||
struct mlx5_core_health *health = &dev->priv.health;
|
||||
struct health_buffer __iomem *h = health->health;
|
||||
|
||||
if (get_nic_interface(dev) == MLX5_NIC_IFC_DISABLED)
|
||||
if (get_nic_state(dev) == MLX5_NIC_IFC_DISABLED)
|
||||
return 1;
|
||||
|
||||
if (ioread32be(&h->fw_ver) == 0xffffffff)
|
||||
|
@ -127,7 +132,7 @@ unlock:
|
|||
|
||||
static void mlx5_handle_bad_state(struct mlx5_core_dev *dev)
|
||||
{
|
||||
u8 nic_interface = get_nic_interface(dev);
|
||||
u8 nic_interface = get_nic_state(dev);
|
||||
|
||||
switch (nic_interface) {
|
||||
case MLX5_NIC_IFC_FULL:
|
||||
|
@ -149,8 +154,34 @@ static void mlx5_handle_bad_state(struct mlx5_core_dev *dev)
|
|||
mlx5_disable_device(dev);
|
||||
}
|
||||
|
||||
static void health_recover(struct work_struct *work)
|
||||
{
|
||||
struct mlx5_core_health *health;
|
||||
struct delayed_work *dwork;
|
||||
struct mlx5_core_dev *dev;
|
||||
struct mlx5_priv *priv;
|
||||
u8 nic_state;
|
||||
|
||||
dwork = container_of(work, struct delayed_work, work);
|
||||
health = container_of(dwork, struct mlx5_core_health, recover_work);
|
||||
priv = container_of(health, struct mlx5_priv, health);
|
||||
dev = container_of(priv, struct mlx5_core_dev, priv);
|
||||
|
||||
nic_state = get_nic_state(dev);
|
||||
if (nic_state == MLX5_NIC_IFC_INVALID) {
|
||||
dev_err(&dev->pdev->dev, "health recovery flow aborted since the nic state is invalid\n");
|
||||
return;
|
||||
}
|
||||
|
||||
dev_err(&dev->pdev->dev, "starting health recovery flow\n");
|
||||
mlx5_recover_device(dev);
|
||||
}
|
||||
|
||||
/* How much time to wait until health resetting the driver (in msecs) */
|
||||
#define MLX5_RECOVERY_DELAY_MSECS 60000
|
||||
static void health_care(struct work_struct *work)
|
||||
{
|
||||
unsigned long recover_delay = msecs_to_jiffies(MLX5_RECOVERY_DELAY_MSECS);
|
||||
struct mlx5_core_health *health;
|
||||
struct mlx5_core_dev *dev;
|
||||
struct mlx5_priv *priv;
|
||||
|
@ -160,6 +191,14 @@ static void health_care(struct work_struct *work)
|
|||
dev = container_of(priv, struct mlx5_core_dev, priv);
|
||||
mlx5_core_warn(dev, "handling bad device here\n");
|
||||
mlx5_handle_bad_state(dev);
|
||||
|
||||
spin_lock(&health->wq_lock);
|
||||
if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
|
||||
schedule_delayed_work(&health->recover_work, recover_delay);
|
||||
else
|
||||
dev_err(&dev->pdev->dev,
|
||||
"new health works are not permitted at this stage\n");
|
||||
spin_unlock(&health->wq_lock);
|
||||
}
|
||||
|
||||
static const char *hsynd_str(u8 synd)
|
||||
|
@ -272,7 +311,13 @@ static void poll_health(unsigned long data)
|
|||
if (in_fatal(dev) && !health->sick) {
|
||||
health->sick = true;
|
||||
print_health_info(dev);
|
||||
schedule_work(&health->work);
|
||||
spin_lock(&health->wq_lock);
|
||||
if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
|
||||
queue_work(health->wq, &health->work);
|
||||
else
|
||||
dev_err(&dev->pdev->dev,
|
||||
"new health works are not permitted at this stage\n");
|
||||
spin_unlock(&health->wq_lock);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -281,6 +326,8 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
|
|||
struct mlx5_core_health *health = &dev->priv.health;
|
||||
|
||||
init_timer(&health->timer);
|
||||
health->sick = 0;
|
||||
clear_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
|
||||
health->health = &dev->iseg->health;
|
||||
health->health_counter = &dev->iseg->health_counter;
|
||||
|
||||
|
@ -297,11 +344,22 @@ void mlx5_stop_health_poll(struct mlx5_core_dev *dev)
|
|||
del_timer_sync(&health->timer);
|
||||
}
|
||||
|
||||
void mlx5_drain_health_wq(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_core_health *health = &dev->priv.health;
|
||||
|
||||
spin_lock(&health->wq_lock);
|
||||
set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
|
||||
spin_unlock(&health->wq_lock);
|
||||
cancel_delayed_work_sync(&health->recover_work);
|
||||
cancel_work_sync(&health->work);
|
||||
}
|
||||
|
||||
void mlx5_health_cleanup(struct mlx5_core_dev *dev)
|
||||
{
|
||||
struct mlx5_core_health *health = &dev->priv.health;
|
||||
|
||||
flush_work(&health->work);
|
||||
destroy_workqueue(health->wq);
|
||||
}
|
||||
|
||||
int mlx5_health_init(struct mlx5_core_dev *dev)
|
||||
|
@ -316,9 +374,13 @@ int mlx5_health_init(struct mlx5_core_dev *dev)
|
|||
|
||||
strcpy(name, "mlx5_health");
|
||||
strcat(name, dev_name(&dev->pdev->dev));
|
||||
health->wq = create_singlethread_workqueue(name);
|
||||
kfree(name);
|
||||
|
||||
if (!health->wq)
|
||||
return -ENOMEM;
|
||||
spin_lock_init(&health->wq_lock);
|
||||
INIT_WORK(&health->work, health_care);
|
||||
INIT_DELAYED_WORK(&health->recover_work, health_recover);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -844,12 +844,6 @@ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
|
|||
struct pci_dev *pdev = dev->pdev;
|
||||
int err;
|
||||
|
||||
err = mlx5_query_hca_caps(dev);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "query hca failed\n");
|
||||
goto out;
|
||||
}
|
||||
|
||||
err = mlx5_query_board_id(dev);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "query board id failed\n");
|
||||
|
@ -1023,6 +1017,12 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
|
|||
|
||||
mlx5_start_health_poll(dev);
|
||||
|
||||
err = mlx5_query_hca_caps(dev);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "query hca failed\n");
|
||||
goto err_stop_poll;
|
||||
}
|
||||
|
||||
if (boot && mlx5_init_once(dev, priv)) {
|
||||
dev_err(&pdev->dev, "sw objs init failed\n");
|
||||
goto err_stop_poll;
|
||||
|
@ -1313,10 +1313,16 @@ static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
|
|||
struct mlx5_priv *priv = &dev->priv;
|
||||
|
||||
dev_info(&pdev->dev, "%s was called\n", __func__);
|
||||
|
||||
mlx5_enter_error_state(dev);
|
||||
mlx5_unload_one(dev, priv, false);
|
||||
pci_save_state(pdev);
|
||||
mlx5_pci_disable_device(dev);
|
||||
/* In case of kernel call save the pci state and drain health wq */
|
||||
if (state) {
|
||||
pci_save_state(pdev);
|
||||
mlx5_drain_health_wq(dev);
|
||||
mlx5_pci_disable_device(dev);
|
||||
}
|
||||
|
||||
return state == pci_channel_io_perm_failure ?
|
||||
PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
|
||||
}
|
||||
|
@ -1373,11 +1379,6 @@ static pci_ers_result_t mlx5_pci_slot_reset(struct pci_dev *pdev)
|
|||
return PCI_ERS_RESULT_RECOVERED;
|
||||
}
|
||||
|
||||
void mlx5_disable_device(struct mlx5_core_dev *dev)
|
||||
{
|
||||
mlx5_pci_err_detected(dev->pdev, 0);
|
||||
}
|
||||
|
||||
static void mlx5_pci_resume(struct pci_dev *pdev)
|
||||
{
|
||||
struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
|
||||
|
@ -1427,6 +1428,18 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
|
|||
|
||||
MODULE_DEVICE_TABLE(pci, mlx5_core_pci_table);
|
||||
|
||||
void mlx5_disable_device(struct mlx5_core_dev *dev)
|
||||
{
|
||||
mlx5_pci_err_detected(dev->pdev, 0);
|
||||
}
|
||||
|
||||
void mlx5_recover_device(struct mlx5_core_dev *dev)
|
||||
{
|
||||
mlx5_pci_disable_device(dev);
|
||||
if (mlx5_pci_slot_reset(dev->pdev) == PCI_ERS_RESULT_RECOVERED)
|
||||
mlx5_pci_resume(dev->pdev);
|
||||
}
|
||||
|
||||
static struct pci_driver mlx5_core_driver = {
|
||||
.name = DRIVER_NAME,
|
||||
.id_table = mlx5_core_pci_table,
|
||||
|
|
|
@ -83,6 +83,7 @@ void mlx5_core_event(struct mlx5_core_dev *dev, enum mlx5_dev_event event,
|
|||
unsigned long param);
|
||||
void mlx5_enter_error_state(struct mlx5_core_dev *dev);
|
||||
void mlx5_disable_device(struct mlx5_core_dev *dev);
|
||||
void mlx5_recover_device(struct mlx5_core_dev *dev);
|
||||
int mlx5_sriov_init(struct mlx5_core_dev *dev);
|
||||
void mlx5_sriov_cleanup(struct mlx5_core_dev *dev);
|
||||
int mlx5_sriov_attach(struct mlx5_core_dev *dev);
|
||||
|
|
|
@ -209,6 +209,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr)
|
|||
static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id)
|
||||
{
|
||||
struct page *page;
|
||||
u64 zero_addr = 1;
|
||||
u64 addr;
|
||||
int err;
|
||||
int nid = dev_to_node(&dev->pdev->dev);
|
||||
|
@ -218,26 +219,35 @@ static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id)
|
|||
mlx5_core_warn(dev, "failed to allocate page\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
map:
|
||||
addr = dma_map_page(&dev->pdev->dev, page, 0,
|
||||
PAGE_SIZE, DMA_BIDIRECTIONAL);
|
||||
if (dma_mapping_error(&dev->pdev->dev, addr)) {
|
||||
mlx5_core_warn(dev, "failed dma mapping page\n");
|
||||
err = -ENOMEM;
|
||||
goto out_alloc;
|
||||
goto err_mapping;
|
||||
}
|
||||
|
||||
/* Firmware doesn't support page with physical address 0 */
|
||||
if (addr == 0) {
|
||||
zero_addr = addr;
|
||||
goto map;
|
||||
}
|
||||
|
||||
err = insert_page(dev, addr, page, func_id);
|
||||
if (err) {
|
||||
mlx5_core_err(dev, "failed to track allocated page\n");
|
||||
goto out_mapping;
|
||||
dma_unmap_page(&dev->pdev->dev, addr, PAGE_SIZE,
|
||||
DMA_BIDIRECTIONAL);
|
||||
}
|
||||
|
||||
return 0;
|
||||
err_mapping:
|
||||
if (err)
|
||||
__free_page(page);
|
||||
|
||||
out_mapping:
|
||||
dma_unmap_page(&dev->pdev->dev, addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
|
||||
|
||||
out_alloc:
|
||||
__free_page(page);
|
||||
if (zero_addr == 0)
|
||||
dma_unmap_page(&dev->pdev->dev, zero_addr, PAGE_SIZE,
|
||||
DMA_BIDIRECTIONAL);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
|
|
@ -1838,11 +1838,17 @@ static const struct mlxsw_bus mlxsw_pci_bus = {
|
|||
.cmd_exec = mlxsw_pci_cmd_exec,
|
||||
};
|
||||
|
||||
static int mlxsw_pci_sw_reset(struct mlxsw_pci *mlxsw_pci)
|
||||
static int mlxsw_pci_sw_reset(struct mlxsw_pci *mlxsw_pci,
|
||||
const struct pci_device_id *id)
|
||||
{
|
||||
unsigned long end;
|
||||
|
||||
mlxsw_pci_write32(mlxsw_pci, SW_RESET, MLXSW_PCI_SW_RESET_RST_BIT);
|
||||
if (id->device == PCI_DEVICE_ID_MELLANOX_SWITCHX2) {
|
||||
msleep(MLXSW_PCI_SW_RESET_TIMEOUT_MSECS);
|
||||
return 0;
|
||||
}
|
||||
|
||||
wmb(); /* reset needs to be written before we read control register */
|
||||
end = jiffies + msecs_to_jiffies(MLXSW_PCI_SW_RESET_TIMEOUT_MSECS);
|
||||
do {
|
||||
|
@ -1909,7 +1915,7 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
|
|||
mlxsw_pci->pdev = pdev;
|
||||
pci_set_drvdata(pdev, mlxsw_pci);
|
||||
|
||||
err = mlxsw_pci_sw_reset(mlxsw_pci);
|
||||
err = mlxsw_pci_sw_reset(mlxsw_pci, id);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "Software reset failed\n");
|
||||
goto err_sw_reset;
|
||||
|
|
|
@ -320,6 +320,8 @@ mlxsw_sp_lpm_tree_create(struct mlxsw_sp *mlxsw_sp,
|
|||
lpm_tree);
|
||||
if (err)
|
||||
goto err_left_struct_set;
|
||||
memcpy(&lpm_tree->prefix_usage, prefix_usage,
|
||||
sizeof(lpm_tree->prefix_usage));
|
||||
return lpm_tree;
|
||||
|
||||
err_left_struct_set:
|
||||
|
@ -343,7 +345,8 @@ mlxsw_sp_lpm_tree_get(struct mlxsw_sp *mlxsw_sp,
|
|||
|
||||
for (i = 0; i < MLXSW_SP_LPM_TREE_COUNT; i++) {
|
||||
lpm_tree = &mlxsw_sp->router.lpm_trees[i];
|
||||
if (lpm_tree->proto == proto &&
|
||||
if (lpm_tree->ref_count != 0 &&
|
||||
lpm_tree->proto == proto &&
|
||||
mlxsw_sp_prefix_usage_eq(&lpm_tree->prefix_usage,
|
||||
prefix_usage))
|
||||
goto inc_ref_count;
|
||||
|
@ -1820,19 +1823,17 @@ err_fib_entry_insert:
|
|||
return err;
|
||||
}
|
||||
|
||||
static int mlxsw_sp_router_fib4_del(struct mlxsw_sp *mlxsw_sp,
|
||||
struct fib_entry_notifier_info *fen_info)
|
||||
static void mlxsw_sp_router_fib4_del(struct mlxsw_sp *mlxsw_sp,
|
||||
struct fib_entry_notifier_info *fen_info)
|
||||
{
|
||||
struct mlxsw_sp_fib_entry *fib_entry;
|
||||
|
||||
if (mlxsw_sp->router.aborted)
|
||||
return 0;
|
||||
return;
|
||||
|
||||
fib_entry = mlxsw_sp_fib_entry_find(mlxsw_sp, fen_info);
|
||||
if (!fib_entry) {
|
||||
dev_warn(mlxsw_sp->bus_info->dev, "Failed to find FIB4 entry being removed.\n");
|
||||
return -ENOENT;
|
||||
}
|
||||
if (!fib_entry)
|
||||
return;
|
||||
|
||||
if (fib_entry->ref_count == 1) {
|
||||
mlxsw_sp_fib_entry_del(mlxsw_sp, fib_entry);
|
||||
|
@ -1840,7 +1841,6 @@ static int mlxsw_sp_router_fib4_del(struct mlxsw_sp *mlxsw_sp,
|
|||
}
|
||||
|
||||
mlxsw_sp_fib_entry_put(mlxsw_sp, fib_entry);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp)
|
||||
|
@ -1862,7 +1862,8 @@ static int mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp)
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
mlxsw_reg_raltb_pack(raltb_pl, 0, MLXSW_REG_RALXX_PROTOCOL_IPV4, 0);
|
||||
mlxsw_reg_raltb_pack(raltb_pl, 0, MLXSW_REG_RALXX_PROTOCOL_IPV4,
|
||||
MLXSW_SP_LPM_TREE_MIN);
|
||||
err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(raltb), raltb_pl);
|
||||
if (err)
|
||||
return err;
|
||||
|
|
|
@ -1088,6 +1088,7 @@ err_port_stp_state_set:
|
|||
err_port_admin_status_set:
|
||||
err_port_mtu_set:
|
||||
err_port_speed_set:
|
||||
mlxsw_sx_port_swid_set(mlxsw_sx_port, MLXSW_PORT_SWID_DISABLED_PORT);
|
||||
err_port_swid_set:
|
||||
err_port_system_port_mapping_set:
|
||||
port_not_usable:
|
||||
|
|
|
@ -107,4 +107,7 @@ config QEDE
|
|||
---help---
|
||||
This enables the support for ...
|
||||
|
||||
config QED_RDMA
|
||||
bool
|
||||
|
||||
endif # NET_VENDOR_QLOGIC
|
||||
|
|
|
@ -5,4 +5,4 @@ qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
|
|||
qed_selftest.o qed_dcbx.o qed_debug.o
|
||||
qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
|
||||
qed-$(CONFIG_QED_LL2) += qed_ll2.o
|
||||
qed-$(CONFIG_INFINIBAND_QEDR) += qed_roce.o
|
||||
qed-$(CONFIG_QED_RDMA) += qed_roce.o
|
||||
|
|
|
@ -47,13 +47,8 @@
|
|||
#define TM_ALIGN BIT(TM_SHIFT)
|
||||
#define TM_ELEM_SIZE 4
|
||||
|
||||
/* ILT constants */
|
||||
#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
|
||||
/* For RoCE we configure to 64K to cover for RoCE max tasks 256K purpose. */
|
||||
#define ILT_DEFAULT_HW_P_SIZE 4
|
||||
#else
|
||||
#define ILT_DEFAULT_HW_P_SIZE 3
|
||||
#endif
|
||||
#define ILT_DEFAULT_HW_P_SIZE (IS_ENABLED(CONFIG_QED_RDMA) ? 4 : 3)
|
||||
|
||||
#define ILT_PAGE_IN_BYTES(hw_p_size) (1U << ((hw_p_size) + 12))
|
||||
#define ILT_CFG_REG(cli, reg) PSWRQ2_REG_ ## cli ## _ ## reg ## _RT_OFFSET
|
||||
|
@ -349,14 +344,14 @@ static struct qed_tid_seg *qed_cxt_tid_seg_info(struct qed_hwfn *p_hwfn,
|
|||
return NULL;
|
||||
}
|
||||
|
||||
void qed_cxt_set_srq_count(struct qed_hwfn *p_hwfn, u32 num_srqs)
|
||||
static void qed_cxt_set_srq_count(struct qed_hwfn *p_hwfn, u32 num_srqs)
|
||||
{
|
||||
struct qed_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
|
||||
|
||||
p_mgr->srq_count = num_srqs;
|
||||
}
|
||||
|
||||
u32 qed_cxt_get_srq_count(struct qed_hwfn *p_hwfn)
|
||||
static u32 qed_cxt_get_srq_count(struct qed_hwfn *p_hwfn)
|
||||
{
|
||||
struct qed_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
|
||||
|
||||
|
@ -1804,8 +1799,8 @@ int qed_cxt_get_cid_info(struct qed_hwfn *p_hwfn, struct qed_cxt_info *p_info)
|
|||
return 0;
|
||||
}
|
||||
|
||||
void qed_rdma_set_pf_params(struct qed_hwfn *p_hwfn,
|
||||
struct qed_rdma_pf_params *p_params)
|
||||
static void qed_rdma_set_pf_params(struct qed_hwfn *p_hwfn,
|
||||
struct qed_rdma_pf_params *p_params)
|
||||
{
|
||||
u32 num_cons, num_tasks, num_qps, num_mrs, num_srqs;
|
||||
enum protocol_type proto;
|
||||
|
|
|
@ -1190,6 +1190,7 @@ int qed_dcbx_get_config_params(struct qed_hwfn *p_hwfn,
|
|||
if (!dcbx_info)
|
||||
return -ENOMEM;
|
||||
|
||||
memset(dcbx_info, 0, sizeof(*dcbx_info));
|
||||
rc = qed_dcbx_query_params(p_hwfn, dcbx_info, QED_DCBX_OPERATIONAL_MIB);
|
||||
if (rc) {
|
||||
kfree(dcbx_info);
|
||||
|
@ -1225,6 +1226,7 @@ static struct qed_dcbx_get *qed_dcbnl_get_dcbx(struct qed_hwfn *hwfn,
|
|||
if (!dcbx_info)
|
||||
return NULL;
|
||||
|
||||
memset(dcbx_info, 0, sizeof(*dcbx_info));
|
||||
if (qed_dcbx_query_params(hwfn, dcbx_info, type)) {
|
||||
kfree(dcbx_info);
|
||||
return NULL;
|
||||
|
|
|
@ -405,7 +405,7 @@ struct phy_defs {
|
|||
/***************************** Constant Arrays *******************************/
|
||||
|
||||
/* Debug arrays */
|
||||
static struct dbg_array s_dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE] = { {0} };
|
||||
static struct dbg_array s_dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE] = { {NULL} };
|
||||
|
||||
/* Chip constant definitions array */
|
||||
static struct chip_defs s_chip_defs[MAX_CHIP_IDS] = {
|
||||
|
@ -4028,10 +4028,10 @@ static enum dbg_status qed_mcp_trace_read_meta(struct qed_hwfn *p_hwfn,
|
|||
}
|
||||
|
||||
/* Dump MCP Trace */
|
||||
enum dbg_status qed_mcp_trace_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump, u32 *num_dumped_dwords)
|
||||
static enum dbg_status qed_mcp_trace_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump, u32 *num_dumped_dwords)
|
||||
{
|
||||
u32 trace_data_grc_addr, trace_data_size_bytes, trace_data_size_dwords;
|
||||
u32 trace_meta_size_dwords, running_bundle_id, offset = 0;
|
||||
|
@ -4130,10 +4130,10 @@ enum dbg_status qed_mcp_trace_dump(struct qed_hwfn *p_hwfn,
|
|||
}
|
||||
|
||||
/* Dump GRC FIFO */
|
||||
enum dbg_status qed_reg_fifo_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump, u32 *num_dumped_dwords)
|
||||
static enum dbg_status qed_reg_fifo_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump, u32 *num_dumped_dwords)
|
||||
{
|
||||
u32 offset = 0, dwords_read, size_param_offset;
|
||||
bool fifo_has_data;
|
||||
|
@ -4192,10 +4192,10 @@ enum dbg_status qed_reg_fifo_dump(struct qed_hwfn *p_hwfn,
|
|||
}
|
||||
|
||||
/* Dump IGU FIFO */
|
||||
enum dbg_status qed_igu_fifo_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump, u32 *num_dumped_dwords)
|
||||
static enum dbg_status qed_igu_fifo_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump, u32 *num_dumped_dwords)
|
||||
{
|
||||
u32 offset = 0, dwords_read, size_param_offset;
|
||||
bool fifo_has_data;
|
||||
|
@ -4255,10 +4255,11 @@ enum dbg_status qed_igu_fifo_dump(struct qed_hwfn *p_hwfn,
|
|||
}
|
||||
|
||||
/* Protection Override dump */
|
||||
enum dbg_status qed_protection_override_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump, u32 *num_dumped_dwords)
|
||||
static enum dbg_status qed_protection_override_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
u32 *dump_buf,
|
||||
bool dump,
|
||||
u32 *num_dumped_dwords)
|
||||
{
|
||||
u32 offset = 0, size_param_offset, override_window_dwords;
|
||||
|
||||
|
@ -6339,10 +6340,11 @@ enum dbg_status qed_print_fw_asserts_results(struct qed_hwfn *p_hwfn,
|
|||
}
|
||||
|
||||
/* Wrapper for unifying the idle_chk and mcp_trace api */
|
||||
enum dbg_status qed_print_idle_chk_results_wrapper(struct qed_hwfn *p_hwfn,
|
||||
u32 *dump_buf,
|
||||
u32 num_dumped_dwords,
|
||||
char *results_buf)
|
||||
static enum dbg_status
|
||||
qed_print_idle_chk_results_wrapper(struct qed_hwfn *p_hwfn,
|
||||
u32 *dump_buf,
|
||||
u32 num_dumped_dwords,
|
||||
char *results_buf)
|
||||
{
|
||||
u32 num_errors, num_warnnings;
|
||||
|
||||
|
@ -6413,8 +6415,8 @@ static void qed_dbg_print_feature(u8 *p_text_buf, u32 text_size)
|
|||
|
||||
#define QED_RESULTS_BUF_MIN_SIZE 16
|
||||
/* Generic function for decoding debug feature info */
|
||||
enum dbg_status format_feature(struct qed_hwfn *p_hwfn,
|
||||
enum qed_dbg_features feature_idx)
|
||||
static enum dbg_status format_feature(struct qed_hwfn *p_hwfn,
|
||||
enum qed_dbg_features feature_idx)
|
||||
{
|
||||
struct qed_dbg_feature *feature =
|
||||
&p_hwfn->cdev->dbg_params.features[feature_idx];
|
||||
|
@ -6480,8 +6482,9 @@ enum dbg_status format_feature(struct qed_hwfn *p_hwfn,
|
|||
}
|
||||
|
||||
/* Generic function for performing the dump of a debug feature. */
|
||||
enum dbg_status qed_dbg_dump(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
|
||||
enum qed_dbg_features feature_idx)
|
||||
static enum dbg_status qed_dbg_dump(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
enum qed_dbg_features feature_idx)
|
||||
{
|
||||
struct qed_dbg_feature *feature =
|
||||
&p_hwfn->cdev->dbg_params.features[feature_idx];
|
||||
|
|
|
@ -497,12 +497,13 @@ int qed_resc_alloc(struct qed_dev *cdev)
|
|||
if (p_hwfn->hw_info.personality == QED_PCI_ETH_ROCE) {
|
||||
num_cons = qed_cxt_get_proto_cid_count(p_hwfn,
|
||||
PROTOCOLID_ROCE,
|
||||
0) * 2;
|
||||
NULL) * 2;
|
||||
n_eqes += num_cons + 2 * MAX_NUM_VFS_BB;
|
||||
} else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
|
||||
num_cons =
|
||||
qed_cxt_get_proto_cid_count(p_hwfn,
|
||||
PROTOCOLID_ISCSI, 0);
|
||||
PROTOCOLID_ISCSI,
|
||||
NULL);
|
||||
n_eqes += 2 * num_cons;
|
||||
}
|
||||
|
||||
|
@ -1422,19 +1423,19 @@ static void qed_hw_set_feat(struct qed_hwfn *p_hwfn)
|
|||
u32 *feat_num = p_hwfn->hw_info.feat_num;
|
||||
int num_features = 1;
|
||||
|
||||
#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
|
||||
/* Roce CNQ each requires: 1 status block + 1 CNQ. We divide the
|
||||
* status blocks equally between L2 / RoCE but with consideration as
|
||||
* to how many l2 queues / cnqs we have
|
||||
*/
|
||||
if (p_hwfn->hw_info.personality == QED_PCI_ETH_ROCE) {
|
||||
if (IS_ENABLED(CONFIG_QED_RDMA) &&
|
||||
p_hwfn->hw_info.personality == QED_PCI_ETH_ROCE) {
|
||||
/* Roce CNQ each requires: 1 status block + 1 CNQ. We divide
|
||||
* the status blocks equally between L2 / RoCE but with
|
||||
* consideration as to how many l2 queues / cnqs we have.
|
||||
*/
|
||||
num_features++;
|
||||
|
||||
feat_num[QED_RDMA_CNQ] =
|
||||
min_t(u32, RESC_NUM(p_hwfn, QED_SB) / num_features,
|
||||
RESC_NUM(p_hwfn, QED_RDMA_CNQ_RAM));
|
||||
}
|
||||
#endif
|
||||
|
||||
feat_num[QED_PF_L2_QUE] = min_t(u32, RESC_NUM(p_hwfn, QED_SB) /
|
||||
num_features,
|
||||
RESC_NUM(p_hwfn, QED_L2_QUEUE));
|
||||
|
|
|
@ -38,6 +38,7 @@
|
|||
#include "qed_mcp.h"
|
||||
#include "qed_reg_addr.h"
|
||||
#include "qed_sp.h"
|
||||
#include "qed_roce.h"
|
||||
|
||||
#define QED_LL2_RX_REGISTERED(ll2) ((ll2)->rx_queue.b_cb_registred)
|
||||
#define QED_LL2_TX_REGISTERED(ll2) ((ll2)->tx_queue.b_cb_registred)
|
||||
|
@ -140,11 +141,11 @@ static void qed_ll2_kill_buffers(struct qed_dev *cdev)
|
|||
qed_ll2_dealloc_buffer(cdev, buffer);
|
||||
}
|
||||
|
||||
void qed_ll2b_complete_rx_packet(struct qed_hwfn *p_hwfn,
|
||||
u8 connection_handle,
|
||||
struct qed_ll2_rx_packet *p_pkt,
|
||||
struct core_rx_fast_path_cqe *p_cqe,
|
||||
bool b_last_packet)
|
||||
static void qed_ll2b_complete_rx_packet(struct qed_hwfn *p_hwfn,
|
||||
u8 connection_handle,
|
||||
struct qed_ll2_rx_packet *p_pkt,
|
||||
struct core_rx_fast_path_cqe *p_cqe,
|
||||
bool b_last_packet)
|
||||
{
|
||||
u16 packet_length = le16_to_cpu(p_cqe->packet_length);
|
||||
struct qed_ll2_buffer *buffer = p_pkt->cookie;
|
||||
|
@ -515,7 +516,7 @@ static int qed_ll2_rxq_completion(struct qed_hwfn *p_hwfn, void *cookie)
|
|||
return rc;
|
||||
}
|
||||
|
||||
void qed_ll2_rxq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
|
||||
static void qed_ll2_rxq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
|
||||
{
|
||||
struct qed_ll2_info *p_ll2_conn = NULL;
|
||||
struct qed_ll2_rx_packet *p_pkt = NULL;
|
||||
|
@ -537,8 +538,7 @@ void qed_ll2_rxq_flush(struct qed_hwfn *p_hwfn, u8 connection_handle)
|
|||
if (!p_pkt)
|
||||
break;
|
||||
|
||||
list_del(&p_pkt->list_entry);
|
||||
list_add_tail(&p_pkt->list_entry, &p_rx->free_descq);
|
||||
list_move_tail(&p_pkt->list_entry, &p_rx->free_descq);
|
||||
|
||||
rx_buf_addr = p_pkt->rx_buf_addr;
|
||||
cookie = p_pkt->cookie;
|
||||
|
@ -992,9 +992,8 @@ static void qed_ll2_post_rx_buffer_notify_fw(struct qed_hwfn *p_hwfn,
|
|||
p_posting_packet = list_first_entry(&p_rx->posting_descq,
|
||||
struct qed_ll2_rx_packet,
|
||||
list_entry);
|
||||
list_del(&p_posting_packet->list_entry);
|
||||
list_add_tail(&p_posting_packet->list_entry,
|
||||
&p_rx->active_descq);
|
||||
list_move_tail(&p_posting_packet->list_entry,
|
||||
&p_rx->active_descq);
|
||||
b_notify_fw = true;
|
||||
}
|
||||
|
||||
|
@ -1123,9 +1122,6 @@ static void qed_ll2_prepare_tx_packet_set_bd(struct qed_hwfn *p_hwfn,
|
|||
DMA_REGPAIR_LE(start_bd->addr, first_frag);
|
||||
start_bd->nbytes = cpu_to_le16(first_frag_len);
|
||||
|
||||
SET_FIELD(start_bd->bd_flags.as_bitfield, CORE_TX_BD_FLAGS_ROCE_FLAV,
|
||||
type);
|
||||
|
||||
DP_VERBOSE(p_hwfn,
|
||||
(NETIF_MSG_TX_QUEUED | QED_MSG_LL2),
|
||||
"LL2 [q 0x%02x cid 0x%08x type 0x%08x] Tx Producer at [0x%04x] - set with a %04x bytes %02x BDs buffer at %08x:%08x\n",
|
||||
|
@ -1188,8 +1184,7 @@ static void qed_ll2_tx_packet_notify(struct qed_hwfn *p_hwfn,
|
|||
if (!p_pkt)
|
||||
break;
|
||||
|
||||
list_del(&p_pkt->list_entry);
|
||||
list_add_tail(&p_pkt->list_entry, &p_tx->active_descq);
|
||||
list_move_tail(&p_pkt->list_entry, &p_tx->active_descq);
|
||||
}
|
||||
|
||||
SET_FIELD(db_msg.params, CORE_DB_DATA_DEST, DB_DEST_XCM);
|
||||
|
|
|
@ -293,24 +293,4 @@ void qed_ll2_setup(struct qed_hwfn *p_hwfn,
|
|||
*/
|
||||
void qed_ll2_free(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ll2_info *p_ll2_connections);
|
||||
void qed_ll2b_complete_rx_gsi_packet(struct qed_hwfn *p_hwfn,
|
||||
u8 connection_handle,
|
||||
void *cookie,
|
||||
dma_addr_t rx_buf_addr,
|
||||
u16 data_length,
|
||||
u8 data_length_error,
|
||||
u16 parse_flags,
|
||||
u16 vlan,
|
||||
u32 src_mac_addr_hi,
|
||||
u16 src_mac_addr_lo, bool b_last_packet);
|
||||
void qed_ll2b_complete_tx_gsi_packet(struct qed_hwfn *p_hwfn,
|
||||
u8 connection_handle,
|
||||
void *cookie,
|
||||
dma_addr_t first_frag_addr,
|
||||
bool b_last_fragment, bool b_last_packet);
|
||||
void qed_ll2b_release_tx_gsi_packet(struct qed_hwfn *p_hwfn,
|
||||
u8 connection_handle,
|
||||
void *cookie,
|
||||
dma_addr_t first_frag_addr,
|
||||
bool b_last_fragment, bool b_last_packet);
|
||||
#endif
|
||||
|
|
|
@ -33,10 +33,8 @@
|
|||
#include "qed_hw.h"
|
||||
#include "qed_selftest.h"
|
||||
|
||||
#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
|
||||
#define QED_ROCE_QPS (8192)
|
||||
#define QED_ROCE_DPIS (8)
|
||||
#endif
|
||||
|
||||
static char version[] =
|
||||
"QLogic FastLinQ 4xxxx Core Module qed " DRV_MODULE_VERSION "\n";
|
||||
|
@ -682,9 +680,7 @@ static int qed_slowpath_setup_int(struct qed_dev *cdev,
|
|||
enum qed_int_mode int_mode)
|
||||
{
|
||||
struct qed_sb_cnt_info sb_cnt_info;
|
||||
#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
|
||||
int num_l2_queues;
|
||||
#endif
|
||||
int num_l2_queues = 0;
|
||||
int rc;
|
||||
int i;
|
||||
|
||||
|
@ -715,8 +711,9 @@ static int qed_slowpath_setup_int(struct qed_dev *cdev,
|
|||
cdev->int_params.fp_msix_cnt = cdev->int_params.out.num_vectors -
|
||||
cdev->num_hwfns;
|
||||
|
||||
#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
|
||||
num_l2_queues = 0;
|
||||
if (!IS_ENABLED(CONFIG_QED_RDMA))
|
||||
return 0;
|
||||
|
||||
for_each_hwfn(cdev, i)
|
||||
num_l2_queues += FEAT_NUM(&cdev->hwfns[i], QED_PF_L2_QUE);
|
||||
|
||||
|
@ -738,7 +735,6 @@ static int qed_slowpath_setup_int(struct qed_dev *cdev,
|
|||
DP_VERBOSE(cdev, QED_MSG_RDMA, "roce_msix_cnt=%d roce_msix_base=%d\n",
|
||||
cdev->int_params.rdma_msix_cnt,
|
||||
cdev->int_params.rdma_msix_base);
|
||||
#endif
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -843,18 +839,20 @@ static void qed_update_pf_params(struct qed_dev *cdev,
|
|||
{
|
||||
int i;
|
||||
|
||||
#if IS_ENABLED(CONFIG_INFINIBAND_QEDR)
|
||||
params->rdma_pf_params.num_qps = QED_ROCE_QPS;
|
||||
params->rdma_pf_params.min_dpis = QED_ROCE_DPIS;
|
||||
/* divide by 3 the MRs to avoid MF ILT overflow */
|
||||
params->rdma_pf_params.num_mrs = RDMA_MAX_TIDS;
|
||||
params->rdma_pf_params.gl_pi = QED_ROCE_PROTOCOL_INDEX;
|
||||
#endif
|
||||
for (i = 0; i < cdev->num_hwfns; i++) {
|
||||
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
|
||||
|
||||
p_hwfn->pf_params = *params;
|
||||
}
|
||||
|
||||
if (!IS_ENABLED(CONFIG_QED_RDMA))
|
||||
return;
|
||||
|
||||
params->rdma_pf_params.num_qps = QED_ROCE_QPS;
|
||||
params->rdma_pf_params.min_dpis = QED_ROCE_DPIS;
|
||||
/* divide by 3 the MRs to avoid MF ILT overflow */
|
||||
params->rdma_pf_params.num_mrs = RDMA_MAX_TIDS;
|
||||
params->rdma_pf_params.gl_pi = QED_ROCE_PROTOCOL_INDEX;
|
||||
}
|
||||
|
||||
static int qed_slowpath_start(struct qed_dev *cdev,
|
||||
|
@ -880,6 +878,7 @@ static int qed_slowpath_start(struct qed_dev *cdev,
|
|||
}
|
||||
}
|
||||
|
||||
cdev->rx_coalesce_usecs = QED_DEFAULT_RX_USECS;
|
||||
rc = qed_nic_setup(cdev);
|
||||
if (rc)
|
||||
goto err;
|
||||
|
@ -1432,7 +1431,7 @@ static int qed_set_led(struct qed_dev *cdev, enum qed_led_mode mode)
|
|||
return status;
|
||||
}
|
||||
|
||||
struct qed_selftest_ops qed_selftest_ops_pass = {
|
||||
static struct qed_selftest_ops qed_selftest_ops_pass = {
|
||||
.selftest_memory = &qed_selftest_memory,
|
||||
.selftest_interrupt = &qed_selftest_interrupt,
|
||||
.selftest_register = &qed_selftest_register,
|
||||
|
|
|
@ -129,17 +129,12 @@ static void qed_bmap_release_id(struct qed_hwfn *p_hwfn,
|
|||
}
|
||||
}
|
||||
|
||||
u32 qed_rdma_get_sb_id(void *p_hwfn, u32 rel_sb_id)
|
||||
static u32 qed_rdma_get_sb_id(void *p_hwfn, u32 rel_sb_id)
|
||||
{
|
||||
/* First sb id for RoCE is after all the l2 sb */
|
||||
return FEAT_NUM((struct qed_hwfn *)p_hwfn, QED_PF_L2_QUE) + rel_sb_id;
|
||||
}
|
||||
|
||||
u32 qed_rdma_query_cau_timer_res(void *rdma_cxt)
|
||||
{
|
||||
return QED_CAU_DEF_RX_TIMER_RES;
|
||||
}
|
||||
|
||||
static int qed_rdma_alloc(struct qed_hwfn *p_hwfn,
|
||||
struct qed_ptt *p_ptt,
|
||||
struct qed_rdma_start_in_params *params)
|
||||
|
@ -162,7 +157,8 @@ static int qed_rdma_alloc(struct qed_hwfn *p_hwfn,
|
|||
p_hwfn->p_rdma_info = p_rdma_info;
|
||||
p_rdma_info->proto = PROTOCOLID_ROCE;
|
||||
|
||||
num_cons = qed_cxt_get_proto_cid_count(p_hwfn, p_rdma_info->proto, 0);
|
||||
num_cons = qed_cxt_get_proto_cid_count(p_hwfn, p_rdma_info->proto,
|
||||
NULL);
|
||||
|
||||
p_rdma_info->num_qps = num_cons / 2;
|
||||
|
||||
|
@ -275,7 +271,7 @@ free_rdma_info:
|
|||
return rc;
|
||||
}
|
||||
|
||||
void qed_rdma_resc_free(struct qed_hwfn *p_hwfn)
|
||||
static void qed_rdma_resc_free(struct qed_hwfn *p_hwfn)
|
||||
{
|
||||
struct qed_rdma_info *p_rdma_info = p_hwfn->p_rdma_info;
|
||||
|
||||
|
@ -527,6 +523,26 @@ static int qed_rdma_start_fw(struct qed_hwfn *p_hwfn,
|
|||
return qed_spq_post(p_hwfn, p_ent, NULL);
|
||||
}
|
||||
|
||||
static int qed_rdma_alloc_tid(void *rdma_cxt, u32 *itid)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
int rc;
|
||||
|
||||
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocate TID\n");
|
||||
|
||||
spin_lock_bh(&p_hwfn->p_rdma_info->lock);
|
||||
rc = qed_rdma_bmap_alloc_id(p_hwfn,
|
||||
&p_hwfn->p_rdma_info->tid_map, itid);
|
||||
spin_unlock_bh(&p_hwfn->p_rdma_info->lock);
|
||||
if (rc)
|
||||
goto out;
|
||||
|
||||
rc = qed_cxt_dynamic_ilt_alloc(p_hwfn, QED_ELEM_TASK, *itid);
|
||||
out:
|
||||
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocate TID - done, rc = %d\n", rc);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int qed_rdma_reserve_lkey(struct qed_hwfn *p_hwfn)
|
||||
{
|
||||
struct qed_rdma_device *dev = p_hwfn->p_rdma_info->dev;
|
||||
|
@ -573,7 +589,7 @@ static int qed_rdma_setup(struct qed_hwfn *p_hwfn,
|
|||
return qed_rdma_start_fw(p_hwfn, params, p_ptt);
|
||||
}
|
||||
|
||||
int qed_rdma_stop(void *rdma_cxt)
|
||||
static int qed_rdma_stop(void *rdma_cxt)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
struct rdma_close_func_ramrod_data *p_ramrod;
|
||||
|
@ -629,8 +645,8 @@ out:
|
|||
return rc;
|
||||
}
|
||||
|
||||
int qed_rdma_add_user(void *rdma_cxt,
|
||||
struct qed_rdma_add_user_out_params *out_params)
|
||||
static int qed_rdma_add_user(void *rdma_cxt,
|
||||
struct qed_rdma_add_user_out_params *out_params)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
u32 dpi_start_offset;
|
||||
|
@ -664,7 +680,7 @@ int qed_rdma_add_user(void *rdma_cxt,
|
|||
return rc;
|
||||
}
|
||||
|
||||
struct qed_rdma_port *qed_rdma_query_port(void *rdma_cxt)
|
||||
static struct qed_rdma_port *qed_rdma_query_port(void *rdma_cxt)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
struct qed_rdma_port *p_port = p_hwfn->p_rdma_info->port;
|
||||
|
@ -680,7 +696,7 @@ struct qed_rdma_port *qed_rdma_query_port(void *rdma_cxt)
|
|||
return p_port;
|
||||
}
|
||||
|
||||
struct qed_rdma_device *qed_rdma_query_device(void *rdma_cxt)
|
||||
static struct qed_rdma_device *qed_rdma_query_device(void *rdma_cxt)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
|
||||
|
@ -690,7 +706,7 @@ struct qed_rdma_device *qed_rdma_query_device(void *rdma_cxt)
|
|||
return p_hwfn->p_rdma_info->dev;
|
||||
}
|
||||
|
||||
void qed_rdma_free_tid(void *rdma_cxt, u32 itid)
|
||||
static void qed_rdma_free_tid(void *rdma_cxt, u32 itid)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
|
||||
|
@ -701,27 +717,7 @@ void qed_rdma_free_tid(void *rdma_cxt, u32 itid)
|
|||
spin_unlock_bh(&p_hwfn->p_rdma_info->lock);
|
||||
}
|
||||
|
||||
int qed_rdma_alloc_tid(void *rdma_cxt, u32 *itid)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
int rc;
|
||||
|
||||
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocate TID\n");
|
||||
|
||||
spin_lock_bh(&p_hwfn->p_rdma_info->lock);
|
||||
rc = qed_rdma_bmap_alloc_id(p_hwfn,
|
||||
&p_hwfn->p_rdma_info->tid_map, itid);
|
||||
spin_unlock_bh(&p_hwfn->p_rdma_info->lock);
|
||||
if (rc)
|
||||
goto out;
|
||||
|
||||
rc = qed_cxt_dynamic_ilt_alloc(p_hwfn, QED_ELEM_TASK, *itid);
|
||||
out:
|
||||
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocate TID - done, rc = %d\n", rc);
|
||||
return rc;
|
||||
}
|
||||
|
||||
void qed_rdma_cnq_prod_update(void *rdma_cxt, u8 qz_offset, u16 prod)
|
||||
static void qed_rdma_cnq_prod_update(void *rdma_cxt, u8 qz_offset, u16 prod)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn;
|
||||
u16 qz_num;
|
||||
|
@ -816,7 +812,7 @@ static int qed_rdma_get_int(struct qed_dev *cdev, struct qed_int_info *info)
|
|||
return 0;
|
||||
}
|
||||
|
||||
int qed_rdma_alloc_pd(void *rdma_cxt, u16 *pd)
|
||||
static int qed_rdma_alloc_pd(void *rdma_cxt, u16 *pd)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
u32 returned_id;
|
||||
|
@ -836,7 +832,7 @@ int qed_rdma_alloc_pd(void *rdma_cxt, u16 *pd)
|
|||
return rc;
|
||||
}
|
||||
|
||||
void qed_rdma_free_pd(void *rdma_cxt, u16 pd)
|
||||
static void qed_rdma_free_pd(void *rdma_cxt, u16 pd)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
|
||||
|
@ -873,8 +869,9 @@ qed_rdma_toggle_bit_create_resize_cq(struct qed_hwfn *p_hwfn, u16 icid)
|
|||
return toggle_bit;
|
||||
}
|
||||
|
||||
int qed_rdma_create_cq(void *rdma_cxt,
|
||||
struct qed_rdma_create_cq_in_params *params, u16 *icid)
|
||||
static int qed_rdma_create_cq(void *rdma_cxt,
|
||||
struct qed_rdma_create_cq_in_params *params,
|
||||
u16 *icid)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
struct qed_rdma_info *p_info = p_hwfn->p_rdma_info;
|
||||
|
@ -957,98 +954,10 @@ err:
|
|||
return rc;
|
||||
}
|
||||
|
||||
int qed_rdma_resize_cq(void *rdma_cxt,
|
||||
struct qed_rdma_resize_cq_in_params *in_params,
|
||||
struct qed_rdma_resize_cq_out_params *out_params)
|
||||
{
|
||||
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
|
||||
struct rdma_resize_cq_output_params *p_ramrod_res;
|
||||
struct rdma_resize_cq_ramrod_data *p_ramrod;
|
||||
enum qed_rdma_toggle_bit toggle_bit;
|
||||
struct qed_sp_init_data init_data;
|
||||
struct qed_spq_entry *p_ent;
|
||||
dma_addr_t ramrod_res_phys;
|
||||
u8 fw_return_code;
|
||||
int rc = -ENOMEM;
|
||||
|
||||
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "icid = %08x\n", in_params->icid);
|
||||
|
||||
p_ramrod_res =
|
||||
(struct rdma_resize_cq_output_params *)
|
||||
dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
|
||||
sizeof(struct rdma_resize_cq_output_params),
|
||||
&ramrod_res_phys, GFP_KERNEL);
|
||||
if (!p_ramrod_res) {
|
||||
DP_NOTICE(p_hwfn,
|
||||
"qed resize cq failed: cannot allocate memory (ramrod)\n");
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* Get SPQ entry */
|
||||
memset(&init_data, 0, sizeof(init_data));
|
||||
init_data.cid = in_params->icid;
|
||||
init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
|
||||
init_data.comp_mode = QED_SPQ_MODE_EBLOCK;
|
||||
|
||||
rc = qed_sp_init_request(p_hwfn, &p_ent,
|
||||
RDMA_RAMROD_RESIZE_CQ,
|
||||
p_hwfn->p_rdma_info->proto, &init_data);
|
||||
if (rc)
|
||||
goto err;
|
||||
|
||||
p_ramrod = &p_ent->ramrod.rdma_resize_cq;
|
||||
|
||||
p_ramrod->flags = 0;
|
||||
|
||||
/* toggle the bit for every resize or create cq for a given icid */
|
||||
toggle_bit = qed_rdma_toggle_bit_create_resize_cq(p_hwfn,
|
||||
in_params->icid);
|
||||
|
||||
SET_FIELD(p_ramrod->flags,
|
||||
RDMA_RESIZE_CQ_RAMROD_DATA_TOGGLE_BIT, toggle_bit);
|
||||
|
||||
SET_FIELD(p_ramrod->flags,
|
||||
RDMA_RESIZE_CQ_RAMROD_DATA_IS_TWO_LEVEL_PBL,
|
||||
in_params->pbl_two_level);
|
||||
|
||||
p_ramrod->pbl_log_page_size = in_params->pbl_page_size_log - 12;
|
||||
p_ramrod->pbl_num_pages = cpu_to_le16(in_params->pbl_num_pages);
|
||||
p_ramrod->max_cqes = cpu_to_le32(in_params->cq_size);
|
||||
DMA_REGPAIR_LE(p_ramrod->pbl_addr, in_params->pbl_ptr);
|
||||
DMA_REGPAIR_LE(p_ramrod->output_params_addr, ramrod_res_phys);
|
||||
|
||||
rc = qed_spq_post(p_hwfn, p_ent, &fw_return_code);
|
||||
if (rc)
|
||||
goto err;
|
||||
|
||||
if (fw_return_code != RDMA_RETURN_OK) {
|
||||
DP_NOTICE(p_hwfn, "fw_return_code = %d\n", fw_return_code);
|
||||
rc = -EINVAL;
|
||||
goto err;
|
||||
}
|
||||
|
||||
out_params->prod = le32_to_cpu(p_ramrod_res->old_cq_prod);
|
||||
out_params->cons = le32_to_cpu(p_ramrod_res->old_cq_cons);
|
||||
|
||||
dma_free_coherent(&p_hwfn->cdev->pdev->dev,
|
||||
sizeof(struct rdma_resize_cq_output_params),
|
||||
p_ramrod_res, ramrod_res_phys);
|
||||
|
||||
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Resized CQ, rc = %d\n", rc);
|
||||
|
||||
return rc;
|
||||
|
||||
err: dma_free_coherent(&p_hwfn->cdev->pdev->dev,
|
||||
sizeof(struct rdma_resize_cq_output_params),
|
||||
p_ramrod_res, ramrod_res_phys);
|
||||
DP_NOTICE(p_hwfn, "Resized CQ, Failed - rc = %d\n", rc);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
int qed_rdma_destroy_cq(void *rdma_cxt,
|
||||
struct qed_rdma_destroy_cq_in_params *in_params,
|
||||
struct qed_rdma_destroy_cq_out_params *out_params)
|
||||
static int
|
||||
qed_rdma_destroy_cq(void *rdma_cxt,
|
||||
struct qed_rdma_destroy_cq_in_params *in_params,
|
||||
struct qed_rdma_destroy_cq_out_params *out_params)
|
||||
{
|
||||
[garbled side-by-side diff residue; recoverable summary: qed RoCE hunks that make the internal RDMA entry points static (qed_roce_alloc_cid(), qed_roce_query_qp(), qed_roce_destroy_qp(), qed_rdma_query_qp(), qed_rdma_destroy_qp(), qed_rdma_create_qp(), qed_rdma_modify_qp(), qed_rdma_register_tid(), qed_rdma_deregister_tid(), qed_rdma_start() and qed_rdma_remove_user()), drop the impossible !cdev check in qed_roce_ll2_stop(), and reduce the qed_roce_ll2_tx() sanity check to !pkt || !params.]
[garbled side-by-side diff residue; recoverable summary: qed RDMA header and slow-path hunks: the qed_rdma_resize_cq/resize_cnq parameter structs and the extern declarations for the now-static helpers are dropped, the remaining RDMA and qed_ll2b_* GSI prototypes are regrouped under IS_ENABLED() guards with static inline stubs for the disabled case, union ramrod_data loses one RDMA member, the guards around the qed_roce.h include and the PROTOCOLID_ROCE async-event case in the SPQ code are adjusted, and the qede Makefile builds qede_roce.o under CONFIG_QED_RDMA instead of CONFIG_INFINIBAND_QEDR.]
[garbled side-by-side diff residue; recoverable summary: qede header and ethtool hunks: qede_update_rx_prod() is declared in qede.h, NUM_RX_BDS_DEF drops from NUM_RX_BDS_MAX to (u16)BIT(10) - 1, get_channels() reports max_rx and max_tx, set_channels() resets the RSS indirection table when the RX queue count changes, set_rxfh() rejects RSS configuration on 100G (two-hwfn) devices, the TX self-test unmaps the first BD with dma_unmap_single(), and the loopback receive self-test is reworked to poll with usleep_range(), match the transmitted MAC addresses and verify the i & 0xff payload pattern before recycling the buffer and updating the RX producer.]
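The reworked qede loopback self-test summarized above polls for a receive completion and then validates the frame the device sent to itself: both MAC addresses must equal the device address and every payload byte must follow the i & 0xff fill pattern. A minimal, self-contained sketch of just that validation step, with a hypothetical check_loopback_frame() helper and a plain byte buffer in place of the driver's CQE/BD handling:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ETH_ALEN 6
#define ETH_HLEN 14

/* Hypothetical helper: returns true when 'frame' looks like the self-test
 * packet, i.e. destination and source MAC both equal dev_addr and the
 * payload carries the (i & 0xff) pattern written by the transmit side.
 */
static bool check_loopback_frame(const uint8_t *frame, size_t len,
                                 const uint8_t *dev_addr)
{
        size_t i;

        if (len < ETH_HLEN)
                return false;
        if (memcmp(frame, dev_addr, ETH_ALEN) ||          /* destination MAC */
            memcmp(frame + ETH_ALEN, dev_addr, ETH_ALEN)) /* source MAC */
                return false;
        for (i = ETH_HLEN; i < len; i++)
                if (frame[i] != (uint8_t)(i & 0xff))
                        return false;
        return true;
}

The driver wraps the equivalent check in a bounded poll (QEDE_SELFTEST_POLL_COUNT attempts with usleep_range(100, 200) between them) and recycles the RX buffer and completion entry whether or not the frame matches.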
[garbled side-by-side diff residue; recoverable summary: qede main-path hunks: qede_free_tx_pkt() and qede_free_failed_tx_pkt() switch from dma_unmap_page() to dma_unmap_single() for the first BD, qede_update_rx_prod() loses its static inline wrapper, and the TX software ring and PBL chain are sized by TX_RING_SIZE instead of NUM_TX_BDS_MAX.]
[garbled side-by-side diff residue; recoverable summary: Qualcomm emac hunks: emac_mac_down() now masks the MAC interrupt, synchronizes and frees the IRQ before phy_disconnect(), so adjust_link cannot be NULL when an interrupt fires, and MODULE_DEVICE_TABLE(of, emac_dt_match) is added.]
[garbled side-by-side diff residue; recoverable summary: r8169 hunk: rtl_init_one() only takes the 64-bit DMA path when both pci_set_dma_mask() and pci_set_consistent_dma_mask() succeed.]
[garbled side-by-side diff residue; recoverable summary: rocker hunks: rocker_world_check_init() returns -EINVAL instead of an uninitialized err when ports sit in different worlds, and ofdpa_port_ipv4_nh() assigns *index inside the removing/updating/found branches rather than unconditionally.]
[garbled side-by-side diff residue; recoverable summary: stmmac hunks: dwmac4_display_ring() drops the if (p->des0) guard and always prints the descriptor words, stmmac_ptp_register() becomes void, stmmac_init_ptp() no longer propagates its result and the hw-setup warning is reworded to "fail to init PTP.", and ptp_clock_register() failure now just leaves priv->ptp_clock NULL while success logs "registered PTP clock".]
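The stmmac change above makes PTP clock registration best-effort: the register helper no longer returns an error, it leaves the clock pointer NULL on failure and only logs the outcome, so stmmac_init_ptp() can treat hardware timestamping as optional. A hedged kernel-style sketch of that shape; ptp_clock_register() and IS_ERR() are the real APIs, everything named sketch_* is illustrative:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/ptp_clock_kernel.h>

/* Illustrative driver-private state; the real driver keeps many more fields. */
struct sketch_priv {
        struct device *dev;
        struct ptp_clock_info ptp_info;
        struct ptp_clock *ptp_clock;
};

/* Best-effort registration: on failure only log and clear the pointer so the
 * rest of the driver keeps working without a PTP clock.
 */
static void sketch_ptp_register(struct sketch_priv *priv)
{
        priv->ptp_clock = ptp_clock_register(&priv->ptp_info, priv->dev);
        if (IS_ERR(priv->ptp_clock)) {
                priv->ptp_clock = NULL;
                dev_err(priv->dev, "ptp_clock_register failed\n");
        } else if (priv->ptp_clock) {
                /* ptp_clock_register() may also return NULL when PTP support
                 * is compiled out; only report success for a real clock.
                 */
                dev_info(priv->dev, "registered PTP clock\n");
        }
}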
[garbled side-by-side diff residue; recoverable summary: dwc_eth_qos hunk: dwceqos_mii_probe() keeps SUPPORTED_Pause and SUPPORTED_Asym_Pause in phydev->supported alongside PHY_GBIT_FEATURES.]
[garbled side-by-side diff residue; recoverable summary: geneve hunks that avoid using a stale geneve socket: sock4/sock6 in struct geneve_dev become __rcu pointers, geneve_sock_release() clears them and calls synchronize_net() before releasing the underlying sockets, the socket-add path publishes them with rcu_assign_pointer(), geneve_get_v4_rt()/geneve_get_v6_dst() and the v4/v6 xmit paths read them with rcu_dereference() and fail with -EIO or drop the packet when no socket is present, and geneve_gro_receive() goes through call_gro_receive() so the GRO recursion limit applies.]
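The geneve hunks above, and the vxlan hunks further down, apply one pattern: the per-device UDP tunnel socket pointer becomes an __rcu pointer, the rtnl-protected control path publishes or clears it and waits with synchronize_net() before freeing, and the data path only ever uses an rcu_dereference()'d local copy and gives up if it is NULL. A simplified kernel-style sketch of that pattern; the RCU primitives are the real kernel APIs, the sketch_* types stand in for the driver structures:

#include <linux/errno.h>
#include <linux/net.h>
#include <linux/netdevice.h>
#include <linux/rcupdate.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>

struct sketch_sock {
        struct socket *sock;
        /* refcount, VNI hash list, ... */
};

struct sketch_dev {
        struct sketch_sock __rcu *sock4;        /* written only under rtnl */
};

/* Control path (rtnl held): publish a freshly opened socket. */
static void sketch_publish(struct sketch_dev *dev, struct sketch_sock *gs)
{
        rcu_assign_pointer(dev->sock4, gs);
}

/* Control path (rtnl held): tear down without yanking it from readers. */
static void sketch_release(struct sketch_dev *dev)
{
        struct sketch_sock *gs = rtnl_dereference(dev->sock4);

        rcu_assign_pointer(dev->sock4, NULL);
        synchronize_net();      /* wait for in-flight data-path readers */
        kfree(gs);              /* the real drivers also release gs->sock */
}

/* Data path (RCU read-side section, e.g. ndo_start_xmit): */
static int sketch_xmit(struct sketch_dev *dev)
{
        struct sketch_sock *gs = rcu_dereference(dev->sock4);

        if (!gs)
                return -EIO;    /* socket already gone: drop or fail */
        /* ... transmit the encapsulated packet via gs->sock ... */
        return 0;
}

vxlan additionally uses RCU_INIT_POINTER() when it clears the pointers during setup, the cheaper form that is allowed while no readers can exist yet.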
[garbled side-by-side diff residue; recoverable summary: netvsc hunks: LSO offload info is only built for TCP/UDP GSO packets, the receive path starts from skb_checksum_none_assert() and upgrades to CHECKSUM_UNNECESSARY only when NETIF_F_RXCSUM is enabled and the host reports a good TCP or UDP checksum, and netvsc_get_drvinfo() no longer fills bus_info from the vmbus device.]
[garbled side-by-side diff residue; recoverable summary: macsec hunks: a send_sci() helper decides once whether the SCI is transmitted, macsec_fill_sectag() takes that sci_present flag instead of re-deriving it, and macsec_encrypt() uses the same flag for the skb_push(macsec_extra_len()) header reservation, the SecTAG fill and the macsec_hdr_len() value passed to aead_request_set_crypt()/aead_request_set_ad(), keeping header layout and authenticated data consistent.]
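The macsec change above computes "does this SecTAG carry an SCI?" exactly once and threads that boolean through header construction and the AEAD setup, so the pushed header length, the TCI bits and the associated-data length can no longer disagree. A minimal sketch of the idea; the length constants and sketch_* names are placeholders, not the real 802.1AE sizes:

#include <stdbool.h>
#include <stddef.h>

/* Placeholder sizes; the real values follow the 802.1AE SecTAG layout. */
#define SKETCH_SECTAG_LEN       8
#define SKETCH_SCI_LEN          8

struct sketch_secy {
        bool always_send_sci;
        int  rx_sc_count;
        bool end_station;
        bool scb;
};

/* Decide once whether the SCI is transmitted for this SecY. */
static bool sketch_send_sci(const struct sketch_secy *secy)
{
        return secy->always_send_sci ||
               (secy->rx_sc_count > 1 && !secy->end_station && !secy->scb);
}

static size_t sketch_hdr_len(bool sci_present)
{
        return SKETCH_SECTAG_LEN + (sci_present ? SKETCH_SCI_LEN : 0);
}

static size_t sketch_encrypt_setup(const struct sketch_secy *secy)
{
        bool sci_present = sketch_send_sci(secy);   /* computed once */

        /* The same flag sizes the pushed header and the AAD, so the two
         * can never get out of sync the way separate re-derivations could.
         */
        return sketch_hdr_len(sci_present);
}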
[garbled side-by-side diff residue; recoverable summary: at803x PHY hunks: the chip-config, mode-config and PHY-specific status register definitions are regrouped, the SGMII power-down/power-up blocks are removed from at803x_suspend()/at803x_resume(), a new at803x_aneg_done() double-checks the SGMII-side autoneg result by switching to the fiber page and back after copper autoneg completes, and one driver entry (the SGMII-capable ATH8031) gains .aneg_done = at803x_aneg_done.]
[garbled side-by-side diff residue; recoverable summary: dp83848 hunks: TI_DP83822_PHY_ID (0x2000a240) is added to the MDIO device table and registered through DP83848_PHY_DRIVER() as "TI DP83822 10/100 Mbps PHY".]
[garbled side-by-side diff residue; recoverable summary: asix USB hunks: asix_mdio_read() and asix_mdio_read_nopm() abort their software-MII polling loops on -ETIMEDOUT as well as -ENODEV, both when switching to software MII and when the final status read fails.]
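The asix change above treats -ETIMEDOUT from the USB control transfers the same way as -ENODEV: both end the software-MII polling loop instead of spinning against a device that is not answering. A small sketch of that retry shape, with a hypothetical read_status() callback standing in for asix_set_sw_mii()/asix_read_cmd():

#include <errno.h>
#include <stdint.h>

#define SKETCH_HOST_EN          0x01    /* placeholder for AX_HOST_EN */
#define SKETCH_MAX_TRIES        30

/* Poll until the host-access bit is set, but bail out at once on errors that
 * mean the device is gone or not responding (-ENODEV, -ETIMEDOUT).
 */
static int sketch_wait_host_en(int (*read_status)(void *ctx, uint8_t *smsr),
                               void *ctx)
{
        uint8_t smsr = 0;
        int tries = 0;
        int ret;

        do {
                ret = read_status(ctx, &smsr);
                if (ret == -ENODEV || ret == -ETIMEDOUT)
                        return ret;             /* fatal, stop polling */
        } while (!(smsr & SKETCH_HOST_EN) && ++tries < SKETCH_MAX_TRIES);

        return (smsr & SKETCH_HOST_EN) ? 0 : -ETIMEDOUT;
}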
[garbled side-by-side diff residue; recoverable summary: kalmia hunk: kalmia_bind() treats any non-zero status from kalmia_init_and_get_ethernet_addr() as an error instead of only negative values.]
[garbled side-by-side diff residue; recoverable summary: vmxnet3 hunks: vmxnet3_set_mc() records whether the multicast table was successfully DMA-mapped in a new new_table_pa_valid flag instead of testing the address against zero, enables VMXNET3_RXM_MCAST or falls back to ALL_MULTI based on that flag, and unmaps the table only when the flag is set.]
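The vmxnet3 change above stops using "DMA address != 0" as the success test and instead carries an explicit flag from dma_mapping_error() through to the unmap, because 0 can be a perfectly valid bus address. A hedged kernel-style sketch of that bookkeeping; the DMA API calls are real, the sketch_push_table() helper and its table payload are illustrative:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

/* Copy a table for the device and map it, remembering whether the mapping
 * succeeded instead of inferring success from the address value.
 */
static int sketch_push_table(struct device *dev, const u8 *table, size_t sz)
{
        bool pa_valid = false;
        dma_addr_t pa;
        u8 *copy;

        copy = kmemdup(table, sz, GFP_KERNEL);
        if (!copy)
                return -ENOMEM;

        pa = dma_map_single(dev, copy, sz, DMA_TO_DEVICE);
        if (!dma_mapping_error(dev, pa))
                pa_valid = true;        /* even pa == 0 would be fine here */

        /* ... hand 'pa' to the device only if pa_valid; otherwise fall back,
         * the way vmxnet3 falls back to ALL_MULTI for the multicast list ...
         */

        if (pa_valid)
                dma_unmap_single(dev, pa, sz, DMA_TO_DEVICE);
        kfree(copy);
        return pa_valid ? 0 : -EIO;
}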
[garbled side-by-side diff residue; recoverable summary: vrf hunks: vrf_ip6_rcv() and vrf_ip_rcv() each gain one line in their receive-marking blocks; the visible lines set skb->dev, skb->skb_iif and the IP6SKB_L3SLAVE/IPSKB_L3SLAVE control-block flags for traffic delivered through the VRF device.]
[garbled side-by-side diff residue; recoverable summary: vxlan hunks mirroring the geneve change: vxlan_gro_receive() uses call_gro_receive(eth_gro_receive, ...), vn4_sock/vn6_sock become RCU-managed, vxlan_group_used(), the IGMP join/leave paths and the route lookups work on rtnl_dereference()/rcu_dereference() copies (sock4/sock6 locals), vxlan_sock_release() clears the pointers and calls synchronize_net() before udp_tunnel_sock_release() and kfree(), vxlan6_get_route(), vxlan_xmit_one() and vxlan_fill_metadata_dst() bail out (drop, -EIO or -EINVAL) when the socket is gone, and __vxlan_sock_add()/vxlan_sock_add() publish with rcu_assign_pointer()/RCU_INIT_POINTER().]
[garbled side-by-side diff residue; recoverable summary: WAN Kconfig hunk: SLIC_DS26522 may now also be built with COMPILE_TEST in addition to FSL_SOC/ARCH_MXC/ARCH_LAYERSCAPE.]
Some files were not shown because too many files have changed in this diff.