One of the RF modules we support has been deprecated and never
released publicly. Remove support for this module.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This will allow us to print the names of the commands in the
logs when we send them.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
If the supported firmware versions are not found, we currently only
print "no suitable firmware found". This is not very informative for
the user trying to find the correct version to use. Improve this by
printing the exact firmware name(s) the driver supports and pointing
to the git repository where they can be found.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
For a000 devices the binding API needs to include the relevant
lmac ID - support the new API.
The new API should be used regardless of whether the device has CDB or
not. If there is no actual CDB support, the binding is bound
to the first lmac regardless of the band.
There are some functionality changes in binding restrictions
and quota allocations that will be handled in future patches.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Benjamin Herrenschmidt says:
====================
ftgmac100: Rework batch 3 - TX path
This is version 2 of the third batch of updates to
the ftgmac100 driver.
This one tackles the TX path of the driver. This provides the
bulk of the performance improvements by adding support for
fragmented sends along with a bunch of cleanups.
Version 2 fixes a patch splitting mistake and uses
eth_skb_pad() (which uses skb_put_padto) to pad ethernet
frames rather than skb_padto(), thus removing the need to
also pad the packet headlen in a couple of places.
Subsequent batches will add various features (ethtool functions,
vlan offload, ...) and cleanups.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Directly access the fields when needed. The accessors add clutter,
not clarity, and in some cases cause unnecessary read-modify-write
type accesses on the slow (uncached) descriptor memory.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add NETIF_F_SG and create multiple TX ring entries for skb fragments.
On reclaim, the skb is only freed on the segment marked as "last".
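A rough sketch of the idea (fill_tx_desc() and desc_is_last() are illustrative
names, not the driver's actual helpers): the linear head and each page fragment
get their own ring entry, and the skb is freed only when the entry marked
"last" is reclaimed.

	unsigned int f, nfrags = skb_shinfo(skb)->nr_frags;

	/* one descriptor for the linear head, then one per page fragment */
	fill_tx_desc(priv, skb->data, skb_headlen(skb), nfrags == 0);
	for (f = 0; f < nfrags; f++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[f];

		fill_tx_desc(priv, skb_frag_address(frag), skb_frag_size(frag),
			     f == nfrags - 1);	/* only the final entry is "last" */
	}

	/* on reclaim, free the skb exactly once, when the "last" entry completes */
	if (desc_is_last(desc))
		dev_kfree_skb(skb);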
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Those are non-cacheable stores; let's avoid the ones we don't need. Remove
the helper, it's not particularly helpful, and since it uses "priv"
I can't move it to the header file.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This moves the packet freeing to a separate function
which is also used by ftgmac100_free_buffers() and will
be used more in the error path of fragmented sends.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We'll use variants of this accessor without barriers when
building a series of descriptors for fragmented sends.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We have a private lock which isn't terribly useful, and we maintain
a "tx_pending" counter for information that's already available
via a trivial arithmetic operation. Then we unconditionally wake
the queue even when not stopped. Finally, our code in tx isn't
really safe vs. a concurrent reclaim. The aspeed chips aren't SMP
today, but I prefer the code being right and future-proof.
So rip that out and replace it with more "standard" queue handling,
currently with a threshold of 1 queue element, which will be
increased when we implement fragmented sends.
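The "standard" handling referred to here is roughly the following sketch
(free_tx_slots() and TX_THRESHOLD are illustrative names), with the free-slot
count derived from the ring pointers instead of a separate counter:

	/* xmit path: stop the queue once we run out of room */
	if (free_tx_slots(priv) < TX_THRESHOLD)
		netif_stop_queue(netdev);

	/* reclaim path: wake the queue only if it was stopped and space is back */
	if (netif_queue_stopped(netdev) && free_tx_slots(priv) >= TX_THRESHOLD)
		netif_wake_queue(netdev);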
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rather than in the descriptor. The descriptor is mapped non-cacheable
and rather slow to access.
Since doing that requires keeping track of the tx "pointer" ourselves,
we also have no use for the accessors that manipulate it, so just open
code it; it's just as clear and will help when adding fragmented sends.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rather than just transmitting garbage past the end of the small
packet.
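As the cover letter notes, the usual way to do this in an xmit path is
eth_skb_pad(), which pads the frame to the minimum Ethernet length and frees
the skb itself on failure (minimal sketch):

	/* pad short frames to ETH_ZLEN; on error the skb is already freed */
	if (eth_skb_pad(skb))
		return NETDEV_TX_OK;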
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use a simple goto to a drop path at the tail of the function;
it will be used in a few more cases soon.
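The resulting shape is the common pattern below (illustrative only; "map_error"
stands in for whatever failure is being handled):

	if (unlikely(map_error))
		goto drop;
	/* ... normal transmit path ... */
	return NETDEV_TX_OK;

 drop:
	dev_kfree_skb_any(skb);
	netdev->stats.tx_dropped++;
	return NETDEV_TX_OK;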
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This will make subsequent rework of the tx path simpler.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move it below ftgmac100_xmit() and the rest of the tx path.
No code change.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We have a reset task to reset our chip, use it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes build errors seen with CONFIG_GPIOLIB disabled and warnings enabled:
drivers/net/dsa/mt7530.c: In function 'mt7530_setup':
drivers/net/dsa/mt7530.c:948:3: error: implicit declaration of function 'gpiod_set_value_cansleep' [-Werror=implicit-function-declaration]
gpiod_set_value_cansleep(priv->reset, 0);
^~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mt7530.c: In function 'mt7530_probe':
drivers/net/dsa/mt7530.c:1068:17: error: implicit declaration of function 'devm_gpiod_get_optional' [-Werror=implicit-function-declaration]
priv->reset = devm_gpiod_get_optional(&mdiodev->dev, "reset",
^~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mt7530.c:1069:13: error: 'GPIOD_OUT_LOW' undeclared (first use in this function)
GPIOD_OUT_LOW);
^~~~~~~~~~~~~
drivers/net/dsa/mt7530.c:1069:13:
Fixes: b8f126a8d5 ("net-next: dsa: add dsa support for Mediatek MT7530 switch")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
o s/bpf_bpf_get_socket_cookie/bpf_get_socket_cookie
Signed-off-by: Alexander Alemayhu <alexander@alemayhu.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This allows using deferred skb freeing with NAPI, and gets us buffer
recycling.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'wireless-drivers-next-for-davem-2017-04-07' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for 4.12
Lots of bugfixes as usual but also some new features.
Major changes:
ath10k
* improve firmware download time for QCA6174 and QCA9377, especially
helps resume time
ath9k_htc
* add support for AirTies 1eda:2315 AR9271 devices
rt2x00
* add support for MT7620
mwifiex
* enable auto deep sleep mode for USB chipsets
brcmfmac
* add support for network namespaces (WIPHY_FLAG_NETNS_OK)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit def12888c1.
As per discussion between Roopa Prabhu and David Ahern, it is
advisable that we instead have the code collect the setlink triggered
events into a bitmask emitted in the IFLA_EVENT netlink attribute.
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2017-04-08
This series contains updates to i40e and i40evf only.
Mitch fixes an issue where the client driver (i40iw) was attempting to
load on x710 devices (which do not support iWARP), so only register with
the client if iWARP is supported.
Jake fixes up error messages to better clarify to the user when adding an
invalid flow type. Updates the driver to look up the MAC address from
eth_get_platform_mac_address() first before checking what the firmware
provides. Cleans up code so we are not repeating a duplicate loop, by
checking both transmit and receive queues in a single loop. Also cleans
up flags never used, so remove the definitions.
Alex does cleanup so that we are always updating pf->flags when a change
is made to the private flags. Adds support for 3K buffers to the receive
path so that we can provide the additional padding needed in the event
of NET_IP_ALIGN being non-zero or a cache line being greater than 64.
Adds support for build_skb() to i40e/i40evf.
Maciej adjusts the scope of the rtnl lock held during reset because it
was stopping other PFs from running their reset procedures.
Alan reduces code complexity in i40e_detect_recover_hung_queue().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli says:
====================
net: dsa: Receive path simplifications
This patch series factors the common code found in all tag implementations
into dsa_switch_rcv(). The original motivation was to add GRO support, but this
may be a lot of work with unclear benefits at this point.
Changes in v2:
- take care of tag_mtk.c in the process
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
All DSA tag receive functions do strictly the same thing after they have located
the originating source port from their tag-specific protocol:
- push ETH_HLEN bytes
- set pkt_type to PACKET_HOST
- call eth_type_trans()
- bump up counters
- call netif_receive_skb()
Factor all of that into dsa_switch_rcv(). This also makes us return a pointer to
an sk_buff, which makes us symmetric with the xmit function.
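The factored-out tail of dsa_switch_rcv() then looks roughly like this sketch
of the five shared steps (not the exact code):

	/* the tag-specific code has already found the port and set skb->dev */
	skb_push(skb, ETH_HLEN);
	skb->pkt_type = PACKET_HOST;
	skb->protocol = eth_type_trans(skb, skb->dev);

	skb->dev->stats.rx_packets++;
	skb->dev->stats.rx_bytes += skb->len;

	netif_receive_skb(skb);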
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
All DSA tag receive functions need to unshare the skb before mangling it. Move
this to the generic dsa_switch_rcv() function, which will allow us to make the
tag receive functions return their mangled skb without caring about freeing a
NULL skb.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
dsa_switch_rcv() already tests for dst == NULL, so there is no need to duplicate
the same check within the tag receive functions.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
All available gso_type flags are currently in use, so
extend gso_type from 'unsigned short' to 'unsigned int'
to be able to add further flags.
We reorder the struct skb_shared_info to use
two bytes of the four byte hole before dataref.
All fields before dataref are cleared, i.e.
four bytes more than before the change.
The remaining two-byte hole is moved to the
beginning of the structure; this protects us
from immediate overwrites on out-of-bounds writes
to the sk_buff head.
Structure layout on x86-64 before the change:
struct skb_shared_info {
unsigned char nr_frags; /* 0 1 */
__u8 tx_flags; /* 1 1 */
short unsigned int gso_size; /* 2 2 */
short unsigned int gso_segs; /* 4 2 */
short unsigned int gso_type; /* 6 2 */
struct sk_buff * frag_list; /* 8 8 */
struct skb_shared_hwtstamps hwtstamps; /* 16 8 */
u32 tskey; /* 24 4 */
__be32 ip6_frag_id; /* 28 4 */
atomic_t dataref; /* 32 4 */
/* XXX 4 bytes hole, try to pack */
void * destructor_arg; /* 40 8 */
skb_frag_t frags[17]; /* 48 272 */
/* --- cacheline 5 boundary (320 bytes) --- */
/* size: 320, cachelines: 5, members: 12 */
/* sum members: 316, holes: 1, sum holes: 4 */
};
Structure layout on x86-64 after the change:
struct skb_shared_info {
short unsigned int _unused; /* 0 2 */
unsigned char nr_frags; /* 2 1 */
__u8 tx_flags; /* 3 1 */
short unsigned int gso_size; /* 4 2 */
short unsigned int gso_segs; /* 6 2 */
struct sk_buff * frag_list; /* 8 8 */
struct skb_shared_hwtstamps hwtstamps; /* 16 8 */
unsigned int gso_type; /* 24 4 */
u32 tskey; /* 28 4 */
__be32 ip6_frag_id; /* 32 4 */
atomic_t dataref; /* 36 4 */
void * destructor_arg; /* 40 8 */
skb_frag_t frags[17]; /* 48 272 */
/* --- cacheline 5 boundary (320 bytes) --- */
/* size: 320, cachelines: 5, members: 13 */
};
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For security reasons, NIC firmware does not allow VF to set its VLAN if PF
set it already. Firmware allows VF to set its VLAN if PF did not set it.
After the VF instructs the firmware to set the VLAN, VF always indicates
(via return 0) that the operation is successful, even when it isn't.
Put in a mechanism for the VF's set VLAN function to receive the firmware
response code, then make that function return -EPERM if the firmware
forbids the operation.
Make that mechanism available for other functions that may, in the future,
be interested in receiving the response code from the firmware. That
mechanism involves adding new fields to struct octnic_ctrl_pkt, so make all
users of struct octnic_ctrl_pkt initialize the struct to zero before using
it; otherwise, the mechanism might act on uninitialized garbage.
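For callers, that mainly means zero-initializing the control packet before
filling it in, along these lines (a sketch of the calling convention, not the
exact driver code):

	struct octnic_ctrl_pkt nctrl;

	memset(&nctrl, 0, sizeof(nctrl));	/* keep the new response fields clear */
	/* ... fill in the command, submit it, then check the firmware status ... */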
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: Derek Chickles <derek.chickles@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prior to opening the channel we should have all the state set up to handle
interrupts. The current code does not do that; fix the bug. This bug
can result in faults in the interrupt path.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The SMI clause 22 & 45 read/write operations are local to the global2.c file,
so make them static. This eliminates the following warnings:
drivers/net/dsa/mv88e6xxx/global2.c:571:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_read_c45' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_read_c45(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:602:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_read_c22' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_read_c22(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:635:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_write_c45' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_write_c45(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/dsa/mv88e6xxx/global2.c:664:5: warning: no previous prototype for 'mv88e6xxx_g2_smi_phy_write_c22' [-Wmissing-prototypes]
int mv88e6xxx_g2_smi_phy_write_c22(struct mv88e6xxx_chip *chip, int addr,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It's rather confusing that the netlink message flags are
numbered 1, 2, 4, 8, 16, 32, <unused>, 0x100. Make that
more understandable by numbering the lower ones with hex
constants as well.
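In other words, the lower flag values keep their meaning and are only rewritten
in hex notation, along the lines of (names from the netlink uapi header, values
unchanged):

	#define NLM_F_REQUEST		0x01	/* was 1  */
	#define NLM_F_MULTI		0x02	/* was 2  */
	#define NLM_F_ACK		0x04	/* was 4  */
	#define NLM_F_ECHO		0x08	/* was 8  */
	#define NLM_F_DUMP_INTR		0x10	/* was 16 */
	#define NLM_F_DUMP_FILTERED	0x20	/* was 32 */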
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the case where nn->eth_port is null, the warning message
prints the port by dereferencing this null pointer.
Remove the dereference to avoid a crash when printing the
warning message.
Detected by CoverityScan, CID#1426198 ("Dereference after null check")
Fixes: ce22f5a2cb ("nfp: separate high level and low level NSP headers")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Chenbo Feng says:
====================
New getsockopt option to retrieve socket cookie
In the current kernel socket cookie implementation, there is no simple
and direct way to retrieve the socket cookie based on a file descriptor. A
process may need to get it from a sock fd if it wants to correlate with
sock_diag output or use a bpf map with the new socket cookie function.
If userspace wants to receive the socket cookie for a given socket fd,
it must send a SOCK_DIAG_BY_FAMILY dump request and look for the 5-tuple.
This is slow and can be ambiguous in the case of sockets that have the
same 5-tuple (e.g., tproxy / transparent sockets, SO_REUSEPORT sockets,
etc.).
As shown in the example program, the xt_eBPF program uses the socket cookie
to record per-socket network traffic statistics, with the socket cookie
retrieved by getsockopt. The program can directly access the data for a
specific socket without scanning the whole bpf map.
====================
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Added a per-socket traffic monitoring option to illustrate the usage
of the new getsockopt SO_COOKIE. The program is based on the socket traffic
monitoring program using xt_eBPF, and with the new option the data entry
can be directly accessed using the socket cookie. The cookie retrieved
allows us to look up an element in the eBPF map for a specific socket.
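Conceptually, the userspace side of the sample does something like the sketch
below, where traffic_stats, map_fd and get_socket_cookie() are illustrative
names for the sample's map value type, map file descriptor and a small wrapper
around getsockopt(SO_COOKIE):

	__u64 cookie = get_socket_cookie(sock_fd);
	struct traffic_stats stats;

	if (bpf_map_lookup_elem(map_fd, &cookie, &stats) == 0)
		printf("socket %d: %llu bytes seen\n", sock_fd,
		       (unsigned long long)stats.bytes);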
Signed-off-by: Chenbo Feng <fengc@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce a new getsockopt operation to retrieve the socket cookie
for a specific socket based on the socket fd. It returns a unique
non-decreasing cookie for each socket.
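From userspace this is the usual getsockopt pattern at the SOL_SOCKET level,
returning a 64-bit cookie (minimal sketch):

	uint64_t cookie = 0;
	socklen_t optlen = sizeof(cookie);

	if (getsockopt(sock_fd, SOL_SOCKET, SO_COOKIE, &cookie, &optlen))
		perror("getsockopt(SO_COOKIE)");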
Tested: https://android-review.googlesource.com/#/c/358163/
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Chenbo Feng <fengc@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mlx5-updates-2017-04-16' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2017-04-16
This patchset provides some updates for the mlx5 drivers.
From Majd,
1st patch, Adds ConnectX-6 and ConnectX-6 VF PCI IDs support.
From Guy,
2nd patch, Adds RXFCS scatter support.
3rd patch, Small cleanup to make a function static.
From Eran,
4th patch, Adds 4 zeros padding to ethtool FW version.
6th patch, Trivial code reuse cleanup
From Inbar,
5th patch, Show board id in ethtool driver information
From Saeed,
7th patch, Set default RX moderation parameters on driver load
as a small fix for the latest fail-safe config feature.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch is meant to improve the performance of the Rx path.
Specifically by using build_skb we have several distinct advantages.
In the case of small frames we were previously using a copy-break approach.
This means that we were allocating a page fragment to use for skb->head,
and were having to copy the packet into that region. Both of those calls
are now avoided since we just build the skb around the data.
In the case of large frames the gains are much more significant.
Specifically we were having to allocate skb->head, and copy the headers as
before. However in addition we were having to parse the header using
eth_get_headlen which could be quite expensive. All of this is avoided by
building the frame around the data. I have seen gains as high as 30% when
using VXLAN for instance due to just header pulling overhead.
Finally with all this in place it also sets us up to start looking at
enabling XDP. Specifically we now have a path in which the data is in the
page and the frame is built around it. So if we parse it with XDP before
we call build_skb we can take care of any necessary processing there.
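The core of the Rx change is the usual build_skb() pattern: wrap the skb around
the already-received page fragment instead of allocating a new head and copying
into it. A simplified sketch (variable names are illustrative, not the
driver's):

	void *va = page_address(rx_page) + rx_offset;	/* buffer start, headroom first */
	struct sk_buff *skb = build_skb(va, truesize);	/* truesize covers data + shared info */

	if (likely(skb)) {
		skb_reserve(skb, headroom);	/* skip the padding in front of the packet */
		__skb_put(skb, size);		/* size = bytes the hardware actually wrote */
	}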
Change-ID: Id4bdd618e94473d41f892417e5d8019639e421e3
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds padding to the start of frames to make room for headroom
for us to eventually start using build_skb. Right now we guarantee at
least NET_SKB_PAD + NET_IP_ALIGN, however we allocate more space if more is
available. For example on x86 the headroom should be 192 bytes.
On systems that have too large of a cache line size to support storing 1.5K
padding and shared info we default to using 3K buffers and reserve
everything that isn't used for skb_shared_info or the data buffer for
headroom.
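As a worked example of where the 192 bytes on x86 come from, assuming a
half-page (2K) buffer, a 1.5K data area and the 320-byte skb_shared_info shown
elsewhere in this tree:

	  2048	(half-page Rx buffer)
	- 1536	(1.5K data area for the frame)
	-  320	(SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) on x86-64)
	= 192	bytes of headroom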
Change-ID: I33c641c9a1ea10cf7cc484c2d20985368d2d709a
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
There are situations where adding padding to the front and back of an Rx
buffer will require that we add additional padding. Specifically if
NET_IP_ALIGN is non-zero, or the MTU size is larger than 7.5K we would need
to use 2K buffers which leaves us with no room for the padding.
To preemptively address these cases I am adding support for 3K buffers to
the Rx path so that we can provide the additional padding needed in the
event of NET_IP_ALIGN being non-zero or a cache line being greater than 64.
Change-ID: I938bc1ba611285428df39a613cd66f98e60b55c7
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Since an earlier commit, a few flags have no longer
been used. Remove these definitions to reduce code clutter.
Change-ID: I3589be4622574e747013cd4dc403e18b039f4965
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The I40E_FLAG_NEED_LINK_UPDATE was never used. Remove the flag
definitions.
Change-ID: If59d0c6b4af85ca27281f3183c54b055adb439a4
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We can simply check both Tx and Rx queues in a single loop, rather than
repeating the loop twice.
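I.e. a single pass over the queue pairs touching both rings, roughly
(check_ring() is an illustrative helper name):

	for (i = 0; i < vsi->num_queue_pairs; i++) {
		check_ring(vsi->tx_rings[i]);	/* previously its own loop */
		check_ring(vsi->rx_rings[i]);	/* previously a second, duplicate loop */
	}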
Change-ID: Ic06f26b0e3c2620e0e33c1a2999edda488e647ad
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Look up the MAC address from the eth_get_platform_mac_address() function
first before checking what the firmware provides. We already handle the
case of re-writing the MAC-VLAN filter, so there is no need to add extra
code for this. However, update the comment where we do this to indicate
that it does impact the Open Firmware MAC address case.
Change-ID: I73e59fbe0b0e7e6f3ee9f5170d0bd3a4d5faf4db
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch greatly reduces the unneeded complexity in the
i40e_detect_recover_hung_queue code path. The previous implementation
set a 'hung bit' which would then get cleared while polling. If the
detection routine was called a second time with the bit already set, we
would issue a software interrupt. This patch makes it such that if
interrupts are disabled and we have pending TX descriptors, we trigger a
software interrupt since, in the worst case, the queues are already clean
and we just take an extra interrupt.
Additionally this patch removes the workaround for lost interrupts as
calling napi_reschedule in this context can cause software interrupts to
fire on the wrong CPU.
Change-ID: Iae108582a3ceb6229ed1d22e4ed6e69cf97aad8d
Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Previously the rtnl lock was held during the whole reset procedure, which
was stopping other PFs from running their reset procedures. As a result,
the reset was not handled properly and a host reset was the only way
to recover.
Change-ID: I23c0771c0303caaa7bd64badbf0c667e25142954
Signed-off-by: Maciej Sosin <maciej.sosin@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This is a minor cleanup so that we are always updating pf->flags when we
make a change to the private flags instead of updating a mix of either
pf->flags and/or pf->hw_disabled_flags.
In addition I went through and cleaned out all the spots where we were
using the X722 define in regards to this flag.
Lastly since we changed the logic I went through and flushed out any
redundancy and cleaned up the handling of the flags in the Tx path.
Change-ID: I79ff95a7272bb2533251ff11ef91e89ccb80b610
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Re-word the error message displayed when adding a filter with an
invalid flow type. Additionally, report a distinct error message when
the IPv4 protocol is at fault.
Change-ID: Iba3d85b87f8d383c97c8bdd180df34a6adf3ee67
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The client interface is only intended for use on devices that support
iWarp. Only register with the client if this is the case.
This fixes a panic when loading i40iw on X710 devices.
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Reported-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Adding support for TSO and checksum hardware offloads for ipv6.
Signed-off-by: Thanneeru Srinivasulu <tsrinivasulu@cavium.com>
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>