Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2018-06-04

This series contains a smorgasbord of updates to documentation, e1000e,
igb, ixgbe, ixgbevf and i40e.

Benjamin Poirier fixes a potential kernel crash due to NULL pointer
dereference in e1000e.

Jeff updates the kernel documentation for e100 and e1000 to correct
default values and URLs which were incorrect.  He also converts both
documents to the reStructuredText format now used for kernel
documentation.

Joanna Yurdal fixes a missing PTP transmit timestamp by ensuring that
TSICR gets cleared when ICR is cleared.

Sergey updates igb to reset all the transmit queues at one time so that
we only have to wait once for all the queues to be reset.

Alex fixes ixgbevf so that malicious driver detection (MDD) can co-exist
with XDP.

Emil and Tony extend the RTNL lock coverage in the ixgbe and ixgbevf
reset subtasks to ensure we read up-to-date state bits and avoid a
possible race condition when the interface is going down.

YueHaibing from Huawei introduces a common helper function for the ixgbe
debugfs operation reads to simplify the code a bit more.

Daniel Borkmann adds support for XDP meta data when using the build_skb
path in i40e; a short usage sketch follows this summary.

Shannon Nelson provides two fixes for the IPsec code in ixgbe.  The first
makes sure we do not try to offload the decryption of any incoming packet
that is destined for the management engine.  The other resolves a cast
problem introduced by a sparse cleanup patch.
====================
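
As a rough illustration of the i40e XDP meta data support mentioned above
(this example is not part of the series), an XDP program reserves headroom in
front of the packet with bpf_xdp_adjust_meta() and stores a few bytes there;
the i40e build_skb changes below then carry that region into the skb via
skb_metadata_set() so a cooperating tc/BPF classifier can see it.  A minimal
sketch, assuming libbpf-style section annotations and a hypothetical program
name:

/* Hypothetical XDP program (not from this series): reserve four bytes of
 * meta data in front of the packet; the i40e build_skb path below makes the
 * value visible to the stack through skb_metadata_set().
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_store_meta(struct xdp_md *ctx)
{
	__u32 *meta;
	void *data;

	/* grow the meta data area by four bytes ahead of ctx->data */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_PASS;	/* no headroom available */

	meta = (void *)(long)ctx->data_meta;
	data = (void *)(long)ctx->data;
	if ((void *)(meta + 1) > data)	/* bounds check for the verifier */
		return XDP_PASS;

	*meta = 0xcafe;			/* consumed later by a tc/BPF program */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Note that with legacy-rx enabled, i40e_construct_skb() currently provides no
extra XDP headroom, so bpf_xdp_adjust_meta() fails and a program like the one
above simply passes the packet through unchanged.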

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller committed on 2018-06-04 17:35:35 -04:00
commit d67b66b45a
12 changed files with 236 additions and 154 deletions


@@ -1,7 +1,7 @@
 Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
 ==============================================================
-March 15, 2011
+June 1, 2018
 Contents
 ========
@@ -36,16 +36,9 @@ Channel Bonding documentation can be found in the Linux kernel source:
 Identifying Your Adapter
 ========================
-For more information on how to identify your adapter, go to the Adapter &
-Driver ID Guide at:
-http://support.intel.com/support/network/adapter/pro100/21397.htm
-For the latest Intel network drivers for Linux, refer to the following
-website. In the search field, enter your adapter name or type, or use the
-networking link on the left to search for your adapter:
-http://downloadfinder.intel.com/scripts-df/support_intel.asp
+For information on how to identify your adapter, and for the latest Intel
+network drivers, refer to the Intel Support website:
+http://www.intel.com/support
 Driver Configuration Parameters
 ===============================
@@ -57,22 +50,26 @@ Rx Descriptors: Number of receive descriptors. A receive descriptor is a data
 structure that describes a receive buffer and its attributes to the network
 controller. The data in the descriptor is used by the controller to write
 data from the controller to host memory. In the 3.x.x driver the valid range
-for this parameter is 64-256. The default value is 64. This parameter can be
-changed using the command:
-ethtool -G eth? rx n, where n is the number of desired rx descriptors.
+for this parameter is 64-256. The default value is 256. This parameter can be
+changed using the command::
+ethtool -G eth? rx n
+Where n is the number of desired Rx descriptors.
 Tx Descriptors: Number of transmit descriptors. A transmit descriptor is a data
 structure that describes a transmit buffer and its attributes to the network
 controller. The data in the descriptor is used by the controller to read
 data from the host memory to the controller. In the 3.x.x driver the valid
-range for this parameter is 64-256. The default value is 64. This parameter
-can be changed using the command:
-ethtool -G eth? tx n, where n is the number of desired tx descriptors.
+range for this parameter is 64-256. The default value is 128. This parameter
+can be changed using the command::
+ethtool -G eth? tx n
+Where n is the number of desired Tx descriptors.
 Speed/Duplex: The driver auto-negotiates the link speed and duplex settings by
-default. The ethtool utility can be used as follows to force speed/duplex.
+default. The ethtool utility can be used as follows to force speed/duplex.::
 ethtool -s eth? autoneg off speed {10|100} duplex {full|half}
@@ -81,7 +78,7 @@ Speed/Duplex: The driver auto-negotiates the link speed and duplex settings by
 Event Log Message Level: The driver uses the message level flag to log events
 to syslog. The message level can be set at driver load time. It can also be
-set using the command:
+set using the command::
 ethtool -s eth? msglvl n
@@ -112,9 +109,9 @@ Additional Configurations
 ---------------------
 In order to see link messages and other Intel driver information on your
 console, you must set the dmesg level up to six. This can be done by
-entering the following on the command line before loading the e100 driver:
-dmesg -n 8
+entering the following on the command line before loading the e100 driver::
+dmesg -n 6
 If you wish to see all messages issued by the driver, including debug
 messages, set the dmesg level to eight.
@@ -146,7 +143,8 @@ Additional Configurations
 NAPI (Rx polling mode) is supported in the e100 driver.
-See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.
+See https://wiki.linuxfoundation.org/networking/napi for more information
+on NAPI.
 Multiple Interfaces on Same Ethernet Broadcast Network
 ------------------------------------------------------
@@ -160,7 +158,7 @@ Additional Configurations
 If you have multiple interfaces in a server, either turn on ARP
 filtering by
-(1) entering: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
+(1) entering:: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
 (this only works if your kernel's version is higher than 2.4.5), or
 (2) installing the interfaces in separate broadcast domains (either
@@ -169,15 +167,11 @@ Additional Configurations
 Support
 =======
 For general information, go to the Intel support website at:
-http://support.intel.com
-or the Intel Wired Networking project hosted by Sourceforge at:
-http://sourceforge.net/projects/e1000
-If an issue is identified with the released source code on the supported
-kernel with a supported adapter, email the specific information related to the
-issue to e1000-devel@lists.sourceforge.net.
+http://www.intel.com/support/
+or the Intel Wired Networking project hosted by Sourceforge at:
+http://sourceforge.net/projects/e1000
+If an issue is identified with the released source code on a supported kernel
+with a supported adapter, email the specific information related to the issue
+to e1000-devel@lists.sf.net.


@@ -154,7 +154,7 @@ NOTE: When e1000 is loaded with default settings and multiple adapters
 are in use simultaneously, the CPU utilization may increase non-
 linearly. In order to limit the CPU utilization without impacting
 the overall throughput, we recommend that you load the driver as
-follows:
+follows::
 modprobe e1000 InterruptThrottleRate=3000,3000,3000
@@ -167,8 +167,8 @@ NOTE: When e1000 is loaded with default settings and multiple adapters
 RxDescriptors
 -------------
-Valid Range: 80-256 for 82542 and 82543-based adapters
-             80-4096 for all other supported adapters
+Valid Range: 48-256 for 82542 and 82543-based adapters
+             48-4096 for all other supported adapters
 Default Value: 256
 This value specifies the number of receive buffer descriptors allocated
@@ -230,8 +230,8 @@ speed. Duplex should also be set when Speed is set to either 10 or 100.
 TxDescriptors
 -------------
-Valid Range: 80-256 for 82542 and 82543-based adapters
-             80-4096 for all other supported adapters
+Valid Range: 48-256 for 82542 and 82543-based adapters
+             48-4096 for all other supported adapters
 Default Value: 256
 This value is the number of transmit descriptors allocated by the driver.
@@ -242,41 +242,10 @@ NOTE: Depending on the available system resources, the request for a
 higher number of transmit descriptors may be denied. In this case,
 use a lower number.
-TxDescriptorStep
-----------------
-Valid Range: 1 (use every Tx Descriptor)
-             4 (use every 4th Tx Descriptor)
-Default Value: 1 (use every Tx Descriptor)
-On certain non-Intel architectures, it has been observed that intense TX
-traffic bursts of short packets may result in an improper descriptor
-writeback. If this occurs, the driver will report a "TX Timeout" and reset
-the adapter, after which the transmit flow will restart, though data may
-have stalled for as much as 10 seconds before it resumes.
-The improper writeback does not occur on the first descriptor in a system
-memory cache-line, which is typically 32 bytes, or 4 descriptors long.
-Setting TxDescriptorStep to a value of 4 will ensure that all TX descriptors
-are aligned to the start of a system memory cache line, and so this problem
-will not occur.
-NOTES: Setting TxDescriptorStep to 4 effectively reduces the number of
-       TxDescriptors available for transmits to 1/4 of the normal allocation.
-       This has a possible negative performance impact, which may be
-       compensated for by allocating more descriptors using the TxDescriptors
-       module parameter.
-       There are other conditions which may result in "TX Timeout", which will
-       not be resolved by the use of the TxDescriptorStep parameter. As the
-       issue addressed by this parameter has never been observed on Intel
-       Architecture platforms, it should not be used on Intel platforms.
 TxIntDelay
 ----------
 Valid Range: 0-65535 (0=off)
-Default Value: 64
+Default Value: 8
 This value delays the generation of transmit interrupts in units of
 1.024 microseconds. Transmit interrupt reduction can improve CPU
@@ -288,7 +257,7 @@ TxAbsIntDelay
 -------------
 (This parameter is supported only on 82540, 82545 and later adapters.)
 Valid Range: 0-65535 (0=off)
-Default Value: 64
+Default Value: 32
 This value, in units of 1.024 microseconds, limits the delay in which a
 transmit interrupt is generated. Useful only if TxIntDelay is non-zero,
@@ -310,7 +279,7 @@ Copybreak
 ---------
 Valid Range: 0-xxxxxxx (0=off)
 Default Value: 256
-Usage: insmod e1000.ko copybreak=128
+Usage: modprobe e1000.ko copybreak=128
 Driver copies all packets below or equaling this size to a fresh RX
 buffer before handing it up the stack.
@@ -328,14 +297,6 @@ Default Value: 0 (disabled)
 Allows PHY to turn off in lower power states. The user can turn off
 this parameter in supported chipsets.
-KumeranLockLoss
----------------
-Valid Range: 0-1
-Default Value: 1 (enabled)
-This workaround skips resetting the PHY at shutdown for the initial
-silicon releases of ICH8 systems.
 Speed and Duplex Configuration
 ==============================
@@ -397,12 +358,12 @@ Additional Configurations
 ------------
 Jumbo Frames support is enabled by changing the MTU to a value larger than
 the default of 1500. Use the ifconfig command to increase the MTU size.
-For example:
+For example::
 ifconfig eth<x> mtu 9000 up
 This setting is not saved across reboots. It can be made permanent if
-you add:
+you add::
 MTU=9000


@@ -10,6 +10,8 @@ Contents:
 batman-adv
 can
 dpaa2/index
+e100
+e1000
 kapi
 z8530book
 msg_zerocopy


@@ -7089,8 +7089,8 @@ Q: http://patchwork.ozlabs.org/project/intel-wired-lan/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue.git
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git
 S: Supported
-F: Documentation/networking/e100.txt
-F: Documentation/networking/e1000.txt
+F: Documentation/networking/e100.rst
+F: Documentation/networking/e1000.rst
 F: Documentation/networking/e1000e.txt
 F: Documentation/networking/igb.txt
 F: Documentation/networking/igbvf.txt


@@ -3527,15 +3527,12 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)
         }
         break;
     case e1000_pch_spt:
-        if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
-            /* Stable 24MHz frequency */
-            incperiod = INCPERIOD_24MHZ;
-            incvalue = INCVALUE_24MHZ;
-            shift = INCVALUE_SHIFT_24MHZ;
-            adapter->cc.shift = shift;
-            break;
-        }
-        return -EINVAL;
+        /* Stable 24MHz frequency */
+        incperiod = INCPERIOD_24MHZ;
+        incvalue = INCVALUE_24MHZ;
+        shift = INCVALUE_SHIFT_24MHZ;
+        adapter->cc.shift = shift;
+        break;
     case e1000_pch_cnp:
         if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
             /* Stable 24MHz frequency */


@@ -2032,6 +2032,21 @@ static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,
 #if L1_CACHE_BYTES < 128
     prefetch(xdp->data + L1_CACHE_BYTES);
 #endif
+    /* Note, we get here by enabling legacy-rx via:
+     *
+     *    ethtool --set-priv-flags <dev> legacy-rx on
+     *
+     * In this mode, we currently get 0 extra XDP headroom as
+     * opposed to having legacy-rx off, where we process XDP
+     * packets going to stack via i40e_build_skb(). The latter
+     * provides us currently with 192 bytes of headroom.
+     *
+     * For i40e_construct_skb() mode it means that the
+     * xdp->data_meta will always point to xdp->data, since
+     * the helper cannot expand the head. Should this ever
+     * change in future for legacy-rx mode on, then lets also
+     * add xdp->data_meta handling here.
+     */
     /* allocate a skb to store the frags */
     skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
@@ -2083,19 +2098,25 @@ static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring,
                                       struct i40e_rx_buffer *rx_buffer,
                                       struct xdp_buff *xdp)
 {
-    unsigned int size = xdp->data_end - xdp->data;
+    unsigned int metasize = xdp->data - xdp->data_meta;
 #if (PAGE_SIZE < 8192)
     unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2;
 #else
     unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
-                            SKB_DATA_ALIGN(I40E_SKB_PAD + size);
+                            SKB_DATA_ALIGN(I40E_SKB_PAD +
+                                           (xdp->data_end -
+                                            xdp->data_hard_start));
 #endif
     struct sk_buff *skb;
-    /* prefetch first cache line of first page */
-    prefetch(xdp->data);
+    /* Prefetch first cache line of first page. If xdp->data_meta
+     * is unused, this points exactly as xdp->data, otherwise we
+     * likely have a consumer accessing first few bytes of meta
+     * data, and then actual data.
+     */
+    prefetch(xdp->data_meta);
 #if L1_CACHE_BYTES < 128
-    prefetch(xdp->data + L1_CACHE_BYTES);
+    prefetch(xdp->data_meta + L1_CACHE_BYTES);
 #endif
     /* build an skb around the page buffer */
     skb = build_skb(xdp->data_hard_start, truesize);
@@ -2103,8 +2124,10 @@ static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring,
         return NULL;
     /* update pointers within the skb to store the data */
-    skb_reserve(skb, I40E_SKB_PAD);
-    __skb_put(skb, size);
+    skb_reserve(skb, I40E_SKB_PAD + (xdp->data - xdp->data_hard_start));
+    __skb_put(skb, xdp->data_end - xdp->data);
+    if (metasize)
+        skb_metadata_set(skb, metasize);
     /* buffer is used by skb, update page_offset */
 #if (PAGE_SIZE < 8192)
@@ -2341,7 +2364,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
         if (!skb) {
             xdp.data = page_address(rx_buffer->page) +
                        rx_buffer->page_offset;
-            xdp_set_data_meta_invalid(&xdp);
+            xdp.data_meta = xdp.data;
             xdp.data_hard_start = xdp.data -
                                   i40e_rx_offset(rx_ring);
             xdp.data_end = xdp.data + size;


@@ -2058,6 +2058,7 @@ int igb_up(struct igb_adapter *adapter)
     igb_assign_vector(adapter->q_vector[0], 0);
     /* Clear any pending interrupts. */
+    rd32(E1000_TSICR);
     rd32(E1000_ICR);
     igb_irq_enable(adapter);
@@ -3865,6 +3866,7 @@ static int __igb_open(struct net_device *netdev, bool resuming)
         napi_enable(&(adapter->q_vector[i]->napi));
     /* Clear any pending interrupts. */
+    rd32(E1000_TSICR);
     rd32(E1000_ICR);
     igb_irq_enable(adapter);
@@ -4053,11 +4055,6 @@ void igb_configure_tx_ring(struct igb_adapter *adapter,
     u64 tdba = ring->dma;
     int reg_idx = ring->reg_idx;
-    /* disable the queue */
-    wr32(E1000_TXDCTL(reg_idx), 0);
-    wrfl();
-    mdelay(10);
     wr32(E1000_TDLEN(reg_idx),
          ring->count * sizeof(union e1000_adv_tx_desc));
     wr32(E1000_TDBAL(reg_idx),
@@ -4088,8 +4085,16 @@ void igb_configure_tx_ring(struct igb_adapter *adapter,
 **/
 static void igb_configure_tx(struct igb_adapter *adapter)
 {
+    struct e1000_hw *hw = &adapter->hw;
     int i;
+    /* disable the queues */
+    for (i = 0; i < adapter->num_tx_queues; i++)
+        wr32(E1000_TXDCTL(adapter->tx_ring[i]->reg_idx), 0);
+    wrfl();
+    usleep_range(10000, 20000);
     for (i = 0; i < adapter->num_tx_queues; i++)
         igb_configure_tx_ring(adapter, adapter->tx_ring[i]);
 }


@@ -10,15 +10,9 @@ static struct dentry *ixgbe_dbg_root;
 static char ixgbe_dbg_reg_ops_buf[256] = "";
-/**
- * ixgbe_dbg_reg_ops_read - read for reg_ops datum
- * @filp: the opened file
- * @buffer: where to write the data for the user to read
- * @count: the size of the user's buffer
- * @ppos: file position offset
- **/
-static ssize_t ixgbe_dbg_reg_ops_read(struct file *filp, char __user *buffer,
-                                      size_t count, loff_t *ppos)
+static ssize_t ixgbe_dbg_common_ops_read(struct file *filp, char __user *buffer,
+                                         size_t count, loff_t *ppos,
+                                         char *dbg_buf)
 {
     struct ixgbe_adapter *adapter = filp->private_data;
     char *buf;
@@ -29,8 +23,7 @@ static ssize_t ixgbe_dbg_reg_ops_read(struct file *filp, char __user *buffer,
         return 0;
     buf = kasprintf(GFP_KERNEL, "%s: %s\n",
-                    adapter->netdev->name,
-                    ixgbe_dbg_reg_ops_buf);
+                    adapter->netdev->name, dbg_buf);
     if (!buf)
         return -ENOMEM;
@@ -45,6 +38,20 @@ static ssize_t ixgbe_dbg_reg_ops_read(struct file *filp, char __user *buffer,
     return len;
 }
+/**
+ * ixgbe_dbg_reg_ops_read - read for reg_ops datum
+ * @filp: the opened file
+ * @buffer: where to write the data for the user to read
+ * @count: the size of the user's buffer
+ * @ppos: file position offset
+ **/
+static ssize_t ixgbe_dbg_reg_ops_read(struct file *filp, char __user *buffer,
+                                      size_t count, loff_t *ppos)
+{
+    return ixgbe_dbg_common_ops_read(filp, buffer, count, ppos,
+                                     ixgbe_dbg_reg_ops_buf);
+}
 /**
  * ixgbe_dbg_reg_ops_write - write into reg_ops datum
  * @filp: the opened file
@@ -121,33 +128,11 @@ static char ixgbe_dbg_netdev_ops_buf[256] = "";
  * @count: the size of the user's buffer
  * @ppos: file position offset
 **/
-static ssize_t ixgbe_dbg_netdev_ops_read(struct file *filp,
-                                         char __user *buffer,
+static ssize_t ixgbe_dbg_netdev_ops_read(struct file *filp, char __user *buffer,
                                          size_t count, loff_t *ppos)
 {
-    struct ixgbe_adapter *adapter = filp->private_data;
-    char *buf;
-    int len;
-    /* don't allow partial reads */
-    if (*ppos != 0)
-        return 0;
-    buf = kasprintf(GFP_KERNEL, "%s: %s\n",
-                    adapter->netdev->name,
-                    ixgbe_dbg_netdev_ops_buf);
-    if (!buf)
-        return -ENOMEM;
-    if (count < strlen(buf)) {
-        kfree(buf);
-        return -ENOSPC;
-    }
-    len = simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
-    kfree(buf);
-    return len;
+    return ixgbe_dbg_common_ops_read(filp, buffer, count, ppos,
+                                     ixgbe_dbg_netdev_ops_buf);
 }
 /**


@@ -444,6 +444,89 @@ static int ixgbe_ipsec_parse_proto_keys(struct xfrm_state *xs,
     return 0;
 }
+/**
+ * ixgbe_ipsec_check_mgmt_ip - make sure there is no clash with mgmt IP filters
+ * @xs: pointer to transformer state struct
+ **/
+static int ixgbe_ipsec_check_mgmt_ip(struct xfrm_state *xs)
+{
+    struct net_device *dev = xs->xso.dev;
+    struct ixgbe_adapter *adapter = netdev_priv(dev);
+    struct ixgbe_hw *hw = &adapter->hw;
+    u32 mfval, manc, reg;
+    int num_filters = 4;
+    bool manc_ipv4;
+    u32 bmcipval;
+    int i, j;
+#define MANC_EN_IPV4_FILTER       BIT(24)
+#define MFVAL_IPV4_FILTER_SHIFT   16
+#define MFVAL_IPV6_FILTER_SHIFT   24
+#define MIPAF_ARR(_m, _n)         (IXGBE_MIPAF + ((_m) * 0x10) + ((_n) * 4))
+#define IXGBE_BMCIP(_n)           (0x5050 + ((_n) * 4))
+#define IXGBE_BMCIPVAL            0x5060
+#define BMCIP_V4                  0x2
+#define BMCIP_V6                  0x3
+#define BMCIP_MASK                0x3
+    manc = IXGBE_READ_REG(hw, IXGBE_MANC);
+    manc_ipv4 = !!(manc & MANC_EN_IPV4_FILTER);
+    mfval = IXGBE_READ_REG(hw, IXGBE_MFVAL);
+    bmcipval = IXGBE_READ_REG(hw, IXGBE_BMCIPVAL);
+    if (xs->props.family == AF_INET) {
+        /* are there any IPv4 filters to check? */
+        if (manc_ipv4) {
+            /* the 4 ipv4 filters are all in MIPAF(3, i) */
+            for (i = 0; i < num_filters; i++) {
+                if (!(mfval & BIT(MFVAL_IPV4_FILTER_SHIFT + i)))
+                    continue;
+                reg = IXGBE_READ_REG(hw, MIPAF_ARR(3, i));
+                if (reg == xs->id.daddr.a4)
+                    return 1;
+            }
+        }
+        if ((bmcipval & BMCIP_MASK) == BMCIP_V4) {
+            reg = IXGBE_READ_REG(hw, IXGBE_BMCIP(3));
+            if (reg == xs->id.daddr.a4)
+                return 1;
+        }
+    } else {
+        /* if there are ipv4 filters, they are in the last ipv6 slot */
+        if (manc_ipv4)
+            num_filters = 3;
+        for (i = 0; i < num_filters; i++) {
+            if (!(mfval & BIT(MFVAL_IPV6_FILTER_SHIFT + i)))
+                continue;
+            for (j = 0; j < 4; j++) {
+                reg = IXGBE_READ_REG(hw, MIPAF_ARR(i, j));
+                if (reg != xs->id.daddr.a6[j])
+                    break;
+            }
+            if (j == 4)   /* did we match all 4 words? */
+                return 1;
+        }
+        if ((bmcipval & BMCIP_MASK) == BMCIP_V6) {
+            for (j = 0; j < 4; j++) {
+                reg = IXGBE_READ_REG(hw, IXGBE_BMCIP(j));
+                if (reg != xs->id.daddr.a6[j])
+                    break;
+            }
+            if (j == 4)   /* did we match all 4 words? */
+                return 1;
+        }
+    }
+    return 0;
+}
 /**
  * ixgbe_ipsec_add_sa - program device with a security association
  * @xs: pointer to transformer state struct
@@ -465,6 +548,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
         return -EINVAL;
     }
+    if (ixgbe_ipsec_check_mgmt_ip(xs)) {
+        netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
+        return -EINVAL;
+    }
     if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
         struct rx_sa rsa;
@@ -575,7 +663,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
         /* hash the new entry for faster search in Rx path */
         hash_add_rcu(ipsec->rx_sa_list, &ipsec->rx_tbl[sa_idx].hlist,
-                     (__force u64)rsa.xs->id.spi);
+                     (__force u32)rsa.xs->id.spi);
     } else {
         struct tx_sa tsa;


@@ -7621,17 +7621,19 @@ static void ixgbe_reset_subtask(struct ixgbe_adapter *adapter)
     if (!test_and_clear_bit(__IXGBE_RESET_REQUESTED, &adapter->state))
         return;
+    rtnl_lock();
     /* If we're already down, removing or resetting, just bail */
     if (test_bit(__IXGBE_DOWN, &adapter->state) ||
         test_bit(__IXGBE_REMOVING, &adapter->state) ||
-        test_bit(__IXGBE_RESETTING, &adapter->state))
+        test_bit(__IXGBE_RESETTING, &adapter->state)) {
+        rtnl_unlock();
         return;
+    }
     ixgbe_dump(adapter);
     netdev_err(adapter->netdev, "Reset adapter\n");
     adapter->tx_timeout_count++;
-    rtnl_lock();
     ixgbe_reinit_locked(adapter);
     rtnl_unlock();
 }


@@ -76,6 +76,7 @@ enum ixgbevf_ring_state_t {
     __IXGBEVF_TX_DETECT_HANG,
     __IXGBEVF_HANG_CHECK_ARMED,
     __IXGBEVF_TX_XDP_RING,
+    __IXGBEVF_TX_XDP_RING_PRIMED,
 };
 #define ring_is_xdp(ring) \


@@ -991,24 +991,45 @@ static int ixgbevf_xmit_xdp_ring(struct ixgbevf_ring *ring,
         return IXGBEVF_XDP_CONSUMED;
     /* record the location of the first descriptor for this packet */
-    tx_buffer = &ring->tx_buffer_info[ring->next_to_use];
-    tx_buffer->bytecount = len;
-    tx_buffer->gso_segs = 1;
-    tx_buffer->protocol = 0;
     i = ring->next_to_use;
-    tx_desc = IXGBEVF_TX_DESC(ring, i);
+    tx_buffer = &ring->tx_buffer_info[i];
     dma_unmap_len_set(tx_buffer, len, len);
     dma_unmap_addr_set(tx_buffer, dma, dma);
     tx_buffer->data = xdp->data;
-    tx_desc->read.buffer_addr = cpu_to_le64(dma);
+    tx_buffer->bytecount = len;
+    tx_buffer->gso_segs = 1;
+    tx_buffer->protocol = 0;
+    /* Populate minimal context descriptor that will provide for the
+     * fact that we are expected to process Ethernet frames.
+     */
+    if (!test_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state)) {
+        struct ixgbe_adv_tx_context_desc *context_desc;
+        set_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state);
+        context_desc = IXGBEVF_TX_CTXTDESC(ring, 0);
+        context_desc->vlan_macip_lens =
+            cpu_to_le32(ETH_HLEN << IXGBE_ADVTXD_MACLEN_SHIFT);
+        context_desc->seqnum_seed = 0;
+        context_desc->type_tucmd_mlhl =
+            cpu_to_le32(IXGBE_TXD_CMD_DEXT |
+                        IXGBE_ADVTXD_DTYP_CTXT);
+        context_desc->mss_l4len_idx = 0;
+        i = 1;
+    }
     /* put descriptor type bits */
     cmd_type = IXGBE_ADVTXD_DTYP_DATA |
                IXGBE_ADVTXD_DCMD_DEXT |
                IXGBE_ADVTXD_DCMD_IFCS;
     cmd_type |= len | IXGBE_TXD_CMD;
+    tx_desc = IXGBEVF_TX_DESC(ring, i);
+    tx_desc->read.buffer_addr = cpu_to_le64(dma);
     tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
     tx_desc->read.olinfo_status =
         cpu_to_le32((len << IXGBE_ADVTXD_PAYLEN_SHIFT) |
@@ -1688,6 +1709,7 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter,
            sizeof(struct ixgbevf_tx_buffer) * ring->count);
     clear_bit(__IXGBEVF_HANG_CHECK_ARMED, &ring->state);
+    clear_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state);
     IXGBE_WRITE_REG(hw, IXGBE_VFTXDCTL(reg_idx), txdctl);
@@ -3119,15 +3141,17 @@ static void ixgbevf_reset_subtask(struct ixgbevf_adapter *adapter)
     if (!test_and_clear_bit(__IXGBEVF_RESET_REQUESTED, &adapter->state))
         return;
+    rtnl_lock();
     /* If we're already down or resetting, just bail */
     if (test_bit(__IXGBEVF_DOWN, &adapter->state) ||
         test_bit(__IXGBEVF_REMOVING, &adapter->state) ||
-        test_bit(__IXGBEVF_RESETTING, &adapter->state))
+        test_bit(__IXGBEVF_RESETTING, &adapter->state)) {
+        rtnl_unlock();
         return;
+    }
     adapter->tx_timeout_count++;
-    rtnl_lock();
     ixgbevf_reinit_locked(adapter);
     rtnl_unlock();
 }