Some of the tx_ring arguments can be deleted since they are not used.
Change-ID: I99275b0f191d7f63ec2f05061919904940c36f31
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
A local variable can be moved down into the block where it is used.
Change-ID: I9caba9e1eacf921037077f2665cbce83fd8e95d6
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch moves the HW flush routine to the end of the reset flow,
after the writes to the device's VFLR registers have completed - the
benefit is to avoid problems in the passthrough routines.
Change-ID: Ieb56866f21895e6c1fc514b7328c3df79807a57c
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Don't set our internal debug_mask at startup unless we get a specific
signal to do so from the debug module parameter.
This should take care of the issue where all the device capabilities
were getting printed even when we hadn't asked for the debug info.
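Roughly, the intent looks like this (a sketch, not the exact driver code;
the default of -1 meaning "unset" is the usual convention for the debug
module parameter):

  /* module parameter; the default of -1 means "not set by the user" */
  static int debug = -1;
  module_param(debug, int, 0);

  /* at probe time: derive the netdev message level as before, but only
   * touch the internal debug_mask when the user explicitly asked for it
   */
  pf->msg_enable = netif_msg_init(debug, NETIF_MSG_DRV | NETIF_MSG_PROBE |
                                         NETIF_MSG_LINK);
  if (debug != -1)
          pf->hw.debug_mask = debug;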
Change-ID: I7fbc6bd8b11ed9b0631ec018ff36015a04100b6c
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Implement the appropriate DCB ops and allow a user to configure certain
traffic classes as lossless.
The operation configures PFC for both the egress (respecting PFC frames)
and ingress (sending PFC frames) parts of the port.
At egress, when a PFC frame is received for a PFC-enabled priority, all
the priorities mapped to the same TC are stopped.
At ingress, the priority group (PG) buffers to which the enabled PFC
priorities are mapped are configured to be lossless. PFC frames will be
transmitted when the Xoff threshold is crossed.
The user-supplied delay parameter is used to determine the PG's size
according to the following formula:
PG_SIZE = PG_SIZE_LOSSY + delay * CELL_FACTOR + MTU
In the worst case scenario the delay will be made up of packets that
are all of size CELL_SIZE + 1, which means each packet will require
almost twice its true size when buffered in the switch. We therefore
multiply this value by the "cell factor", which is close to 2.
Another MTU is added in case the transmitting host already started
transmitting a maximum length frame when the PFC packet was received.
As with PAUSE enabled ports, when the port's MTU is changed both the
PGs' size and threshold are adjusted accordingly.
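As a self-contained sketch of the sizing math (the 96-byte cell size is
an assumption for illustration; CELL_FACTOR is ~2 as explained above):

  #define CELL_SIZE       96      /* assumed buffer cell size in bytes */
  #define CELL_FACTOR     2       /* worst case: CELL_SIZE + 1 sized packets */
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  /* PG_SIZE = PG_SIZE_LOSSY + delay * CELL_FACTOR + MTU, in cells */
  static unsigned int pg_size_cells(unsigned int pg_size_lossy_cells,
                                    unsigned int delay_bytes,
                                    unsigned int mtu)
  {
          return pg_size_lossy_cells +
                 DIV_ROUND_UP(delay_bytes, CELL_SIZE) * CELL_FACTOR +
                 DIV_ROUND_UP(mtu, CELL_SIZE);
  }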
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We are going to add support for PFC as part of DCB ops, which requires us
to report the number of PFC frames sent and received per priority.
Add per-priority counters for this purpose.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a packet ingresses the switch it's placed in its assigned priority
group (PG) buffer in the port's headroom buffer while it goes through
the switch's pipeline. After going through the pipeline - which
determines its egress port(s) and traffic class - it's moved to the
switch's shared buffer awaiting transmission.
However, some packets are not eligible to enter the shared buffer due to
exceeded quotas or insufficient space. Marking their associated PGs as
lossless will cause the packets to accumulate in the PG buffer. Another
reason for packet accumulation is a complicated pipeline (e.g. one
involving a lot of ACLs).
To prevent packets from being dropped a user can enable PAUSE frames on
the port. This will mark all the active PGs as lossless and set their
size according to the maximum delay, as it's not configured by the user.
+----------------+ +
| | |
| | |
| | |
| | |
| | |
| | | Delay
| | |
| | |
| | |
| | |
| | |
Xon/Xoff threshold +----------------+ +
| | |
| | | 2 * MTU
| | |
+----------------+ +
The delay (612 cells) was calculated according to a worst-case scenario
involving maximum MTU and 100m cables.
After marking the PGs as lossless the device is configured to respect
incoming PAUSE frames (Rx PAUSE) and generate PAUSE frames (Tx PAUSE)
according to the user's settings.
Whenever the port's headroom configuration changes we take into account
the PAUSE configuration, so that we correctly set the PG's type (lossy /
lossless), size and threshold. This can happen when:
a) The port's MTU changes, as it directly affects the PG's size.
b) A PG is created following user configuration, by binding a priority
to it.
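Schematically, the per-PG recompute can be sketched as follows (names and
the helper split are illustrative, not the driver's; 612 is the fixed
worst-case delay mentioned above):

  #define PAUSE_DELAY_CELLS       612     /* fixed worst-case delay in cells */

  static void pg_buf_configure(unsigned int *size_cells, bool *lossless,
                               unsigned int mtu_cells, bool pause_en)
  {
          /* the Xon/Xoff threshold sits 2 * MTU below the top of the PG */
          *size_cells = 2 * mtu_cells;
          *lossless = pause_en;
          if (pause_en)
                  /* room for the round-trip delay above the threshold */
                  *size_cells += PAUSE_DELAY_CELLS;
  }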
Note that the relevant SUPPORTED flags were already mistakenly set by
the driver before this commit.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When configuring PAUSE frames and PFC we'll need to configure the
Xon/Xoff threshold for the priority group (PG) buffers.
Add the Xon/Xoff threshold fields to the PBMC register so that we can
configure these when needed.
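A sketch of how the new fields could be declared with the driver's indexed
register item macro (the offsets, widths and steps here are assumptions,
not copied from the hardware spec):

  /* per-buffer Xoff/Xon thresholds, in cells */
  MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xoff_threshold, 0x0C, 16, 16,
                       0x08, 0x04, false);
  MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xon_threshold, 0x0C, 0, 16,
                       0x08, 0x04, false);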
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the Port Flow Control Configuration (PFCC) register, which
configures both flow control and Priority-based Flow Control (PFC).
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow a user to set maximum rate for a particular TC using DCB ops.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Implement the appropriate DCB ops and allow a user to configure:
* Priority to traffic class (TC) mapping with a total of 8
supported TCs
* Transmission selection algorithm (TSA) for each TC and the
corresponding weights in case of weighted round robin (WRR)
As previously explained, we treat the priority group (PG) buffer in the
port's headroom as the ingress counterpart of the egress TC. Therefore,
when a certain priority to TC mapping is configured, we also configure
the port's headroom buffer.
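The op's shape, using the standard dcbnl structures (the validation
details below are assumptions; device programming is omitted):

  static int port_ieee_setets(struct net_device *dev, struct ieee_ets *ets)
  {
          int i;

          /* only strict priority and ETS (WRR) are handled in this sketch */
          for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
                  if (ets->tc_tsa[i] != IEEE_8021QAZ_TSA_STRICT &&
                      ets->tc_tsa[i] != IEEE_8021QAZ_TSA_ETS)
                          return -EOPNOTSUPP;
          }

          /* ets->prio_tc[] holds the priority to TC mapping; the same
           * mapping is also applied to the ingress PG buffers, as
           * explained above
           */
          return 0;
  }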
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce basic infrastructure for DCB and add the missing ops in
following patches.
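At its core this means providing a dcbnl_rtnl_ops and hooking it up to the
netdev; a minimal sketch (the op list is illustrative):

  static const struct dcbnl_rtnl_ops port_dcbnl_ops = {
          .ieee_getets    = port_ieee_getets,
          .ieee_setets    = port_ieee_setets,
          /* further 802.1Qaz ops are filled in by the following patches */
  };

  /* during port creation */
  dev->dcbnl_ops = &port_dcbnl_ops;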
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Before introducing support for DCB ops we should first make sure we
initialize the relevant parts in the device correctly. Specifically, the
egress scheduling.
The device supports a superset of the 802.1Qaz standard with 4 hierarchy
levels that can be linked to each other in multiple ways and with
different transmission selection algorithms (TSA) employed between them.
However, since we only intend to support the 802.1Qaz standard we
flatten the hierarchies and let the user configure via DCB ops the TSA
and max rate shaper at the subgroup hierarchy (see figure below) and the
mapping between switch priority to traffic class. By default, all switch
priorities are mapped to traffic class 0, strict priority is employed
and max shaper is disabled.
Default configuration:
switch priority 0 ... switch priority 7
+ +
| |
+----------------------------------+
|
+--v--+ +-----+
Traffic Class | | | |
Hierarchy | TC0 | ... | TC7 |
| | | |
+--+--+ +--+--+
| |
+--v--+ +--v--+
Subgroup | SG0 | | SG7 |
Hierarchy | | | |
+-----+ +-----+
| TSA | | TSA |
+-----+ ... +-----+
| MAX | | MAX |
+--+--+ +--+--+
| |
+---------------+----------------+
|
+--v--+
Group | |
Hierarchy | GR0 |
| |
+--+--+
|
+--v--+
Port | |
Hierarchy | PR0 |
| |
+-----+
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As part of DCB ops we'll have to configure the priority to traffic class
mapping of a port.
Add the QoS Switch Traffic Class Table (QTCT) register, which configures
the mapping between the packet switch priority and traffic class on the
transmit port.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We are going to introduce support for DCB, so we need to be able to
configure the transmission selection algorithm (TSA) used by each traffic
class (TC), as well as the bandwidth percentage allocated to each TC in
case of ETS.
Add the QoS ETS Element Configuration register, which controls the
above parameters.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In addition to the priority group (PG) buffers in the headroom, the
device enables the allocation of a headroom shared buffer, which can
be shared between different PGs.
However, we are not going to use the headroom shared buffer and instead
allow the user to use its size for PGs or the switch's shared buffer.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The last field of the PBMC register is at offset 0x64 and its size is
0x8, so the register's correct length is 0x6C bytes.
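That is, assuming the driver's usual naming convention for register
lengths:

  #define MLXSW_REG_PBMC_LEN      0x6C    /* 0x64 + 0x8 */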
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When packets ingress the switch they are assigned a switch priority and
directed to the corresponding priority group (PG) buffer in the port's
headroom buffer.
Since we now map all switch priorities to priority group 0 (PG0) by
default, there is no need to allocate the other priority groups during
initialization. The only exception is PG9, which is used for control
traffic.
At minimum, the PG should be able to store the currently classified
packet (pipeline latency isn't 0) and also the packets arriving during
the classification time. However, an incoming packet will not be
buffered if there is no available MTU-sized buffer space for storing it.
The buffer needed to accommodate the pipeline latency is variable and
needs to take into account both the current link speed and current
latency of the pipeline, which is time-dependent. Testing showed that
setting the PG's size to twice the current MTU is optimal.
Since PG9 is used strictly for control packets and not subject to flow
control, we are not going to resize it according to user configuration,
so we simply set it according to the worst-case scenario, which is twice the
maximum MTU.
In any case, later patches in the series will allow a user to direct
lossless flows to PGs other than PG0 and set their size to accommodate
the round-trip propagation delay.
The above change also requires us to resize the PG buffer whenever the
port's MTU is changed.
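A rough sketch of the MTU-change path under these rules (the helper and
the cell size are placeholders, not the driver's):

  /* hypothetical helper: program the size (in cells) of one PG buffer in
   * the port's headroom, e.g. via the PBMC register
   */
  int port_headroom_pg_size_set(struct net_device *dev, int pg, u32 cells);

  #define CELL_SIZE       96      /* assumed buffer cell size in bytes */

  static int port_mtu_pg_resize(struct net_device *dev, int mtu)
  {
          /* PG0 tracks the MTU: twice the current MTU.  PG9 (control
           * traffic) is sized once to twice the maximum MTU and is never
           * resized here.
           */
          u32 pg0_cells = DIV_ROUND_UP(2 * mtu, CELL_SIZE);

          return port_headroom_pg_size_set(dev, 0, pg0_cells);
  }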
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Buffers in the switch store packets in units called buffer cells. Add a
helper to convert from bytes to cells, so that the actual number of
cells required (the result is rounded up) is returned.
Also, drop the SB (shared buffer) acronym from the BYTES_PER_CELL macro,
as this unit is also used in the ports' buffers and not only the
switch's shared buffer.
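The helper boils down to a rounded-up division; a sketch (the cell size
value is an assumption):

  #define MLXSW_SP_BYTES_PER_CELL 96      /* assumed cell size in bytes */

  /* number of cells needed to hold 'b' bytes, rounded up */
  #define MLXSW_SP_BYTES_TO_CELLS(b) \
          DIV_ROUND_UP(b, MLXSW_SP_BYTES_PER_CELL)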
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
During transmission, the skb's priority is used to map the skb to a
traffic class, where the idea is to group priorities with similar
characteristics (e.g. lossy, lossless) to the same traffic class. By
default, all priorities are mapped to traffic class 0.
In the device, we model the skb's priority as the switch priority, which
is assigned to a packet according to its PCP value and ingress port
(untagged packets are assigned the port's default switch priority - 0).
At ingress, the packet is directed to a priority group (PG) buffer in
the port's headroom buffer according to the packet's switch priority and
switch priority to buffer mapping.
While it's possible to configure the egress mapping between skb's
priority (switch priority) and traffic class, there is no mechanism to
configure the ingress mapping to a PG.
In order to keep things simple and since grouping certain priorities into
a traffic class at egress also implies they should be grouped the same
at ingress, treat a PG as the ingress counterpart of an egress traffic
class.
Having established the above, during initialization map all the switch
priorities to PG0 in accordance with the Linux defaults for traffic
class mapping.
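The initialization itself is just a loop over the eight switch priorities
(the pack helper below is hypothetical, modeled on the PPTB register used
for this mapping):

  /* hypothetical pack helper for the PPTB register payload */
  void pptb_prio_to_buff_set(char *pptb_pl, u8 prio, u8 pg);

  /* map all eight switch priorities to PG0, mirroring the Linux default
   * of mapping all priorities to traffic class 0
   */
  static void port_prio_to_pg_init(char *pptb_pl)
  {
          u8 prio;

          for (prio = 0; prio < 8; prio++)
                  pptb_prio_to_buff_set(pptb_pl, prio, 0);
  }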
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When packets ingress the switch they are assigned a switch priority
number that dictates the packet's priority group (PG) buffer in the
port's headroom buffer.
Add the Port Prio To Buffer (PPTB) register, which configures the switch
priority to PG mapping.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2016-04-05
This series contains updates to i40e and i40evf only.
Colin Ian King cleaned up a redundant NULL check which was found by static
analysis.
Anjali enables geneve receive offload for XL710/X710 devices.
Mitch cleans up an unused variable in i40e_vc_get_vf_resources_msg().
Fixed the driver to actually be able to adjust VLAN tagging features
through ethtool, as expected. Fixed a problem where VF resets would
get lost by the PF preventing the VF driver from initializing. Also
put users' minds at ease by lowering some message levels, since many of
these conditions can happen any time VFs are enabled or disabled and
are not really indicative of fatal problems unless they happen
continuously.
Shannon disables the link polling to lessen the admin queue traffic
especially since the link event mask usage has been fixed recently.
Alex Duyck fixes the i40e and i40evf drivers to correctly update
checksums for frames up to 16776960 bytes in length, which should be more
than large enough for all possible TSO frames in the near future.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Implement VXLAN-GPE. Only COLLECT_METADATA is supported for now (it is
possible to support static configuration, too, if there is demand for it).
The GPE header parsing has to be moved before iptunnel_pull_header, as we
need to know the protocol.
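Conceptually the GPE next-protocol field has to be translated into
skb->protocol before the rest of the receive path runs; a sketch
(constant names follow the VXLAN-GPE draft, not necessarily the kernel
headers):

  /* Next Protocol values from the VXLAN-GPE draft */
  #define GPE_NP_IPV4             0x01
  #define GPE_NP_IPV6             0x02
  #define GPE_NP_ETHERNET         0x03

  static __be16 gpe_next_protocol_to_eth(u8 next_protocol)
  {
          switch (next_protocol) {
          case GPE_NP_IPV4:
                  return htons(ETH_P_IP);
          case GPE_NP_IPV6:
                  return htons(ETH_P_IPV6);
          case GPE_NP_ETHERNET:
                  return htons(ETH_P_TEB);
          default:
                  return 0;       /* unknown protocol: drop the packet */
          }
  }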
v2: Removed what was called "L2 mode" in v1 of the patchset. Only "L3 mode"
(now called "raw mode") is added by this patch. This mode does not allow
Ethernet header to be encapsulated in VXLAN-GPE when using ip route to
specify the encapsulation, IP header is encapsulated instead. The patch
does support Ethernet to be encapsulated, though, using ETH_P_TEB in
skb->protocol. This will be utilized by other COLLECT_METADATA users
(openvswitch in particular).
If there is ever demand for Ethernet encapsulation with VXLAN-GPE using
ip route, it's easy to add a new flag switching the interface to
"Ethernet mode" (called "L2 mode" in v1 of this patchset). For now,
leave this out; it seems we don't need it.
Disallowed more flag combinations, especially RCO with GPE.
Added comment explaining that GBP and GPE cannot be set together.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Handle VXLAN_F_COLLECT_METADATA before VXLAN_F_PROXY. The latter does not
make sense with the former, as it needs a populated fdb, which does not
happen in metadata mode.
After this cleanup, the fdb code in vxlan_xmit is moved to a common location
and can be later skipped for VXLAN-GPE, which does not necessarily carry
an inner Ethernet header.
v2: changed commit description to not reference L3 mode
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This will allow initializing vxlan in ARPHRD_NONE mode based on the passed
rtnl attributes.
v2: renamed "l2mode" to "ether".
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Message level can be set through ethtool, so deprecate the module
parameter which is used to set the same.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With IPv4 and IPv6 now using the same format for checksums based on the
length of the frame we need to update the i40e and i40evf drivers so that
they correctly account for lengths greater than or equal to 64K.
With this patch the driver should now correctly update checksums for frames
up to 16776960 bytes in length, which should be more than large enough for all
possible TSO frames in the near future.
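The heart of the change is how the payload length is backed out of the L4
pseudo-header checksum; a rough sketch of the idea (not necessarily the
drivers' exact code):

  /* remove the payload length from the TCP pseudo-header checksum so the
   * hardware can insert the per-segment length; doing the adjustment on
   * a 32-bit value keeps it correct for payloads far beyond 64K
   */
  static void tso_fixup_l4_check(struct tcphdr *th, const struct sk_buff *skb,
                                 unsigned int l4_offset)
  {
          u32 paylen = skb->len - l4_offset;

          csum_replace_by_diff(&th->check, (__force __wsum)htonl(paylen));
  }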
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the Media Not Available flag to the link event mask. It seems
that event comes first if you have a DA cable pulled out, but there's no
follow-up event for Link Down; if you're not looking for MEDIA_NA you will
get no event, even though there's now no Link.
Change-ID: cb3340a2849805bb881f64f6f2ae810eef46eba7
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
These conditions can happen any time VFs are enabled or disabled and are
not really indicative of fatal problems unless they happen continuously.
Lower the log level so that people don't get scared.
Change-ID: I1ceb4adbd10d03cbeed54d1f5b7f20d60328351d
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
100baseT/Full is now listed as a supported link mode for the 10GBaseT PHY.
This is a fix to list all the supported link modes of the 10GBaseT PHY.
Change-ID: If2be3212ef0fef85fd5d6e4550c7783de2f915e9
Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We were passing in the seed where we should just be passing false,
because we want the VSI table, not the PF table.
Change-ID: I9b633ab06eb59468087f0c0af8539857e99f9495
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Periodic link polling was added when the link events were found not to be
trustworthy. This was the case early on, but was likely because the link
event mask was being used incorrectly. As this has been fixed in recent
code, we can disable the link polling to lessen the AQ traffic.
Change-ID: Id890b5ee3c2d04381fc76ffa434777644f5d8eb0
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Upon module remove, wait a little longer after requesting a reset before
checking to see if the firmware responded. This change prevents double
resets when the firmware is busy.
Change-ID: Ieedc988ee82fac1f32a074bf4d9e4dba426bfa58
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Clear the VFLR bit immediately after triggering a reset instead of
waiting until after cleanup is complete. Make sure to trigger a reset
every time, not just if the PF is up.
These changes fix a problem where VF resets would get lost by the PF,
preventing the VF driver from initializing.
Change-ID: I5945cf2884095b7b0554867c64df8617e71d9d29
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The new device ID is 0x37D3 and it should follow the same flows and
branding string as for 0x37D0.
Change-ID: Ia5ad4a1910268c4666a3fd46a7afffbec55b4fc2
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Users of ethtool were being given the mistaken impression that this
driver was able to change its VLAN tagging features, and were
disappointed that this was not actually the case. Implement the
ndo_fix_features method so that we can adjust these flags as needed to
avoid false impressions.
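The handler's shape (the exact policy shown is an assumption; the point is
keeping the advertised VLAN flags self-consistent):

  static netdev_features_t port_fix_features(struct net_device *dev,
                                             netdev_features_t features)
  {
          /* Rx and Tx VLAN tag offload are toggled together on this
           * hardware, so don't let ethtool report a half-applied change.
           */
          if (features & NETIF_F_HW_VLAN_CTAG_RX)
                  features |= NETIF_F_HW_VLAN_CTAG_TX;
          else
                  features &= ~NETIF_F_HW_VLAN_CTAG_TX;

          return features;
  }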
Change-ID: I08584f103a4fa73d6a4128d472e4ef44dcfda57f
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This variable is vestigial, a remnant of the primordial code from which
this driver spawned. We can safely remove it.
Change-ID: I24e0fe338e7c7c50d27dc5515564f33caefbb93a
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch enables the capability for XL710/X710 devices with an FW API
version higher than 1.4 to do geneve Rx offload.
Change-ID: I9a8f87772c48d7d67dc85e3701d2e0b845034c0b
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
active_vlans is an unsigned long array, hence a null check on this
array is superfluous and can be removed.
Detected with static analysis by smatch:
drivers/net/ethernet/intel/i40e/i40e_debugfs.c:386
i40e_dbg_dump_vsi_seid() warn: this array is probably
non-NULL. 'vsi->active_vlans'
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Enable RX Checksum offload feature in the ibmvnic driver.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow the VNIC driver to provide descriptors containing
L2/L3/L4 headers to firmware. This feature is needed
for greater hardware compatibility and enablement of checksum
and TCP offloading features.
A new function is included for the hypervisor call,
H_SEND_SUBCRQ_INDIRECT, allowing a DMA-mapped array of SCRQ
descriptor elements to be sent to the VNIC server.
These additions will help fully enable checksum offloading as
well as other features that will be included later.
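The new hypervisor-call wrapper can be sketched as follows (the hcall name
is taken from the text above and the argument order is an assumption):

  static int send_subcrq_indirect(struct ibmvnic_adapter *adapter,
                                  u64 remote_handle, u64 ioba,
                                  u64 num_entries)
  {
          unsigned int ua = adapter->vdev->unit_address;
          long rc;

          /* ioba: DMA address of the array of SCRQ descriptor elements,
           * num_entries: number of elements in that array
           */
          rc = plpar_hcall_norets(H_SEND_SUBCRQ_INDIRECT, ua, remote_handle,
                                  ioba, num_entries);
          return rc == H_SUCCESS ? 0 : -EIO;
  }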
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
dmadesc_set() is used for setting the Tx buffer DMA address, length,
and status bits on a Tx ring descriptor when a frame is being Tx'ed.
Always set the Tx buffer DMA address first, before updating the length
and status bits, i.e. giving the Tx descriptor to the hardware.
The reason this is a cleanup rather than a fix is that the hardware
won't transmit anything from a Tx ring until the TDMA producer index
has been incremented. As long as the dmadesc_set() writes complete
before the TDMA producer index write, life is good.
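The ordering in question looks roughly like this (helper names follow the
text above):

  /* fill in one Tx descriptor: program the buffer DMA address first,
   * then the length/status word that hands the descriptor to hardware
   */
  static inline void dmadesc_set(struct bcmgenet_priv *priv, void __iomem *d,
                                 dma_addr_t addr, u32 val)
  {
          dmadesc_set_addr(priv, d, addr);
          dmadesc_set_length_status(priv, d, val);
  }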
Signed-off-by: Petri Gynther <pgynther@google.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add frag_size = skb_frag_size(frag) and use it when needed.
Signed-off-by: Petri Gynther <pgynther@google.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
1. Readability: Move nr_frags assignment a few lines down in order
to bundle index -> ring -> txq calculations together.
2. Readability: Add parentheses around nr_frags + 1.
3. Minor fix: Stop the Tx queue and throw the error message only if
the Tx queue hasn't already been stopped.
Signed-off-by: Petri Gynther <pgynther@google.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2016-04-05
This series contains updates to i40e and i40evf only.
Stefan converts dev_close() to ndo_stop() for ethtool offline self test,
since dev_close() causes IFF_UP to be cleared, which will remove the
interface's routes and addresses.
Alex bumps up the size of the transmit data buffer to 12K rather than 8K,
which provides a gain in throughput and a reduction in overhead for
putting together the frame. Fixed an issue in the polling routines where
we were using bitwise operators to avoid the side effects of the
logical operators. Then added support for bulk transmit clean for skbs.
Jesse fixed a sparse issue in the type casting in the transmit code and
fixed i40e_aq_set_phy_debug() to use i40e_status as a return code.
Catherine cleans up duplicated code.
Shannon fixed the interrupt handling cleanup so that we clean up the
IRQs only if we actually got them set up. Also fixed up the error
scenarios where we were trying to remove a non-existent timer or
worktask, which gives the kernel heartburn.
Mitch changes the notification of resets to the reset interrupt handler,
instead of the actual reset initiation code. This allows the VFs to get
properly notified for all resets, including resets initiated by different
PFs on the same physical device. Also moved the clearing of the VFLR bit
to after reset processing, instead of before, which could lead to double
resets on VF init. Fixed a code comment to match the actual function name.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
If autoneg is off, we should always report the speed and duplex settings
even if the link is down so the user knows the current settings. The
unknown speed and duplex should only be used for autoneg when the link is
down.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check that the forced speed is a valid speed supported by firmware.
If not supported, return -EINVAL.
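The check itself is just a mask test against what firmware advertises
(names here are illustrative):

  /* fw_speeds: bitmap of speeds the firmware reports as supported;
   * fw_speed_bit: the bit corresponding to the user's forced speed
   */
  static int validate_forced_speed(u16 fw_speeds, u16 fw_speed_bit)
  {
          if (!(fw_speeds & fw_speed_bit))
                  return -EINVAL;
          return 0;
  }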
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the PORT_CONN_NOT_ALLOWED async event handling logic. The driver
will print an appropriate warning to reflect the SFP+ module enforcement
policy done in the firmware.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>