When PVID is toggled off on a port member in a VLAN filtering bridge or
the PVID VLAN is deleted, make the port drop untagged packets. Reverse
the operation when PVID is toggled back on.
Set the PVID back to the default (1) when leaving the bridge, so that
untagged traffic will be directed to the CPU.
Fixes: 56ade8fe3f ("mlxsw: spectrum: Add initial support for Spectrum ASIC")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When VLAN filtering is enabled on a bridge and the PVID is deleted from
a bridge port, untagged frames are not allowed to ingress into the
bridge from this port.
Add the Switch Port Acceptable Frame Types (SPAFT) register, which
configures the frame admittance of the port.
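A minimal sketch of what such a register definition can look like in the
mlxsw register infrastructure; the register ID, length, and field offsets
below are illustrative assumptions, not taken from the hardware
documentation:

  /* SPAFT - Switch Port Acceptable Frame Types (sketch) */
  #define MLXSW_REG_SPAFT_ID 0x2010
  #define MLXSW_REG_SPAFT_LEN 0x08

  /* reg_spaft_local_port: port whose frame admittance is configured */
  MLXSW_ITEM32(reg, spaft, local_port, 0x00, 16, 8);
  /* reg_spaft_allow_untagged: when clear, untagged frames are dropped */
  MLXSW_ITEM32(reg, spaft, allow_untagged, 0x04, 31, 1);

  static inline void mlxsw_reg_spaft_pack(char *payload, u8 local_port,
                                          bool allow_untagged)
  {
          MLXSW_REG_ZERO(spaft, payload);
          mlxsw_reg_spaft_local_port_set(payload, local_port);
          mlxsw_reg_spaft_allow_untagged_set(payload, allow_untagged);
  }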
Fixes: 56ade8fe3f ("mlxsw: spectrum: Add initial support for Spectrum ASIC")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2016-02-17
This series contains updates to i40e/i40evf once again.
Mitch updates the code to use a define instead of a magic number. Adds
support for packet split receive on VFs, which is disabled by default.
Expands on a code comment which was not verbose or really helpful.
Fixes an issue where a reset that failed to complete did not properly
set the adapter state, which would cause a panic on rmmod; the adapter
state is now set to DOWN to avoid the panic.
Jesse cleans up a "dump" in debugfs that never panned out to be useful.
Anjali adds a workaround for cases where we might have interrupts that get
lost but write-back (WB) happened. Fixes an issue by falling back to
enabling unicast, multicast and broadcast promiscuous mode when the driver
must disable its use of "default port" (defport mode) due to internal
incompatibility with Multiple Function per Port (MFP). Also fixes an issue
where queues were being enabled/disabled in the interrupt handler, which
should never happen.
Kiran cleans up the code which used a hard-coded base VEB SEID, since it
was removed from the specification.
Shannon adds a few bits for better debug messages and fixes an obscure
corner case where it was possible to clear the NVM update wait flag when
no update_done message was actually received.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ever since commit 04ed3e741d
("net: change netdev->features to u32"), the format string
fmt_long_hex has not been used, so we may as well remove it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Starting from Intel Sunrisepoint (Skylake PCH) the iTCO watchdog
resources have been moved to reside under the i801 SMBus host
controller whereas previously they were under the LPC device.
This patch adds Intel Lewisburg SMBus support for the iTCO device.
It allows the watchdog to be loaded dynamically when the hardware is
present.
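A sketch of the kind of change this implies in i801_probe(), assuming the
Lewisburg device IDs added by the commit in the Fixes tag (the surrounding
switch statement is illustrative):

  /* sketch: extend the set of devices that carry the iTCO cell */
  switch (dev->device) {
  case PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_SMBUS:
  case PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_SMBUS:
  case PCI_DEVICE_ID_INTEL_LEWISBURG_SMBUS:
  case PCI_DEVICE_ID_INTEL_LEWISBURG_SSKU_SMBUS:
          priv->features |= FEATURE_TCO;
          break;
  }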
Fixes: cdc5a3110e ("i2c: i801: add Intel Lewisburg device IDs")
Reviewed-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Alexandra Yates <alexandra.yates@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Cc: stable@kernel.org
A non-atomic PCM stream may take snd_pcm_link_rwsem rw semaphore twice
in the same code path, e.g. one in snd_pcm_action_nonatomic() and
another in snd_pcm_stream_lock(). Usually this is OK, but when a
write lock is issued between these two read locks, the problem
happens: the write lock is blocked due to the first read lock, and
the second read lock is also blocked by the write lock. This
eventually deadlocks.
The reason is the way rwsem manages waiters: they are queued FIFO, so
even if the writer itself hasn't taken the lock yet, it blocks all the
waiters (including readers) queued after it.
As a workaround, in this patch, we replace the standard down_write()
with a spinning loop. This is far from optimal, but it's good
enough, as the spinning time is supposed to be relatively short for
normal PCM operations, and the code paths requiring the write lock
aren't called so often.
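A sketch of the spinning replacement for down_write(), with cond_resched()
between attempts so the spinner yields the CPU:

  /* Try-lock in a loop instead of queueing as an rwsem waiter, so a
   * pending writer can no longer block readers queued behind it. */
  static inline void down_write_nonblock(struct rw_semaphore *lock)
  {
          while (!down_write_trylock(lock))
                  cond_resched();
  }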
Reported-by: Vinod Koul <vinod.koul@intel.com>
Tested-by: Ramesh Babu <ramesh.babu@intel.com>
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: Takashi Iwai <tiwai@suse.de>
A kernel page fault oops with the callstack below was observed
when a read syscall was made to a pmem device after a huge amount
(>512GB) of vmalloc ranges was allocated by ioremap() on an x86_64
system:
BUG: unable to handle kernel paging request at ffff880840000ff8
IP: vmalloc_fault+0x1be/0x300
PGD c7f03a067 PUD 0
Oops: 0000 [#1] SM
Call Trace:
__do_page_fault+0x285/0x3e0
do_page_fault+0x2f/0x80
? put_prev_entity+0x35/0x7a0
page_fault+0x28/0x30
? memcpy_erms+0x6/0x10
? schedule+0x35/0x80
? pmem_rw_bytes+0x6a/0x190 [nd_pmem]
? schedule_timeout+0x183/0x240
btt_log_read+0x63/0x140 [nd_btt]
:
? __symbol_put+0x60/0x60
? kernel_read+0x50/0x80
SyS_finit_module+0xb9/0xf0
entry_SYSCALL_64_fastpath+0x1a/0xa4
Since v4.1, ioremap() supports large page (pud/pmd) mappings in
x86_64 and PAE. vmalloc_fault() however assumes that the vmalloc
range is limited to pte mappings.
vmalloc faults do not normally happen in ioremap'd ranges since
ioremap() sets up the kernel page tables, which are shared by
user processes; pgd_ctor() copies the kernel's PGD entries into the
user's during fork(). When allocation of the vmalloc ranges
crosses a 512GB boundary, ioremap() allocates a new pud table
and updates the kernel PGD entry to point to it. If a user process's
PGD entry does not have this update yet, a read/write syscall
to the range will cause a vmalloc fault, which hits the Oops
above as it does not handle a large page properly.
The following changes are made to vmalloc_fault() (see the sketch after
this list):
64-bit:
- No change for the PGD sync operation as it handles large
pages already.
- Add pud_huge() and pmd_huge() to the validation code to
handle large pages.
- Change pud_page_vaddr() to pud_pfn() since an ioremap range
is not directly mapped (while the if-statement still works
with a bogus addr).
- Change pmd_page() to pmd_pfn() since an ioremap range is not
backed by struct page (while the if-statement still works
with a bogus addr).
32-bit:
- No change for the sync operation since the index3 PGD entry
covers the entire vmalloc range, which is always valid.
(A separate change to sync PGD entry is necessary if this
memory layout is changed regardless of the page size.)
- Add pmd_huge() to the validation code to handle large pages.
This is for completeness since vmalloc_fault() won't happen
in ioremap'd ranges as its PGD entry is always valid.
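A hedged sketch of the resulting 64-bit validation (abridged; variable
naming follows the existing vmalloc_fault() code):

  pud = pud_offset(pgd, address);
  pud_ref = pud_offset(pgd_ref, address);
  if (pud_none(*pud_ref))
          return -1;
  if (pud_none(*pud) || pud_pfn(*pud) != pud_pfn(*pud_ref))
          BUG();
  if (pud_huge(*pud))             /* large page: no pmd/pte to check */
          return 0;

  pmd = pmd_offset(pud, address);
  pmd_ref = pmd_offset(pud_ref, address);
  if (pmd_none(*pmd_ref))
          return -1;
  if (pmd_none(*pmd) || pmd_pfn(*pmd) != pmd_pfn(*pmd_ref))
          BUG();
  if (pmd_huge(*pmd))             /* large page: no pte to check */
          return 0;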
Reported-by: Henning Schild <henning.schild@siemens.com>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Acked-by: Borislav Petkov <bp@alien8.de>
Cc: <stable@vger.kernel.org> # 4.1+
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: linux-mm@kvack.org
Cc: linux-nvdimm@lists.01.org
Link: http://lkml.kernel.org/r/1455758214-24623-1-git-send-email-toshi.kani@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
In MFP mode particularly when we were setting the PF VSI in limited
promiscuous, the HW switch was still mirroring the outgoing packets
from other VSIs (VF/VMdq) onto the PF VSI.
With this new bit set, the mirroring doesn't happen any more and so
we are in limited promiscuous on the PF VSI in MFP which is similar
to defport.
An API check is not required, since this bit is reserved for FW API
version < 1.5
Also update copyright year in file headers.
Change-ID: I9840cb95f11dde733d943cb03ce84f68b9611bc8
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In one obscure corner case, it was possible to clear the NVM update wait
flag when no update_done message was actually received. This patch
cleans the event descriptor before use, and moves the opcode check to
where it won't get done if there was no event to clean.
Also update copyright year in file headers.
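A sketch of the idea in the adminq receive path; the wrapper name is
hypothetical and the i40e adminq helpers are used as commonly seen in the
driver:

  /* sketch: returns true only if a trustworthy event was received */
  static bool i40e_arq_event_get(struct i40e_hw *hw,
                                 struct i40e_arq_event_info *event,
                                 u16 *pending, u16 *opcode)
  {
          /* clean the descriptor so a stale opcode from a previous
           * event cannot be mistaken for a fresh one */
          memset(&event->desc, 0, sizeof(event->desc));
          if (i40e_clean_arq_element(hw, event, pending) ==
              I40E_ERR_ADMIN_QUEUE_NO_WORK)
                  return false;   /* no event: leave the wait flag set */
          /* only now is the opcode trustworthy */
          *opcode = le16_to_cpu(event->desc.opcode);
          return true;
  }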
Change-ID: I68bbc41965e93f4adf07cbe98b9dfd63d41509a4
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If a reset fails to complete, the driver gets its affairs in order and
awaits the cold solace of rmmod. Unfortunately, it was not properly
setting the adapter state, which would cause a panic on rmmod, instead
of the desired surcease.
Set the adapter state to DOWN in this case, and avoid a panic.
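A sketch of what the error path can look like (the state name follows the
driver's __I40EVF_* convention; the message text is illustrative):

  /* reset never completed; mark the adapter DOWN so the rmmod
   * teardown path does not operate on half-initialized state */
  adapter->state = __I40EVF_DOWN;
  dev_err(&adapter->pdev->dev, "reset failed to complete\n");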
Change-ID: I6fdd9906da52e023f8dc744f7da44b5d95278ca9
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Make sure we return EBUSY while finishing up a reset, and add a few bits
for better debug messages.
Change-ID: I23f6c28a8d96d7aa171abcc265737cec7826c292
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Explain why we cannot remove this code, even though it works differently
than any of our other interrupt cause handling code.
Change-ID: Ie66203bd037a466066036611c31d44f759ec5176
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The queues should never be enabled/disabled in the interrupt handler,
ICR0 interrupt enable should be the only thing that needs to be
dynamically changed in the handler.
This patch fixes that. Without this patch X722 platforms were
seeing weird ping timings when in Legacy mode since it takes
a whole lot of time for the HW/FW to re-enable queues.
Change-ID: If065afc45d81c5a19d4a94a00cd5b8f61cefc40c
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In the case where we have a page fully used by receive data, we need to
release the page fully to the stack. Instead of calling get_page (which
increments the page count) followed by free_page (which decrements the
page count), just donate our reference to the stack. Although this
donation is not tax deductible, it does allow us to avoid two very
expensive atomic operations that reverse each other.
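A sketch of the donation pattern (the buffer-info field names are
assumptions about the rx path):

  /* page fully consumed: hand our reference to the stack instead of
   * get_page() here and put_page() from the stack later */
  skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_bi->page,
                  rx_bi->page_offset, size, PAGE_SIZE / 2);
  rx_bi->page = NULL;     /* ownership transferred; realloc on next use */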
Change-ID: If70739792d5748995fc175ec92ac2171ed4ad8fc
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The fixed mapping of the SEID has been removed from the specification,
hence this patch removes the code which used a hard-coded base VEB SEID.
The FCoE code is changed to use "hw->pf_id" to obtain the correct "idx",
and this has been verified.
Also removes the defines for the base VSI/VEB SEID and BASE_PF_SEID
since they are no longer used.
Change-ID: Id507cf4b1fae1c0145e3f08ae9ea5846ea5840de
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch falls back to enabling unicast, multicast and
broadcast promiscuous mode when the driver must disable its use
of "default port" aka defport mode (which is normally used to
provide promiscuous mode), due to an internal incompatibility
with Multiple Function per Port (aka MFP).
The situation that requires this patch is when Physical
Function 0 is the device in use, and it can support SR-IOV
when MFP is enabled, via the driver creating a VEB on an
MFP-enabled adapter.
Change-ID: Ie90b00d0d58782a5dfcf2c3c9725a2eb90bd63d8
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds a workaround for cases where we might have
interrupts that got lost but the write-back (WB) happened.
Without this patch, that situation results in a tx_timeout.
To work around it, this patch goes ahead and reschedules NAPI
in that situation, if NAPI is not already scheduled.
We also add a counter in ethtool to keep track of when
we detect a case of tx_lost_interrupt.
Note: napi_reschedule() can be safely called from process/service_task
context and is done in other drivers as well without an issue.
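A sketch of the detection in the service task (the pending-descriptor
test and the stats field are illustrative):

  /* descriptors were written back but no interrupt fired: kick NAPI
   * ourselves rather than waiting for a tx_timeout; napi_reschedule()
   * is a no-op if NAPI is already scheduled */
  if (tx_pending) {
          tx_ring->tx_stats.tx_lost_interrupt++;  /* new ethtool counter */
          napi_reschedule(&tx_ring->q_vector->napi);
  }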
Change-ID: I00f98f1ce3774524d9421227652bef20fcbd0d20
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch makes the use of a pointer called hw consistent
in the i40e_remove function.
Change-ID: Idacc7ff0a09a68289c57457a78618bf5497de077
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Support packet split receive on VFs. This is off by default but can be
enabled using ethtool private flags. Because we need to trigger a reset
from outside of i40evf_main.c, create a new function to do so, and
export it.
Also update copyright year in file headers.
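A sketch of the ethtool private-flag plumbing; the flag and adapter-flag
names below follow the driver's conventions but are assumptions here:

  #define I40EVF_PRIV_FLAGS_PS    BIT(0)  /* illustrative flag bit */

  static int i40evf_set_priv_flags(struct net_device *netdev, u32 flags)
  {
          struct i40evf_adapter *adapter = netdev_priv(netdev);
          bool reset = false;

          if ((flags & I40EVF_PRIV_FLAGS_PS) &&
              !(adapter->flags & I40EVF_FLAG_RX_PS_ENABLED)) {
                  adapter->flags |= I40EVF_FLAG_RX_PS_ENABLED;
                  reset = true;
          } else if (!(flags & I40EVF_PRIV_FLAGS_PS) &&
                     (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED)) {
                  adapter->flags &= ~I40EVF_FLAG_RX_PS_ENABLED;
                  reset = true;
          }

          /* the newly exported reset entry point mentioned above */
          if (reset)
                  i40evf_schedule_reset(adapter);
          return 0;
  }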
Change-ID: I721aa5d70113d3d6d94102e5f31526f6fc57cbbb
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
There was a completely unused file "dump" in debugfs that
never panned out to be useful.
Change-ID: I12bb9e37b5a83299725dda815a8746157baf6562
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We have a define for this, use it. No functional change.
Change-ID: Ic0e3ea4f562e46de63b2a8de07f291ccc10205fd
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jiri Benc says:
====================
vxlan: clean up rx path, consolidating extension handling
The rx path of VXLAN has turned over time into a kind of spaghetti code. The rx
processing is split between vxlan_udp_encap_recv and vxlan_rcv but in an
artificial way: vxlan_rcv is just called at the end of vxlan_udp_encap_recv,
continuing the rx processing where vxlan_udp_encap_recv left it. There's no
clear border between those two functions.
It makes sense to combine those functions into one; this will be actually
needed for VXLAN-GPE where we'll need to skip part of the processing which
is hard to do with the current code.
However, both functions are too long already. This patchset is shortening
them, consolidating the extension handling that is spread all around
and moving it to separate functions. (Later patchsets will do more
consolidation in other parts of the functions with the final goal of merging
vxlan_udp_encap_recv and vxlan_rcv.)
In the process of consolidating the extension handling, I needed to deal
with the vni field in a generic way, as its lower 8 bits mean different
things for different extensions. While cleaning up the code to strictly
distinguish between "vni" and "vni field" (which contains the vni plus an
additional byte), I also converted the code to not convert endianness
back and forth.
The full picture can be seen at:
https://github.com/jbenc/linux-vxlan/commits/master
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
For metadata based tunnels, VNI is ignored when doing vxlan device lookups
(because such a tunnel receives all VNIs). However, this was not honored by
vxlan_xmit_one when doing encapsulation bypass. Move the check for metadata
based tunnel to the common place where it belongs.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When there are unrecognized flags present in the vxlan header, it doesn't
make much sense to return the packet for further UDP processing, especially
considering that for other invalid flag combinations we drop the packet
because of previous checks.
This means we return a positive value only at the beginning of the function
where tun_dst is not yet allocated. This allows us to get rid of the
bad_flags and error jump labels.
When we're dropping a packet, we now need to free tun_dst.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bring the extension handling to a single place and move the actual handling
logic out of vxlan_udp_encap_recv as much as possible.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To make vxlan_udp_encap_recv shorter and more comprehensible.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some of the parameters are not needed. Simplify the caller of this function
in preparation of making vxlan rx more comprehensible.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prevent repeated conversions from and to network order in the fast path.
To achieve this, define all flag constants in big endian order and store VNI
as __be32. To prevent confusion between the actual VNI value and the VNI
field from the header (which contains an additional reserved byte), strictly
distinguish between "vni" and "vni_field".
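A sketch of what the big-endian constants look like; the bit positions
shown are the standard VXLAN header bits, and the helper name is
illustrative:

  struct vxlanhdr {
          __be32 vx_flags;
          __be32 vx_vni;  /* the "vni_field": VNI plus one reserved byte */
  };

  /* flags kept in network byte order, so the fast path tests them
   * without any per-packet byte swapping */
  #define VXLAN_HF_VNI    cpu_to_be32(BIT(27))
  #define VXLAN_HF_RCO    cpu_to_be32(BIT(21))
  #define VXLAN_HF_GBP    cpu_to_be32(BIT(31))

  static inline bool vxlan_vni_present(const struct vxlanhdr *vxh)
  {
          return vxh->vx_flags & VXLAN_HF_VNI;    /* no ntohl() needed */
  }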
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, a pointer to the vxlan header is kept in a local variable. It has
to be reloaded whenever pskb pull operations are performed, which usually
happens somewhere deep in called functions.
Create a vxlan_hdr function and use it to reference the vxlan header
instead.
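One way to write such a helper; the VXLAN header immediately follows the
UDP header, so it can always be re-derived from the skb:

  static inline struct vxlanhdr *vxlan_hdr(struct sk_buff *skb)
  {
          /* stays valid even after pskb_may_pull() reallocates head */
          return (struct vxlanhdr *)(udp_hdr(skb) + 1);
  }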
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2016-02-17
This series contains updates to i40e/i40evf only (again).
Jesse moves sync_vsi_filters() up in the service_task because it may need
to request a reset, and we do not want to wait another round of service
task time. Refactored the enable_icr0() in order to allow it to be
decided by the caller whether the CLEARPBA (clear pending events) bit will
be set while re-enabling the interrupt. Also provides the "Don't Give Up"
patch, where the driver will keep polling trying to allocate receive buffers
until it succeeds. This should keep all receive queues running even in
the face of memory pressure. Cleans up the debugging helpers by putting
everything in hex to be consistent.
Neerav updates the DCB firmware version related checks specific to X710
and XL710 only, since the checks are not required for X722 devices.
Shannon adds the use of the new shared MAC filter bit for multicast and
broadcast filters in order to make better use of the filters available
from the device. Added a parameter to allow the driver to set the
enable/disable of statistics gathering in the hardware switch. Also the
L2 cloud filtering parameter is removed since it was never used.
Anjali refactors the force_wb and WB_ON_ITR functionality since
Force-WriteBack functionality in X710/XL710 devices has been moved out of
the clean routine and into the service task, so we need to make sure
WriteBack-On-ITR is separated out since it is still called from clean.
Catherine changes the VF driver string to reflect all the products that
are supported.
Mitch refactors the packet split receive code to properly use half-pages
for receives. Also changes the use of bitwise operators to logical
operators on the clean_complete variable, while making a witty reference
to Mr. Spock. Cleans up (i.e. removes) the hsplit field in the ring
structure and uses the existing macro to detect packet split enablement,
which allows debugfs dumps of the VSI to properly show which receive
routine is in use.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
For error handling, dma_alloc_coherent's return value
needs to be checked, not its argument.
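For reference, a generic sketch of the correct pattern (not the driver's
exact code):

  dma_addr_t dma_handle;
  void *buf;

  buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
  if (!buf)       /* test the returned virtual address, not dma_handle */
          return -ENOMEM;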
Signed-off-by: Insu Yun <wuninsu@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko says:
====================
rocker: do world split
This patchset allows new rocker worlds to be easily added in future.
Two new worlds are now under development: P4 and eBPF.
The main part of the patchset is the OF-DPA carve-out. It results in an
OF-DPA-specific file. Clean cut.
Note this patchset is based on my original attempt from October 2015.
I had to rebase, included all suggestions, and made a lot of small changes.
The main change is to go with an all-ports-one-world approach: the port
world is set according to what is set up in HW; it is not possible to
change worlds from the driver.
v1->v2:
patch 12/13:
- split port_init into pre-init and init
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Carve out the OF-DPA-specific code from the common file into the world
file. This change required splitting struct rocker and struct rocker_port
into the world-specific struct ofdpa and struct ofdpa_port. Along with
this, the world-specific functions and defines were renamed from the
prefix "rocker_" to "ofdpa_".
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
No need to push down rocker flags just to check if this is nowait or
not. Let the caller handle that.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The only purpose of passing this parameter is to check for the
prepare phase. The only reason for a failure in that state is if the
TLVs don't fit into the descriptor. That is highly unlikely, and if it
happens, it is a driver bug. So remove this parameter from
rocker_cmd_exec, and check for the prepare phase in the caller.
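A sketch of the caller-side check, assuming the switchdev transaction
helper of that era (the rocker_cmd_exec signature shown is illustrative):

  /* prepare phase cannot legitimately fail, so short-circuit it in
   * the caller instead of threading a flag down into rocker_cmd_exec */
  if (switchdev_trans_ph_prepare(trans))
          return 0;
  return rocker_cmd_exec(rocker_port, prepare_cb, prepare_priv,
                         process_cb, process_priv);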
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This avoids the need to alloc/free a wait structure for every command call.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Be consistent with the rest of the setting functions, and pass
"learning" as a bool function parameter.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is another step on the way to a per-world clean cut. Introduce world
ops hooks which each world can implement in a world-specific way.
Also introduce the world infrastructure along with an OF-DPA world stub.
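An illustrative shape for the world ops; the exact member list is an
assumption:

  struct rocker_world_ops {
          const char *kind;
          size_t priv_size;       /* per-switch world private data */
          size_t port_priv_size;  /* per-port world private data */
          u8 mode;                /* HW port mode this world maps to */
          int (*init)(struct rocker *rocker);
          void (*fini)(struct rocker *rocker);
          int (*port_pre_init)(struct rocker_port *rocker_port);
          int (*port_init)(struct rocker_port *rocker_port);
          void (*port_fini)(struct rocker_port *rocker_port);
          int (*port_open)(struct rocker_port *rocker_port);
          void (*port_stop)(struct rocker_port *rocker_port);
  };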
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
And take some other related things along. They are going to be pushed
into the OF-DPA part anyway.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce a helper to ask HW for the port mode (world).
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Carve out TLV processing helpers into separate files.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since "rocker.c" is going to be split into multiple files, start with
renaming original "rocker.c" file to "rocker_main.c". Multiple code
parts are going to be cut from "rocker_main.c" later on.
Fix a couple of checkpatch issues on the way.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since "rocker.h" file is going to be used for different purpose,
rename the hardware-specific header to "rocker_hw.h".
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
No need to pass rocker_port around to the alloc/free rocker functions,
since they now use switchdev_trans for memory management storage.
With the param removed, shorten the names of the functions since they
now have nothing to do with the rocker port.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sunil Goutham says:
====================
net: thunderx: Miscellaneous fixes
This patch series fixes a couple of issues w.r.t. multiqset mode
and receive packet statistics.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Counting rx packets for every CQE_RX in the CQ irq handler is incorrect;
synchronization is missing when multiple queues are receiving packets
simultaneously. As with the transmit packet stats, use HW stats here.
Also removed the unused 'cqe_type' parameter in nicvf_rcv_pkt_handler().
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For secondary Qsets, 'hw_tso' is not getting set, as probe() returns
much earlier. Fixed it by moving the silicon revision check.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When an interface is assigned more than 8 queues and the logical interface
is toggled (i.e. down & up), the additional queues or qsets are not
initialized, as the secondary qset count is being set to zero while
tearing down.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>