Signed-off-by: Matvejchikov Ilya <matvejchikov@gmail.com>
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
create DCB-related states in the function state-machine
allow handling of DCB errors from the FW
allow disabling of DCB in the FW when the peer disappears or an error occurs
clean up unused functions/variables as pointed out by
David Binderman <dcb314@hotmail.com>
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It may take some time for cnic to respond; this prevents a tx_timeout
when that happens.
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e542a2269f (r8169: adjust the RxConfig settings)
broke the return from promiscuous mode to physical address match mode.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Cc: Hayes Wang <hayeswang@realtek.com>
As we now only update the used ring after enabling
the backend, we can write the flags with __put_user:
as that is done on the data path, it matters.
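A minimal sketch of that data-path write, assuming the usual vhost field
names (vq->used_flags, vq->used->flags); illustrative only, not the exact
driver code:

static inline int vhost_write_used_flags_sketch(struct vhost_virtqueue *vq)
{
	/* The backend is already set up, so a single lightweight
	 * __put_user() is enough on the hot path. */
	if (__put_user(vq->used_flags, &vq->used->flags))
		return -EFAULT;
	return 0;
}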
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Fix get/put refcount imbalance with zero copy,
which caused qemu to hang forever on guest driver unload.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
We need to log writes when updating used flags and avail event
fields. Otherwise the guest may see a stale value after migration and
miss notifying the host.
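A hedged sketch of the idea: after the used-flags (or avail-event) write,
the touched guest memory is recorded in the dirty log so migration does not
ship a stale copy. vhost_dirty_log_write() is a hypothetical stand-in for
the driver's logging routine, and the field names are assumptions:

static int vhost_update_used_flags_logged(struct vhost_virtqueue *vq)
{
	if (__put_user(vq->used_flags, &vq->used->flags))
		return -EFAULT;
	if (unlikely(vq->log_used))
		/* mark the written guest page dirty for live migration */
		vhost_dirty_log_write(vq, offsetof(struct vring_used, flags),
				      sizeof(vq->used->flags));
	return 0;
}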
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Move the used ring initialization to after the backend is set. This
makes it possible to disable the backend and tweak the used ring,
then restart. This will also make it possible to log used ring
writes correctly.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Wolfgang Grandegger <wg@grandegger.com>
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a new field, 'force_sf_dma_mode', to the
plat_stmmacenet_data struct to allow users to force store-and-forward
mode even though tx_coe is not available in the hardware.
Without this flag the stmmac driver will use cut-through mode instead
of store-and-forward mode.
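A minimal usage sketch for a platform that lacks TX checksum offload but
still wants store-and-forward; only the field names come from this patch,
the surrounding values are illustrative:

static struct plat_stmmacenet_data board_stmmac_pdata = {
	.tx_coe			= 0,	/* no TX checksum offload in HW */
	.force_sf_dma_mode	= 1,	/* use store-and-forward anyway,
					 * not cut-through */
};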
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch, provided by ST SPEAr developers,
fixes a problem seen on ARM CA9 where
DMA transmission was enabled before
the DMA descriptors were properly filled. To guarantee the ordering,
data memory barriers have been explicitly used in the driver.
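A hedged sketch of the ordering being enforced; the descriptor layout and
register name are simplified placeholders, not the driver's exact code:

/* illustrative, simplified TX descriptor; the real layout differs */
struct tx_desc_sketch {
	u32 buf_addr;
	u32 len;
	u32 own;
};

static void stmmac_tx_kick_sketch(struct tx_desc_sketch *desc,
				  void __iomem *ioaddr, u32 buf, u32 len)
{
	desc->buf_addr = buf;	/* fill the descriptor fields first */
	desc->len = len;
	/* Make sure the descriptor contents are visible in memory before
	 * ownership is handed to the DMA engine... */
	wmb();
	desc->own = 1;
	/* ...and before the engine is told to poll for new descriptors. */
	wmb();
	writel(1, ioaddr + DMA_XMT_POLL_DEMAND);
}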
Signed-off-by: Shiraz Hashim <shiraz.hashim@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Shirley Ma <mashirle@us.ibm.com>
This adds experimental zero copy support in vhost-net,
disabled by default. To enable, set the
experimental_zcopytx module option to 1.
This patch maintains the outstanding userspace buffers in the
sequence they are delivered to vhost. The outstanding userspace buffers
are marked as done once the lower device's DMA on them has finished.
This is monitored through the last reference of the kfree_skb callback.
Two buffer indices are used for this purpose.
The vhost-net device passes the userspace buffer info to the lower
device's skb through the message control. The DMA-done status check and
guest notification are handled by handle_tx: in the worst case all
buffers in the vq are in pending/done status, so we need to notify the
guest to release the DMA-done buffers before we can get any new buffers
from the vq.
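A hedged sketch of the two-index bookkeeping: buffers handed to the lower
device advance one index, completed DMAs advance the other, and handle_tx
flushes completions back to the guest when the ring is full. Names and the
ring size are illustrative assumptions, not the driver's:

#define ZC_RING 256

struct zcopy_track {
	unsigned int upend_idx;	/* next slot handed to the lower device   */
	unsigned int done_idx;	/* oldest slot whose DMA is still pending */
	void *ubuf[ZC_RING];
};

static bool zcopy_must_flush(const struct zcopy_track *t)
{
	/* Worst case from the description: every vq buffer is pending or
	 * done, so the guest must be notified (used ring updated) before
	 * any new buffer can be accepted. */
	return (t->upend_idx + 1) % ZC_RING == t->done_idx;
}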
One known problem is that if the guest stops submitting
buffers, buffers might never get used until some
further action, e.g. a device reset. This does not
seem to affect Linux guests.
Signed-off-by: Shirley <xma@us.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The compiler is not smart enough to avoid double BSWAP instructions in
ntohl(inet_make_mask(plen)).
Let's cache this value in struct leaf_info (filling a hole on 64-bit arches).
With the route cache disabled, this saves ~2% of CPU in a udpflood bench on
an x86_64 machine.
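A minimal sketch of the caching, assuming a field added to struct leaf_info
for the precomputed host-order mask; the struct and field names here are
placeholders:

struct leaf_info_sketch {
	int	plen;
	u32	mask_plen;	/* ntohl(inet_make_mask(plen)), precomputed */
};

static void leaf_info_init_sketch(struct leaf_info_sketch *li, int plen)
{
	li->plen = plen;
	/* pay for the byte swaps once, at insertion time, not per lookup */
	li->mask_plen = ntohl(inet_make_mask(plen));
}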
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the future dst entries will be neigh-less. In that environment we
need to have an easy transition point for current users of
dst->neighbour outside of the packet output fast path.
Signed-off-by: David S. Miller <davem@davemloft.net>
It just makes it harder to 1) see what the code is doing
and 2) grep for all users of dst{->,.}neighbour
Signed-off-by: David S. Miller <davem@davemloft.net>
This will get us closer to being able to do "neigh stuff"
completely independent of the underlying dst_entry for
protocols (ipv4/ipv6) that wish to do so.
We will also be able to make dst entries neigh-less.
Signed-off-by: David S. Miller <davem@davemloft.net>
There is only one user of vlan_find_dev outside of the actual VLAN code:
qlcnic uses it to iterate over some VLANs it knows about.
Let's just make vlan_find_dev private to the VLAN code and have the
iteration in qlcnic be a bit more direct (a few RCU dereferences fewer,
too).
Signed-off-by: David Lamparter <equinox@diac24.net>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Amit Kumar Salecha <amit.salecha@qlogic.com>
Cc: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Cc: linux-driver@qlogic.com
Signed-off-by: David S. Miller <davem@davemloft.net>
Define ETH_P_8021AD as 0x88A8 (assigned by IEEE) and add ETH_P_QINQ{1,2,3}
for the pre-standard 0x9{1,2,3}00 types. All of them use the 802.1Q frame
format, with 1 bit used differently in some cases.
Also define ETH_P_8021AH as 0x88E7 (assigned by IEEE). This is MAC-in-MAC
and uses a different, 16-byte header.
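A sketch of the resulting definitions, with values taken from the text
above; the exact placement (e.g. if_ether.h) and comments are assumptions:

#define ETH_P_8021AD	0x88A8	/* 802.1ad Service VLAN (IEEE)           */
#define ETH_P_QINQ1	0x9100	/* pre-standard QinQ VLAN type           */
#define ETH_P_QINQ2	0x9200	/* pre-standard QinQ VLAN type           */
#define ETH_P_QINQ3	0x9300	/* pre-standard QinQ VLAN type           */
#define ETH_P_8021AH	0x88E7	/* 802.1ah Backbone Service Tag (M-in-M) */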
Signed-off-by: David Lamparter <equinox@diac24.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
It's just taking on one of two possible values, either
neigh_ops->output or dev_queue_xmit(), and which one purely depends
upon whether nud_state has NUD_CONNECTED set or not.
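A hedged sketch of that selection: the cached pointer can be replaced by a
check of nud_state at output time. The signatures are illustrative for this
era of the code, not exact:

static inline int neigh_hh_output_sketch(struct neighbour *n,
					 struct sk_buff *skb)
{
	if (n->nud_state & NUD_CONNECTED)
		return dev_queue_xmit(skb);	/* header cached, send directly */
	return n->ops->output(skb);		/* still resolving, slow path */
}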
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that hh_cache entries are embedded inside of neighbour
entries, their lifetimes and accesses are synchronized with those
of the encompassing neighbour object.
Therefore we don't need to hook up the blackhole op to
hh_output on destroy.
Signed-off-by: David S. Miller <davem@davemloft.net>
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
68360enet.c no longer exists, and from the research, it appears that
68360enet.c became fec.c back in 2004. The Kconfig and Makefile
references were never cleaned up. This patch removes these "dead"
references.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Based on original patch and description from Flavio Leitner <fbl@redhat.com>
When bnx2_reset_task() is called, it will stop,
(re)initialize and start the interface to restore
the working condition.
bnx2_init_nic() calls bnx2_reset_nic(), which resets
the chip and then calls bnx2_free_skbs() to free
all the skbs.
The problem happens when bnx2_init_chip() fails:
bnx2_reset_nic() will just return, skipping the ring
initializations in bnx2_init_all_rings(). Later, the
reset task starts the interface again and the system
crashes due to a NULL pointer access (no skb in the ring).
To fix it, we call dev_close() if bnx2_init_nic() fails.
One minor wrinkle to deal with is the cancel_work_sync()
call in bnx2_close() to cancel bnx2_reset_task(). The
call will wait forever because it is trying to cancel
itself and the workqueue will be stuck.
Since bnx2_reset_task() holds the rtnl_lock() and checks
for netif_running() before proceeding, there is no need
to cancel bnx2_reset_task() in bnx2_close() even if
bnx2_close() and bnx2_reset_task() are running concurrently.
The rtnl_lock() serializes the 2 calls.
We need to move the cancel_work_sync() call to
bnx2_remove_one() to make sure it is canceled before freeing
the netdev struct.
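A hedged sketch of the resulting reset-task flow; the helper names follow
bnx2 conventions but their exact shapes here are assumptions:

static void bnx2_reset_task_sketch(struct work_struct *work)
{
	struct bnx2 *bp = container_of(work, struct bnx2, reset_task);

	rtnl_lock();			/* serializes with bnx2_close() */
	if (!netif_running(bp->dev))
		goto out;

	bnx2_netif_stop(bp, true);
	if (bnx2_init_nic(bp, 1)) {
		/* Chip init failed: the rings were never refilled, so close
		 * the device instead of restarting TX on empty rings. */
		dev_close(bp->dev);
		goto out;
	}
	bnx2_netif_start(bp, true);
out:
	rtnl_unlock();
}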
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Cc: Flavio Leitner <fbl@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The register and descriptor bits are identical to those of the pre-8168
8169 chipsets: {RxDesc / TxDesc}.opts2 can only contain VLAN information.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Reviewed-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch removes obsolete code in the initialisation/creation of
slcan devices.
It follows the suggested cleanups from Ilya Matvejchikov in
drivers/net/slip.c that were recently applied to net-next-2.6:
- slip: remove dead code within the slip initialization
- slip: remove redundant check slip_devs for NULL
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 5f77898de1 does not completely
fix the problem of handling allocations with IRQs disabled. The
patch below, on top of it, fixes the problem completely.
Based on review by "Ivan Vecera" <ivecera@redhat.com>:
"
Small note: the root of the problem was that a non-atomic allocation was requested with IRQs disabled. Your patch description does not explain why the IRQs were disabled.
The function bnad_mbox_irq_alloc incorrectly uses the 'flags' variable for two different things: 1) to save the current CPU flags and 2) for the request_irq
call.
First, spin_lock_irqsave disables the IRQs and saves _all_ CPU flags (including the one that enables/disables interrupts) to 'flags'. Then 'flags' is overwritten with 0 or 0x80 (IRQF_SHARED). Finally, spin_unlock_irqrestore should restore the saved flags, but these flags are now either 0x00 or 0x80. The interrupt (IF) bit in the x86 flags register is 0x200.
This means that the interrupt bit is zero (IRQs disabled) after spin_unlock_irqrestore so the request_irq function is called with disabled interrupts.
"
Signed-off-by: Shyam Iyer <shyam_iyer@dell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix the condition where PS3 network RX hangs when no network
TX is occurring by calling gelic_card_enable_rxdmac() during
RX_DMA_CHAIN_END event processing.
The gelic hardware automatically clears its RX_DMA_EN flag when
it detects an RX_DMA_CHAIN_END event. In its processing of
RX_DMA_CHAIN_END the gelic driver is required to set RX_DMA_EN
(with a call to gelic_card_enable_rxdmac()) to restart RX DMA
transfers. The existing gelic driver code does not set
RX_DMA_EN directly in its processing of the RX_DMA_CHAIN_END
event, but instead uses a flag variable card->rx_dma_restart_required
to defer the setting of RX_DMA_EN until the next pass through the
interrupt handler.
It seems this delayed setting of RX_DMA_EN causes the hang: the next
RX interrupt after the RX_DMA_CHAIN_END event, in which RX_DMA_EN was
scheduled to be set, will never occur because RX_DMA_EN was not set.
In the case where network TX is occurring, RX_DMA_EN is set in the
next TX interrupt and RX processing continues.
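A minimal sketch of the changed behaviour: re-enable RX DMA directly while
handling the chain-end event instead of deferring it. The handler shape is
illustrative; gelic_card_enable_rxdmac() is the routine named above:

static void gelic_rx_chain_end_sketch(struct gelic_card *card)
{
	/* The hardware has already cleared RX_DMA_EN for this event; if we
	 * only set a "restart required" flag, no further RX interrupt may
	 * ever arrive to act on it, so restart RX DMA right away. */
	gelic_card_enable_rxdmac(card);
}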
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Overview:
Support mapping of priorities to traffic classes and
traffic classes to transmission queue ranges in the net device.
The queue ranges are (count, offset) pairs relating to the txq
array.
This can be done via DCBX negotiation or by the kernel.
As a result Enhanced Transmission Selection (ETS) and Priority Flow
Control (PFC) are supported between L2 network traffic classes.
Mapping:
This patch uses the netdev_set_num_tc, netdev_set_prio_tc_map and
netdev_set_tc_queue functions to map priorities to traffic classes
and traffic classes to transmission queue ranges.
This mapping is performed by the bnx2x_setup_tc function, which is
hooked up to the ndo_setup_tc callback.
This function is always called at nic load where by default it
maps all priorities to tc 0, and it may also be called by the
kernel or by the bnx2x upon DCBX negotiation to modify the mapping.
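A hedged sketch of that mapping step using the netdev helpers named above;
the default priority-to-tc assignment and the queue-range arithmetic are
illustrative assumptions, not the exact bnx2x code:

static int setup_tc_sketch(struct net_device *dev, u8 num_tc)
{
	u16 count;
	int prio, cos;

	if (!num_tc) {
		netdev_reset_tc(dev);
		return 0;
	}
	if (netdev_set_num_tc(dev, num_tc))
		return -EINVAL;

	/* default at load: every priority maps to tc 0; a later DCBX
	 * negotiation re-runs this with the negotiated mapping */
	for (prio = 0; prio < 8; prio++)
		netdev_set_prio_tc_map(dev, prio, 0);

	/* each traffic class owns a contiguous (count, offset) txq range */
	count = dev->real_num_tx_queues / num_tc;
	for (cos = 0; cos < num_tc; cos++)
		netdev_set_tc_queue(dev, cos, count, cos * count);

	return 0;
}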
rtnl lock:
When ndo_setup_tc is called at nic load or by the kernel, the rtnl
lock is already taken. However, when DCBX negotiation takes place
the lock is not taken. The work is therefore scheduled to be
handled by the sp_rtnl task.
Fastpath:
The fastpath structure of the bnx2x, which was previously used
to hold the information of one tx queue and one rx queue, was
redesigned to represent multiple tx queues, one for each traffic
class.
The transmission queue supplied in the skb by the kernel can no
longer be interpreted as a straightforward index into the fastpath
structure array, but must rather be decoded into the appropriate
fastpath index and the tc within that fastpath.
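A hedged sketch of that decoding, assuming the tx queues are laid out as one
contiguous block per traffic class (the actual bnx2x layout and names may
differ):

static inline u16 skb_txq_to_fp_index(u16 txq_index, u16 queues_per_cos)
{
	return txq_index % queues_per_cos;	/* which fastpath ring   */
}

static inline u16 skb_txq_to_cos(u16 txq_index, u16 queues_per_cos)
{
	return txq_index / queues_per_cos;	/* which traffic class   */
}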
Slowpath:
The bnx2x's queue object was redesigned to accommodate multiple
transmission queues. The queue object's state machine was enhanced
to allow opening multiple transmission-only connections on top of
the regular tx-rx connection.
Firmware:
This feature relies on the tx-only queue feature introduced in the
bnx2x 7.0.23 firmware, and the FW must likewise have bnx2x multi-cos
support.
Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Renaming the "reset_task" to a more general purpose name,
"sp_rtnl_task", as it is already used for another purpose
other than reset which is parity recovery, and since I
plan to add a third operation for this task, updating the
priority to traffic class and traffic class to transmission
queues mappings after dcbx negotiation takes place.
Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>