Allocating 2 buffers per page is insanely inefficient when MTU is 1500
and PAGE_SIZE is 64K (as it usually is on POWER). Allocate as many as
we can fit, and choose the refill batch size at run-time so that we
still always use a whole page at once.
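As an illustration only (variable names here are assumptions, not the driver's exact code), the sizing described above amounts to something like:

  /* With a 64K page and ~2K RX buffers, far more than 2 buffers fit. */
  bufs_per_page   = PAGE_SIZE / rx_buf_stride;        /* e.g. 65536 / 2048 = 32 */
  /* Refill in whole-page multiples so no part of a mapped page is wasted. */
  pages_per_batch = DIV_ROUND_UP(preferred_batch, bufs_per_page);
  refill_batch    = pages_per_batch * bufs_per_page;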
[bwh: Fix loop condition to allow for compound pages; rebase]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
On POWER systems, DMA mapping/unmapping operations are very expensive.
These changes reduce these costs by trying to reuse DMA mapped pages.
After all the buffers associated with a page have been processed and
passed up, the page is placed into a ring (if there is room). For
each page that is required for a refill operation, a page in the ring
is examined to determine whether its page count has fallen to 1, i.e. the
kernel has released its references to the packets it held. If this is the
case, the page can be immediately added back into the RX descriptor
ring, without having to re-map it for DMA.
If the kernel is still holding a reference to this page, it is removed
from the ring and unmapped for DMA. Then a new page, which can
immediately be used by RX buffers in the descriptor ring, is allocated
and DMA mapped.
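Roughly, the reuse decision looks like this (a sketch; field and helper
names are assumptions rather than the driver's exact code):

  page = rx_queue->page_ring[index];
  if (page_count(page) == 1) {
          /* Kernel has dropped its reference: reuse the page and its
           * existing DMA mapping for new RX buffers. */
          return page;
  }
  /* Still referenced elsewhere: unmap it, then allocate and DMA-map a
   * fresh page for the descriptor ring. */
  dma_unmap_page(&efx->pci_dev->dev, dma_addr,
                 PAGE_SIZE << efx->rx_buffer_order, DMA_FROM_DEVICE);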
The time a page needs to spend in the recycle ring before the kernel
has released its page references is based on the number of buffers
that use this page. As large pages can hold more RX buffers, the RX
recycle ring can be shorter. This reduces memory usage on POWER
systems, while maintaining the performance gain achieved by recycling
pages, following the driver change to pack more than two RX buffers
into large pages.
When an IOMMU is not present, the recycle ring can be small to reduce
memory usage, since DMA mapping operations are inexpensive.
With a small recycle ring, attempting to refill the descriptor queue
with more buffers than the recycle ring's pages can supply could
ultimately lead to memory leaks if page entries in the recycle ring
were overwritten. To prevent this, the check to see if the recycle
ring is full is changed to check if the next entry to be written is
NULL.
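As a sketch (illustrative names again), recycling only ever writes into
an empty slot:

  index = rx_queue->page_add & rx_queue->page_ptr_mask;
  if (rx_queue->page_ring[index] == NULL) {
          rx_queue->page_ring[index] = page;
          ++rx_queue->page_add;
          return;
  }
  /* Slot occupied: treat the ring as full and release the page instead. */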
[bwh: Combine and rebase several commits so this is complete
before the following buffer-packing changes. Remove module
parameter.]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Enable RX DMA scattering iff an RX buffer large enough for the current
MTU will not fit into a single page and the NIC supports DMA
scattering for kernel-mode RX queues.
On Falcon and Siena, the RX_USR_BUF_SIZE field is used as the DMA
limit for all RX queues with scatter enabled. Set it to 1824,
matching what Onload uses now.
Maintain a statistic for frames truncated due to lack of descriptors
(rx_nodesc_trunc). This is distinct from rx_frm_trunc which may be
incremented when scattering is disabled and implies an over-length
frame.
Whenever an MTU change causes scattering to be turned on or off,
update filters that point to the PF queues, but leave others
unchanged, as VF drivers assume scattering is off.
Add n_frags parameters to various functions, and make them iterate:
- efx_rx_packet()
- efx_recycle_rx_buffers()
- efx_rx_mk_skb()
- efx_rx_deliver()
Make efx_handle_rx_event() responsible for updating
efx_rx_queue::removed_count.
Change the RX pipeline state to a starting ring index and number of
fragments, and make __efx_rx_packet() responsible for clearing it.
Based on earlier versions by David Riddoch and Jon Cooper.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Adjust rx_buf->page_offset when we eat the RX hash prefix. Remove
efx_rx_buf_offset(), which is now redundant.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Currently we prefetch from the Ethernet header, but we will also read
the hash prefix. In practice they should be in the same cache line
and this won't hurt, but it is still pointless to add on the hash
prefix size.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_rx_buf_va() returns the virtual address of the current start of
the buffer. The callers must add the hash prefix size themselves.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The pipeline mechanism will need to change a bit for scattered
packets. Add a wrapper to insulate efx_process_channel() from this.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The Linux side of EEH is triggered by MMIO reads, but this
driver's data path does not issue any MMIO reads (except in
legacy interrupt mode). Therefore add a monitor function
to poll EEH periodically.
When preparing to reset the device based on our own error
detection, also poll EEH and defer to its recovery mechanism
if appropriate.
[bwh: Use a separate condition for the initial link poll; fix some
style errors]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
On Siena, VFs share RSS configuration with the PF. We attempted to
support configurations where the PF only uses 1 RX queue and VFs use
multiple RX queues, by (1) setting up RSS for the number of RX queues
per VF and (2) disabling RSS in the PF's RX default filters.
Unfortunately commit cd2d5b529c ('sfc: Add SR-IOV back-end support
for SFC9000 family') only included (1). This is (2).
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_filter_insert_filter() uses the first table entry in the hash chain
that either has the same match values or is empty. This means that
replacement doesn't always work correctly:
1. Insert filter F1 with match values M1, hashing to H1, at first
possible entry E1.
2. Insert filter F2 with match values M2, hashing to H1, at second
possible entry E2.
3. Remove filter F1.
4. Insert filter F3 with match values M2, hashing to H1, at first
possible entry E1.
F3 should have either replaced F2 or been rejected (depending on
priority and the replace_equal parameter).
Instead, search for both a matching filter that the inserted filter
would replace, and an available insertion point, up to the applicable
maximum search depths. If we insert at a lower depth than a replaced
filter, clear the replaced filter.
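A sketch of the combined search (helper names such as efx_filter_equal()
are used here for illustration):

  for (depth = 1; depth <= max_depth; depth++) {
          i = (hash + incr * depth) & table->size_mask;
          if (!test_bit(i, table->used_bitmap)) {
                  if (ins_index < 0)
                          ins_index = i;  /* first available insertion point */
          } else if (efx_filter_equal(spec, &table->spec[i])) {
                  rep_index = i;          /* existing filter we would replace */
                  break;
          }
  }
  /* Insert at ins_index; if rep_index lies at a greater depth, clear it. */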
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_filter_search() is only called from efx_filter_insert(), and
neither function is very long. The following bug fix requires a more
sophisticated search with a third result, which is going to be easier
to implement as part of the same function.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
These functions happen to work for default MAC filters: they generate
an initial index of 1/0 for unicast/multicast respectively and an
increment of 1 for either, so a search succeeds at depth 2. But this
is a matter of luck rather than design, and it really won't work well
with the bug fix we're about to do.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The 'for_insert' parameter is redundant since there are no longer
any other operations that need to search based on a filter spec.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The 'replace' flag to efx_filter_insert_filter() controls whether the
new filter may replace *any* filter, and is checked even before
priority comparison. But lower-priority filters should never
block insertion of higher-priority filters.
Change the priority checking so that lower-priority filters are
replaced regardless of the value of the flag, and rename the
flag to 'replace_equal'.
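A sketch of the reworked check, assuming the existing filter's spec is
available as saved_spec:

  if (spec->priority < saved_spec->priority)
          return -EPERM;          /* never displace a higher-priority filter */
  if (spec->priority == saved_spec->priority && !replace_equal)
          return -EEXIST;         /* equal priority replaced only on request */
  /* A lower-priority filter is always replaced, regardless of the flag. */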
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
[bwh: Remove more dead code, and make efx_ptp_rx() pull the data it
needs into the header area.]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
There is a long-standing problem with the packet-timestamp matching in
the driver. When a PTP packet is received by the MC, the FPGA
timestamps the packet and the MC sends the timestamp and 6 bytes of
the UUID to the driver. The driver then matches the timestamp against
received packets using the same 6 bytes of UUID.
The problem comes from the choice of which 6 bytes to use. The PTP
spec is slightly contradictory and misleading in one of the two places
where the UUIDs are discussed. From section 7.2.2.2 of the spec, a
PTPD2 UUID can be either an EUI-64 or an EUI-64 constructed from an
EUI-48. The typical Ethernet-based implementation uses an EUI-64
constructed from an EUI-48. This works by taking the first 3 bytes of
the MAC address of the NIC being used for PTP (the OUI), then
inserting 0xFF, 0xFE, then taking the last 3 bytes of the MAC address
giving
MAC[0], MAC[1], MAC[2], 0xFF, 0xFE, MAC[3], MAC[4], MAC[5]
The current MC firmware and driver discard the first two bytes of this
UUID, and packets are matched against timestamps using bytes 2 to 7, so
there is a small risk that, in a deployment of Solarflare PTP NICs used
alongside other vendors' NICs, a PTP packet could be matched against
the wrong timestamp. This applies to any other organisation whose OUI
has 0x53 as its third byte. It's a long list but I notice that it
includes Cisco.
The necessary modifications to match against bytes 0-2 and 5-7 of the
UUID are quite small, but they introduce an incompatibility between
older versions of the firmware and the driver.
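For reference, the EUI-64 construction and the two matching schemes,
sketched with illustrative array names:

  /* EUI-64 derived from the port's EUI-48 MAC address: */
  uuid[0] = mac[0]; uuid[1] = mac[1]; uuid[2] = mac[2];   /* OUI */
  uuid[3] = 0xff;   uuid[4] = 0xfe;
  uuid[5] = mac[3]; uuid[6] = mac[4]; uuid[7] = mac[5];
  /* Old mode matches on uuid[2..7]; enhanced mode matches on uuid[0..2]
   * and uuid[5..7], i.e. the complete MAC address. */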
When PTP is enabled via SO_TIMESTAMPING specifying PTP V2, the driver
will try to enable PTP in the firmware using the enhanced mode
(above). If the firmware returns an error, the driver will enable PTP
in the firmware using the old mode.
[bwh: Fix some style errors; remove private ioctl bits]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Instead of having efx_ptp_rx() call netif_receive_skb() for an invalid
PTP packet, make it return false for rejected packets and have
efx_rx_deliver() pass them up.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
RX DMA buffers start at an offset of EFX_PAGE_IP_ALIGN bytes from the
start of a cache line. This offset obviously needs to be included in
the virtual address, but this was missed in commit b590ace09d
('sfc: Fix efx_rx_buf_offset() in the presence of swiotlb') since
EFX_PAGE_IP_ALIGN is equal to 0 on both x86 and powerpc.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_device_detach_sync() locks all TX queues before marking the device
detached and thus disabling further TX scheduling. But it can still
be interrupted by TX completions which then result in TX scheduling in
soft interrupt context. This will deadlock when it tries to acquire
a TX queue lock that efx_device_detach_sync() already acquired.
To avoid deadlock, we must use netif_tx_{,un}lock_bh().
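A minimal sketch of the fix, assuming the helper keeps its current
structure:

  static inline void efx_device_detach_sync(struct efx_nic *efx)
  {
          struct net_device *dev = efx->net_dev;

          /* Freeze all TX queues with BHs disabled so a TX completion
           * softirq cannot schedule TX against a lock we already hold. */
          netif_tx_lock_bh(dev);          /* was netif_tx_lock(dev) */
          netif_device_detach(dev);
          netif_tx_unlock_bh(dev);        /* was netif_tx_unlock(dev) */
  }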
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
We must only ever stop TX queues when they are full or the net device
is not 'ready' so far as the net core, and specifically the watchdog,
is concerned. Otherwise, the watchdog may fire *immediately* if no
packets have been added to the queue in the last 5 seconds.
The device is ready if all the following are true:
(a) It has a qdisc
(b) It is marked present
(c) It is running
(d) The link is reported up
(a) and (c) are normally true, and must not be changed by a driver.
(d) is under our control, but fake link changes may disturb userland.
This leaves (b). We already mark the device absent during reset
and self-test, but we need to do the same during MTU changes and ring
reallocation. We don't need to do this when the device is brought
down because then (c) is already false.
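For example, an MTU change would be bracketed roughly like this (a sketch
assuming the existing efx helpers):

  efx_device_detach_sync(efx);            /* mark the device absent first */
  efx_stop_all(efx);
  efx->net_dev->mtu = new_mtu;            /* reconfigure as required */
  efx_start_all(efx);
  netif_device_attach(efx->net_dev);      /* present again before TX resumes */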
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
We assume that the mapping between DMA and virtual addresses is done
on whole pages, so we can find the page offset of an RX buffer using
the lower bits of the DMA address. However, swiotlb maps in units of
2K, breaking this assumption.
Add an explicit page_offset field to struct efx_rx_buffer.
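A sketch of the change (field placement is illustrative):

  struct efx_rx_buffer {
          dma_addr_t dma_addr;
          struct page *page;
          u16 page_offset;        /* new: buffer's offset within the page */
          u16 len;
          u16 flags;
  };
  /* The virtual address becomes page_address(page) + page_offset rather
   * than page_address(page) + (dma_addr & (PAGE_SIZE - 1)). */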
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
We may currently allocate two RX DMA buffers to a page, and only unmap
the page when the second is completed. We do not sync the first RX
buffer to be completed; this can result in packet loss or corruption
if the last RX buffer completed in a NAPI poll is the first in a page
and is not DMA-coherent. (In the middle of a NAPI poll, we will
handle the following RX completion and unmap the page *before* looking
at the content of the first buffer.)
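Conceptually, the fix is a per-buffer sync before the buffer contents are
read, something like:

  dma_sync_single_for_cpu(&efx->pci_dev->dev, rx_buf->dma_addr,
                          rx_buf->len, DMA_FROM_DEVICE);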
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Delete successive tests to the same location. rc was previously tested and
not subsequently updated. efx_phc_adjtime() can return an error code, so
the call is updated so that its return value is tested instead.
A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@s exists@
local idexpression y;
expression x,e;
@@
*if ( \(x == NULL\|IS_ERR(x)\|y != 0\) )
{ ... when forall
return ...; }
... when != \(y = e\|y += e\|y -= e\|y |= e\|y &= e\|y++\|y--\|&y\)
when != \(XT_GETPAGE(...,y)\|WMI_CMD_BUF(...)\)
*if ( \(x == NULL\|IS_ERR(x)\|y != 0\) )
{ ... when forall
return ...; }
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The __dev* removal patches for the network drivers ended up messing up
the function prototypes for a bunch of drivers. This patch fixes all of
them back up to be properly aligned.
Bonus is that this almost removes 100 lines of code, always a nice
surprise.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
CONFIG_HOTPLUG is going away as an option. As result the __dev*
markings will be going away.
Remove use of __devinit, __devexit_p, __devinitdata, __devinitconst,
and __devexit.
Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Solarflare linux maintainers <linux-net-drivers@solarflare.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Most of the module parameters treated as boolean are currently exposed
as type int or uint. Defining them with the proper type is useful
documentation for both users and developers.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_mcdi_poll() uses get_seconds() to read the current time and to
implement a polling timeout. The use of this function was chosen
partly because it could easily be replaced in a co-sim environment
with a macro that read the simulated time.
Unfortunately the real get_seconds() returns the system time (real
time) which is subject to adjustment by e.g. ntpd. If the system time
is adjusted forward during a polled MCDI operation, the effective
timeout can be shorter than the intended 10 seconds, resulting in a
spurious failure. It is also possible for a backward adjustment to
delay detection of a real failure.
Use jiffies instead, and change MCDI_RPC_TIMEOUT to be denominated in
jiffies. Also correct rounding of the timeout: check time > finish
(or rather time_after(time, finish)) and not time >= finish.
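The resulting poll loop looks roughly like this (the completion-check
helper name is an assumption):

  unsigned long finish = jiffies + MCDI_RPC_TIMEOUT;    /* now in jiffies */

  while (!efx_mcdi_request_completed(efx)) {            /* hypothetical helper */
          if (time_after(jiffies, finish))
                  return -ETIMEDOUT;
          udelay(10);
  }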
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The assertion of netif_device_present() at the top of
efx_hard_start_xmit() may fail if we don't do this.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
We sometimes hit a "failed to flush" timeout on some TX queues, but the
flushes have completed and the flush completion events seem to go missing.
In this case, we can check the TX_DESC_PTR_TBL register and drain the
queues if the flushes have finished.
[bwh: Minor fixes to coding style]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
If the MC reboots then the stats it reports to us will have been
reset. We need to reset ours to get efx_update_diff_stat() working
properly.
(Ideally we would maintain stats across the reboot, but as this should
only happen immediately after a firmware upgrade it's not really worth
the trouble.)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Currently we initialise the newly allocated buffer to all-1s, which is
important for event queues but not for descriptor queues. And since
we also do that in efx_nic_init_eventq(), it is completely pointless
to do it here.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_writed_table() uses a step of 16 bytes but efx_readd_table() uses
a step of 4 bytes. Why are they different?
Firstly, register access is asymmetric:
- The EVQ_RPTR table and RX_INDIRECTION_TBL can (or must?) be written
as dwords even though they have a step size of 16 bytes, unlike
most other CSRs.
- In general, a read of any width is valid for registers, so long as
it does not cross register boundaries. There is also no latching
behaviour in the BIU, contrary to rumour.
We write to the EVQ_RPTR table with efx_writed_table() but never read
it back as it's write-only. We write to the RX_INDIRECTION_TBL with
efx_writed_table(), but only read it back for the register dump, where
we use efx_reado_table(), as for any other table with a step size of 16.
We read MC_TREG_SMEM with efx_readd_table() for the register dump, but
normally read and write it with efx_readd() and efx_writed() using
offsets calculated in bytes.
Since these functions are trivial and have few callers, it's clearer
to open-code them at the call sites. While we're at it, update the
comments on the BIU behaviour again.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_mcdi_rpc_start() returns a negative value on error or zero on
success. However one caller that can't properly handle failure then
does WARN_ON(rc > 0). Change it to WARN_ON(rc < 0).
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Receiving pause frames can block TX queue flushes. Earlier changes
work around this by reconfiguring the MAC during flushes for VFs, but
during flushes for the PF we would only change the fc_disable counter.
Unless the MAC is reconfigured for some other reason during the flush
(which I would not expect to happen) this had no effect at all.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
sparse has got a bit more picky since I last ran it over this. Add
forced casts for use of ~0 as a big-endian value. Undo the pointless
optimisation of parameter validation with '|'; using '||' avoids these
warnings.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Various drivers depend on INET because they used to select INET_LRO,
but they have all been converted to use GRO which has no such
dependency.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This was missed in commit a24006ed12
('ptp: Enable clock drivers along with associated net/PHY drivers')
which enabled sfc's clock driver unconditionally.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Where a PTP clock driver is associated with a net or PHY driver, it
should be enabled automatically whenever that driver is enabled.
Therefore:
- Make PTP clock drivers select rather than depending on PTP_1588_CLOCK
- Remove separate boolean options for PTP clock drivers that are built
as part of net driver modules. (This also fixes cases where the PTP
subsystem is wrongly forced to be built-in.)
- Set 'default y' for PTP clock drivers that depend on specific net
drivers but are built separately
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Using list_move() instead of list_del() + list_add().
dpatch engine is used to auto generate this patch.
(https://github.com/weiyj/dpatch)
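The transformation amounts to the following (member names are
illustrative):

  -       list_del(&entry->link);
  -       list_add(&entry->link, &other_list);
  +       list_move(&entry->link, &other_list);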
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are now standard functions for dealing with little-endian bit
arrays, so use them instead of our own implementations.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ben Hutchings says:
====================
Some bug fixes that should go into 3.7:
1. Fix oops when removing device with SR-IOV enabled. (This regression
was introduced by the last set of changes, so the fix does not need to
be applied to any earlier kernel versions.)
2. Fix firmware structure field lookup bug that resulted in missing
sensor information.
3. Fix bug that makes self-test do very little in some configurations.
4. Fix the numbering of ethtool RX flow steering filters to reflect the
real hardware priorities.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>