This patch fixes the return value of switchdev_port_fdb_dump() when
CONFIG_NET_SWITCHDEV is not set. This avoids the "warning: return makes
integer from pointer without a cast [-Wint-conversion]" emitted by several
compiler versions when building with CONFIG_NET_SWITCHDEV disabled.
This warning is due to commit d297653dd6
("rtnetlink: fdb dump: optimize by saving last interface markers").
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Reported-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov says:
====================
perf, bpf: add support for bpf in sw/hw perf_events
This patch set is a follow-up to the discussion:
https://lkml.kernel.org/r/20160804142853.GO6862%20()%20twins%20!%20programming%20!%20kicks-ass%20!%20net
It turned out to be simpler than what we discussed.
Patches 1-3 are bpf-side prep for the main patch 4,
which adds a bpf program as an overflow_handler to sw and hw perf_events.
Patches 5 and 6 are examples from myself and Brendan.
Peter,
to implement your suggestion to add ifdef CONFIG_BPF_SYSCALL
inside struct perf_event, I had to shuffle ifdefs in events/core.c
Please double check whether that is what you wanted to see.
v2->v3: fixed a few more minor issues
v1->v2: fixed issues spotted by Peter and Daniel.
====================
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sample instruction pointer and frequency count in a BPF map.
Signed-off-by: Brendan Gregg <bgregg@netflix.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The bpf program is called 50 times a second and does hashmap[kern&user_stackid]++.
Its primary purpose is to check that key bpf helpers like map lookup, update,
get_stackid, trace_printk and ctx access are all working.
It checks:
- PERF_COUNT_HW_CPU_CYCLES on all cpus
- PERF_COUNT_HW_CPU_CYCLES for current process and inherited perf_events to children
- PERF_COUNT_SW_CPU_CLOCK on all cpus
- PERF_COUNT_SW_CPU_CLOCK for current process
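A rough sketch of such a program, assuming the usual samples/bpf scaffolding
(bpf_helpers.h, SEC() annotations, bpf_map_def); the map and function names
below are illustrative, not the actual sample's:

#include <uapi/linux/bpf.h>
#include <uapi/linux/perf_event.h>
#include <uapi/linux/bpf_perf_event.h>
#include "bpf_helpers.h"

struct key_t {
        __u32 kernstack;
        __u32 userstack;
};

struct bpf_map_def SEC("maps") counts = {
        .type = BPF_MAP_TYPE_HASH,
        .key_size = sizeof(struct key_t),
        .value_size = sizeof(__u64),
        .max_entries = 10000,
};

struct bpf_map_def SEC("maps") stackmap = {
        .type = BPF_MAP_TYPE_STACK_TRACE,
        .key_size = sizeof(__u32),
        .value_size = PERF_MAX_STACK_DEPTH * sizeof(__u64),
        .max_entries = 10000,
};

SEC("perf_event")
int bpf_prog1(struct bpf_perf_event_data *ctx)
{
        struct key_t key = {};
        __u64 one = 1, *val;

        /* one stack id for the kernel stack, one for the user stack */
        key.kernstack = bpf_get_stackid(ctx, &stackmap, 0);
        key.userstack = bpf_get_stackid(ctx, &stackmap, BPF_F_USER_STACK);

        val = bpf_map_lookup_elem(&counts, &key);
        if (val)
                (*val)++;
        else
                bpf_map_update_elem(&counts, &key, &one, BPF_NOEXIST);
        return 0;
}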
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow attaching BPF_PROG_TYPE_PERF_EVENT programs to sw and hw perf events
via overflow_handler mechanism.
When a program is attached, the overflow_handlers become stacked.
The program acts as a filter.
Returning zero from the program means that the normal perf_event_output handler
will not be called and the sampling event won't be stored in the ring buffer.
The overflow_handler_context == NULL test is an additional safety check
to make sure programs are not attached to hw breakpoints or the watchdog,
in case other checks (that prevent that now anyway) get accidentally
relaxed in the future.
The program refcnt is incremented in case perf_events are inherited
when the target task is forked.
Similar to kprobe and tracepoint programs, there is no ioctl to
detach the program or swap an already attached program. User space is
expected to close(perf_event_fd) like it does right now for kprobe+bpf.
That restriction simplifies the code quite a bit.
The invocation of overflow_handler in __perf_event_overflow() is now
done via READ_ONCE, since that pointer can be replaced when the program
is attached while perf_event itself could have been active already.
There is no need to do similar treatment for event->prog, since it's
assigned only once before it's accessed.
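Schematically, the stacked handler behaves like a filter in front of the
original handler, roughly along these lines (a simplified sketch paraphrasing
the description above, not the patch itself; field names are assumptions):

static void bpf_overflow_handler(struct perf_event *event,
                                 struct perf_sample_data *data,
                                 struct pt_regs *regs)
{
        struct bpf_perf_event_data_kern ctx = {
                .data = data,
                .regs = regs,
        };
        int ret;

        rcu_read_lock();
        ret = BPF_PROG_RUN(event->prog, &ctx);  /* 0 means "filter out" */
        rcu_read_unlock();
        if (!ret)
                return;

        /* fall through to the original (stacked) handler */
        event->orig_overflow_handler(event, data, regs);
}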
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make sure that BPF_PROG_TYPE_PERF_EVENT programs only use
preallocated hash maps, since doing memory allocation
in the overflow_handler can crash depending on where the NMI was triggered.
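In user space this simply means creating the hash map without
BPF_F_NO_PREALLOC, e.g. (illustrative only, assuming the samples/bpf
bpf_create_map() wrapper):

        /* preallocated hash map, safe to use from an NMI overflow handler */
        int map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(__u32),
                                    sizeof(__u64), 10000,
                                    0 /* i.e. not BPF_F_NO_PREALLOC */);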
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce BPF_PROG_TYPE_PERF_EVENT programs that can be attached to
HW and SW perf events (PERF_TYPE_HARDWARE and PERF_TYPE_SOFTWARE
correspondingly in uapi/linux/perf_event.h)
The program visible context meta structure is
struct bpf_perf_event_data {
        struct pt_regs regs;
        __u64 sample_period;
};
which is accessible directly from the program:
int bpf_prog(struct bpf_perf_event_data *ctx)
{
        ... ctx->sample_period ...
        ... ctx->regs.ip ...
}
The bpf verifier rewrites the accesses into kernel internal
struct bpf_perf_event_data_kern which allows changing
struct perf_sample_data without affecting bpf programs.
New fields can be added to the end of struct bpf_perf_event_data
in the future.
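For reference, the kernel-internal mirror is essentially a pair of pointers,
roughly (sketch; the exact layout may differ):

struct bpf_perf_event_data_kern {
        struct pt_regs *regs;
        struct perf_sample_data *data;
};

so the verifier can turn ctx->regs.X and ctx->sample_period accesses into
loads through these pointers without exposing struct perf_sample_data.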
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The verifier supported only 4-byte metafields in
struct __sk_buff and struct xdp_md. The metafields in the upcoming
struct bpf_perf_event are 8-byte to match the register width in struct pt_regs.
Teach the verifier to recognize 8-byte metafield access.
The patch doesn't affect safety of socket and xdp programs.
They check for 4-byte-only ctx access before these conditions are hit.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
We get a few warnings when building the kernel with W=1:
drivers/isdn/hardware/mISDN/hfcmulti.c:568:1: warning: no previous declaration for 'enablepcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:574:1: warning: no previous declaration for 'disablepcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:580:1: warning: no previous declaration for 'readpcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:608:1: warning: no previous declaration for 'writepcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:638:1: warning: no previous declaration for 'cpld_set_reg' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:645:1: warning: no previous declaration for 'cpld_write_reg' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:657:1: warning: no previous declaration for 'cpld_read_reg' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:674:1: warning: no previous declaration for 'vpm_write_address' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:681:1: warning: no previous declaration for 'vpm_read_address' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:695:1: warning: no previous declaration for 'vpm_in' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:716:1: warning: no previous declaration for 'vpm_out' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:1028:1: warning: no previous declaration for 'plxsd_checksync' [-Wmissing-declarations]
....
In fact, these functions are only used in the file in which they are
declared and don't need a declaration, but can be made static,
so this patch marks these functions with 'static'.
Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the Qualcomm Technologies, Inc. EMAC gigabit Ethernet
controller.
This driver supports the following features:
1) Checksum offload.
2) Interrupt coalescing support.
3) SGMII phy.
4) phylib interface for external phy
Based on original work by
Niranjana Vishwanathapura <nvishwan@codeaurora.org>
Gilad Avidov <gavidov@codeaurora.org>
Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Access the priv member of the dsa_switch structure directly, instead of
having an unnecessary helper.
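With the helper gone, drivers read the pointer directly, e.g. (illustrative;
the helper and driver struct names here are assumptions, not quoted from the
patch):

        /* before */
        struct mv88e6xxx_chip *chip = ds_to_priv(ds);
        /* after */
        struct mv88e6xxx_chip *chip = ds->priv;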
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov says:
====================
net: bridge: add per-port unknown multicast flood control
The first patch prepares the forwarding path by having the exact packet
type passed down so we can later filter based on it and the per-port
unknown mcast flood flag introduced in the second patch. It is similar to
how the per-port unknown unicast flood flag works.
Nice side-effects of patch 01 are the slight reduction of tests in the
fast-path and a few minor checkpatch fixes.
v3: don't change br_auto_mask as that will change user-visible behaviour
v2: make pkt_type an enum as per Stephen's comment
====================
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a per-port flag to control the unknown multicast flood, similar to the
unknown unicast flood flag and break a few long lines in the netlink flag
exports.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove the unicast flag and introduce an exact pkt_type. That would help us
for the upcoming per-port multicast flood flag and also slightly reduce the
tests in the input fast path.
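The exact packet type is a small enum along these lines (sketch; the names
may differ slightly from the patch):

enum br_pkt_type {
        BR_PKT_UNICAST,
        BR_PKT_MULTICAST,
        BR_PKT_BROADCAST,
};

which the forwarding/flooding paths can test against the per-port flood flags
instead of passing a bool unicast argument around.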
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
fdb dumps spanning multiple skb's currently restart from the first
interface again for every skb. This results in unnecessary
iterations on the already visited interfaces and their fdb
entries. In large scale setups, we have seen this to slow
down fdb dumps considerably. On a system with 30k macs we
see fdb dumps spanning across more than 300 skbs.
To fix the problem, this patch replaces the existing single fdb
marker with three markers: netdev hash entries, netdevs and fdb
index to continue where we left off instead of restarting from the
first netdev. This is consistent with link dumps.
In the process of fixing the performance issue, this patch also
re-implements the fix done by
commit 472681d57a ("net: ndo_fdb_dump should report -EMSGSIZE to rtnl_fdb_dump")
(with an internal fix from Wilson Kok) in the following ways:
- change ndo_fdb_dump handlers to return error code instead
of the last fdb index
- use cb->args strictly for dump frag markers and not error codes.
This is consistent with other dump functions.
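Schematically, a dump handler now resumes from saved markers and propagates an
error code instead of an index (illustrative pattern only, not the actual
rtnetlink code):

static int example_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
        int err = 0;
        int idx = cb->args[1];          /* netdev marker    */
        int fidx = cb->args[2];         /* fdb entry marker */

        /* walk devices/entries starting at idx/fidx, filling skb;
         * set err = -EMSGSIZE and stop when the skb is full */

        cb->args[1] = idx;
        cb->args[2] = fidx;
        return err;
}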
Below results were taken on a system with 1000 netdevs
and 35085 fdb entries:
before patch:
$time bridge fdb show | wc -l
15065
real 1m11.791s
user 0m0.070s
sys 1m8.395s
(existing code does not return all macs)
after patch:
$time bridge fdb show | wc -l
35085
real 0m2.017s
user 0m0.113s
sys 0m1.942s
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: Wilson Kok <wkok@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add const to the parameter of flow_keys_have_l4 for readability.
Signed-off-by: Gao Feng <fgao@ikuai8.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't expose skbs to in-kernel users, such as the AFS filesystem, but
instead provide a notification hook that indicates that a call needs
attention and another that indicates that there's a new call to be
collected.
This makes the following possibilities more achievable:
(1) Call refcounting can be made simpler if skbs don't hold refs to calls.
(2) skbs referring to non-data events will be able to be freed much sooner
rather than being queued for AFS to pick up as rxrpc_kernel_recv_data
will be able to consult the call state.
(3) We can shortcut the receive phase when a call is remotely aborted
because we don't have to go through all the packets to get to the one
cancelling the operation.
(4) It makes it easier to do encryption/decryption directly between AFS's
buffers and sk_buffs.
(5) Encryption/decryption can more easily be done in the AFS's thread
contexts - usually that of the userspace process that issued a syscall
- rather than in one of rxrpc's background threads on a workqueue.
(6) AFS will be able to wait synchronously on a call inside AF_RXRPC.
To make this work, the following interface function has been added:
int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call,
                           void *buffer, size_t bufsize, size_t *_offset,
                           bool want_more, u32 *_abort_code);
This is the recvmsg equivalent. It allows the caller to find out about the
state of a specific call and to transfer received data into a buffer
piecemeal.
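A caller would use it piecemeal, roughly like this (hypothetical usage sketch,
not taken from the AFS code):

        __be32 len;
        size_t offset = 0;
        u32 abort_code;
        int ret;

        /* pull the next sizeof(len) bytes of the reply, indicating that
         * more data is expected after this chunk */
        ret = rxrpc_kernel_recv_data(sock, call, &len, sizeof(len), &offset,
                                     true, &abort_code);
        if (ret < 0) {
                /* e.g. -EAGAIN: nothing available yet; other errors mean the
                 * call terminated and abort_code may hold the remote abort */
        }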
afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
logic between them. They don't wait synchronously yet because the socket
lock needs to be dealt with.
Five interface functions have been removed:
rxrpc_kernel_is_data_last()
rxrpc_kernel_get_abort_code()
rxrpc_kernel_get_error_number()
rxrpc_kernel_free_skb()
rxrpc_kernel_data_consumed()
As a temporary hack, sk_buffs going to an in-kernel call are queued on the
rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
in-kernel user. To process the queue internally, a temporary function,
temp_deliver_data() has been added. This will be replaced with common code
between the rxrpc_recvmsg() path and the kernel_rxrpc_recv_data() path in a
future patch.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The workqueue "pegasus_workqueue" queues a single work item per pegasus
instance and hence it doesn't require execution ordering. Therefore,
alloc_workqueue has been used to replace the deprecated
create_singlethread_workqueue instance.
The WQ_MEM_RECLAIM flag has been set to ensure forward progress under
memory pressure since it's a network driver.
Since there are a fixed number of work items, an explicit concurrency
limit is unnecessary here.
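The conversion itself is the usual one-liner (illustrative; the queue name
used by the driver may differ):

        /* before (deprecated) */
        wq = create_singlethread_workqueue("pegasus");
        /* after: no ordering requirement, forward progress guaranteed */
        wq = alloc_workqueue("pegasus", WQ_MEM_RECLAIM, 0);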
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Petko Manolov <petkan@mip-labs.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
alloc_ordered_workqueue() with WQ_MEM_RECLAIM set replaces the
deprecated create_singlethread_workqueue(). This is the identity
conversion.
The workqueue "wq" queues multiple work items, viz.
&bond->mcast_work, &nnw->work, &bond->mii_work, &bond->arp_work,
&bond->alb_work, &bond->mii_work, &bond->ad_work, &bond->slave_arr_work,
which require strict execution ordering. Hence, an ordered dedicated
workqueue has been used.
Since it is a network driver, WQ_MEM_RECLAIM has been set to
ensure forward progress under memory pressure.
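The ordered variant of the conversion looks like (illustrative; the field
names here are assumptions):

        bond->wq = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, bond->dev->name);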
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update the sky2 driver to pass the number of packets done to NAPI.
The driver was never updated when napi_complete_done was added.
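The poll handler pattern becomes (sketch; sky2's actual function and variable
names are not reproduced here):

static int example_poll(struct napi_struct *napi, int budget)
{
        int work_done = 0;

        /* process up to 'budget' packets, counting them in work_done */

        if (work_done < budget)
                /* tell NAPI/GRO how much work was actually done */
                napi_complete_done(napi, work_done);

        return work_done;
}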
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexandre TORGUE says:
====================
Add Ethernet support on STM32F429
The STM32F429 chip embeds a Synopsys 3.50a MAC IP.
This series enhances the current stmmac driver to control it (code already
available) and adds basic glue for the STM32F429 chip.
Changes since v5:
-Fix typo in bindings documentation patch.
-Change clock names in stm32-dwmac glue driver / Documentation.
-After rebase, the stm32 ethernet node is now available. It has to be updated
according to the new clock names.
Changes since v4:
-Fix dirty copy/past in bindings documentation patch.
Changes since v3:
-Fix "tx-clk" and "rx-clk" as required clocks. Driver and bindings are
modified.
Changes since v2:
-Fix alphabetic order in Kconfig and Makefile.
-Improve code according to Joachim review.
-Binding: remove useless entry.
Changes since v1:
-Fix Kbuild issue in Kconfig.
-Remove init/exit callbacks. Suspend/resume and driver removal are no longer
handled in stmmac_pltfr but directly in the dwmac-stm32 glue driver.
-Take into account Joachim's review.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the Synopsys 3.50a MAC IP in the stmmac driver.
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Tested-by: Maxime Coquelin <maxime.coquelin@st.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
stm324xx family chips support the Synopsys MAC 3.510 IP.
This patch adds settings for the glue logic:
-clocks
-mode selection (MII or RMII).
Reviewed-by: Joachim Eastwood <manabian@gmail.com>
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Tested-by: Maxime Coquelin <maxime.coquelin@st.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The return at the end of the function is useless.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
A bugfix for backward compatibility handling introduced undefined
behavior for the case in which of_parse_phandle() does not return
a valid entry, as "gcc -Wmaybe-uninitialized" reports:
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c: In function 'xgene_enet_phy_connect':
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c:776:6: error: 'phy_dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c: In function 'xgene_enet_mdio_config':
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c:776:6: error: 'phy_dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
We can work around this by removing the check for zero "np", as
of_phy_connect() will correctly handle a NULL argument so we fall
back into the normal error handling case.
Note that I had previously fixed another bug that resulted in the
exact same warning, but this is a different problem that was
introduced after my original fix.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 03377e381b ("drivers: net: xgene: Fix backward compatibility")
Signed-off-by: David S. Miller <davem@davemloft.net>
Check the coding style with checkpatch.pl and fix the warnings and errors.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Robert Foss says:
====================
net/usb: asix driver improvements
This is a resubmission of v3, since the previous
submission was not sent to the netdev mailing list.
This series improves power management of the asix driver.
- Suspend/resume support is improved to save needed registers.
- Device disconnection is improved.
- Fixes AX88772x resume failures
- Implements IEEE 802.3 spec section "22.2.4.1.1 Reset" correctly
- Fixes AX_CMD_WRITE_MEDIUM_MODE being set incorrectly
Changes since v1:
- Added proper metadata tags to series.
- Added two more patches to series.
Changes since v2:
- Added coverletter
- Tested patches on AX88772A/AX88772B/AX88178/AX88179 hardware
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Grant Grundler <grundler@chromium.org>
mii_nway_restart() causes PHY link change activity, and
ax88772_link_reset() will be called. link_reset will set the
AX_CMD_WRITE_MEDIUM_MODE register correctly.
asix_write_medium_mode() in reset() fills the register with a default value,
which may differ from the negotiation result, so do this first.
Ignore the ret value since it's ignored in the XXX_link_reset() functions.
Signed-off-by: Grant Grundler <grundler@google.com>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Grant Grundler <grundler@chromium.org>
https://lkml.org/lkml/2014/11/11/947
Ben Hutchings is correct. IEEE 802.3 spec section "22.2.4.1.1 Reset" requires
up to a 500ms delay. Mitigate the "max" delay by polling the phy until the
BMCR_RESET bit is clear.
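The polling loop is the standard BMCR poll (illustrative sketch, assuming an
asix_mdio_read()-style helper; names are not quoted from the patch):

        int i, bmcr;

        /* poll instead of always sleeping for the worst-case 500ms */
        for (i = 0; i < 500; i++) {
                bmcr = asix_mdio_read(dev->net, dev->mii.phy_id, MII_BMCR);
                if (!(bmcr & BMCR_RESET))
                        break;
                msleep(1);
        }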
Signed-off-by: Grant Grundler <grundler@chromium.org>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Allan Chou <allan@asix.com.tw>
The change fixes AX88772x resume failures by:
- restoring incorrect AX88772A PHY registers when resetting
- stopping MAC operation when suspending
- restarting MII when restoring the PHY
Signed-off-by: Allan Chou <allan@asix.com.tw>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Vincent Palatin <vpalatin@chromium.org>
Check the answers from the USB stack and avoid re-sending the request
multiple times if the device has disappeared.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Freddy Xin <freddy@asix.com.tw>
In order to R/W registers in suspend/resume functions, in_pm flags are
added to some functions to determine whether the nopm version of usb
functions is called.
Save BMCR and ANAR PHY registers in suspend function and restore them
in resume function.
Reset HW in resume function to ensure the PHY works correctly.
Signed-off-by: Freddy Xin <freddy@asix.com.tw>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };
@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
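In a driver, the resulting change is simply (illustrative):

static const struct ethtool_ops example_ethtool_ops = {
        .get_link = ethtool_op_get_link,
};
...
dev->ethtool_ops = &example_ethtool_ops;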
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };
@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };
@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault says:
====================
ppp: fix deadlock upon recursive xmit
This series fixes the issue reported by Feng where packets looping
through a ppp device make the module deadlock:
https://marc.info/?l=linux-netdev&m=147134567319038&w=2
The problem can occur on virtual interfaces (e.g. PPP over L2TP, or
PPPoE on vxlan devices), when a PPP packet is routed back to the PPP
interface.
PPP's xmit path isn't reentrant, so patch #1 uses a per-cpu variable
to detect and break recursion. Patch #2 sets the NETIF_F_LLTX flag to
avoid lock inversion issues between ppp and txqueue locks.
There are multiple entry points to the PPP xmit path. This series has
been tested with lockdep and should address recursion issues no matter
how the packet entered the path.
A similar issue in L2TP is not covered by this series:
l2tp_xmit_skb() also isn't reentrant, and it can be called as part of
PPP's xmit path (pppol2tp_xmit()), or directly from the L2TP socket
(l2tp_ppp_sendmsg()). If a packet is sent by l2tp_ppp_sendmsg() and
routed to the parent PPP interface, then it's going to hit
l2tp_xmit_skb() again.
Breaking recursion as done in ppp_generic is not enough, because we'd
still have a lock inversion issue (locking in l2tp_xmit_skb() can
happen before or after locking in ppp_generic). The best approach would
be to use the ip_tunnel functions and remove the socket locking in
l2tp_xmit_skb(). But that'd be something for net-next.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
In case of misconfiguration, a virtual PPP channel might send packets
back to their parent PPP interface. This typically happens in
misconfigured L2TP setups, where PPP's peer IP address is set with the
IP of the L2TP peer.
When that happens the system hangs due to PPP trying to recursively
lock its xmit path.
[ 243.332155] BUG: spinlock recursion on CPU#1, accel-pppd/926
[ 243.333272] lock: 0xffff880033d90f18, .magic: dead4ead, .owner: accel-pppd/926, .owner_cpu: 1
[ 243.334859] CPU: 1 PID: 926 Comm: accel-pppd Not tainted 4.8.0-rc2 #1
[ 243.336010] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 243.336018] ffff7fffffffffff ffff8800319a77a0 ffffffff8128de85 ffff880033d90f18
[ 243.336018] ffff880033ad8000 ffff8800319a77d8 ffffffff810ad7c0 ffffffff0000039e
[ 243.336018] ffff880033d90f18 ffff880033d90f60 ffff880033d90f18 ffff880033d90f28
[ 243.336018] Call Trace:
[ 243.336018] [<ffffffff8128de85>] dump_stack+0x4f/0x65
[ 243.336018] [<ffffffff810ad7c0>] spin_dump+0xe1/0xeb
[ 243.336018] [<ffffffff810ad7f0>] spin_bug+0x26/0x28
[ 243.336018] [<ffffffff810ad8b9>] do_raw_spin_lock+0x5c/0x160
[ 243.336018] [<ffffffff815522aa>] _raw_spin_lock_bh+0x35/0x3c
[ 243.336018] [<ffffffffa01a88e2>] ? ppp_push+0xa7/0x82d [ppp_generic]
[ 243.336018] [<ffffffffa01a88e2>] ppp_push+0xa7/0x82d [ppp_generic]
[ 243.336018] [<ffffffff810adada>] ? do_raw_spin_unlock+0xc2/0xcc
[ 243.336018] [<ffffffff81084962>] ? preempt_count_sub+0x13/0xc7
[ 243.336018] [<ffffffff81552438>] ? _raw_spin_unlock_irqrestore+0x34/0x49
[ 243.336018] [<ffffffffa01ac657>] ppp_xmit_process+0x48/0x877 [ppp_generic]
[ 243.336018] [<ffffffff81084962>] ? preempt_count_sub+0x13/0xc7
[ 243.336018] [<ffffffff81408cd3>] ? skb_queue_tail+0x71/0x7c
[ 243.336018] [<ffffffffa01ad1c5>] ppp_start_xmit+0x21b/0x22a [ppp_generic]
[ 243.336018] [<ffffffff81426af1>] dev_hard_start_xmit+0x15e/0x32c
[ 243.336018] [<ffffffff81454ed7>] sch_direct_xmit+0xd6/0x221
[ 243.336018] [<ffffffff814273a8>] __dev_queue_xmit+0x52a/0x820
[ 243.336018] [<ffffffff814276a9>] dev_queue_xmit+0xb/0xd
[ 243.336018] [<ffffffff81430a3c>] neigh_direct_output+0xc/0xe
[ 243.336018] [<ffffffff8146b5d7>] ip_finish_output2+0x4d2/0x548
[ 243.336018] [<ffffffff8146a8e6>] ? dst_mtu+0x29/0x2e
[ 243.336018] [<ffffffff8146d49c>] ip_finish_output+0x152/0x15e
[ 243.336018] [<ffffffff8146df84>] ? ip_output+0x74/0x96
[ 243.336018] [<ffffffff8146df9c>] ip_output+0x8c/0x96
[ 243.336018] [<ffffffff8146d55e>] ip_local_out+0x41/0x4a
[ 243.336018] [<ffffffff8146dd15>] ip_queue_xmit+0x531/0x5c5
[ 243.336018] [<ffffffff814a82cd>] ? udp_set_csum+0x207/0x21e
[ 243.336018] [<ffffffffa01f2f04>] l2tp_xmit_skb+0x582/0x5d7 [l2tp_core]
[ 243.336018] [<ffffffffa01ea458>] pppol2tp_xmit+0x1eb/0x257 [l2tp_ppp]
[ 243.336018] [<ffffffffa01acf17>] ppp_channel_push+0x91/0x102 [ppp_generic]
[ 243.336018] [<ffffffffa01ad2d8>] ppp_write+0x104/0x11c [ppp_generic]
[ 243.336018] [<ffffffff811a3c1e>] __vfs_write+0x56/0x120
[ 243.336018] [<ffffffff81239801>] ? fsnotify_perm+0x27/0x95
[ 243.336018] [<ffffffff8123ab01>] ? security_file_permission+0x4d/0x54
[ 243.336018] [<ffffffff811a4ca4>] vfs_write+0xbd/0x11b
[ 243.336018] [<ffffffff811a5a0a>] SyS_write+0x5e/0x96
[ 243.336018] [<ffffffff81552a1b>] entry_SYSCALL_64_fastpath+0x13/0x94
The main entry points for sending packets over a PPP unit are the
.write() and .ndo_start_xmit() callbacks (simplified view):
.write(unit fd) or .ndo_start_xmit()
 \
  CALL ppp_xmit_process()
   \
    LOCK unit's xmit path (ppp->wlock)
     |
     CALL ppp_push()
      \
       LOCK channel's xmit path (chan->downl)
        |
        CALL lower layer's .start_xmit() callback
         \
          ... might recursively call .ndo_start_xmit() ...
         /
        RETURN from .start_xmit()
        |
       UNLOCK channel's xmit path
      /
     RETURN from ppp_push()
     |
    UNLOCK unit's xmit path
   /
  RETURN from ppp_xmit_process()
Packets can also be directly sent on channels (e.g. LCP packets):
.write(channel fd) or ppp_output_wakeup()
 \
  CALL ppp_channel_push()
   \
    LOCK channel's xmit path (chan->downl)
     |
     CALL lower layer's .start_xmit() callback
      \
       ... might call .ndo_start_xmit() ...
      /
     RETURN from .start_xmit()
     |
    UNLOCK channel's xmit path
   /
  RETURN from ppp_channel_push()
Key points about the lower layer's .start_xmit() callback:
* It can be called directly by a channel fd .write() or by
ppp_output_wakeup() or indirectly by a unit fd .write() or by
.ndo_start_xmit().
* In any case, it's always called with chan->downl held.
* It might route the packet back to its parent unit using
.ndo_start_xmit() as entry point.
This patch detects and breaks recursion in ppp_xmit_process(). This
function is a good candidate for the task because it's called early
enough after .ndo_start_xmit(), it's always part of the recursion
loop and it's on the path of whatever entry point is used to send
a packet on a PPP unit.
Recursion detection is done using the per-cpu ppp_xmit_recursion
variable.
Since ppp_channel_push() too locks the channel's xmit path and calls
the lower layer's .start_xmit() callback, we need to also increment
ppp_xmit_recursion there. However there's no need to check for
recursion, as it's out of the recursion loop.
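The guard follows the usual per-cpu pattern, roughly (simplified sketch of the
idea; function and variable names are paraphrased, not the exact patch):

static DEFINE_PER_CPU(int, ppp_xmit_recursion);

static void ppp_xmit_process(struct ppp *ppp)
{
        local_bh_disable();
        if (unlikely(__this_cpu_read(ppp_xmit_recursion)))
                goto err;       /* looped back into our own xmit path */

        __this_cpu_inc(ppp_xmit_recursion);
        __ppp_xmit_process(ppp);        /* takes the locks and transmits */
        __this_cpu_dec(ppp_xmit_recursion);

        local_bh_enable();
        return;
err:
        local_bh_enable();
        /* drop the packet(s) instead of deadlocking */
}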
Reported-by: Feng Gao <gfree.wind@gmail.com>
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Casting away const is bad practice. Since this is an ARM-specific driver
and I don't have the hardware, this is not actually tested.
Having getter functions for ops is really unnecessary code bloat, but
not going to touch that.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot says:
====================
net: dsa: add MDB support
This patchset adds the switchdev MDB object support to the DSA layer.
The MDB support for the mv88e6xxx driver is very similar to the FDB
support. The FDB operations care about unicast addresses while the MDB
operations care about multicast addresses.
Both operation sets load/purge/dump the Address Translation Table (ATU),
thus common code is used.
Changes in v2 based on Andrew's comments:
- drop "group" in multicast database related doc and comment
- change _one for more relevant _fid in mv88e6xxx_port_db_dump_one
- return -EOPNOTSUPP if switchdev obj ID is neither _FDB nor _MDB
====================
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the MDB operations. This consists of
loading/purging/dumping multicast addresses for a given port in the ATU.
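The driver-facing shape of this is a set of MDB callbacks mirroring the FDB
ones; a hypothetical, simplified sketch (the callback names and signatures
here are assumptions, not the actual DSA API):

static int example_port_mdb_add(struct dsa_switch *ds, int port,
                                const struct switchdev_obj_port_mdb *mdb)
{
        /* load mdb->addr / mdb->vid into the ATU as a multicast entry
         * whose destination port vector includes 'port' */
        return 0;
}

static int example_port_mdb_del(struct dsa_switch *ds, int port,
                                const struct switchdev_obj_port_mdb *mdb)
{
        /* purge the corresponding ATU entry for 'port' */
        return 0;
}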
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>