Error propagation from cmd callbacks currently works by having
qeth_send_control_data_cb() pick the raw HW code from the response,
while the cmd's originator later translates this into an errno.
The callback itself only returns 0 ("done") or 1 ("expect more data").
This is
1. limiting, as the only means for the callback to report an internal
error is to invent pseudo HW codes (such as IPA_RC_ENOMEM), that
the originator then needs to understand. For non-IPA callbacks, we
even provide a separate field in the IO buffer metadata (iob->rc) so
the callback can pass back a return value.
2. fragile, as the originator must take care not to translate any errno
   that is returned by qeth's own IO code paths (eg -ENOMEM). Also, any
originator that forgets to translate the HW codes potentially passes
garbage back to its caller. For instance, see
commit 2aa4867198 ("s390/qeth: translate SETVLAN/DELVLAN errors").
Introduce a new model where all HW error translation is done within the
callback, and the callback returns
> 0, if it expects more data (as before)
== 0, on success
< 0, with an errno
Start off with converting all callbacks to the new model that either
a) pass back pseudo HW codes, or b) have a dependency on a specific
HW error code. Also convert c) the one callback that uses iob->rc, and
d) qeth_setadpparms_change_macaddr_cb() so that it can pass an error
back to qeth_l2_request_initial_mac() even when the cmd itself
was successful.
The old model remains supported: if the callback returns 0, we still
propagate the response's HW error code back to the originator.
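As a minimal sketch of the new convention (a hypothetical callback;
names are illustrative, not the exact driver code):

    /* Translate the HW return code to an errno right in the callback: */
    static int qeth_foo_cb(struct qeth_card *card, struct qeth_reply *reply,
                           unsigned long data)
    {
            struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data;

            if (cmd->hdr.return_code)
                    return -EIO;    /* < 0: translated HW error */
            return 0;               /* == 0: success, no more data expected */
    }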
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When sending cmds via qeth_send_control_data(), qeth puts the request
on the IO channel and then blocks on the reply object until the response
has been received.
If the IO completes with error, there will never be a response and we
block until the reply-wait hits its timeout. For this case, connect the
request buffer to its reply object, so that we can immediately cancel
the wait.
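A rough sketch of the idea (structure and field names are illustrative,
not the exact driver code):

    if (iob->rc) {                          /* request IO failed */
            iob->reply->rc = iob->rc;       /* no response will arrive */
            complete(&iob->reply->done);    /* cancel the wait immediately */
    }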
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Current code enqueues & dequeues a reply object from the waiter list
in various places. In particular, the dequeue & enqueue in
qeth_send_control_data_cb() looks fragile - this can cause
qeth_clear_ipacmd_list() to skip the active object.
Add some helpers, and boil the logic down by giving
qeth_send_control_data() the sole responsibility to add and remove
objects.
qeth_send_control_data_cb() and qeth_clear_ipacmd_list() will now only
notify the reply object to interrupt its wait cycle. This can cause
a slight delay in the removal, but that's no concern.
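The helpers could look roughly like this (a sketch following the
description above; the in-tree version may differ):

    static void qeth_enqueue_reply(struct qeth_card *card,
                                   struct qeth_reply *reply)
    {
            spin_lock_irq(&card->lock);
            list_add_tail(&reply->list, &card->cmd_waiter_list);
            spin_unlock_irq(&card->lock);
    }

    static void qeth_dequeue_reply(struct qeth_card *card,
                                   struct qeth_reply *reply)
    {
            spin_lock_irq(&card->lock);
            list_del(&reply->list);
            spin_unlock_irq(&card->lock);
    }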
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
'len' specifies how much data we send to the HW, don't dump beyond this
boundary.
As of today this is no big concern - commands are built in full, zeroed
pages.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
csum offload and TSO have similar programming requirements. The TSO code
was reworked with commit "s390/qeth: enhance TSO control sequence",
adjust the csum control flow accordingly. Primarily this means replacing
custom helpers with more generic infrastructure.
Also, change the LP2LP check so that it warns on TX offload (not RX).
This is where reduced csum capability actually matters.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Current code attempts to enable all advertised HW csum offload features.
Future-proof this by enabling only those features that we actually use.
Also, the IPv4 header csum feature is only needed for TX on L3 devices.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The code to fill the IPA length fields is duplicated three times across
the driver:
1. qeth_send_ipa_cmd() sets IPA_CMD_LENGTH, which matches the defaults
in the IPA_PDU_HEADER template.
2. for OSN, qeth_osn_send_ipa_cmd() bypasses this logic and inserts the
length passed by the caller.
3. SNMP commands (that can outgrow IPA_CMD_LENGTH) have their own way
of setting the length fields, via qeth_send_ipa_snmp_cmd().
Consolidate this into qeth_prepare_ipa_cmd(), which all originators of
IPA cmds already call during setup of their cmd. Let qeth_send_ipa_cmd()
pull the length from the cmd instead of hard-coding IPA_CMD_LENGTH.
For now, the SNMP code still needs to fix-up its length fields manually.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
qeth_l3_query_arp_cache_info() indicates a data length that's much
larger than the actual length of its request (ie. the value passed to
qeth_get_setassparms_cmd()). The confusion presumably comes from the
fact that the cmd _response_ can be quite large - but that's no concern
for the initial request IO.
Fixing this up allows us to use the generic qeth_send_ipa_cmd()
infrastructure.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yangbo Lu says:
====================
Add ENETC PTP clock driver
The new ENETC Ethernet controller contains the same QorIQ 1588 timer
IP block as the eTSEC/DPAA Ethernet controllers. However, it uses a
different endianness (little-endian) and is driven through PCI.
To support the ENETC PTP driver, the ptp_qoriq driver needed to be
reworked: make functions global for reuse, add little-endian support,
add ENETC memory map support, and add an ENETC dependency for the
ptp_qoriq driver.
In addition, although the ENETC PTP driver is a PCI driver, the dts
node can still be used. Currently the ls1028a dtsi, which is the only
platform using ENETC so far, is not complete, so the ENETC PTP node
still depends on further upstreaming. This will be done in the near
future. The hardware timestamping support for ENETC is done but needs
to be reworked with a new method in the internal git tree, and will be
sent out soon.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
When an ethernet frame is padded to meet the minimum ethernet frame
size, the padding octets are not covered by the hardware checksum.
Fortunately the padding octets are usually zeros, which don't affect
the checksum. However, this is not guaranteed. For example, switches
might choose to make other use of these octets.
This repeatedly causes kernel hardware checksum faults.
Prior to the cited commit below, the skb checksum was forced to
CHECKSUM_NONE when padding was detected. After it, we need to keep
skb->csum updated. However, fixing up CHECKSUM_COMPLETE requires
verifying and parsing IP headers; it is not worth the effort as the
packets are so small that CHECKSUM_COMPLETE has no significant
advantage.
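A sketch of the fixup direction (illustrative, not the exact driver
code): subtract the checksum of the padding octets so that skb->csum
stays valid for CHECKSUM_COMPLETE once the frame is trimmed to its
real length:

    static void csum_fixup_padding(struct sk_buff *skb, int payload_len)
    {
            int pad_len = skb->len - payload_len;

            if (pad_len > 0)
                    skb->csum = csum_block_sub(skb->csum,
                                               csum_partial(skb->data + payload_len,
                                                            pad_len, 0),
                                               payload_len);
    }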
Future work: when reporting checksum complete is not an option for
IP non-TCP/UDP packets, we can actually fall back to reporting checksum
unnecessary, by looking at the cqe IPOK bit.
Fixes: 88078d98d1 ("net: pskb_trim_rcsum() and CHECKSUM_COMPLETE are friends")
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the enetc_ptp driver to the QorIQ PTP list in MAINTAINERS.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a PTP clock driver for ENETC.
The driver reuses the QorIQ PTP clock driver.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add QorIQ PTP support for ENETC. The ENETC PTP driver, which is a
PCI driver for the same 1588 timer IP block, will reuse the QorIQ
PTP driver.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The 1588 timer on the eTSEC Ethernet controller uses a different
register memory map from the DPAA Ethernet controller. The new ENETC
Ethernet controller uses the same register memory map as DPAA. To
support ENETC, let's use the DPAA/ENETC register memory map by
default.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Specify "little-endian" property if the 1588 timer IP block
is little-endian mode. The default endian mode is big-endian.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is a QorIQ 1588 timer IP block on the new ENETC Ethernet
controller. However, it uses little-endian mode, unlike the earlier
controllers. Add little-endian support to the driver, selected via the
"little-endian" dts node property.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move QorIQ PTP clock initialization/free into new functions
ptp_qoriq_init()/ptp_qoriq_free(). These functions can also be reused
by the ENETC PTP driver, which is a PCI driver for the same 1588 timer
IP block.
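Plausible entry points for the reuse (a sketch of the interface; the
exact signatures may differ):

    int ptp_qoriq_init(struct ptp_qoriq *ptp_qoriq, void __iomem *base,
                       const struct ptp_clock_info *caps);
    void ptp_qoriq_free(struct ptp_qoriq *ptp_qoriq);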
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make the PTP operation functions global, so that the ENETC PTP
driver, which is a PCI driver for the same 1588 timer IP block, can
reuse them.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Strings containing "ptp_qoriq" or "qoriq_ptp" which were used for
structure/function names were complained by users. Let's just use
the unique "ptp_qoriq" to make these names more consistent.
This patch is just to unify the names using "ptp_qoriq". It hasn't
changed any functions.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are several skb_* functions where the locked and unlocked
functions are confusingly documented. For several of them, the
kernel-doc for the unlocked version is placed above the locked version,
which to the casual reader makes it seem like the locked version "takes
no locks and you must therefore hold required locks before calling it."
One can see, for example, that this link claims to document
skb_queue_head(), while instead describing __skb_queue_head().
https://www.kernel.org/doc/html/latest/networking/kapi.html#c.skb_queue_head
The correct documentation for skb_queue_head() is also included further
down the page.
This diff was tested via:
$ scripts/kernel-doc -rst include/linux/skbuff.h net/core/skbuff.c
No new warnings were seen, and the output makes a little more sense.
Signed-off-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli says:
====================
Remove getting SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS
AFAICT there is no code that attempts to get the value of the attribute
SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS while it is used with
switchdev_port_attr_set().
This is effectively not doing anything, and it can slow down future
work that tries to make modifications in these areas, so remove it.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
There is no code that tries to get the attribute
SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS, remove support for doing that.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is no code that attempts to get the
SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS attribute, remove support for that.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is no code that will query the SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS
attribute, so remove support for that.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use new function phy_modify_mmd_changed(), the result speaks for itself.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Addition of tls1.3 support broke tls1.2 handshake when async crypto
accelerator is used. This is because the record type for non-data
records is not propagated to user application. Also when async
decryption happens, the decryption does not stop when two different
types of records get dequeued and submitted for decryption. To address
it, we decrypt tls1.2 non-data records in a synchronous way. We check
whether the record we just processed has the same type as the previous
one before checking for the async condition and jumping to dequeue the
next record.
Fixes: 130b392c6c ("net: tls: Add tls 1.3 support")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are several places which make the decision whether to access the
XLGMAC vs GMAC that only check for PHY_INTERFACE_MODE_10GKR and not its
XAUI variant. Switch these to use the new helper so that we have
consistency through the driver.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a mvpp2_is_xlg() helper to identify whether the interface mode
should be using the XLGMAC rather than the GMAC.
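The helper boils down to checking the two XLG-capable modes named in
this series (a sketch; the in-tree version may differ):

    static bool mvpp2_is_xlg(phy_interface_t interface)
    {
            return interface == PHY_INTERFACE_MODE_10GKR ||
                   interface == PHY_INTERFACE_MODE_XAUI;
    }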
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Provide phylink_init_eee() to allow MAC drivers to initialise PHY EEE
from within the ethtool set_eee() method.
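Intended usage from a MAC driver's set_eee() method might look like
this (a sketch; the driver names are hypothetical and the second
argument's meaning is assumed from the description):

    static int foo_ethtool_set_eee(struct net_device *dev,
                                   struct ethtool_eee *eee)
    {
            struct foo_priv *priv = netdev_priv(dev);
            int ret;

            /* initialise PHY EEE; second argument assumed to request
             * PHY clock-stop capability
             */
            ret = phylink_init_eee(priv->phylink, false);
            if (ret)
                    return ret;

            /* ... program MAC LPI settings from *eee ... */
            return 0;
    }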
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
There's little point calling mac_config() when the link is down.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit b639583f9e.
As per discussion with Jakub Kicinski and Michal Kubecek,
this will be better addressed by the soon-to-come ethtool netlink
API with an additional indication that a given configuration request
is supposed to be persisted.
Also, remove the parameter support from bnxt_en driver.
Cc: Jiri Pirko <jiri@mellanox.com>
Cc: Michael Chan <michael.chan@broadcom.com>
Cc: Michal Kubecek <mkubecek@suse.cz>
Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christoph Hellwig says:
====================
net: don't pass a NULL struct device to DMA API functions v2
We still have a few drivers which pass a NULL struct device pointer
to DMA API functions, which generally is a bad idea as the API
implementations rely on the device not only for ops selection, but
also the dma mask and various other attributes.
This series contains all easy conversions to pass a struct device,
besides that there also is some arch code that needs separate handling,
a driver that should not use the DMA API at all, and one that is
a complete basket case to be dealt with separately.
Changes since v1:
- fix an inverted ifdef in CAIF
- update the smc911x changelog
- split the series, this only contains the networking patches
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Note that smc911x apparently is a PIO chip with an external DMA
handshake, and we probably use the wrong device here. But at least
it matches the mapping side, which apparently works or at least
worked in the not too distant past.
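The shape of the change, in diff form (illustrative, not the literal
hunk):

    -       dma = dma_map_single(NULL, buf, len, DMA_TO_DEVICE);
    +       dma = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);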
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Also use GFP_KERNEL instead of GFP_ATOMIC as the gfp_t for the memory
allocation, as we aren't in interrupt context or under a lock.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Note that this driver seems to entirely lack dma_map_single error
handling, but that is left for another time.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Note this driver seems to lack dma_unmap_* calls entirely, but fixing
that is left for another time.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The DMA API generally relies on a struct device to work properly, and
only barely works without one for legacy reasons. Pass the easily
available struct device from the platform_device to remedy this.
Also use the proper Kconfig symbol to check for DMA API availability.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel says:
====================
mlxsw: Several updates
Patches #1-#3 contain misc updates for the mlxsw driver, one of which is
a fix following recent introduction of flow_rule infrastructure.
Patch #4 avoids double sourcing of lib.sh in forwarding selftests.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't source lib.sh twice, and make the script work with ifnames
passed on the command line.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver does not support VLAN push and pop, but only VLAN modify.
Fixes: 7386788175 ("drivers: net: use flow action infrastructure")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
In case the register access fails, an error is logged anyway, so we
can drop the warning.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The LAG port collecting (receive) function was mistakenly set when the
port was registered as a LAG member, while it should be set only when
the port collection state is set to true. Set LAG port to collecting
when it is set to distributing, as described in the IEEE link
aggregation standard coupled control mux machine state diagram.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ursula Braun says:
====================
net/smc: patches 2019-02-12
here are patches for SMC:
* patches 1 and 3 optimize SMC-R tx logic
* patch 2 is a cleanup without functional change
* patch 4 optimizes rx logic
* patches 5 and 6 improve robustness in link group and IB event handling
* patch 7 establishes Karsten Graul as another SMC maintainer
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add Karsten as additional maintainer for Shared Memory Communications
(SMC) Sockets.
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For robustness, guard against port numbers higher than expected, to
avoid setting bits beyond our port_event_mask. In case of a
DEVICE_FATAL event, all ports must be checked. The IB_EVENT_GID_CHANGE
event is provided in the global event handler, so handle it there. And
handle a QP_FATAL event instead of a DEVICE_FATAL event in the qp
handler.
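The port-number guard could be as simple as (a sketch; names follow
the SMC code as described):

    /* ignore unexpected ports so we never set bits beyond the mask */
    if (port_idx >= SMC_MAX_PORTS)
            return;
    set_bit(port_idx, &smcibdev->port_event_mask);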
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove the shortcut where smc_lgr_free() would skip the check for
existing connections when the link group is not in the link group
list.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>