Commit Graph

39246 Commits

Author SHA1 Message Date
Hao Chen 1026b1534f net: hns3: remove unnecessary "static" of local variables in function
Some local variables do not need to be declared "static", so remove the
qualifier.
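
For illustration, the kind of cleanup involved (hypothetical function and
variable names, not the actual driver code): a buffer used only within a
single call has no reason to have static storage duration.

/* before: "static" needlessly keeps the buffer alive across calls */
static void hns3_dbg_print_example(struct seq_file *s)
{
        static char buf[32];

        snprintf(buf, sizeof(buf), "example line\n");
        seq_puts(s, buf);
}

/* after: a plain automatic variable is enough */
static void hns3_dbg_print_example_fixed(struct seq_file *s)
{
        char buf[32];

        snprintf(buf, sizeof(buf), "example line\n");
        seq_puts(s, buf);
}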

Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-28 11:20:05 +01:00
Guangbin Huang 7f2f8cf6ef net: hns3: don't config TM DWRR twice when set ETS
The function hclge_tm_dwrr_cfg() will be called twice in
hclge_ieee_setets() when map_changed is true; the calling flow is:
hclge_ieee_setets()
    hclge_map_update()
    |   hclge_tm_schd_setup_hw()
    |       hclge_tm_dwrr_cfg()
    hclge_notify_init_up()
    hclge_tm_dwrr_cfg()

There is actually no need to call hclge_tm_dwrr_cfg() twice, so just
return after calling hclge_notify_init_up().
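
A rough sketch of the resulting flow (simplified; function signatures are
approximations, not the exact driver code): when map_changed is true the
DWRR configuration has already been applied via hclge_tm_schd_setup_hw(),
so return right after hclge_notify_init_up() instead of falling through to
a second hclge_tm_dwrr_cfg() call.

/* driver functions named in the call flow above, signatures approximated */
int hclge_map_update(struct hclge_dev *hdev);
int hclge_notify_init_up(struct hclge_dev *hdev);
int hclge_tm_dwrr_cfg(struct hclge_dev *hdev);

static int hclge_ieee_setets_sketch(struct hclge_dev *hdev, bool map_changed)
{
        int ret;

        if (map_changed) {
                /* hclge_map_update() -> hclge_tm_schd_setup_hw() has
                 * already configured DWRR
                 */
                ret = hclge_map_update(hdev);
                if (ret)
                        return ret;

                return hclge_notify_init_up(hdev);
        }

        return hclge_tm_dwrr_cfg(hdev);
}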

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-28 11:20:05 +01:00
Guangbin Huang aec35aecc3 net: hns3: add new function hclge_get_speed_bit()
Currently, function hclge_check_port_speed() uses a switch/case statement
to get the speed bit according to the speed. To reuse this part of the code
and improve code readability and maintainability, add a new function
hclge_get_speed_bit() to get the speed bit from the speed-to-speed-bit
mapping defined in the speed_bit_map array.
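
A sketch of the table-lookup approach described (the macro names for the
speed and speed-bit values are assumptions for illustration):

struct hclge_speed_bit_map {
        u32 speed;
        u32 speed_bit;
};

static const struct hclge_speed_bit_map speed_bit_map[] = {
        {HCLGE_MAC_SPEED_10M, HCLGE_SUPPORT_10M_BIT},
        {HCLGE_MAC_SPEED_100M, HCLGE_SUPPORT_100M_BIT},
        {HCLGE_MAC_SPEED_1G, HCLGE_SUPPORT_1G_BIT},
        /* ... one entry per supported speed ... */
};

static int hclge_get_speed_bit(u32 speed, u32 *speed_bit)
{
        u32 i;

        for (i = 0; i < ARRAY_SIZE(speed_bit_map); i++) {
                if (speed_bit_map[i].speed == speed) {
                        *speed_bit = speed_bit_map[i].speed_bit;
                        return 0;
                }
        }

        return -EINVAL;
}

hclge_check_port_speed() can then call hclge_get_speed_bit() and test the
returned bit against the supported-speeds mask.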

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-28 11:20:05 +01:00
Guangbin Huang 81414ba713 net: hns3: refactor function hclgevf_parse_capability()
The function hclgevf_parse_capability() will gain more if statements in the
future. To improve code readability, maintainability and simplicity,
refactor this function to use an array that maps IMP capability bits
to driver capability bits.
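
A hedged sketch of the bit-mapping idea (structure and names are
illustrative, not the driver's actual definitions): one table entry per
capability, mapping the bit reported by the IMP firmware to the driver's
capability bit, replaces the chain of if statements.

struct hclgevf_caps_bit_map {
        u16 imp_bit;    /* capability bit reported by the IMP firmware */
        u16 driver_bit; /* corresponding driver (hnae3) capability bit */
};

static const struct hclgevf_caps_bit_map hclgevf_caps_map[] = {
        {HCLGEVF_CAP_UDP_GSO_B, HNAE3_DEV_SUPPORT_UDP_GSO_B},
        {HCLGEVF_CAP_INT_QL_B, HNAE3_DEV_SUPPORT_INT_QL_B},
        /* ... one entry per capability ... */
};

static void hclgevf_parse_capability_sketch(struct hnae3_ae_dev *ae_dev,
                                            u32 caps)
{
        u32 i;

        for (i = 0; i < ARRAY_SIZE(hclgevf_caps_map); i++)
                if (caps & BIT(hclgevf_caps_map[i].imp_bit))
                        set_bit(hclgevf_caps_map[i].driver_bit, ae_dev->caps);
}

The PF-side hclge_parse_capability() described in the next entry lends
itself to the same table-driven pattern.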

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-28 11:20:05 +01:00
Guangbin Huang e1d93bc6ef net: hns3: refactor function hclge_parse_capability()
The function hclge_parse_capability() uses too many if statements, and
more may be added in the future. To improve code readability,
maintainability and simplicity, refactor this function to use an array
that maps IMP capability bits to driver capability bits.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-28 11:20:05 +01:00
Yufeng Mo 0fc36e37d5 net: hns3: add trace event in hclge_gen_resp_to_vf()
Add a trace event to capture the info of the PF responding to a VF's
mailbox message.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-28 11:20:05 +01:00
Jakub Kicinski 907fd4a294 bnxt: count discards due to memory allocation errors
Count packets dropped due to buffer or skb allocation errors.
Report as part of rx_dropped.

v2: drop the ethtool -S entry [Vladimir]
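
Conceptually the change amounts to a per-ring software counter bumped on
allocation failure and folded into rx_dropped; the sketch below uses
illustrative structure and field names, not bnxt's actual ones.

struct bnxt_example_sw_stats {
        u64 rx_oom_discards;    /* buffer/skb allocation failures */
};

/* Rx path: a replacement buffer or skb could not be allocated, so the
 * packet is dropped and the drop is accounted.
 */
static void bnxt_example_count_oom_discard(struct bnxt_example_sw_stats *sw)
{
        sw->rx_oom_discards++;
}

/* .ndo_get_stats64: report the accumulated drops as part of rx_dropped */
static void bnxt_example_get_stats64(const struct bnxt_example_sw_stats *sw,
                                     struct rtnl_link_stats64 *stats)
{
        stats->rx_dropped += sw->rx_oom_discards;
}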

Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-27 17:14:32 -07:00
Jakub Kicinski 40bedf7cb2 bnxt: count packets discarded because of netpoll
bnxt may discard packets if Rx completions are consumed
in an attempt to let netpoll make progress. It should be
extremely rare in practice but nonetheless such events
should be counted.

Since completion ring memory is allocated dynamically use
a similar scheme to what is done for HW stats to save them.

Report the stats in rx_dropped and per-netdev ethtool
counter. Chances that users care which ring dropped are
very low.

v3: only save the stat to rx_dropped on reset,
rx_total_netpoll_discards will now only show drops since
last reset, similar to other "total_discard" counters.

Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-27 17:14:22 -07:00
Brett Creeley b357d9717b ice: Only lock to update netdev dev_addr
commit 3ba7f53f8b ("ice: don't remove netdev->dev_addr from uc sync
list") introduced calls to netif_addr_lock_bh() and
netif_addr_unlock_bh() in the driver's ndo_set_mac() callback. This is
fine since the driver is updating the netdev's dev_addr, but since this
is a spinlock, the driver cannot sleep while the lock is held.
Unfortunately, the functions to add/delete MAC filters depend on a mutex.
This was causing a trace with the lock-debugging kernel config options
enabled when changing the MAC address via iproute.

[  203.273059] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:281
[  203.273065] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6698, name: ip
[  203.273068] Preemption disabled at:
[  203.273068] [<ffffffffc04aaeab>] ice_set_mac_address+0x8b/0x1c0 [ice]
[  203.273097] CPU: 31 PID: 6698 Comm: ip Tainted: G S      W I       5.14.0-rc4 #2
[  203.273100] Hardware name: Intel Corporation S2600WFT/S2600WFT, BIOS SE5C620.86B.02.01.0010.010620200716 01/06/2020
[  203.273102] Call Trace:
[  203.273107]  dump_stack_lvl+0x33/0x42
[  203.273113]  ? ice_set_mac_address+0x8b/0x1c0 [ice]
[  203.273124]  ___might_sleep.cold.150+0xda/0xea
[  203.273131]  mutex_lock+0x1c/0x40
[  203.273136]  ice_remove_mac+0xe3/0x180 [ice]
[  203.273155]  ? ice_fltr_add_mac_list+0x20/0x20 [ice]
[  203.273175]  ice_fltr_prepare_mac+0x43/0xa0 [ice]
[  203.273194]  ice_set_mac_address+0xab/0x1c0 [ice]
[  203.273206]  dev_set_mac_address+0xb8/0x120
[  203.273210]  dev_set_mac_address_user+0x2c/0x50
[  203.273212]  do_setlink+0x1dd/0x10e0
[  203.273217]  ? __nla_validate_parse+0x12d/0x1a0
[  203.273221]  __rtnl_newlink+0x530/0x910
[  203.273224]  ? __kmalloc_node_track_caller+0x17f/0x380
[  203.273230]  ? preempt_count_add+0x68/0xa0
[  203.273236]  ? _raw_spin_lock_irqsave+0x1f/0x30
[  203.273241]  ? kmem_cache_alloc_trace+0x4d/0x440
[  203.273244]  rtnl_newlink+0x43/0x60
[  203.273245]  rtnetlink_rcv_msg+0x13a/0x380
[  203.273248]  ? rtnl_calcit.isra.40+0x130/0x130
[  203.273250]  netlink_rcv_skb+0x4e/0x100
[  203.273256]  netlink_unicast+0x1a2/0x280
[  203.273258]  netlink_sendmsg+0x242/0x490
[  203.273260]  sock_sendmsg+0x58/0x60
[  203.273263]  ____sys_sendmsg+0x1ef/0x260
[  203.273265]  ? copy_msghdr_from_user+0x5c/0x90
[  203.273268]  ? ____sys_recvmsg+0xe6/0x170
[  203.273270]  ___sys_sendmsg+0x7c/0xc0
[  203.273272]  ? copy_msghdr_from_user+0x5c/0x90
[  203.273274]  ? ___sys_recvmsg+0x89/0xc0
[  203.273276]  ? __netlink_sendskb+0x50/0x50
[  203.273278]  ? mod_objcg_state+0xee/0x310
[  203.273282]  ? __dentry_kill+0x114/0x170
[  203.273286]  ? get_max_files+0x10/0x10
[  203.273288]  __sys_sendmsg+0x57/0xa0
[  203.273290]  do_syscall_64+0x37/0x80
[  203.273295]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  203.273296] RIP: 0033:0x7f8edf96e278
[  203.273298] Code: 89 02 48 c7 c0 ff ff ff ff eb b5 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 8d 05 25 63 2c 00 8b 00 85 c0 75 17 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 41 89 d4 55
[  203.273300] RSP: 002b:00007ffcb8bdac08 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
[  203.273303] RAX: ffffffffffffffda RBX: 000000006115e0ae RCX: 00007f8edf96e278
[  203.273304] RDX: 0000000000000000 RSI: 00007ffcb8bdac70 RDI: 0000000000000003
[  203.273305] RBP: 0000000000000000 R08: 0000000000000001 R09: 00007ffcb8bda5b0
[  203.273306] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
[  203.273306] R13: 0000555e10092020 R14: 0000000000000000 R15: 0000000000000005

Fix this by only locking when changing the netdev->dev_addr. Also, make
sure to restore the old netdev->dev_addr on any failures.
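
The locking pattern described, as a hedged sketch (the filter-update helper
is a placeholder; error handling simplified): hold the addr spinlock only
around the dev_addr update itself, and keep the mutex-taking filter
operations outside of it.

/* placeholder for the driver's mutex-protected filter remove/add helpers */
int ice_update_mac_filters_sketch(struct net_device *netdev,
                                  const u8 *old_mac, const u8 *new_mac);

static int ice_set_mac_sketch(struct net_device *netdev, const u8 *mac)
{
        u8 old_mac[ETH_ALEN];
        int err;

        /* spinlock held only while touching netdev->dev_addr */
        netif_addr_lock_bh(netdev);
        ether_addr_copy(old_mac, netdev->dev_addr);
        ether_addr_copy(netdev->dev_addr, mac);
        netif_addr_unlock_bh(netdev);

        /* filter updates take a mutex and may sleep: must run unlocked */
        err = ice_update_mac_filters_sketch(netdev, old_mac, mac);
        if (err) {
                /* restore the old address on any failure */
                netif_addr_lock_bh(netdev);
                ether_addr_copy(netdev->dev_addr, old_mac);
                netif_addr_unlock_bh(netdev);
        }

        return err;
}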

Fixes: 3ba7f53f8b ("ice: don't remove netdev->dev_addr from uc sync list")
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 13:15:55 -07:00
Jacob Keller 9ee313433c ice: restart periodic outputs around time changes
When we enabled auxiliary input/output support for the E810 device, we
forgot to add logic to restart the output when we change time. This is
important as the periodic output will be incorrect after a time change
otherwise.

This unfortunately includes the adjust time function, even though it
uses an atomic hardware interface. The atomic adjustment can still cause
the pin output to stall permanently, so we need to stop and restart it.

Introduce wrapper functions to temporarily disable and then re-enable
the clock outputs.
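
A hedged sketch of the wrapper pattern (helper names are assumed for
illustration): stop every enabled clock output, apply the time change, then
restart the outputs against the new clock time.

/* assumed existing helpers that (de)configure the periodic output pins */
void ice_ptp_disable_all_clkout(struct ice_pf *pf);
void ice_ptp_enable_all_clkout(struct ice_pf *pf);
int ice_ptp_write_init(struct ice_pf *pf, const struct timespec64 *ts);

static int ice_ptp_settime_sketch(struct ice_pf *pf,
                                  const struct timespec64 *ts)
{
        int err;

        /* outputs would stall or misfire across the jump, so stop them */
        ice_ptp_disable_all_clkout(pf);

        err = ice_ptp_write_init(pf, ts);

        /* restart the periodic outputs relative to the new time */
        ice_ptp_enable_all_clkout(pf);

        return err;
}

The same wrapping applies around the (atomic) adjtime path, since the
commit notes the pin output can stall there as well.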

Fixes: 172db5f91d ("ice: add support for auxiliary input/output pins")
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Sunitha D Mekala <sunithax.d.mekala@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 13:14:41 -07:00
Aravindhan Gunasekaran 1ab011b0bf igc: Add support for CBS offloading
Implement support for Credit-Based Shaper (CBS) Qdisc hardware
offload mode in the driver. There are two sets of IEEE 802.1Qav
(CBS) HW logic in the i225 controller, and this patch supports
enabling them on the top two priority TX queues.

The driver implements this as recommended by the Foxville External
Architecture Specification v0.993. Idleslope and Hi-credit are
the CBS tunable parameters for the i225 NIC, programmed in the TQAVCC
and TQAVHC registers respectively.

In order for the IEEE 802.1Qav (CBS) algorithm to work as intended
and provide BW reservation, CBS should be enabled on the highest
priority queue first. If we enable CBS on any of the low priority
queues, the traffic in the high priority queue does not allow the low
priority queue to be selected for transmission, and bandwidth
reservation is not guaranteed.

Signed-off-by: Aravindhan Gunasekaran <aravindhan.gunasekaran@intel.com>
Signed-off-by: Mallikarjuna Chilakala <mallikarjuna.chilakala@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 09:31:09 -07:00
Vinicius Costa Gomes 61572d5f8f igc: Simplify TSN flags handling
Separate the procedure done during reset from applying a
configuration; knowing in which context the code is executing allows us
to better separate what changes the hardware state from what
changes only the driver state.

Introduce a flag for bookkeeping the driver state of TSN
features. When Qav and frame preemption are also implemented,
this flag makes it easier to keep track of whether a TSN feature's
driver state is enabled or not through controller state changes,
say, during a reset.

Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Aravindhan Gunasekaran <aravindhan.gunasekaran@intel.com>
Signed-off-by: Mallikarjuna Chilakala <mallikarjuna.chilakala@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 09:31:08 -07:00
Vinicius Costa Gomes c814a2d2d4 igc: Use default cycle 'start' and 'end' values for queues
Sets default values for each queue's cycle start and cycle end.
This allows some simplification in the handling of these
configurations, as most TSN features in i225 require a cycle
to be configured.

In i225, the cycle start and end times must be programmed
for CBS to work properly.

Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Aravindhan Gunasekaran <aravindhan.gunasekaran@intel.com>
Signed-off-by: Mallikarjuna Chilakala <mallikarjuna.chilakala@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 09:31:08 -07:00
Jacob Keller 4dd0d5c33c ice: add lock around Tx timestamp tracker flush
The driver didn't take the lock while flushing the Tx tracker, which
could cause a race where one thread is trying to read timestamps out
while another thread is trying to read the tracker to check the
timestamps.

Avoid this by ensuring that flushing is locked against read accesses.
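
A hedged sketch of the locked flush (tracker layout simplified): take the
same lock that the timestamp readers take before walking and clearing the
tracker.

struct ice_ptp_tx_sketch {
        spinlock_t lock;        /* protects the in_use bitmap */
        unsigned long *in_use;  /* outstanding timestamp slots */
        u8 len;                 /* number of tracker slots */
};

static void ice_ptp_flush_tx_tracker_sketch(struct ice_ptp_tx_sketch *tx)
{
        unsigned int idx;

        spin_lock(&tx->lock);   /* serialize against timestamp readers */

        for_each_set_bit(idx, tx->in_use, tx->len)
                clear_bit(idx, tx->in_use);

        spin_unlock(&tx->lock);
}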

Fixes: ea9b847cda ("ice: enable transmit timestamps for E810 devices")
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 09:14:49 -07:00
Jacob Keller 1f0cbb3e89 ice: remove dead code for allocating pin_config
We have code in the ice driver which allocates the pin_config structure
if n_pins is > 0, but we never set n_pins to be greater than zero.
There's no reason to keep this code until we actually have pin_config
support. Remove this. We can re-add it properly when we implement
support for pin_config for E810-T devices.

Fixes: 172db5f91d ("ice: add support for auxiliary input/output pins")
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 09:11:39 -07:00
Jacob Keller 84c5fb8c42 ice: fix Tx queue iteration for Tx timestamp enablement
The driver accidentally copied the ice_for_each_rxq iterator when
implementing enablement of the ptp_tx bit for the Tx rings. We still
load the Tx rings and set the ptp_tx field, but we iterate over the
num_rxq count.

If the number of Tx and Rx queues differ, this could either cause
a buffer overrun when accessing the tx_rings list if num_txq is greater
than num_rxq, or it could cause us to fail to enable Tx timestamps for
some rings.

This was not noticed originally as we generally have the same number of
Tx and Rx queues.
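
The gist of the fix, sketched (close in spirit to the driver's helper, but
simplified): iterate with the Tx queue count when touching tx_rings[].

static void ice_set_tx_tstamp_sketch(struct ice_vsi *vsi, bool on)
{
        u16 i;

        /* the bug: ice_for_each_rxq(vsi, i) was used here, so tx_rings[]
         * was indexed using the Rx queue count
         */
        ice_for_each_txq(vsi, i)
                if (vsi->tx_rings[i])
                        vsi->tx_rings[i]->ptp_tx = on;
}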

Fixes: ea9b847cda ("ice: enable transmit timestamps for E810 devices")
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-27 08:38:55 -07:00
Harman Kalra 49d6baea79 octeontx2-af: cn10K: support for sched lmtst and other features
Enhance the mailbox scope to support important configurations
like enabling scheduled LMTST, disabling LMTLINE prefetch, and disabling
early completion for ordered LMTST, as requested by the
application. On FLR these configurations will be reset to their defaults.
This patch also adds the 95XXO silicon version to the octeontx2 silicon
list.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 12:41:13 +01:00
Hao Chen 0c5c135cdb net: hns3: uniform type of function parameter cmd
The parameter cmd in the function definitions of hns3_dbg_bd_file_init and
hns3_dbg_common_file_init uses type u32; this patch makes the function
declarations use type u32 as well.

Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
Peng Li 5a24b1fd30 net: hns3: merge some repetitive macros
There are some repetitive macros with the same meaning and value; this
patch merges them to make the code cleaner.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
Peng Li d7517f8f6b net: hns3: package new functions to simplify hclgevf_mbx_handler code
This patch packages two new functions to simplify the function
hclgevf_mbx_handler, which reduces the cyclomatic complexity
and makes the code more concise.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
Peng Li 5f22a80f32 net: hns3: remove redundant param to simplify code
The param msg_q is redundant; copying &req->msg to
hdev->arq.msg_q[hdev->arq.tail] directly makes the code cleaner.
So remove the redundant param msg_q.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
Peng Li 304cd8e776 net: hns3: use memcpy to simplify code
Use memcpy to copy req->msg.resp_data to resp->additional_info,
to simplify the code and improve efficiency a little.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
Peng Li 67821a0cf5 net: hns3: remove redundant param mbx_event_pending
This patch removes the redundant param mbx_event_pending.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
Huazhong Tan c511dfff4b net: hns3: add hns3_state_init() to do state initialization
To improve the readability and maintainability, add hns3_state_init() to
initialize the state, and this new function will be used to add more state
initialization in the future.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
Guangbin Huang 4c116f85ec net: hns3: add macros for mac speeds of firmware command
To improve code readability, replace the numeric values of MAC speeds
defined by the firmware command with macros.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 11:20:21 +01:00
David S. Miller a550409378 mlx5-updates-2021-08-26
Merge tag 'mlx5-updates-2021-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
This patch series contains various fixes, additions and improvements to
mlx5 software steering.

Patch 1:
  adds support for REMOVE_HEADER packet reformat - a new reformat type
  that is supported starting with ConnectX-6 DX, and allows removing an
  arbitrary size packet segment at a selected position.

Patches 2 and 3:
  add support for VLAN pop on TX and VLAN push on RX flows.

Patch 4:
  enables retransmission mechanism for the SW Steering RC QP.

Patch 5:
  does some improvements to error flow in building STE array and adds
  a more informative printout of an invalid actions sequence.

Patch 6:
  improves error flow on SW Steering QP error.

Patch 7:
  reduces the log level of a message that is printed when a table is
  connected to a lower/same level destination table, as this case proves to
  be not as rare as it was in the past.

Patch 8:
  adds missing support for matching on IPv6 flow label for devices
  older than ConnectX-6 DX.

Patch 9:
  replaces uintN_t types with kernel-style types.

Patch 10:
  allows for using the right API for updating flow tables - if it is
  a FW-owned table, then FW API will be used.

Patch 11:
  adds support for 'ignore_flow_level' on multi-destination flow
  tables that are created by SW Steering.

Patch 12:
   optimizes FDB RX steering rule by skipping matching on source port,
   as the source port of all incoming packets equals the wire port.

Patch 13:
   is a small code refactoring - it merges several DR_STE_SIZE enums
   into a single enum.

Patch 14:
   does some additional refactoring and removes HW-specific STE type
   from NIC domain.

Patch 15:
   removes rehash ctrl struct from dr_htbl struct and saves some memory.

Patch 16:
   does a more significant improvement in terms of memory consumption
   and was able to save about 1.6 Gb for 8M rules.

Patch 17:
   adds support for update FTE, which is needed for cases where there
   are multiple rules with the same match.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-27 09:53:31 +01:00
Jakub Kicinski 97c78d0af5 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
drivers/net/wwan/mhi_wwan_mbim.c - drop the extra arg.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 17:57:57 -07:00
Yevgeny Kliteynik a2ebfbb7b1 net/mlx5: DR, Add support for update FTE
Add the support for update FTE, which is needed for cases where there are
multiple rules with the same match. In such a case, fs_core will merge the
actions and call update FTE to update the current FTE. Since we don't want to
disrupt the traffic, we will add the new duplicate rule, and only then
remove the old duplicate rule.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:04 -07:00
Yevgeny Kliteynik 8a015baef5 net/mlx5: DR, Improve rule tracking memory consumption
To track each STE of a rule, a rule member was allocated; each
member would point to one STE. This means that we would allocate
40B (rule member) * number of STEs per rule.

To reduce this per-rule allocation, we use the STE tree pointers
(next_htbl and the pointing STE) to navigate the tree. This allows
us to keep only the pointer to the last STE of the rule (always unique).
From the last STE of the rule we are able to traverse and rebuild all of
the STEs that construct the rule.

In our testing with 8M rules, each consisting of 7 STEs, we were able
to reduce memory consumption by 1.6GB.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:04 -07:00
Yevgeny Kliteynik 32c8e3b230 net/mlx5: DR, Remove rehash ctrl struct from dr_htbl
The calculations that decide the maximum allowed collision threshold
are simple, and there is no reason to save them in the htbl struct.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:04 -07:00
Yevgeny Kliteynik 46f2a8ae8a net/mlx5: DR, Remove HW specific STE type from nic domain
Instead of using the HW specific STEv0 type, it is better to use
an enum to indicate if this is an RX or TX nic domain.
This means that now we will need to convert the nic domain type
to the corresponding STE type.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:03 -07:00
Yevgeny Kliteynik ab9d1f9612 net/mlx5: DR, Merge DR_STE_SIZE enums
Merge DR_STE_SIZE enums - no need for a separate enum for reduced STE size.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:03 -07:00
Yevgeny Kliteynik 990467f8af net/mlx5: DR, Skip source port matching on FDB RX domain
The FDB RX pipe is connected to the wire, and the source port of all
incoming packets equals the wire port (there is a single uplink port per
PF), so there is no point in matching on the source port in such a case.
Once we recognize such a case, we optimize the RX steering rule.
Note that in such a case we clear both source_eswitch_owner_vhca_id and
source_port.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:03 -07:00
Yevgeny Kliteynik 63b85f49c0 net/mlx5: DR, Add ignore_flow_level support for multi-dest flow tables
When creating an FTE, we might need to create multi-destination flow table,
which is eventually created by FW. In such case, this FW table should
include all the FTE properties as requested by the upper layer, including
the ability to point to another flow table with level lower or equal to
the current table - indicated by the "ignore_flow_level" property.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:02 -07:00
Yevgeny Kliteynik a01a43fa16 net/mlx5: DR, Use FW API when updating FW-owned flow table
The DR API needs to be called only when it is a DR table.
To update a FW-owned table, the driver should call the FW API.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:02 -07:00
Yevgeny Kliteynik ae3eddcff7 net/mlx5: DR, replace uintN_t with kernel-style types
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:02 -07:00
Yevgeny Kliteynik 0733535d59 net/mlx5: DR, Support IPv6 matching on flow label for STEv0
Add missing support for matching on IPv6 flow label for STEv0.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:01 -07:00
Bodong Wang d7d0b2450e net/mlx5: DR, Reduce print level for FT chaining level check
There are use cases with Connection Tracking that have such a connection
by default; printing this warning in dmesg confuses the user.

Signed-off-by: Bodong Wang <bodong@mellanox.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:01 -07:00
Yevgeny Kliteynik d5a84e968f net/mlx5: DR, Warn and ignore SW steering rule insertion on QP err
In the event of SW steering QP entering error state, SW steering
cannot insert more rules, and will silently ignore the insertion
after issuing a warning.

Signed-off-by: Yuval Avnery <yuvalav@mellanox.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:01 -07:00
Yevgeny Kliteynik f35715a657 net/mlx5: DR, Improve error flow in actions_build_ste_arr
Improve the error flow and print the actions sequence when an
invalid/unsupported sequence is provided.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:01 -07:00
Yevgeny Kliteynik ec449ed823 net/mlx5: DR, Enable QP retransmission
Under high stress, SW steering might get stuck on polling for completion
that never comes.
For such cases QP needs to have protocol retransmission mechanism enabled.
Currently the retransmission timeout is defined as 0 (unlimited). Fix this
by defining a real timeout.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:00 -07:00
Yevgeny Kliteynik 2de40f68cf net/mlx5: DR, Enable VLAN pop on TX and VLAN push on RX
Enable pop VLAN action in TX and push VLAN in RX.
These actions are supported only on STEv1.

On TX: when a host sends a packet, VLAN is popped at the beginning.
On RX: just before passing the packet to the host the VLAN is pushed.

Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:00 -07:00
Yevgeny Kliteynik f5e22be534 net/mlx5: DR, Split modify VLAN state to separate pop/push states
Split modify vlan state in the actions state machine to pop vlan
and push vlan states. This enables using of pop/push vlan without
restrictions (e.g. pop vlan on TX in STEv1).

Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:38:00 -07:00
Yevgeny Kliteynik 0139145fb8 net/mlx5: DR, Added support for REMOVE_HEADER packet reformat
ConnectX supports offloading of various encapsulations and decapsulations
(e.g. VXLAN), which are performed by 'Packet Reformat' action. Starting
with ConnectX-6 DX, a new reformat type is supported - REMOVE_HEADER, which
allows deleting an arbitrary size chunk at the selected position in the packet.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:37:59 -07:00
Wentao_Liang 6cc64770fb net/mlx5: DR, fix a potential use-after-free bug
In line 849 (#1), "mlx5dr_htbl_put(cur_htbl);" drops the reference to
cur_htbl and may cause cur_htbl to be freed.

However, cur_htbl is subsequently used in the next line, which may result
in a use-after-free bug.

Fix this by calling mlx5dr_err() before the cur_htbl is put.

Signed-off-by: Wentao_Liang <Wentao_Liang_g@163.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:15:42 -07:00
Dmytro Linkin f9d196bd63 net/mlx5e: Use correct eswitch for stack devices with lag
If link aggregation is used within stack devices, the driver rejects encap
rules if the PF of the VF tunnel device is down. This happens because the
route is resolved for the other PF and its eswitch instance is used to
determine the correct vport.
To fix that, use the devcom feature to retrieve the other eswitch instance
if finding a vport via the 1st eswitch failed and LAG is active.

Fixes: 10742efc20 ("net/mlx5e: VF tunnel TX traffic offloading")
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:15:42 -07:00
Maor Dickman ca6891f9b2 net/mlx5: E-Switch, Set vhca id valid flag when creating indir fwd group
When the indirect forward group is created, the flow is added with a vhca
id but without setting the vhca id valid flag, which violates the PRM.

Fix by setting the missing flag, vhca id valid.

Fixes: 34ca65352d ("net/mlx5: E-Switch, Indirect table infrastructure")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:15:42 -07:00
Roi Dayan 9a5f9cc794 net/mlx5e: Fix possible use-after-free deleting fdb rule
After a neigh-update-add failure we are still left with a slow path rule,
but the driver always assumes the rule is an fdb rule.
Fix neigh-update-del by checking the slow path tc flag on the flow.
Also fix neigh-update-add the same way for when neigh-update-del fails.

Fixes: 5dbe906ff1 ("net/mlx5e: Use a slow path rule instead if vxlan neighbour isn't available")
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:15:42 -07:00
Leon Romanovsky 8e7e2e8ed0 net/mlx5: Remove all auxiliary devices at the unregister event
The call to mlx5_unregister_device() means that the mlx5_core driver is
being removed. In such a scenario, we need to disregard all other flags like
attach/detach and forcibly remove all auxiliary devices.

Fixes: a5ae8fc905 ("net/mlx5e: Don't create devices during unload flow")
Tested-and-Reported-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:15:41 -07:00
Dima Chumak 2f8b6161cc net/mlx5: Lag, fix multipath lag activation
When handling FIB_EVENT_ENTRY_REPLACE event for a new multipath route,
lag activation can be missed if a stale (struct lag_mp)->mfi pointer
exists, which was associated with an older multipath route that had been
removed.

Normally, when a route is removed, it triggers mlx5_lag_fib_event(),
which handles FIB_EVENT_ENTRY_DEL and clears mfi pointer. But, if
mlx5_lag_check_prereq() condition isn't met, for example when eswitch is
in legacy mode, the fib event is skipped and mfi pointer becomes stale.

Fix by resetting mfi pointer to NULL in mlx5_deactivate_lag().

Fixes: 8a66e45859 ("net/mlx5: Change ownership model for lag")
Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-26 15:15:41 -07:00
Guangbin Huang 8c1671e0d1 net: hns3: fix get wrong pfc_en when query PFC configuration
Currently, when the PFC configuration is queried by dcbtool, the driver
returns the PFC enable status based on TC. As all priorities are mapped to
TC0 by default, if TC0 is enabled, then all priorities mapped to TC0 are
shown as enabled when querying the PFC setting, even though some
priorities have never been set.

for example:
$ dcb pfc show dev eth0
pfc-cap 4 macsec-bypass off delay 0
prio-pfc 0:off 1:off 2:off 3:off 4:off 5:off 6:off 7:off
$ dcb pfc set dev eth0 prio-pfc 0:on 1:on 2:on 3:on
$ dcb pfc show dev eth0
pfc-cap 4 macsec-bypass off delay 0
prio-pfc 0:on 1:on 2:on 3:on 4:on 5:on 6:on 7:on

To fix this problem, just return the user's PFC config parameter saved in
the driver.
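
Conceptually, the fix makes the .ieee_getpfc callback report the saved user
configuration instead of deriving it from the TC map; a minimal sketch with
the driver's field names assumed:

static int hclge_ieee_getpfc_sketch(struct hclge_dev *hdev,
                                    struct ieee_pfc *pfc)
{
        pfc->pfc_cap = hdev->pfc_max;
        /* report what the user configured, not what the TC map implies */
        pfc->pfc_en = hdev->tm_info.pfc_en;

        return 0;
}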

Fixes: cacde272dd ("net: hns3: Add hclge_dcb module for the support of DCB feature")
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 07:24:17 -07:00
Yufeng Mo 3462207d2d net: hns3: fix GRO configuration error after reset
GRO is enabled by default after a reset. This is incorrect; the
configuration should be restored to the user-configured value,
so this restoration is added during reset initialization.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 07:24:17 -07:00
Yufeng Mo 55649d5654 net: hns3: change the method of getting cmd index in debugfs
Currently, the cmd index is obtained in debugfs by comparing file names.
However, this method may cause errors when processing more complex file
names. So change this method: save cmd in the private data and compare
it when getting the cmd index in debugfs.

Fixes: 5e69ea7ee2 ("net: hns3: refactor the debugfs process")
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 07:24:17 -07:00
Guojia Liao 94391fae82 net: hns3: fix duplicate node in VLAN list
Duplicate VLAN nodes should not be added to the VLAN list; otherwise they
would cause an "add failed" error when restoring VLANs from the list. So
this patch adds a VLAN ID check before adding a node to the VLAN list.

Fixes: c6075b1934 ("net: hns3: Record VF vlan tables")
Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 07:24:17 -07:00
Yonglong Liu b15c072a9f net: hns3: fix speed unknown issue in bond 4
In bond mode 4, when the link goes down and up repeatedly, the bond may get
an unknown speed, and then this port cannot work.

The driver calls netif_carrier_on() before updating the link state. When the
bond receives the carrier-on event, it queries the speed of the port; if the
query happens before the link state is updated, it gets an unknown
speed. So netif_carrier_on() needs to be called after updating the link state.

Fixes: 46a3df9f97 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support")
Fixes: e2cb1dec97 ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support")
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 07:24:16 -07:00
Yufeng Mo a96d9330b0 net: hns3: add waiting time before cmdq memory is released
After the cmdq registers are cleared, the firmware may take time to
clear out possible leftover commands in the cmdq. The driver must release
the cmdq memory only after the firmware has completed processing of these
leftover commands.

Fixes: 232d0d55fc ("net: hns3: uninitialize command queue while unloading PF driver")
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 07:24:16 -07:00
Yufeng Mo 1a6d281946 net: hns3: clear hardware resource when loading driver
If a PF is bonded to a virtual machine and the virtual machine exits
unexpectedly, some hardware resources cannot be cleared. In this case,
loading the driver may cause exceptions. Therefore, the hardware resources
need to be cleared when the driver is loaded.

Fixes: 46a3df9f97 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support")
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-26 07:24:16 -07:00
Joel Stanley ee7da21ac4 net: Add driver for LiteX's LiteETH network interface
LiteX is a soft system-on-chip that targets FPGAs. LiteETH is a basic
network device that is commonly used in LiteX designs.

The driver was first written in 2017 and has been maintained by the
LiteX community in various trees. Thank you to all who have contributed.

Co-developed-by: Gabriel Somlo <gsomlo@gmail.com>
Co-developed-by: David Shah <dave@ds0.me>
Co-developed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Tested-by: Gabriel Somlo <gsomlo@gmail.com>
Reviewed-by: Gabriel Somlo <gsomlo@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 12:13:52 +01:00
Heiner Kallweit 4b33433ee7 r8169: add rtl_enable_exit_l1
This adds a function for what has been magic register writes so far.
It's based on recent changes to vendor drivers r8101, r8168, r8125,
and deals with events that trigger an early ASPM L1 exit.
Description of the bits has been kindly provided by Realtek.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 12:05:43 +01:00
Yang Yingliang 5e8243e66b octeontx2-pf: cn10k: Fix error return code in otx2_set_flowkey_cfg()
If otx2_mbox_get_rsp() fails, otx2_set_flowkey_cfg() needs to return an
error code.

Fixes: e793836545 ("octeontx2-pf: Fix algorithm index in MCAM rules with RSS action")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 10:44:50 +01:00
Rahul Lakkireddy 43fed4d48d cxgb4: dont touch blocked freelist bitmap after free
When adapter init fails, the blocked freelist bitmap is already freed
up and should not be touched. So, move the bitmap zeroing closer to
where it was successfully allocated. Also handle adapter init failure
unwind path immediately and avoid setting up RDMA memory windows.

Fixes: 5b377d114f ("cxgb4: Add debugfs facility to inject FL starvation")
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 10:23:24 +01:00
Shannon Nelson a0c007b3f6 ionic: handle mac filter overflow
Make sure we go into PROMISC mode when we have too many
filters by specifically counting the filters that successfully
get saved to the firmware.

The device advertises max_ucast_filters and max_mcast_filters,
but really only has max_ucast_filters slots available for
uc and mc filters combined.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 09:41:50 +01:00
Shannon Nelson 8b41517313 ionic: refactor ionic_lif_addr to remove a layer
The filter counting in ionic_lif_addr() really isn't useful,
and potentially misleading, especially when we're checking in
ionic_lif_rx_mode() to see if we need to go into PROMISC mode.
We can safely refactor this and remove a calling layer.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 09:41:50 +01:00
Shannon Nelson 969f843946 ionic: sync the filters in the work task
In order to separate the atomic needs of __dev_uc_sync()
and __dev_mc_sync() from the safe rx_mode handling, we need
to have the ndo handler manipulate the driver's filter list,
and later have the driver sync the filters to the firmware,
outside of the atomic context.

Here we put __dev_mc_sync() and __dev_uc_sync() back into the
ndo callback to give them their netif_addr_lock context and
have them update the driver's filter list, flagging changes
that should be made to the device filter list.  Later, in the
rx_mode handler, we read those hints and sync up the device's
list as needed.

It is possible for multiple add/delete requests to come from
the stack before the rx_mode task processes the list, but the
handling of the sync status flag should keep everything sorted
correctly.  For example, if a delete of an existing filter is
followed by another add before the rx_mode task is run, as can
happen when going in and out of a bond, the add will cancel
the delete and no actual changes will be sent to the device.

We also add a check in the watchdog to see if there are any
stray unsync'd filters, possibly left over from a filter
overflow and waiting to get sync'd after some other filter
gets removed to make room.
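
A conceptual sketch of the split (types and names are illustrative, not
ionic's actual ones): the ndo path only flags changes on the driver's own
filter list while holding the addr lock, and the rx_mode work later pushes
the flagged entries to firmware from a context that may sleep.

enum example_filter_state {
        EXAMPLE_FILTER_SYNCED,          /* matches what the device has */
        EXAMPLE_FILTER_ADD_PENDING,     /* on the list, not yet in firmware */
        EXAMPLE_FILTER_DEL_PENDING,     /* in firmware, queued for removal */
};

struct example_filter {
        struct list_head list;
        u8 addr[ETH_ALEN];
        enum example_filter_state state;
};

/* ndo (atomic) context, via __dev_uc_sync()/__dev_mc_sync() */
static void example_filter_mark_add(struct example_filter *f)
{
        /* an add that follows a pending delete simply cancels the delete,
         * so nothing is sent to the device
         */
        f->state = (f->state == EXAMPLE_FILTER_DEL_PENDING) ?
                   EXAMPLE_FILTER_SYNCED : EXAMPLE_FILTER_ADD_PENDING;
}

/* rx_mode work task: may sleep, talks to firmware */
static void example_filter_sync(struct list_head *filters)
{
        struct example_filter *f;

        list_for_each_entry(f, filters, list) {
                if (f->state == EXAMPLE_FILTER_ADD_PENDING) {
                        /* push the filter to the firmware here */
                        f->state = EXAMPLE_FILTER_SYNCED;
                }
        }
}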

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 09:41:50 +01:00
Shannon Nelson b941ea0571 ionic: flatten calls to set-rx-mode
Since only two functions call through ionic_set_rx_mode(), one
that can sleep and one that can't, we can split the function
and put the bits of code into the callers.  This removes an
unnecessary calling layer.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 09:41:50 +01:00
Shannon Nelson 56c8a53b62 ionic: remove old work task types
With the move of mac filter handling to outside of the
ndo_rx_mode context using the IONIC_DW_TYPE_RX_MODE,
we no longer are using IONIC_DW_TYPE_RX_ADDR_ADD and
IONIC_DW_TYPE_RX_ADDR_DEL and they can be removed.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-26 09:41:50 +01:00
Sunil Goutham 66c312ea1d octeontx2-af: Add mbox to retrieve bandwidth profile free count
Added mbox for PF/VF drivers to retrieve current ingress bandwidth
profile free count. Also added current policer timeunit
configuration info based on which ratelimiting decisions can be
taken by PF/VF drivers.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:02 +01:00
Sunil Goutham 18603683d7 octeontx2-af: Remove channel verification while installing MCAM rules
New use cases are popping up wherein the user wants to install common MCAM
filters for all interfaces. Having channel verification would result in
duplicating such MCAM filters for each ingress interface. Hence
the channel verification is removed.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
Subbaraya Sundeep a8b90c9d26 octeontx2-af: Add PTP device id for CN10K and 95O silcons
CN10K silicon has a different device id for the PTP device,
hence this patch updates the driver with the new id.
Although the ptp driver is a separate driver, the AF driver manages
configuration of the PTP block on behalf of all PFs. To manage ptp, the AF
driver checks in its probe whether:
1. the ptp hardware device is found on the silicon
2. a driver is bound to the ptp device
3. the ptp driver probe is successful

If case 1 or 3 fails, AF proceeds without ptp,
and for case 2 it defers the probe. For case 1, this patch also
refactors the code to check for all of the PTP device ids given in
the ptp device ids table.

Also added PTP device ID for 95O silicon

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
George Cherian 275e5d175d octeontx2-af: Add free rsrc count mbox msg
Upon receiving the MBOX_MSG_FREE_RSRC_CNT, the AF will find out the
current number of free resources and reply it back to the requester. No
guarantee is given on the future state of the free resources yet.
If another requester sends MBOX_MSG_ATTACH_RESOURCES after this call,
the number of available resources might change.

Signed-off-by: George Cherian <george.cherian@marvell.com>
Signed-off-by: Stanislaw Kardach <skardach@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
Radha Mohan Chintakuntla fe1939bb23 octeontx2-af: Add SDP interface support
Added support for packet IO via SDP links, which is used when
Octeon is connected as an end-point. Traffic from host to end-point
and vice versa flows through SDP links. This patch also supports the
dual SDP blocks present in 98xx silicon.

Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
Signed-off-by: Subrahmanyam Nilla <snilla@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
Harman Kalra aefaa8c715 octeontx2-af: nix and lbk in loop mode in 98xx
In 98xx, there are 2 NIX blocks and 4 LBK blocks present. The way
these NIX-LBK should be configured depends on the use case. By
default loopback functionality is supported in AF VF pairs which
are attached to NIX0 and NIX1 LFs alternately to ensure load
balancing. NIX0 transmits a packet to LBK1, which is received
by NIX1, and a packet transmitted by NIX1 is received by NIX0 via
LBK2.

There are some requirements where only one AF VF is used and the respective
NIX is expected to operate in a mode where it can receive its own packets
back. This can be achieved if NIX0 sends packets to LBK0 and not LBK1.
Add a flag in the LF alloc request mailbox which can set up NIX0 to use
LBK0 and NIX1 to use LBK3.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
Subbaraya Sundeep 039190bb35 octeontx2-pf: cleanup transmit link deriving logic
Unlike OcteonTx2, the channel numbers used by CGX/RPM
and LBK on CN10K silicons aren't fixed in HW. They are
SW programmable, hence we cannot derive transmit link
from static channel numbers anymore. Get the same from
admin function via mailbox.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
Jerin Jacob 72e192a163 octeontx2-af: Allow to configure flow tag LSB byte as RSS adder
Before the C0 HW revision, the RSS adder was computed based on the
following static formula.

rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^
flow_tag<23:16> ^ flow_tag<31:24>

The above scheme has the following drawbacks:
1) It is not in line with other standard NIC behavior.
2) There can be an SW use case where SW computes the hash
upfront using the Toeplitz function and predicts the queue selection
to optimize some packet lookup function. The nonstandard
XOR scheme prevents the consumer from predicting the queue selection.

From the C0 HW revision onwards, the HW can configure
rss_adder<7:0> as flow_tag<7:0> to align with standard NICs.

This patch adds an option to select legacy RSS adder mode
vs standard NIC behavior by setting NIX_LF_RSS_TAG_LSB_AS_ADDER flag.

Since this bit field was reserved in old HW revisions,
there is no need for an additional HW version check.
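
For reference, the two adder computations side by side (a plain C sketch of
the formulas above; flow_tag is the 32-bit Toeplitz hash):

/* legacy behaviour, pre-C0 revisions */
static u8 rss_adder_xor(u32 flow_tag)
{
        return (flow_tag & 0xff) ^ ((flow_tag >> 8) & 0xff) ^
               ((flow_tag >> 16) & 0xff) ^ ((flow_tag >> 24) & 0xff);
}

/* standard NIC behaviour, C0 onwards with NIX_LF_RSS_TAG_LSB_AS_ADDER set */
static u8 rss_adder_lsb(u32 flow_tag)
{
        return flow_tag & 0xff;
}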

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
Nithin Dabilpuram d06411632e octeontx2-af: enable tx shaping feature for 96xx C0
Starting from 96xx C0 onwards all silicons support traffic shaping.
This patch enables that feature along with other changes
- When PIR/CIR shaping config is modified, toggle SW_XOFF
  for config to take effect
- Before SMQ flush, clear SW_XOFF at all parent schedulers
- Support to read current transmit scheduler configuration via mbox

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 13:39:01 +01:00
Cai Huoqing fbcf8a3401 net: ethernet: actions: Add helper dependency on COMPILE_TEST
It is helpful for compile testing on other platforms (e.g. x86).

Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 12:06:53 +01:00
Nithin Dabilpuram 1c74b89171 octeontx2-af: Wait for TX link idle for credits change
NIX_AF_TX_LINKX_NORM_CREDIT holds a running counter of the
tx credits available per link. But tx credits should be
configured based on the MTU config, so an MTU change needs a tx
credit count update.

An issue exists whereby, when both PF & VF are enabled and
PF traffic is flowing, if the VF requests an MTU update,
updating the NORM_CREDIT register will lead to corruption
of the credit count and a subsequent deadlock of the tx link, as
the NORM_CREDIT register holds a running count.

This patch provides a workaround by pausing link traffic
using NIX_AF_TL1X_SW_XOFF, waiting for existing packets to
drain and used credits to be returned, before updating the new
credit count.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 12:05:54 +01:00
Nithin Dabilpuram 906999c9b6 octeontx2-af: Change the order of queue work and interrupt disable
Clear and disable the interrupt before queueing the work. Otherwise the
work might complete faster on another core, and the interrupt enable done
as part of the work could happen before the interrupt disable in the
interrupt context. This leads to the interrupt staying permanently
disabled.
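
A sketch of the required ordering (device structure and register offsets
are illustrative): ack and disable the interrupt first, and only then queue
the work that will re-enable it.

#define EXAMPLE_INT_REG         0x000   /* W1C interrupt status */
#define EXAMPLE_INT_ENA_W1C     0x008   /* write 1 to disable */

struct example_af_dev {
        void __iomem *base;
        struct workqueue_struct *wq;
        struct work_struct intr_work;   /* re-enables the interrupt at the end */
};

static irqreturn_t example_af_intr_handler(int irq, void *data)
{
        struct example_af_dev *af = data;

        /* clear and disable before queueing the work; in the opposite
         * order, the work could finish on another core and re-enable the
         * interrupt before this handler disables it, leaving it disabled
         * for good.
         */
        writeq(BIT_ULL(0), af->base + EXAMPLE_INT_REG);
        writeq(BIT_ULL(0), af->base + EXAMPLE_INT_ENA_W1C);

        queue_work(af->wq, &af->intr_work);

        return IRQ_HANDLED;
}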

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 12:05:54 +01:00
Geetha sowjanya ae2c341eb0 octeontx2-af: cn10k: Set cache lines for NPA batch alloc
Set NPA batch allocation engine to process 35 cache lines
per turn on CN10k platform.

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 12:04:34 +01:00
Biju Das 0d13a1a464 ravb: Add reset support
Reset support is present on R-Car. Let's support it, if it is
available.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:17 +01:00
Biju Das 511d74d9d8 ravb: Factorise ravb_emac_init function
The E-MAC IP on the R-Car AVB module has different initialization
parameters for RX frame size, duplex settings, different offset
for transfer speed setting and has magic packet detection support
compared to E-MAC on RZ/G2L Gigabit Ethernet module. Factorise
the ravb_emac_init function to support the later SoC.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:17 +01:00
Biju Das eb4fd12744 ravb: Factorise ravb_dmac_init function
The DMAC IP on the R-Car AVB module has different initialization
parameters for RCR, TGC, TCCR, RIC0, RIC2, and TIC compared to
DMAC IP on the RZ/G2L Gigabit Ethernet module. Factorise the
ravb_dmac_init function to support the later SoC.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:17 +01:00
Biju Das 80f35a0df0 ravb: Factorise ravb_set_features
RZ/G2L supports HW checksum on RX and TX whereas R-Car supports on RX.
Factorise ravb_set_features to support this feature.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:17 +01:00
Biju Das cb21104f2c ravb: Factorise ravb_adjust_link function
R-Car supports 100 and 1000 Mbps transfer speed whereas RZ/G2L
in addition support 10Mbps. Factorise ravb_adjust_link function
in order to support 10Mbps speed.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:17 +01:00
Biju Das d5d95c1136 ravb: Factorise ravb_rx function
R-Car uses an extended descriptor in RX whereas, RZ/G2L uses
normal descriptor in RX. Factorise the ravb_rx function to
support the later SoC.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:17 +01:00
Biju Das 7870a41848 ravb: Factorise ravb_ring_init function
The ravb_ring_init function uses an extended descriptor in RX for
R-Car and normal descriptor for RZ/G2L. Add a helper function
for RX ring buffer allocation to support later SoC.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:17 +01:00
Biju Das 1ae22c19e7 ravb: Factorise ravb_ring_format function
The ravb_ring_format function uses an extended descriptor in RX
for R-Car compared to the normal descriptor for RZ/G2L. Factorise
RX ring buffer buildup to extend the support for later SoC.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:16 +01:00
Biju Das bf46b75784 ravb: Factorise ravb_ring_free function
R-Car uses an extended descriptor in RX, whereas RZ/G2L uses a normal
descriptor. Factorise the ravb_ring_free function so that it can
support the latter SoC.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:16 +01:00
Biju Das a69a3d094d ravb: Add ptp_cfg_active to struct ravb_hw_info
There are some H/W differences for the gPTP feature between
R-Car Gen3, R-Car Gen2, and RZ/G2L as below.

1) On R-Car Gen3, gPTP support is active in config mode.
2) On R-Car Gen2, gPTP support is not active in config mode.
3) RZ/G2L does not support the gPTP feature.

Add a ptp_cfg_active hw feature bit to struct ravb_hw_info for
supporting gPTP active in config mode for R-Car Gen3.
This patch also removes enum ravb_chip_id, chip_id from both
struct ravb_hw_info and struct ravb_private, as it is unused.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:16 +01:00
Biju Das 8f27219a61 ravb: Add no_ptp_cfg_active to struct ravb_hw_info
There are some H/W differences for the gPTP feature between
R-Car Gen3, R-Car Gen2, and RZ/G2L as below.

1) On R-Car Gen2, gPTP support is not active in config mode.
2) On R-Car Gen3, gPTP support is active in config mode.
3) RZ/G2L does not support the gPTP feature.

Add a no_ptp_cfg_active hw feature bit to struct ravb_hw_info for
handling gPTP for R-Car Gen2.

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:16 +01:00
Biju Das 6de19fa0e9 ravb: Add multi_irq to struct ravb_hw_info
R-Car Gen3 supports separate interrupts for E-MAC and DMA queues,
whereas R-Car Gen2 and RZ/G2L have a single interrupt instead.

Add a multi_irq hw feature bit to struct ravb_hw_info to enable
this only for R-Car Gen3.
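
A minimal sketch of how such a feature bit is expressed (the bitfield
name follows the commit text; the probe context, pdev and the IRQ
resource name are assumptions):

  struct ravb_hw_info {
          /* ... per-SoC limits and feature bits ... */
          unsigned multi_irq:1;   /* E-MAC and DMA queues have their own IRQs */
  };

  /* probe path: only R-Car Gen3 sets multi_irq, so only it requests the
   * extra per-queue interrupt lines
   */
  if (info->multi_irq) {
          int irq = platform_get_irq_byname(pdev, "ch22");  /* name illustrative */
          /* ...request_irq() for each additional line... */
  }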

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:16 +01:00
Biju Das c81d894226 ravb: Remove the macros NUM_TX_DESC_GEN[23]
To address the 4-byte alignment restriction on the transmission
buffer on R-Car Gen2, we use 2 descriptors, whereas a single
descriptor suffices in the other cases.
Replace the NUM_TX_DESC_GEN[23] macros with the plain numbers and
add a comment to explain them.
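
A hedged sketch of the resulting shape (the aligned_tx field name is an
assumption; only the comment and the 2-vs-1 descriptor count come from
the message):

  /* R-Car Gen2 needs 2 TX descriptors per packet to satisfy the 4-byte
   * alignment restriction on the transmission buffer; a single
   * descriptor is enough for the other SoCs.
   */
  priv->num_tx_desc = info->aligned_tx ? 2 : 1;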

Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Suggested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:18:16 +01:00
Shai Malin e543468869 qede: Fix memset corruption
Thanks to Kees Cook, who detected a memset() that starts at a member
other than the first one but is sized for the whole struct.
The better change is to remove the redundant memset and to clear
only the msix_cnt member.
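
A hedged sketch of the difference (the params variable is hypothetical;
msix_cnt is the member named above):

  /* before: starts at ->msix_cnt but is sized for the whole struct, so
   * it also wipes the members that follow it
   */
  memset(&params->msix_cnt, 0, sizeof(*params));

  /* after: clear only the member that actually needs resetting */
  params->msix_cnt = 0;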

Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Reported-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:07:55 +01:00
Haiyang Zhang c1a3e9f98d net: mana: Add WARN_ON_ONCE in case of CQE read overflow
A CQE read overflow is not expected in normal operation.
Add WARN_ON_ONCE in case of CQE read overflow, instead of failing
silently.
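
A minimal sketch of the pattern (the overflow condition and variable
names shown are illustrative, not the driver's actual check):

  /* a CQE read overflow should never happen; if it does, leave a
   * one-time warning in the log instead of bailing out silently
   */
  if (WARN_ON_ONCE(comp_read > max_cqe))
          return;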

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:06:54 +01:00
Haiyang Zhang 1e2d0824a9 net: mana: Add support for EQ sharing
The existing code uses (1 + #vPorts * #Queues) MSIXs, which may exceed
the device limit.

Support EQ sharing, so that multiple vPorts (NICs) can share the same
set of MSIXs.

And, report the EQ-sharing capability bit to the host, which means the
host can potentially offer more vPorts and queues to the VM.

Also update the resource limit checking and error handling for better
robustness.

Now, we support up to 256 virtual ports per VF (it was 16/VF), and
support up to 64 queues per vPort (it was 16).

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:06:54 +01:00
Haiyang Zhang e1b5683ff6 net: mana: Move NAPI from EQ to CQ
The existing code has NAPI threads polling on EQ directly. To prepare
for EQ sharing among vPorts, move NAPI from EQ to CQ so that one EQ
can serve multiple CQs from different vPorts.

The "arm bit" is only set when CQ processing is completed to reduce
the number of EQ entries, which in turn reduces the number of interrupts
on EQ.

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:06:54 +01:00
Shaokun Zhang 807d1032e0 netxen_nic: Remove the repeated declaration
Function 'netxen_rom_fast_read' is declared twice, so remove the
repeated declaration.

Cc: Manish Chopra <manishc@marvell.com>
Cc: Rahul Verma <rahulv@marvell.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:04:53 +01:00
Nathan Chancellor bc4f128d86 cxgb4: Properly revert VPD changes
Clang warns:

drivers/net/ethernet/chelsio/cxgb4/t4_hw.c:2785:2: error: variable 'kw_offset' is uninitialized when used here [-Werror,-Wuninitialized]
        FIND_VPD_KW(i, "RV");
        ^~~~~~~~~~~~~~~~~~~~
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c:2776:39: note: expanded from macro 'FIND_VPD_KW'
        var = pci_vpd_find_info_keyword(vpd, kw_offset, vpdr_len, name); \
                                             ^~~~~~~~~
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c:2748:34: note: initialize the variable 'kw_offset' to silence this warning
        unsigned int vpdr_len, kw_offset, id_len;
                                        ^
                                         = 0
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c:2785:2: error: variable 'vpdr_len' is uninitialized when used here [-Werror,-Wuninitialized]
        FIND_VPD_KW(i, "RV");
        ^~~~~~~~~~~~~~~~~~~~
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c:2776:50: note: expanded from macro 'FIND_VPD_KW'
        var = pci_vpd_find_info_keyword(vpd, kw_offset, vpdr_len, name); \
                                                        ^~~~~~~~
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c:2748:23: note: initialize the variable 'vpdr_len' to silence this warning
        unsigned int vpdr_len, kw_offset, id_len;
                             ^
                              = 0
2 errors generated.

The series "PCI/VPD: Convert more users to the new VPD API functions"
was applied to net-next when it should have been applied to the PCI tree
because of build errors. However, commit 82e34c8a9b ("Revert "Revert
"cxgb4: Search VPD with pci_vpd_find_ro_info_keyword()""") reapplied a
change, resulting in the warning above.

Properly revert commit 8d63ee602d ("cxgb4: Search VPD with
pci_vpd_find_ro_info_keyword()") to fix the warning and restore proper
functionality. This also reverts commit 3a93bedea0 ("cxgb4: Remove
unused vpd_param member ec") to avoid future merge conflicts, as that
change has been applied to the PCI tree.

Link: https://lore.kernel.org/r/20210823120929.7c6f7a4f@canb.auug.org.au/
Link: https://lore.kernel.org/r/1ca29408-7bc7-4da5-59c7-87893c9e0442@gmail.com/
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 11:03:13 +01:00
Song Yoong Siang 2b9fff64f0 net: stmmac: fix kernel panic due to NULL pointer dereference of buf->xdp
Ensure the XSK buffer is valid before proceeding to free the xdp buffer.
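
A hedged sketch of the check (the loop and variable names follow common
stmmac conventions but are assumptions here):

  /* buf->xdp is NULL when no XSK buffer was attached to this
   * descriptor; only a valid buffer may be returned to the pool
   */
  if (!buf->xdp)
          continue;

  xsk_buff_free(buf->xdp);
  buf->xdp = NULL;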

The following kernel panic is observed without this patch:

RIP: 0010:xp_free+0x5/0x40
Call Trace:
stmmac_napi_poll_rxtx+0x332/0xb30 [stmmac]
? stmmac_tx_timer+0x3c/0xb0 [stmmac]
net_rx_action+0x13d/0x3d0
__do_softirq+0xfc/0x2fb
? smpboot_register_percpu_thread+0xe0/0xe0
run_ksoftirqd+0x32/0x70
smpboot_thread_fn+0x1d8/0x2c0
kthread+0x169/0x1a0
? kthread_park+0x90/0x90
ret_from_fork+0x1f/0x30
---[ end trace 0000000000000002 ]---

Fixes: bba2556efa ("net: stmmac: Enable RX via AF_XDP zero-copy")
Cc: <stable@vger.kernel.org> # 5.13.x
Suggested-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 10:59:39 +01:00
Song Yoong Siang a6451192da net: stmmac: fix kernel panic due to NULL pointer dereference of xsk_pool
After the xsk_pool is freed, NAPI polling may still be running, which
causes a kernel crash due to a kernel NULL pointer dereference of
rx_q->xsk_pool and tx_q->xsk_pool.

Fix this by changing the XDP pool setup sequence (see the sketch
below) to:
 1. disable NAPI before freeing the xsk_pool
 2. enable NAPI after initializing the xsk_pool
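
A minimal sketch of the ordering, assuming a per-channel NAPI instance
(the channel/queue names are illustrative; xsk_get_pool_from_qid() is
the AF_XDP core helper):

  /* teardown: quiesce NAPI first, then release the pool */
  napi_disable(&ch->rxtx_napi);
  /* ...free the XSK RX/TX buffers and clear rx_q->xsk_pool / tx_q->xsk_pool... */

  /* setup: attach the pool first, then let NAPI poll again */
  rx_q->xsk_pool = xsk_get_pool_from_qid(ndev, queue);
  napi_enable(&ch->rxtx_napi);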

The following kernel panic is observed without this patch:

RIP: 0010:xsk_uses_need_wakeup+0x5/0x10
Call Trace:
stmmac_napi_poll_rxtx+0x3a9/0xae0 [stmmac]
__napi_poll+0x27/0x130
net_rx_action+0x233/0x280
__do_softirq+0xe2/0x2b6
run_ksoftirqd+0x1a/0x20
smpboot_thread_fn+0xac/0x140
? sort_range+0x20/0x20
kthread+0x124/0x150
? set_kthread_struct+0x40/0x40
ret_from_fork+0x1f/0x30
---[ end trace a77c8956b79ac107 ]---

Fixes: bba2556efa ("net: stmmac: Enable RX via AF_XDP zero-copy")
Cc: <stable@vger.kernel.org> # 5.13.x
Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 10:59:39 +01:00
David S. Miller d484dc2b21 Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:

====================
1GbE Intel Wired LAN Driver Updates 2021-08-24

Vinicius Costa Gomes says:

This adds support for PCIe PTM (Precision Time Measurement) to the igc
driver. PCIe PTM allows the NIC and Host clocks to be compared more
precisely, improving the clock synchronization accuracy.

Patch 1/4 reverts a commit that made pci_enable_ptm() private to the
PCI subsystem, reverting makes it possible for it to be called from
the drivers.

Patch 2/4 adds the pcie_ptm_enabled() helper.

Patch 3/4 calls pci_enable_ptm() from the igc driver.

Patch 4/4 implements the PCIe PTM support. Exposing it via the
.getcrosststamp() API implies that the time measurements are made
synchronously with the ioctl(). The hardware was implemented so the
most convenient way to retrieve that information would be
asynchronously. So, to follow the expectations of the ioctl() we have
to use less convenient ways, triggering a PCIe PTM dialog every time
an ioctl() is received.

Some questions are raised (also pointed out in the commit message):

1. Using convert_art_ns_to_tsc() is too x86 specific, there should be
   a common way to create a 'system_counterval_t' from a timestamp.

2. convert_art_ns_to_tsc() says that it should only be used when
   X86_FEATURE_TSC_KNOWN_FREQ is true, but during tests it works even
   when it returns false. Should that check be done?
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 10:57:19 +01:00
Harini Katakam 85520079af net: macb: Add a NULL check on desc_ptp
macb_ptp_desc will not return NULL under most circumstances with correct
Kconfig and IP design config register. But for the sake of the extreme
corner case, check for NULL when using the helper. In case of rx_tstamp,
no action is necessary except to return (similar to timestamp disabled)
and warn. In case of TX, return -EINVAL to let the skb be freed. Perform
this check before marking the skb in progress.
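
A hedged sketch of the TX-side check (the warning call and surrounding
context are illustrative; macb_ptp_desc() is the helper named above):

  desc_ptp = macb_ptp_desc(bp, desc);
  /* extreme corner case only: bail out rather than dereference NULL */
  if (!desc_ptp) {
          dev_warn_once(&bp->pdev->dev, "timestamp descriptor unavailable\n");
          return -EINVAL;   /* the caller frees the skb */
  }
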
Fixes coverity warning:
(4) Event dereference:
Dereferencing a null pointer "desc_ptp"

Signed-off-by: Harini Katakam <harini.katakam@xilinx.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 10:39:17 +01:00
Alok Prasad 755f905340 qed: Enable automatic recovery on error condition.
This patch enables automatic recovery by default in case of various
error conditions like FW assert, hardware error, etc.
It also ensures the driver can handle multiple iterations of assertion
conditions.

Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Alok Prasad <palok@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 10:38:16 +01:00
Michael Riesch 2d26f6e39a net: stmmac: dwmac-rk: fix unbalanced pm_runtime_enable warnings
This reverts commit 2c896fb02e
"net: stmmac: dwmac-rk: add pd_gmac support for rk3399" and fixes
unbalanced pm_runtime_enable warnings.

In the commit to be reverted, support for power management was
introduced to the Rockchip glue code. Later, power management support
was introduced to the stmmac core code, resulting in multiple
invocations of pm_runtime_{enable,disable,get_sync,put_sync}.

The multiple invocations happen in rk_gmac_powerup and
stmmac_{dvr_probe, resume} as well as in rk_gmac_powerdown and
stmmac_{dvr_remove, suspend}, respectively, which are always called
in conjunction.

Fixes: 5ec5582343 ("net: stmmac: add clocks management for gmac driver")
Signed-off-by: Michael Riesch <michael.riesch@wolfvision.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-25 10:37:17 +01:00
Vinicius Costa Gomes a90ec84837 igc: Add support for PTP getcrosststamp()
i225 supports PCIe Precision Time Measurement (PTM), allowing us to
support the PTP_SYS_OFFSET_PRECISE ioctl() in the driver via the
getcrosststamp() function.
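
A hedged sketch of how the hook is wired up (igc_ptp_getcrosststamp and
the adapter layout are assumptions; .getcrosststamp is the kernel's
ptp_clock_info member):

  static int igc_ptp_getcrosststamp(struct ptp_clock_info *ptp,
                                    struct system_device_crosststamp *cts)
  {
          /* trigger one PTM dialog, wait for it to complete, then convert
           * the captured device/ART pair into the crosststamp the core
           * expects (e.g. via convert_art_ns_to_tsc() on x86)
           */
          return 0;
  }

  /* in the PTP init path: */
  adapter->ptp_caps.getcrosststamp = igc_ptp_getcrosststamp;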

The easiest way to expose the PTM registers would be to configure the PTM
dialogs to run periodically, but the PTP_SYS_OFFSET_PRECISE ioctl()
semantics are more aligned to using a kind of "one-shot" way of retrieving
the PTM timestamps. But this causes a bit more code to be written: the
trigger registers for the PTM dialogs are not cleared automatically.

i225 can be configured to send "fake" packets with the PTM
information, adding support for handling these types of packets is
left for the future.

PTM improves the accuracy of time synchronization, for example, using
phc2sys, while a simple application is sending packets as fast as
possible. First, without .getcrosststamp():

phc2sys[191.382]: enp4s0 sys offset      -959 s2 freq    -454 delay   4492
phc2sys[191.482]: enp4s0 sys offset       798 s2 freq   +1015 delay   4069
phc2sys[191.583]: enp4s0 sys offset       962 s2 freq   +1418 delay   3849
phc2sys[191.683]: enp4s0 sys offset       924 s2 freq   +1669 delay   3753
phc2sys[191.783]: enp4s0 sys offset       664 s2 freq   +1686 delay   3349
phc2sys[191.883]: enp4s0 sys offset       218 s2 freq   +1439 delay   2585
phc2sys[191.983]: enp4s0 sys offset       761 s2 freq   +2048 delay   3750
phc2sys[192.083]: enp4s0 sys offset       756 s2 freq   +2271 delay   4061
phc2sys[192.183]: enp4s0 sys offset       809 s2 freq   +2551 delay   4384
phc2sys[192.283]: enp4s0 sys offset      -108 s2 freq   +1877 delay   2480
phc2sys[192.383]: enp4s0 sys offset     -1145 s2 freq    +807 delay   4438
phc2sys[192.484]: enp4s0 sys offset       571 s2 freq   +2180 delay   3849
phc2sys[192.584]: enp4s0 sys offset       241 s2 freq   +2021 delay   3389
phc2sys[192.684]: enp4s0 sys offset       405 s2 freq   +2257 delay   3829
phc2sys[192.784]: enp4s0 sys offset        17 s2 freq   +1991 delay   3273
phc2sys[192.884]: enp4s0 sys offset       152 s2 freq   +2131 delay   3948
phc2sys[192.984]: enp4s0 sys offset      -187 s2 freq   +1837 delay   3162
phc2sys[193.084]: enp4s0 sys offset     -1595 s2 freq    +373 delay   4557
phc2sys[193.184]: enp4s0 sys offset       107 s2 freq   +1597 delay   3740
phc2sys[193.284]: enp4s0 sys offset       199 s2 freq   +1721 delay   4010
phc2sys[193.385]: enp4s0 sys offset      -169 s2 freq   +1413 delay   3701
phc2sys[193.485]: enp4s0 sys offset       -47 s2 freq   +1484 delay   3581
phc2sys[193.585]: enp4s0 sys offset       -65 s2 freq   +1452 delay   3778
phc2sys[193.685]: enp4s0 sys offset        95 s2 freq   +1592 delay   3888
phc2sys[193.785]: enp4s0 sys offset       206 s2 freq   +1732 delay   4445
phc2sys[193.885]: enp4s0 sys offset      -652 s2 freq    +936 delay   2521
phc2sys[193.985]: enp4s0 sys offset      -203 s2 freq   +1189 delay   3391
phc2sys[194.085]: enp4s0 sys offset      -376 s2 freq    +955 delay   2951
phc2sys[194.185]: enp4s0 sys offset      -134 s2 freq   +1084 delay   3330
phc2sys[194.285]: enp4s0 sys offset       -22 s2 freq   +1156 delay   3479
phc2sys[194.386]: enp4s0 sys offset        32 s2 freq   +1204 delay   3602
phc2sys[194.486]: enp4s0 sys offset       122 s2 freq   +1303 delay   3731

Statistics for this run (total of 2179 lines), in nanoseconds:
  average: -1.12
  stdev: 634.80
  max: 1551
  min: -2215

With .getcrosststamp() via PCIe PTM:

phc2sys[367.859]: enp4s0 sys offset         6 s2 freq   +1727 delay      0
phc2sys[367.959]: enp4s0 sys offset        -2 s2 freq   +1721 delay      0
phc2sys[368.059]: enp4s0 sys offset         5 s2 freq   +1727 delay      0
phc2sys[368.160]: enp4s0 sys offset        -1 s2 freq   +1723 delay      0
phc2sys[368.260]: enp4s0 sys offset        -4 s2 freq   +1719 delay      0
phc2sys[368.360]: enp4s0 sys offset        -5 s2 freq   +1717 delay      0
phc2sys[368.460]: enp4s0 sys offset         1 s2 freq   +1722 delay      0
phc2sys[368.560]: enp4s0 sys offset        -3 s2 freq   +1718 delay      0
phc2sys[368.660]: enp4s0 sys offset         5 s2 freq   +1725 delay      0
phc2sys[368.760]: enp4s0 sys offset        -1 s2 freq   +1721 delay      0
phc2sys[368.860]: enp4s0 sys offset         0 s2 freq   +1721 delay      0
phc2sys[368.960]: enp4s0 sys offset         0 s2 freq   +1721 delay      0
phc2sys[369.061]: enp4s0 sys offset         4 s2 freq   +1725 delay      0
phc2sys[369.161]: enp4s0 sys offset         1 s2 freq   +1724 delay      0
phc2sys[369.261]: enp4s0 sys offset         4 s2 freq   +1727 delay      0
phc2sys[369.361]: enp4s0 sys offset         8 s2 freq   +1732 delay      0
phc2sys[369.461]: enp4s0 sys offset         7 s2 freq   +1733 delay      0
phc2sys[369.561]: enp4s0 sys offset         4 s2 freq   +1733 delay      0
phc2sys[369.661]: enp4s0 sys offset         1 s2 freq   +1731 delay      0
phc2sys[369.761]: enp4s0 sys offset         1 s2 freq   +1731 delay      0
phc2sys[369.861]: enp4s0 sys offset        -5 s2 freq   +1725 delay      0
phc2sys[369.961]: enp4s0 sys offset        -4 s2 freq   +1725 delay      0
phc2sys[370.062]: enp4s0 sys offset         2 s2 freq   +1730 delay      0
phc2sys[370.162]: enp4s0 sys offset        -7 s2 freq   +1721 delay      0
phc2sys[370.262]: enp4s0 sys offset        -3 s2 freq   +1723 delay      0
phc2sys[370.362]: enp4s0 sys offset         1 s2 freq   +1726 delay      0
phc2sys[370.462]: enp4s0 sys offset        -3 s2 freq   +1723 delay      0
phc2sys[370.562]: enp4s0 sys offset        -1 s2 freq   +1724 delay      0
phc2sys[370.662]: enp4s0 sys offset        -4 s2 freq   +1720 delay      0
phc2sys[370.762]: enp4s0 sys offset        -7 s2 freq   +1716 delay      0
phc2sys[370.862]: enp4s0 sys offset        -2 s2 freq   +1719 delay      0

Statistics for this run (total of 2179 lines), in nanoseconds:
  average: 0.14
  stdev: 5.03
  max: 48
  min: -27

For reference, the statistics for runs without PCIe congestion show
that the improvements from enabling PTM are less dramatic. For two
runs of 16466 entries:
  without PTM: avg -0.04 stdev 10.57 max 39 min -42
  with PTM: avg 0.01 stdev 4.20 max 19 min -16

One possible explanation is that when PTM is not enabled, and there's a lot
of traffic in the PCIe fabric, some register reads will take more time
than the others because of congestion on the PCIe fabric.

When PTM is enabled, even if the PTM dialogs take more time to
complete under heavy traffic, the time measurements do not depend on
the time to read the registers.

This was implemented following the i225 EAS version 0.993.

Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-24 12:01:34 -07:00
Vinicius Costa Gomes 1b5d73fb86 igc: Enable PCIe PTM
Enables PCIe PTM (Precision Time Measurement) support in the igc
driver. Notifies the PCI devices that PCIe PTM should be enabled.

PCIe PTM is a protocol similar to PTP (Precision Time Protocol) that runs
in the PCIe fabric; it allows devices to report time measurements from
their internal clocks and their correlation with the PCIe root clock.

The i225 NIC provides registers that expose those time
measurements; those registers will be used, in later patches, to
implement the PTP_SYS_OFFSET_PRECISE ioctl().
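
A minimal sketch of the probe-time call (the error handling shown is
illustrative; pci_enable_ptm() is the PCI core API referred to in this
series):

  /* ask the PCI core to enable PTM; failure only means the precise
   * cross-timestamp ioctl() will not be offered, so it is not fatal
   */
  if (pci_enable_ptm(pdev, NULL) < 0)
          dev_info(&pdev->dev, "PCIe PTM not supported by PCIe bus/controller\n");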

Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-08-24 11:49:21 -07:00
Yufeng Mo cce1689eb5 net: hns3: add ethtool support for CQE/EQE mode configuration
Add support in ethtool for switching EQE/CQE mode.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-24 07:38:29 -07:00
Yufeng Mo 9f0c6f4b74 net: hns3: add support for EQE/CQE mode configuration
For devices whose version is V3 or above, the GL can
select EQE or CQE mode, so add support for it.

In CQE mode, the coalesced timer will restart when the first new
completion occurs, while in EQE mode, the timer will not restart.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-24 07:38:29 -07:00
Yufeng Mo f3ccfda193 ethtool: extend coalesce setting uAPI with CQE mode
In order to support more coalesce parameters through netlink,
add two new parameters, kernel_coal and extack, to .set_coalesce
and .get_coalesce, so that extra info can be returned to user space
via the netlink API.
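
The extended ethtool_ops callback prototypes look roughly like this
(sketch; see include/linux/ethtool.h for the authoritative definitions):

  int (*get_coalesce)(struct net_device *dev,
                      struct ethtool_coalesce *coal,
                      struct kernel_ethtool_coalesce *kernel_coal,
                      struct netlink_ext_ack *extack);
  int (*set_coalesce)(struct net_device *dev,
                      struct ethtool_coalesce *coal,
                      struct kernel_ethtool_coalesce *kernel_coal,
                      struct netlink_ext_ack *extack);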

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-24 07:38:29 -07:00
Heiner Kallweit 18a9eae240 r8169: enable ASPM L0s state
ASPM is disabled completely because we've seen different types of
problems in the past. However it seems these problems occurred with
L1 or L1 sub-states only. On all the chip versions I've seen the
acceptable L0s exit latency is 512ns. This should be short enough not
to cause problems. If the actual L0s exit latency of the PCIe link
is bigger than 512ns then the PCI core will disable L0s anyway.
So let's give it a try and disable L1 and L1 sub-states only.
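
A minimal sketch of the intended policy (only the generic L1 flag is
shown; how the driver handles the sub-state flags is not spelled out
here):

  /* leave L0s enabled (its 512 ns exit latency is acceptable) and keep
   * L1 and its sub-states disabled, as those caused problems before
   */
  pci_disable_link_state(pdev, PCIE_LINK_STATE_L1);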

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-24 10:50:26 +01:00
Shai Malin b0cd08537d qed: Fix the VF msix vectors flow
For VFs, we should return an error in case we didn't get the exact
number of MSI-X vectors that we requested.
Not doing so will lead to a crash when starting queues for this VF.

Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-24 09:22:37 +01:00
Heiner Kallweit 1bb39cb65b cxgb4: improve printing NIC information
Currently the interface name and PCI address are printed twice, because
netdev_info() already prints this information implicitly. This results
in messages like the following; remove the duplicated information.

cxgb4 0000:81:00.4 eth3: eth3: Chelsio T6225-OCP-SO (0000:81:00.4) 1G/10G/25GBASE-SFP28

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-24 09:16:39 +01:00
Tang Bin f6a4e0e8a0 via-velocity: Use of_device_get_match_data to simplify code
Retrieve the OF match data; it is better and cleaner to use
'of_device_get_match_data' than 'of_match_device'.
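
A hedged sketch of the simplification (the match table name and the data
pointer are illustrative, not taken from this driver):

  const void *data;

  /* before: look up the match entry, then dereference its ->data */
  const struct of_device_id *of_id = of_match_device(velocity_of_ids, &pdev->dev);
  data = of_id ? of_id->data : NULL;

  /* after: one helper call does both steps */
  data = of_device_get_match_data(&pdev->dev);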

Signed-off-by: Tang Bin <tangbin@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 12:56:15 +01:00
Tang Bin b708a96d76 via-rhine: Use of_device_get_match_data to simplify code
Retrieve the OF match data; it is better and cleaner to use
'of_device_get_match_data' than 'of_match_device'.

Signed-off-by: Tang Bin <tangbin@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 12:56:15 +01:00
Christophe JAILLET 609c1308fb hinic: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.
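
For illustration, the hand modification mentioned above collapses the
two mask calls into one (hypothetical probe snippet, not taken from this
patch):

  /* before: two calls through the PCI DMA wrappers */
  if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) ||
      pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
          goto err_disable;

  /* after: one call on the underlying struct device */
  if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
          goto err_disable;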

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 12:02:29 +01:00
Christophe JAILLET a14e39041b qlcnic: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 12:02:29 +01:00
Christophe JAILLET eb9c5c0d3a net/mellanox: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 12:02:28 +01:00
Christophe JAILLET a0991bf441 net: 8139cp: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 12:02:28 +01:00
Christophe JAILLET 75bacb6d20 myri10ge: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

A message split on 2 lines has been merged.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 12:02:28 +01:00
Christophe JAILLET 056b29ae07 net: sunhme: Remove unused macros
The usage of these macros has been removed in commit db1a8611c8
("sunhme: Convert to pure OF driver."). So they can be removed.

This simplifies code and helps for removing the wrappers in
include/linux/pci-dma-compat.h.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:58:33 +01:00
Christophe JAILLET e5c88bc91b forcedeth: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:56:57 +01:00
Christophe JAILLET 83b2d939d1 net: jme: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:56:57 +01:00
Christophe JAILLET 05fbeb21af net: ec_bhf: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

A useless "err = -EIO;" assignment has been removed.
'dma_set_mask_and_coherent()' already returns only 0 or -EIO.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:56:57 +01:00
Christophe JAILLET 4489d8f528 net: chelsio: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:56:57 +01:00
Christophe JAILLET df70303dd1 net: broadcom: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:56:56 +01:00
Christophe JAILLET 3852e54e67 net: atlantic: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below.

It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
This is less verbose.

A useless "!= 0" has also been removed in a test.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:56:56 +01:00
Maxim Kiselev 359f4cdd7d net: marvell: fix MVNETA_TX_IN_PRGRS bit number
According to the Armada XP datasheet, the bit at position 0 corresponds
to the TxInProg indication.
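
The fix amounts to moving the flag to bit 0 (sketch of the register-bit
define; BIT() is the kernel helper):

  /* Armada XP datasheet: TxInProg is bit 0 of the port status register */
  #define MVNETA_TX_IN_PRGRS      BIT(0)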

Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: Maxim Kiselev <bigunclemax@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:51:26 +01:00
Wong Vee Khee 82a44ae113 net: stmmac: fix kernel panic due to NULL pointer dereference of plat->est
In the case where taprio offload is not enabled, the error handling path
causes a kernel crash due to a kernel NULL pointer dereference.

Fix this by adding a check for NULL before attempting to access
'plat->est' in the mutex_lock() call.
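
A hedged sketch of the guard (the surrounding error path and return
value are illustrative; plat->est and its lock are named in the message):

  /* taprio offload was never enabled, so there is no EST state to roll
   * back; bail out before touching plat->est->lock
   */
  if (!priv->plat->est)
          return ret;

  mutex_lock(&priv->plat->est->lock);
  /* ...disable EST and clear the stored schedule... */
  mutex_unlock(&priv->plat->est->lock);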

The following kernel panic is observed without this patch:

RIP: 0010:mutex_lock+0x10/0x20
Call Trace:
tc_setup_taprio+0x482/0x560 [stmmac]
kmem_cache_alloc_trace+0x13f/0x490
taprio_disable_offload.isra.0+0x9d/0x180 [sch_taprio]
taprio_destroy+0x6c/0x100 [sch_taprio]
qdisc_create+0x2e5/0x4f0
tc_modify_qdisc+0x126/0x740
rtnetlink_rcv_msg+0x12b/0x380
_raw_spin_lock_irqsave+0x19/0x40
_raw_spin_unlock_irqrestore+0x18/0x30
create_object+0x212/0x340
rtnl_calcit.isra.0+0x110/0x110
netlink_rcv_skb+0x50/0x100
netlink_unicast+0x191/0x230
netlink_sendmsg+0x243/0x470
sock_sendmsg+0x5e/0x60
____sys_sendmsg+0x20b/0x280
copy_msghdr_from_user+0x5c/0x90
__mod_memcg_state+0x87/0xf0
 ___sys_sendmsg+0x7c/0xc0
lru_cache_add+0x7f/0xa0
_raw_spin_unlock+0x16/0x30
wp_page_copy+0x449/0x890
handle_mm_fault+0x921/0xfc0
__sys_sendmsg+0x59/0xa0
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xa9
---[ end trace b1f19b24368a96aa ]---

Fixes: b60189e039 ("net: stmmac: Integrate EST with TAPRIO scheduler API")
Cc: <stable@vger.kernel.org> # 5.10.x
Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:49:34 +01:00
David S. Miller 46002bf300 Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue
Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2021-08-20

This series contains updates to igc and e1000e drivers.

Aaron Ma resolves a page fault which occurs when thunderbolt is
unplugged for igc.

Toshiki Nishioka fixes Tx queue looping to use actual number of queues
instead of max value for igc.

Sasha fixes an incorrect latency comparison by decoding the values before
comparing and prevents attempted writes to read-only NVMs for e1000e.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:45:37 +01:00
Christophe JAILLET 5ed74b03eb xgene-v2: Fix a resource leak in the error handling path of 'xge_probe()'
A successful 'xge_mdio_config()' call should be balanced by a corresponding
'xge_mdio_remove()' call in the error handling path of the probe, as
already done in the remove function.

Update the error handling path accordingly.

Fixes: ea8ab16ab2 ("drivers: net: xgene-v2: Add MDIO support")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:23:48 +01:00
David S. Miller 1a6ef20b41 Revert "sfc: falcon: Read VPD with pci_vpd_alloc()"
This reverts commit 3873a9a4d8.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:15:53 +01:00
David S. Miller a7eeb7a7dd Revert "sfc: falcon: Search VPD with pci_vpd_find_ro_info_keyword()"
This reverts commit 01dbe7129d.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:15:34 +01:00
David S. Miller cd3d5d6881 Revert "cxgb4: Validate VPD checksum with pci_vpd_check_csum()"
This reverts commit 96ce96f151.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:15:05 +01:00
David S. Miller 4fb2c383e0 Revert "bnx2x: Read VPD with pci_vpd_alloc()"
This reverts commit bed3db3d73.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:08:20 +01:00
David S. Miller 82e34c8a9b Revert "Revert "cxgb4: Search VPD with pci_vpd_find_ro_info_keyword()""
This reverts commit df6deaf673.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:07:17 +01:00
David S. Miller 3408259b6a Revert "bnx2: Search VPD with pci_vpd_find_ro_info_keyword()"
This reverts commit ddc122aac9.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:05:19 +01:00
David S. Miller 4fd1315706 Revert "bnxt: Search VPD with pci_vpd_find_ro_info_keyword()"
This reverts commit 58a9b5d262.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:04:23 +01:00
David S. Miller 4a55c34e30 Revert "bnx2x: Search VPD with pci_vpd_find_ro_info_keyword()"
This reverts commit da417885a9.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:01:55 +01:00
David S. Miller 197c316ce4 Revert "bnxt: Read VPD with pci_vpd_alloc()"
This reverts commit ebcdc8ebe8.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:01:35 +01:00
David S. Miller 54c0bcc028 Revert "bnxt: Search VPD with pci_vpd_find_ro_info_keyword()"
This reverts commit 58a9b5d262.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 11:00:25 +01:00
David S. Miller df6deaf673 Revert "cxgb4: Search VPD with pci_vpd_find_ro_info_keyword()"
This reverts commit 8d63ee602d.

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-23 10:59:19 +01:00
Jason Gunthorpe 1a010d73ef Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
Saeed Mahameed says:
====================

This pulls mlx5-next branch which includes patches already reviewed on
net-next and rdma mailing lists.

1) mlx5 single E-Switch FDB for lag

2) IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq

3) Add DCS caps & fields support

We need this in net-next as multiple features are dependent on the
single FDB feature.

====================

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* mellanox/mlx5-next:
  net/mlx5: Lag, Create shared FDB when in switchdev mode
  net/mlx5: E-Switch, add logic to enable shared FDB
  net/mlx5: Lag, move lag destruction to a workqueue
  net/mlx5: Lag, properly lock eswitch if needed
  net/mlx5: Add send to vport rules on paired device
  net/mlx5: E-Switch, Add event callback for representors
  net/mlx5e: Use shared mappings for restoring from metadata
  net/mlx5e: Add an option to create a shared mapping
  net/mlx5: E-Switch, set flow source for send to uplink rule
  RDMA/mlx5: Add shared FDB support
  {net, RDMA}/mlx5: Extend send to vport rules
  RDMA/mlx5: Fill port info based on the relevant eswitch
  net/mlx5: Lag, add initial logic for shared FDB
  net/mlx5: Return mdev from eswitch
  IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq
2021-08-22 19:22:58 -03:00
Heiner Kallweit 8d63ee602d cxgb4: Search VPD with pci_vpd_find_ro_info_keyword()
Use pci_vpd_find_ro_info_keyword() to search for keywords in VPD to
simplify the code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:40 +01:00
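As a rough illustration of the helper being adopted here (buffer and
keyword names below are assumptions, not the exact cxgb4 code):

        unsigned int kw_len;
        int off;

        /* Locate the Serial Number keyword inside the read-only section
         * of an already-read VPD image; a negative return means it is
         * absent or the VPD is malformed.
         */
        off = pci_vpd_find_ro_info_keyword(vpd, vpd_len,
                                           PCI_VPD_RO_KEYWORD_SERIALNO,
                                           &kw_len);
        if (off < 0)
                return off;

        memcpy(serial, vpd + off,
               min_t(unsigned int, kw_len, sizeof(serial) - 1));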
Heiner Kallweit 3a93bedea0 cxgb4: Remove unused vpd_param member ec
Member ec isn't used, so remove it.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:40 +01:00
Heiner Kallweit 96ce96f151 cxgb4: Validate VPD checksum with pci_vpd_check_csum()
Validate the VPD checksum with pci_vpd_check_csum() to simplify the code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:40 +01:00
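A hedged sketch of what such a checksum validation might look like; the
surrounding buffer handling and error policy are illustrative:

        /* Validate the RV checksum field over the VPD image read
         * earlier; treat anything other than 0 as "do not trust
         * this VPD".
         */
        ret = pci_vpd_check_csum(vpd, vpd_len);
        if (ret) {
                dev_err(&pdev->dev, "VPD checksum invalid or missing\n");
                kfree(vpd);
                return -EINVAL;
        }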
Heiner Kallweit 58a9b5d262 bnxt: Search VPD with pci_vpd_find_ro_info_keyword()
Use pci_vpd_find_ro_info_keyword() to search for keywords in VPD to
simplify the code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:40 +01:00
Heiner Kallweit ebcdc8ebe8 bnxt: Read VPD with pci_vpd_alloc()
Use pci_vpd_alloc() to dynamically allocate a properly sized buffer and
read the full VPD data into it.

This simplifies the code, and we no longer have to make assumptions about
VPD size.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:40 +01:00
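For illustration, reading VPD through the new helper might look like
the following sketch; variable names are assumptions rather than the
actual bnxt code:

        unsigned int vpd_len;
        void *vpd;

        /* pci_vpd_alloc() sizes the buffer itself and reads the whole
         * VPD into it, so the driver no longer guesses a maximum length.
         */
        vpd = pci_vpd_alloc(pdev, &vpd_len);
        if (IS_ERR(vpd))
                return PTR_ERR(vpd);

        /* ... parse keywords from vpd/vpd_len here ... */

        kfree(vpd);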
Heiner Kallweit da417885a9 bnx2x: Search VPD with pci_vpd_find_ro_info_keyword()
Use pci_vpd_find_ro_info_keyword() to search for keywords in VPD to
simplify the code.

str_id_reg and str_id_cap hold the same string and are used in the same
comparison. This doesn't make sense; use a single string, str_id,
instead.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:40 +01:00
Heiner Kallweit bed3db3d73 bnx2x: Read VPD with pci_vpd_alloc()
Use pci_vpd_alloc() to dynamically allocate a properly sized buffer and
read the full VPD data into it.

This simplifies the code, and we no longer have to make assumptions about
VPD size.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:39 +01:00
Heiner Kallweit 0df79c8646 bnx2: Replace open-coded version with swab32s()
Use swab32s() instead of open-coding it.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 21:45:39 +01:00
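A small sketch of the kind of replacement this describes, assuming v is
a u32; the open-coded form is illustrative:

        /* Before: open-coded 32-bit byte swap */
        v = (v << 24) | ((v & 0xff00) << 8) |
            ((v & 0xff0000) >> 8) | (v >> 24);

        /* After: swap in place with the kernel helper */
        swab32s(&v);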