Added a new compatible string for WCN3998 and the corresponding voltage
and current values, and changed the driver code to support WCN3998.
Signed-off-by: Harish Bandi <c-hbandi@codeaurora.org>
Reviewed-by: Matthias Kaehlcke <mka@chromium.org>
Reviewed-by: Balakrishna Godavarthi <bgodavar@codeaurora.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Pull networking fixes from David Miller:
1) Out of bounds access in xfrm IPSEC policy unlink, from Yue Haibing.
2) Missing length check for esp4 UDP encap, from Sabrina Dubroca.
3) Fix byte order of RX STBC access in mac80211, from Johannes Berg.
4) Infinite loop in bpftool map create, from Alban Crequy.
5) Register mark fix in ebpf verifier after pkt/null checks, from Paul
Chaignon.
6) Properly use rcu_dereference_sk_user_data in L2TP code, from Eric
Dumazet.
7) Buffer overrun in marvell phy driver, from Andrew Lunn.
8) Several crash and statistics handling fixes to bnxt_en driver, from
Michael Chan and Vasundhara Volam.
9) Several fixes to the TLS layer from Jakub Kicinski (copying negative
amounts of data in reencrypt, reencrypt frag copying, blind nskb->sk
NULL deref, etc).
10) Several UDP GRO fixes, from Paolo Abeni and Eric Dumazet.
11) PID/UID checks on ipv6 flow labels are inverted, from Willem de
Bruijn.
12) Use after free in l2tp, from Eric Dumazet.
13) IPV6 route destroy races, also from Eric Dumazet.
14) SCTP state machine can erroneously run recursively, fix from Xin
Long.
15) Adjust AF_PACKET msg_name length checks, add padding bytes if
necessary. From Willem de Bruijn.
16) Preserve skb_iif, so that forwarded packets have consistent values
even if fragmentation is involved. From Shmulik Ladkani.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (69 commits)
udp: fix GRO packet of death
ipv6: A few fixes on dereferencing rt->from
rds: ib: force endiannes annotation
selftests: fib_rule_tests: print the result and return 1 if any tests failed
ipv4: ip_do_fragment: Preserve skb_iif during fragmentation
net/tls: avoid NULL pointer deref on nskb->sk in fallback
selftests: fib_rule_tests: Fix icmp proto with ipv6
packet: validate msg_namelen in send directly
packet: in recvmsg msg_name return at least sizeof sockaddr_ll
sctp: avoid running the sctp state machine recursively
stmmac: pci: Fix typo in IOT2000 comment
Documentation: fix netdev-FAQ.rst markup warning
ipv6: fix races in ip6_dst_destroy()
l2ip: fix possible use-after-free
appletalk: Set error code if register_snap_client failed
net: dsa: bcm_sf2: fix buffer overflow doing set_rxnfc
rxrpc: Fix net namespace cleanup
ipv6/flowlabel: wait rcu grace period before put_pid()
vrf: Use orig netdev to count Ip6InNoRoutes and a fresh route lookup when sending dest unreach
tcp: add sanity tests in tcp_add_backlog()
...
Merge tag 'for-linus-20190502' of git://git.kernel.dk/linux-block
Pull io_uring fixes from Jens Axboe:
"This is mostly io_uring fixes/tweaks. Most of these were actually done
in time for the last -rc, but I wanted to ensure that everything
tested out great before including them. The code delta looks larger
than it really is, as it's mostly just comment additions/changes.
Outside of the comment additions/changes, this is mostly removal of
unnecessary barriers. In all, this pull request contains:
- Tweak to how we handle errors at submission time. We now post a
completion event if the error occurs on behalf of an sqe, instead
of returning it through the system call. If the error happens
outside of a specific sqe, we return the error through the system
call. This makes it nicer to use and makes the "normal" use case
behave the same as the offload cases. (me)
- Fix for a missing req reference drop from async context (me)
- If an sqe is submitted with RWF_NOWAIT, don't punt it to async
context. Return -EAGAIN directly, instead of using it as a hint to
do async punt. (Stefan)
- Fix notes on barriers (Stefan)
- Remove unnecessary barriers (Stefan)
- Fix potential double free of memory in setup error (Mark)
- Further improve sq poll CPU validation (Mark)
- Fix page allocation warning and leak on buffer registration error
(Mark)
- Fix iov_iter_type() for new no-ref flag (Ming)
- Fix a case where dio doesn't honor bio no-page-ref (Ming)"
* tag 'for-linus-20190502' of git://git.kernel.dk/linux-block:
io_uring: avoid page allocation warnings
iov_iter: fix iov_iter_type
block: fix handling for BIO_NO_PAGE_REF
io_uring: drop req submit reference always in async punt
io_uring: free allocated io_memory once
io_uring: fix SQPOLL cpu validation
io_uring: have submission side sqe errors post a cqe
io_uring: remove unnecessary barrier after unsetting IORING_SQ_NEED_WAKEUP
io_uring: remove unnecessary barrier after incrementing dropped counter
io_uring: remove unnecessary barrier before reading SQ tail
io_uring: remove unnecessary barrier after updating SQ head
io_uring: remove unnecessary barrier before reading cq head
io_uring: remove unnecessary barrier before wq_has_sleeper
io_uring: fix notes on barriers
io_uring: fix handling SQEs requesting NOWAIT
Merge tag 'pci-v5.1-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI fixes from Bjorn Helgaas:
"I apologize for sending these so late in the cycle. We went back and
forth about how to deal with the unexpected logging of intentional
link state changes and finally decided to just turn them off by
default via a Kconfig option.
PCI fixes:
- Stop ignoring "pci=disable_acs_redir" parameter (Logan Gunthorpe)
- Use shared MSI/MSI-X vector for Link Bandwidth Management (Alex
Williamson)
- Add Kconfig option for Link Bandwidth notification messages (Keith
Busch)"
* tag 'pci-v5.1-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
PCI/LINK: Add Kconfig option (default off)
PCI/portdrv: Use shared MSI/MSI-X vector for Bandwidth Management
PCI: Fix issue with "pci=disable_acs_redir" parameter being ignored
- Regression fix for the marvell nand driver.
Merge tag 'mtd/fixes-for-5.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux
Pull MTD fix from Richard Weinberger:
"A single regression fix for the marvell nand driver"
* tag 'mtd/fixes-for-5.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux:
mtd: rawnand: marvell: Clean the controller state before each operation
e8303bb7a7 ("PCI/LINK: Report degraded links via link bandwidth
notification") added dmesg logging whenever a link changes speed or width
to a state that is considered degraded. Unfortunately, it cannot
differentiate signal integrity-related link changes from those
intentionally initiated by an endpoint driver, including drivers that may
live in userspace or VMs when making use of vfio-pci. Some GPU drivers
actively manage the link state to save power, which generates a stream of
messages like this:
vfio-pci 0000:07:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x16 link at 0000:00:02.0 (capable of 64.000 Gb/s with 5 GT/s x16 link)
Since we can't distinguish the intentional changes from the signal
integrity issues, leave the reporting turned off by default. Add a Kconfig
option to turn it on if desired.
Fixes: e8303bb7a7 ("PCI/LINK: Report degraded links via link bandwidth notification")
Link: https://lore.kernel.org/linux-pci/20190501142942.26972-1-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Fixes: d84aec4215 ("net: ll_temac: Fix support for 64-bit platforms")
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
syzbot was able to crash host by sending UDP packets with a 0 payload.
TCP does not have this issue since we do not aggregate packets without
payload.
Since dev_gro_receive() sets gso_size based on skb_gro_len(skb)
it seems not worth trying to cope with padded packets.
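The idea, as a rough sketch only (the helper name and the exact check are
taken from the surrounding GRO code and are assumptions here, not the
upstream diff):

    /* A packet whose UDP length covers only the header has no payload,
     * so there is nothing to aggregate; flush instead of merging so that
     * gso_size can never end up as zero.
     */
    struct udphdr *uh = udp_gro_udphdr(skb);

    if (ntohs(uh->len) <= sizeof(struct udphdr)) {
        NAPI_GRO_CB(skb)->flush = 1;
        return NULL;
    }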
BUG: KASAN: slab-out-of-bounds in skb_gro_receive+0xf5f/0x10e0 net/core/skbuff.c:3826
Read of size 16 at addr ffff88808893fff0 by task syz-executor612/7889
CPU: 0 PID: 7889 Comm: syz-executor612 Not tainted 5.1.0-rc7+ #96
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x172/0x1f0 lib/dump_stack.c:113
print_address_description.cold+0x7c/0x20d mm/kasan/report.c:187
kasan_report.cold+0x1b/0x40 mm/kasan/report.c:317
__asan_report_load16_noabort+0x14/0x20 mm/kasan/generic_report.c:133
skb_gro_receive+0xf5f/0x10e0 net/core/skbuff.c:3826
udp_gro_receive_segment net/ipv4/udp_offload.c:382 [inline]
call_gro_receive include/linux/netdevice.h:2349 [inline]
udp_gro_receive+0xb61/0xfd0 net/ipv4/udp_offload.c:414
udp4_gro_receive+0x763/0xeb0 net/ipv4/udp_offload.c:478
inet_gro_receive+0xe72/0x1110 net/ipv4/af_inet.c:1510
dev_gro_receive+0x1cd0/0x23c0 net/core/dev.c:5581
napi_gro_frags+0x36b/0xd10 net/core/dev.c:5843
tun_get_user+0x2f24/0x3fb0 drivers/net/tun.c:1981
tun_chr_write_iter+0xbd/0x156 drivers/net/tun.c:2027
call_write_iter include/linux/fs.h:1866 [inline]
do_iter_readv_writev+0x5e1/0x8e0 fs/read_write.c:681
do_iter_write fs/read_write.c:957 [inline]
do_iter_write+0x184/0x610 fs/read_write.c:938
vfs_writev+0x1b3/0x2f0 fs/read_write.c:1002
do_writev+0x15e/0x370 fs/read_write.c:1037
__do_sys_writev fs/read_write.c:1110 [inline]
__se_sys_writev fs/read_write.c:1107 [inline]
__x64_sys_writev+0x75/0xb0 fs/read_write.c:1107
do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x441cc0
Code: 05 48 3d 01 f0 ff ff 0f 83 9d 09 fc ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 3d 51 93 29 00 00 75 14 b8 14 00 00 00 0f 05 <48> 3d 01 f0 ff ff 0f 83 74 09 fc ff c3 48 83 ec 08 e8 ba 2b 00 00
RSP: 002b:00007ffe8c716118 EFLAGS: 00000246 ORIG_RAX: 0000000000000014
RAX: ffffffffffffffda RBX: 00007ffe8c716150 RCX: 0000000000441cc0
RDX: 0000000000000001 RSI: 00007ffe8c716170 RDI: 00000000000000f0
RBP: 0000000000000000 R08: 000000000000ffff R09: 0000000000a64668
R10: 0000000020000040 R11: 0000000000000246 R12: 000000000000c2d9
R13: 0000000000402b50 R14: 0000000000000000 R15: 0000000000000000
Allocated by task 5143:
save_stack+0x45/0xd0 mm/kasan/common.c:75
set_track mm/kasan/common.c:87 [inline]
__kasan_kmalloc mm/kasan/common.c:497 [inline]
__kasan_kmalloc.constprop.0+0xcf/0xe0 mm/kasan/common.c:470
kasan_slab_alloc+0xf/0x20 mm/kasan/common.c:505
slab_post_alloc_hook mm/slab.h:437 [inline]
slab_alloc mm/slab.c:3393 [inline]
kmem_cache_alloc+0x11a/0x6f0 mm/slab.c:3555
mm_alloc+0x1d/0xd0 kernel/fork.c:1030
bprm_mm_init fs/exec.c:363 [inline]
__do_execve_file.isra.0+0xaa3/0x23f0 fs/exec.c:1791
do_execveat_common fs/exec.c:1865 [inline]
do_execve fs/exec.c:1882 [inline]
__do_sys_execve fs/exec.c:1958 [inline]
__se_sys_execve fs/exec.c:1953 [inline]
__x64_sys_execve+0x8f/0xc0 fs/exec.c:1953
do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Freed by task 5351:
save_stack+0x45/0xd0 mm/kasan/common.c:75
set_track mm/kasan/common.c:87 [inline]
__kasan_slab_free+0x102/0x150 mm/kasan/common.c:459
kasan_slab_free+0xe/0x10 mm/kasan/common.c:467
__cache_free mm/slab.c:3499 [inline]
kmem_cache_free+0x86/0x260 mm/slab.c:3765
__mmdrop+0x238/0x320 kernel/fork.c:677
mmdrop include/linux/sched/mm.h:49 [inline]
finish_task_switch+0x47b/0x780 kernel/sched/core.c:2746
context_switch kernel/sched/core.c:2880 [inline]
__schedule+0x81b/0x1cc0 kernel/sched/core.c:3518
preempt_schedule_irq+0xb5/0x140 kernel/sched/core.c:3745
retint_kernel+0x1b/0x2d
arch_local_irq_restore arch/x86/include/asm/paravirt.h:767 [inline]
kmem_cache_free+0xab/0x260 mm/slab.c:3766
anon_vma_chain_free mm/rmap.c:134 [inline]
unlink_anon_vmas+0x2ba/0x870 mm/rmap.c:401
free_pgtables+0x1af/0x2f0 mm/memory.c:394
exit_mmap+0x2d1/0x530 mm/mmap.c:3144
__mmput kernel/fork.c:1046 [inline]
mmput+0x15f/0x4c0 kernel/fork.c:1067
exec_mmap fs/exec.c:1046 [inline]
flush_old_exec+0x8d9/0x1c20 fs/exec.c:1279
load_elf_binary+0x9bc/0x53f0 fs/binfmt_elf.c:864
search_binary_handler fs/exec.c:1656 [inline]
search_binary_handler+0x17f/0x570 fs/exec.c:1634
exec_binprm fs/exec.c:1698 [inline]
__do_execve_file.isra.0+0x1394/0x23f0 fs/exec.c:1818
do_execveat_common fs/exec.c:1865 [inline]
do_execve fs/exec.c:1882 [inline]
__do_sys_execve fs/exec.c:1958 [inline]
__se_sys_execve fs/exec.c:1953 [inline]
__x64_sys_execve+0x8f/0xc0 fs/exec.c:1953
do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
The buggy address belongs to the object at ffff88808893f7c0
which belongs to the cache mm_struct of size 1496
The buggy address is located 600 bytes to the right of
1496-byte region [ffff88808893f7c0, ffff88808893fd98)
The buggy address belongs to the page:
page:ffffea0002224f80 count:1 mapcount:0 mapping:ffff88821bc40ac0 index:0xffff88808893f7c0 compound_mapcount: 0
flags: 0x1fffc0000010200(slab|head)
raw: 01fffc0000010200 ffffea00025b4f08 ffffea00027b9d08 ffff88821bc40ac0
raw: ffff88808893f7c0 ffff88808893e440 0000000100000001 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff88808893fe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff88808893ff00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff88808893ff80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
^
ffff888088940000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff888088940080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Fixes: e20cf8d3f1 ("udp: implement GRO for plain UDP sockets.")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* cpcap-battery: fix a division by zero
* core: fix systemd issue due to log messages produced by uevent
Merge tag 'for-v5.1-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-power-supply
Pull power supply fixes from Sebastian Reichel:
"Two more fixes for the 5.1 cycle.
One division by zero fix in a specific driver and one core workaround
for bad userspace behaviour from systemd regarding uevents. IMHO this
can be considered to be a userspace bug, but the debug messages are
useless anyway.
- cpcap-battery: fix a division by zero
- core: fix systemd issue due to log messages produced by uevent"
* tag 'for-v5.1-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-power-supply:
power: supply: sysfs: prevent endless uevent loop with CONFIG_POWER_SUPPLY_DEBUG
power: supply: cpcap-battery: Fix division by zero
This is a follow-up to the fix in
commit 9c69a13205 ("route: Avoid crash from dereferencing NULL rt->from")
rt6_do_redirect():
1. NULL checking is needed on rt->from because a parallel
fib6_info delete could happen that sets rt->from to NULL.
(e.g. rt6_remove_exception() and fib6_drop_pcpu_from()).
2. fib6_info_hold() is not enough. Same reason as (1).
Meaning, holding dst->__refcnt cannot ensure
rt->from is not NULL or rt->from->fib6_ref is not 0.
Instead of using fib6_info_hold_safe() which ip6_rt_cache_alloc()
is already doing, this patch chooses to extend the rcu section
to keep "from" dereference-able after checking for NULL.
inet6_rtm_getroute():
1. NULL checking is also needed on rt->from for a similar reason.
Note that inet6_rtm_getroute() is using RTNL_FLAG_DOIT_UNLOCKED.
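The resulting access pattern, sketched loosely (illustrative only, not the
exact upstream diff):

    struct fib6_info *from;

    rcu_read_lock();
    from = rcu_dereference(rt->from);
    if (!from) {
        /* a parallel fib6_info delete already cleared rt->from */
        rcu_read_unlock();
        return;
    }
    /* use "from" only while still inside the RCU read-side section */
    rcu_read_unlock();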
Fixes: a68886a691 ("net/ipv6: Make from in rt6_info rcu protected")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Wei Wang <weiwan@google.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
While the endianness is handled correctly, as indicated by the comment
above the offending line, sparse was unhappy with the missing annotation,
since be64_to_cpu() expects a __be64 argument. To fix the annotation,
all involved variables are changed to a consistent __le64 and the
conversion to uint64_t is delayed until the call to rds_cong_map_updated().
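Sketched loosely, the annotation change amounts to something like this
(the variable name is illustrative):

    __le64 mask;    /* kept as __le64 all the way through */

    /* convert to host order only at the final call site */
    rds_cong_map_updated(map, le64_to_cpu(mask));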
Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier says:
====================
net: mvpp2: cls: Add classification
This series is a rework of the previously standalone patch adding
classification support for mvpp2:
https://lore.kernel.org/netdev/20190423075031.26074-1-maxime.chevallier@bootlin.com/
This patch has been reworked according to Saeed's review, to make sure
that the location of the rule is always respected and serves as a way to
prioritize rules relative to each other. This is the 3rd iteration of this
submission, but since it's now a series, I reset the revision numbering.
This series implements that in a limited configuration for now, since we
limit the total number of rules per port to 4.
The main factors for this limitation are:
- We share the classification tables between all ports (4 max, although
one is only used for internal loopback), hence we have to perform a
logical separation between rules, which is done today by dedicated
ranges for each port in each table
- The "Flow table", which dictates which lookups operations are
performed for an ingress packet, in subdivided into 22 "sub flows",
each corresponding to a traffic type based on the L3 proto, L4
proto, the presence or not of a VLAN tag and the L3 fragmentation.
This makes so that when adding a rule, it has to be added into each
of these subflows, introducing duplications of entries and limiting
our max number of entries.
These limitations can be overcome in several ways, but for readability's
sake, I'd rather submit basic classification offload support for now,
and improve it gradually.
This series also adds a small cosmetic cleanup patch (1), and also adds
support for the "Drop" action compared to the first submission of this
feature. It is simple enough to be added with this basic support.
Compared to the first submissions, the NETIF_F_NTUPLE flag was also
removed, following Saeed's comment.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit introduces support for the "Drop" action in classification
offload. This corresponds to the "-1" action with ethtool -N.
This is achieved using the color marking actions available in the C2
engine, which associate a color to a packet. These colors can be either
Green, Yellow or Red, Red meaning that the packet should be dropped.
Green and Yellow colors are interpreted by the Policer, which isn't
supported yet.
This method of dropping using the Classifier is different from the
already existing early-drop features, such as VLAN filtering and MAC
UC/MC filtering, which are performed during the Parsing step, and
therefore take precedence over classification actions.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit introduces basic classification offloading support for the
PPv2 controller.
The PPv2 classifier has many classification engines, for now we only use
the C2 TCAM match engine.
This engine performs ternary lookups on 64-bit keys (called
Header Extracted Keys), which are built by extracting fields from the packet
header and concatenating them. At most 4 fields can be extracted for a
single lookup.
This basic implementation allows building the HEK from the following
fields:
- L4 source and destination ports (for UDP and TCP)
More fields are to be added in the future.
Classification flows are added through the ethtool interface, using the
newly introduced flow_rule infrastructure as an internal rule
representation, making it easier to implement tc flower rules if
need be.
The internal design for now allocates one range of 4 rules per port
due to the internal design of the flow table, which uses 22 sub-flows.
When inserting a classification rule, the rule is created in every
relevant sub-flow.
This low rule count is a very simple design which quickly reaches the
limitations of the flow table ordering, but guarantees that the rule
ordering will always be respected.
This commit only introduces support for the "steer to rxq" action.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As of today, the classification code is used only for RSS. We split the
incoming traffic into multiple flows, that correspond to the ethtool
flow_type parameter.
We don't want to use the ethtool flow definitions such as TCP_V4_FLOW,
for several reasons:
- We want to decouple the driver code from ethtool as much as
possible, so that we can easily use other interfaces such as tc flower,
- We want the flow_type to be a bitfield, so that we can match flows
embedded into each other, such as TCP4 which is a subset of IP4.
This commit does the conversion to the newer type.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cosmetic patch removing extra whitespace when writing the flow_table
entries.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- regression in memset if line size !64
- avoid panic if PAE and IOC
Merge tag 'arc-5.1-final' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC fixes from Vineet Gupta:
"A few minor fixes for ARC.
- regression in memset if line size !64
- avoid panic if PAE and IOC"
* tag 'arc-5.1-final' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
ARC: memset: fix build with L1_CACHE_SHIFT != 6
ARC: [hsdk] Make it easier to add PAE40 region to DTB
ARC: PAE40: don't panic and instead turn off hw ioc
The Interrupt Message Number in the PCIe Capabilities register (PCIe r4.0,
sec 7.5.3.2) indicates which MSI/MSI-X vector is shared by interrupts
related to the PCIe Capability, including Link Bandwidth Management and
Link Autonomous Bandwidth Interrupts (Link Control, 7.5.3.7), Command
Completed and Hot-Plug Interrupts (Slot Control, 7.5.3.10), and the PME
Interrupt (Root Control, 7.5.3.12).
pcie_message_numbers() checked whether we want to enable PME or Hot-Plug
interrupts but neglected to check for Link Bandwidth Management, so if we
only wanted the Bandwidth Management interrupts, it decided we didn't need
any vectors at all. Then pcie_port_enable_irq_vec() tried to reallocate
zero vectors, which failed, resulting in fallback to INTx.
On some systems, e.g., an X79-based workstation, that INTx seems broken or
not handled correctly, so we got spurious IRQ16 interrupts for Bandwidth
Management events.
Change pcie_message_numbers() so that if we want Link Bandwidth Management
interrupts, we use the shared MSI/MSI-X vector from the PCIe Capabilities
register.
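For reference, the shared vector number can be read from the PCIe
Capabilities register roughly like this (sketch only, not the exact
upstream code):

    u16 reg16;
    u32 nvec;

    pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
    /* Interrupt Message Number, PCIe r4.0 sec 7.5.3.2, bits 13:9 */
    nvec = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;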
Fixes: e8303bb7a7 ("PCI/LINK: Report degraded links via link bandwidth notification")
Link: https://lore.kernel.org/lkml/155597243666.19387.1205950870601742062.stgit@gimli.home
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
[bhelgaas: changelog]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Revert a recent ACPICA change that caused initialization to fail on
systems with Thunderbolt docking stations connected at the init time.
Merge tag 'acpi-5.1-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI fix from Rafael Wysocki:
"Revert a recent ACPICA change that caused initialization to fail on
systems with Thunderbolt docking stations connected at the init time"
* tag 'acpi-5.1-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
Revert "ACPICA: Clear status of GPEs before enabling them"
The 'extent_type' variable does seem to be reliably initialized, but
it's _very_ non-obvious, since there's a "goto next" case that jumps
over the normal initialization. That will then always trigger the
"start >= extent_end" test, which will end up never falling through to
the use of that variable.
But the code is certainly not obvious, and the compiler warning looks
reasonable. Make 'extent_type' an int, and initialize it to an invalid
negative value, which seems to be the common pattern in other places.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Esben Haabendal says:
====================
net: ll_temac: x86_64 support
This patch series adds support for using the ll_temac driver with
platform_data configuration and fixes endianness and 64-bit problems so
that it can be used on the x86_64 platform.
A few bugfixes are also included.
Changes since v2:
- Fixed lp->indirect_mutex initialization regression for OF
platforms introduced in v2
Changes since v1:
- Make indirect_mutex specification mandatory when using platform_data
- Move header to include/linux/platform_data
- Enable COMPILE_TEST for XILINX_LL_TEMAC
- Rebased to v5.1-rc7
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
As soon as TAILDESCR_PTR is written, DMA transfers might start.
Let's ensure we are ready to receive DMA IRQs before doing that.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This allows custom setup of IRQ coalescing for platforms using legacy
platform_device. The irq timeout and count parameters can be used for
tuning cpu load vs. latency.
I have maintained the 0x00000400 bit in TX_CHNL_CTRL. It is specified as
unused in the documentation I have available. It does not make any
difference in the hardware I have available, so it is left in to not risk
breaking other platforms where it might be used.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use usleep_range() to avoid problems with msleep() actually sleeping
much longer than expected.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
As we are actually using a BD for both the skb and each frag contained in
it, the oldest TX BD would be overwritten when there was exactly one BD
less than needed.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Unmap the actual buffer length, not the amount of data received, avoiding
resource exhaustion of swiotlb (seen on x86_64 platform).
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Indirect register access goes through a DCR bus bridge, which
allows only one outstanding transaction. And to make matters
worse, each TEMAC IP block contains two Ethernet interfaces, and
although they seem to have separate registers for indirect access,
they actually share the registers. Or to be more specific, MSW, LSW
and CTL registers are physically shared between Ethernet interfaces
in the same TEMAC IP, with the RDY register being (almost) specific to
the Ethernet interface. The 0x10000 bit in RDY reflects the combined
bus ready state, though.
So we need to take care to synchronize not only within a single
device, but also between devices in the same TEMAC IP.
This commit allows doing that with legacy platform devices.
For OF devices, the xlnx,compound parent of the temac node should be
used to find siblings, and to set up a shared indirect_mutex between them.
I will leave this work to somebody else, as I don't have hardware to
test that. No regression is introduced by this, as before this commit
using two Ethernet interfaces in the same TEMAC block was simply broken.
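The synchronization then looks roughly like this (illustrative sketch; the
register offsets and the RDY polling are simplified, not the exact
upstream code):

    mutex_lock(lp->indirect_mutex);
    temac_iow(lp, XTE_CTL0_OFFSET, reg);    /* start the indirect access */
    /* ... poll this interface's RDY register for completion ... */
    val = temac_ior(lp, XTE_LSW0_OFFSET);
    mutex_unlock(lp->indirect_mutex);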
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
With little-endian and 64-bit support in place, the ll_temac driver can
now be used on x86 and x86_64 platforms.
And while at it, enable COMPILE_TEST also.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Both TEMAC and SDMA are big-endian, so make sure that all values in SDMA
buffer descriptors (cdmac_bd) are handled as big-endian, independent of the
host endianness. With all currently supported platforms being big-endian,
this change does not alter behavior for any of them.
Note, when using app3 and app4 for piggybacking skb pointers there is no
need to care about endianness, as neither TEMAC nor SDMA access app3 and
app4 in TX buffer descriptors.
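In practice this means converting explicitly when touching descriptor
words, roughly (field names follow cdmac_bd; the exact usage is
illustrative):

    bd->phys = cpu_to_be32(dma_addr);          /* write in device order */
    bd->len  = cpu_to_be32(skb_headlen(skb));

    len = be32_to_cpu(bd->len);                /* read back in CPU order */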
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace the powerpc specific MMIO register access functions with the
generic big-endian mmio access functions, and add support for
little-endian access depending on configuration.
Big-endian access is maintained as the default, but little-endian can
be configured in device-tree binding or in platform data.
The temac_ior()/temac_iow() functions are replaced with macro wrappers
to avoid modifying existing code more than necessary.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The use of the 32-bit buffer descriptor APP4 field for storing the skb
pointer obviously does not work on 64-bit platforms.
As APP3 is also unused, we can use that to store the other half of 64-bit
pointer values.
Contrary to what is hinted at in the commit message of commit 15bfe05c8d
("net: ethernet: xilinx: Mark XILINX_LL_TEMAC broken on 64-bit")
there are no other pointers stored in cdmac_bd.
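Sketched loosely, the descriptor then carries the pointer like this
(helper names are illustrative):

    static void ptr_to_txbd(void *p, struct cdmac_bd *bd)
    {
        bd->app3 = (u32)(((u64)(uintptr_t)p) >> 32);
        bd->app4 = (u32)((uintptr_t)p & 0xFFFFFFFF);
    }

    static void *ptr_from_txbd(struct cdmac_bd *bd)
    {
        return (void *)(uintptr_t)(((u64)bd->app3 << 32) | bd->app4);
    }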
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Support initialization with platdata, so the driver can be used on
non-device-tree platforms.
For currently supported device-tree platforms, the driver should behave
as before.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As a side effect, a few error cases are fixed.
If of_iomap() of sdma_regs failed, no error code was returned. This is
fixed to return -ENOMEM, similar to the handling of an of_iomap() failure
for regs.
If sysfs_create_group() or register_netdev() failed, lp->phy_node was not
released.
Finally, the order in the remove function is corrected to be the reverse
of what is done in probe, i.e. calling temac_mdio_teardown() last, so we
unregister the netdev that most likely is using the mdio_bus first.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes: 65b2b4939a ("selftests: net: initial fib rule tests")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix inconsistent IS_ERR and PTR_ERR usage in cpsw_probe.
The proper pointer to use is clk instead of mode.
This issue was detected with the help of Coccinelle.
Fixes: 83a8471ba2 ("net: ethernet: ti: cpsw: refactor probe to group common hw initialization")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
The pvclock_page and hvclock_page variables are (as the name implies)
addresses to pages, created by the linker script.
But we declared them as just "extern u8" variables, which _works_, but
now that gcc does some more bounds checking, it causes warnings like
warning: array subscript 1 is outside array bounds of ‘u8[1]’
when we then access more than one byte from those variables.
Fix this by simply making the declaration of the variables match
reality, which makes the compiler happy too.
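Sketched, the change is simply (illustrative, not the exact upstream diff):

    /* before: claims a single byte, so gcc warns as soon as more than
     * one byte is accessed
     */
    extern u8 hvclock_page;

    /* after: declare what the linker script actually provides */
    extern u8 hvclock_page[PAGE_SIZE];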
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
I'm not sure what made gcc warn about this code now. The 'ret' variable
does end up initialized in all cases, but it's definitely not obvious,
so the compiler is quite reasonable to warn about this.
So just add initialization to make it all much more obvious both to
compilers and to humans.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We already did this for clang, but now gcc has that warning too. Yes,
yes, the address may be unaligned. And that's kind of the point.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Previously, during fragmentation after forwarding, skb->skb_iif wasn't
preserved, i.e. 'ip_copy_metadata' did not copy skb_iif from the given
'from' skb.
As a result, ip_do_fragment() creates fragments with a zero skb_iif,
leading to inconsistent behavior.
Assume for example an eBPF program attached at tc egress (post
forwarding) that examines __sk_buff->ingress_ifindex:
- the correct iif is observed if forwarding path does not involve
fragmentation/refragmentation
- a bogus iif is observed if forwarding path involves
fragmentation/refragmentation
Fix this by preserving skb_iif during 'ip_copy_metadata'.
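The fix amounts to a one-line copy in 'ip_copy_metadata' (a sketch of the
idea, not necessarily the literal diff):

    static void ip_copy_metadata(struct sk_buff *to, struct sk_buff *from)
    {
        /* ... existing metadata copies ... */
        to->skb_iif = from->skb_iif;  /* keep the ingress ifindex on fragments */
    }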
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In io_sqe_buffer_register() we allocate a number of arrays based on the
iov_len from the user-provided iov. While we limit iov_len to SZ_1G,
we can still attempt to allocate arrays exceeding MAX_ORDER.
On a 64-bit system with 4KiB pages, for an iov where iov_base = 0x10 and
iov_len = SZ_1G, we'll calculate that nr_pages = 262145. We then try to
allocate a corresponding array of (16-byte) bio_vecs, requiring 4194320
bytes, which is greater than 4 MiB. This results in SLUB warning that
we're trying to allocate more than MAX_ORDER allows, and failing the
allocation.
Avoid this by using kvmalloc() for allocations dependent on the
user-provided iov_len. At the same time, fix a leak of imu->bvec when
registration fails.
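Roughly, the allocation side then looks like this (sketch only; structure
and variable names are illustrative):

    /* fall back to vmalloc for large, user-controlled sizes */
    imu->bvec = kvmalloc_array(nr_pages, sizeof(struct bio_vec), GFP_KERNEL);
    if (!imu->bvec)
        goto err;

    /* error path: release with kvfree(), which also plugs the leak */
    kvfree(imu->bvec);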
Full splat from before this patch:
WARNING: CPU: 1 PID: 2314 at mm/page_alloc.c:4595 __alloc_pages_nodemask+0x7ac/0x2938 mm/page_alloc.c:4595
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 2314 Comm: syz-executor326 Not tainted 5.1.0-rc7-dirty #4
Hardware name: linux,dummy-virt (DT)
Call trace:
dump_backtrace+0x0/0x2f0 include/linux/compiler.h:193
show_stack+0x20/0x30 arch/arm64/kernel/traps.c:158
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x110/0x190 lib/dump_stack.c:113
panic+0x384/0x68c kernel/panic.c:214
__warn+0x2bc/0x2c0 kernel/panic.c:571
report_bug+0x228/0x2d8 lib/bug.c:186
bug_handler+0xa0/0x1a0 arch/arm64/kernel/traps.c:956
call_break_hook arch/arm64/kernel/debug-monitors.c:301 [inline]
brk_handler+0x1d4/0x388 arch/arm64/kernel/debug-monitors.c:316
do_debug_exception+0x1a0/0x468 arch/arm64/mm/fault.c:831
el1_dbg+0x18/0x8c
__alloc_pages_nodemask+0x7ac/0x2938 mm/page_alloc.c:4595
alloc_pages_current+0x164/0x278 mm/mempolicy.c:2132
alloc_pages include/linux/gfp.h:509 [inline]
kmalloc_order+0x20/0x50 mm/slab_common.c:1231
kmalloc_order_trace+0x30/0x2b0 mm/slab_common.c:1243
kmalloc_large include/linux/slab.h:480 [inline]
__kmalloc+0x3dc/0x4f0 mm/slub.c:3791
kmalloc_array include/linux/slab.h:670 [inline]
io_sqe_buffer_register fs/io_uring.c:2472 [inline]
__io_uring_register fs/io_uring.c:2962 [inline]
__do_sys_io_uring_register fs/io_uring.c:3008 [inline]
__se_sys_io_uring_register fs/io_uring.c:2990 [inline]
__arm64_sys_io_uring_register+0x9e0/0x1bc8 fs/io_uring.c:2990
__invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
invoke_syscall arch/arm64/kernel/syscall.c:47 [inline]
el0_svc_common.constprop.0+0x148/0x2e0 arch/arm64/kernel/syscall.c:83
el0_svc_handler+0xdc/0x100 arch/arm64/kernel/syscall.c:129
el0_svc+0x8/0xc arch/arm64/kernel/entry.S:948
SMP: stopping secondary CPUs
Dumping ftrace buffer:
(ftrace buffer empty)
Kernel Offset: disabled
CPU features: 0x002,23000438
Memory Limit: none
Rebooting in 1 seconds..
Fixes: edafccee56 ("io_uring: add support for pre-mapped user IO buffers")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Vinicius Costa Gomes says:
====================
net/sched: taprio change schedules
Changes from RFC:
- Removed the patches for taprio offloading, because of the lack of
in-tree users;
- Updated the links to point to the PATCH version of this series;
Original cover letter:
Overview
--------
This RFC has two objectives, it adds support for changing the running
schedules during "runtime", explained in more detail later, and
proposes an interface between taprio and the drivers for hardware
offloading.
These two different features are presented together so it's clear what
the "final state" would look like. But after the RFC stage, they can
be proposed (and reviewed) separately.
Changing the schedules without disrupting traffic is important for
handling dynamic use cases, for example, when streams are
added/removed and when the network configuration changes.
Hardware offloading support allows schedules to be more precise and
have lower resource usage.
Changing schedules
------------------
As with the other interfaces we proposed, we try to use the same
concepts as the IEEE 802.1Q-2018 specification. So, for changing
schedules, there are an "oper" (operational) and an "admin" schedule.
The "admin" schedule is mutable and not in use, the "oper" schedule is
immutable and is in use.
That is, when the user first adds a schedule it is in the "admin"
state, and it becomes "oper" when its base-time (basically when it
starts) is reached.
What this means is that now it's possible to create taprio with a schedule:
$ tc qdisc add dev IFACE parent root handle 100 taprio \
num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
queues 1@0 1@1 2@2 \
base-time 10000000 \
sched-entry S 03 300000 \
sched-entry S 02 300000 \
sched-entry S 06 400000 \
clockid CLOCK_TAI
And then, later, after the previous schedule is "promoted" to "oper",
add a new ("admin") schedule to be used some time later:
$ tc qdisc change dev IFACE parent root handle 100 taprio \
base-time 1553121866000000000 \
sched-entry S 02 500000 \
sched-entry S 0f 400000 \
clockid CLOCK_TAI
When enabling the ability to change schedules, it makes sense to add
two more defined knobs to schedules: "cycle-time" allows truncating a
cycle to some value, so it repeats after a well-defined interval;
"cycle-time-extension" controls how much an entry can be extended if
it's the last one before the change of schedules, the reason is to
avoid a very small cycle when transitioning from a schedule to
another.
With these, taprio in the software mode should provide a fairly
complete implementation of what's defined in the Enhancements for
Scheduled Traffic parts of the specification.
Hardware offload
----------------
Some workloads require better guarantees from their schedules than
what's provided by the software implementation. This series proposes
an interface for configuring schedules into compatible network
controllers.
This part is proposed together with the support for changing
schedules, because it raises questions like, should the "qdisc" side
be responsible for providing visibility into the schedules or should it
be the driver?
In this proposal, the driver is called passing the new schedule as
soon as it is validated, and the "core" qdisc takes care of displaying
(".dump()") the correct schedules at all times. It means that some
logic would need to be duplicated in the driver, if the hardware
doesn't have support for multiple schedules. But as taprio doesn't
have enough information about the underlying controller to know how
far in advance a schedule needs to be handed to the hardware, it
feels like a fair compromise.
The hardware offloading part of this proposal also tries to define an
interface for frame-preemption and how it interacts with the
scheduling of traffic, see Section 8.6.8.4 of IEEE 802.1Q-2018 for
more information.
One important difference between the qdisc interface and the
qdisc-driver interface, is that the "gate mask" on the qdisc side
references traffic classes, that is bit 0 of the gate mask means
Traffic Class 0, and in the driver interface, it specifies the queues,
that is bit 0 means queue 0. That is to say that taprio converts the
references to traffic classes to references to queues before sending
the offloading request to the driver.
Request for help
----------------
I would like that interested driver maintainers could take a look at
the proposed interface and see if it's going to be too awkward for any
particular device. Also, pointers to available documentation would be
appreciated. The idea here is to start a discussion so we can have an
interface that would work for multiple vendors.
Links
-----
kernel patches:
https://github.com/vcgomes/net-next/tree/taprio-add-support-for-change-v3
iproute2 patches:
https://github.com/vcgomes/iproute2/tree/taprio-add-support-for-change-v3
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
IEEE 802.1Q-2018 defines the concept of a cycle-time-extension, so the
last entry of a schedule before the start of a new schedule can be
extended, so "too-short" entries can be avoided.
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
IEEE 802.1Q-2018 defines that the cycle-time of a schedule may be
overridden, so the schedule is truncated to a determined "width".
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The IEEE 802.1Q-2018 defines two "types" of schedules, the "Oper" (from
operational?) and "Admin" ones. Up until now, 'taprio' only had
support for the "Oper" one, added when the qdisc is created. This adds
support for the "Admin" one, which allows the .change() operation to
be supported.
Just for clarification, some quick (and dirty) definitions, the "Oper"
schedule is the currently (as in this instant) running one, and it's
read-only. The "Admin" one is the one that the system configurator has
installed, it can be changed, and it will be "promoted" to "Oper" when
its 'base-time' is reached.
The idea behind this patch is that calling something like the below,
(after taprio is already configured with an initial schedule):
$ tc qdisc change taprio dev IFACE parent root \
base-time X \
sched-entry <CMD> <GATES> <INTERVAL> \
...
Will cause a new admin schedule to be created and programmed to be
"promoted" to "Oper" at instant X. If an "Admin" schedule already
exists, it will be overwritten with the new parameters.
Up until now, there was some code that was added to ease the support
of changing a single entry of a schedule, but was ultimately unused.
Now that we have support for "change" with better thought-out
semantics, updating a single entry seems to be less useful.
So we remove what is in practice dead code, and return a "not
supported" error if the user tries to use it. If changing a single
entry would make the user's life easier, we may resurrect this idea,
but at this point, removing it simplifies the code.
For now, only the schedule specific bits are allowed to be added for a
new schedule, that means that 'clockid', 'num_tc', 'map' and 'queues'
cannot be modified.
Example:
$ tc qdisc change dev IFACE parent root handle 100 taprio \
base-time $BASE_TIME \
sched-entry S 00 500000 \
sched-entry S 0f 500000 \
clockid CLOCK_TAI
The only change in the netlink API introduced by this change is the
introduction of an "admin" type in the response to a dump request,
that type allows userspace to separate the "oper" schedule from the
"admin" schedule. If userspace doesn't support the "admin" type, it
will only display the "oper" schedule.
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Right now, this isn't a problem, but the next commit allows schedules
to be added during runtime. When a new schedule transitions from the
inactive to the active state ("admin" -> "oper") the previous one can
be freed. If it's freed just after the RCU read lock is released, we
may access an invalid entry.
So, we should take care to protect the dequeue() flow, so all the
places that access the entries are protected by the RCU read lock.
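In the dequeue path that means, roughly (sketch only; field names are
illustrative):

    rcu_read_lock();
    entry = rcu_dereference(q->current_entry);
    if (entry) {
        /* pick the next skb allowed by this entry's gate mask */
    }
    rcu_read_unlock();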
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng says:
====================
undo congestion window on spurious SYN or SYNACK timeout
Linux TCP currently uses an initial congestion window of 1 packet
after multiple SYN or SYNACK timeouts, per RFC 6298. However, such
timeouts are often spurious on wireless or cellular networks that
experience high delay variances (e.g. ramping up dormant radios or
local link retransmission). Another case is when the underlying
path is longer than the default SYN timeout (e.g. 1 second). In
these cases starting the transfer with a minimal congestion window
is detrimental to the performance for short flows.
One naive approach is to simply ignore SYN or SYNACK timeouts and
always use a larger or default initial window. This approach however
risks pouring gas on the fire when the network is already highly
congested. This is particularly true in data centers, where applications
could start thousands to millions of connections over a single host or
multiple hosts, resulting in high SYN drops (e.g. incast).
This patch-set detects spurious SYN and SYNACK timeouts upon
completing the handshake via the widely-supported TCP timestamp
options. Upon such events the sender reverts to the default
initial window to start the data transfer so it gets best of both
worlds. This patch-set supports this feature for both active and
passive as well as Fast Open or regular connections.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Relocate the congestion window initialization from tcp_init_metrics()
to tcp_init_transfer() to improve code readability.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use a helper to consolidate two identical code block for passive TFO.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch makes passive Fast Open revert the cwnd to the default
initial cwnd (10 packets) if the SYNACK timeout is spurious.
Passive Fast Open uses a full socket during handshake so it can
use the existing undo logic to detect spurious retransmission
by recording the first SYNACK timeout in key state variable
retrans_stamp. Upon receiving the ACK of the SYNACK, if the socket
has sent some data before the timeout, the spurious timeout
is detected by tcp_try_undo_recovery() in tcp_process_loss()
in tcp_ack().
But if the socket has not sent any data yet, tcp_ack() does not
execute the undo code since no data is acknowledged. The fix is to
check this case explicitly after tcp_ack() during the ACK processing
in SYN_RECV state. In addition, this is checked in FIN_WAIT_1 state
in case the server closes the socket before the handshake completes.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>