Remove the unnecessary check on the update_lft variable in the
addrconf_prefix_rcv_add_addr routine, since it is always set to 0.
Moreover, remove the re-initialization of update_lft to 0.
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the interface is brought down, the head/tail descriptor ring
addresses are set to 0 in netsec_netdev_stop(). But the netsec
hardware still keeps the previous descriptor ring addresses, so driver
and hardware become inconsistent when the interface is brought up
again later.
To address this inconsistency, call netsec_reset_hardware() when the
interface is brought down.
In addition, to minimize the reset process, add a flag that decides
whether the driver loads the netsec microcode. Even if the driver
resets the netsec hardware, the netsec microcode stays resident in
RAM, so it is fine to load the microcode only once at initialization.
This patch is critical for installation over the network.
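A rough sketch of the resulting reset helper (illustrative only; the
helper names marked "assumed" and the exact register programming are
assumptions, not the upstream hunk):

  static int netsec_reset_hardware(struct netsec_priv *priv, bool load_ucode)
  {
          int err;

          /* Reset the engine and re-program the descriptor ring
           * head/tail addresses that the hardware keeps. */
          netsec_reset_engine(priv);              /* assumed helper */

          /* The microcode stays resident in RAM across resets, so it only
           * needs to be (re)loaded when explicitly requested. */
          if (load_ucode) {
                  err = netsec_load_microcode(priv);  /* assumed helper */
                  if (err)
                          return err;
          }
          return 0;
  }

  /* probe(): netsec_reset_hardware(priv, true);  -- load microcode once */
  /* stop():  netsec_reset_hardware(priv, false); -- skip the reload     */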
Signed-off-by: Masahisa KOJIMA <masahisa.kojima@linaro.org>
Fixes: 533dd11a12 ("net: socionext: Add Synquacer NetSec driver")
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Enable the TX IRQ as well during ndo_open(), as we cannot count on
RX traffic arriving early enough to trigger NAPI. This patch is critical
for installation over the network.
Fixes: 533dd11a12 ("net: socionext: Add Synquacer NetSec driver")
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The usage of of_device_get_match_data() reduces the code size a bit.
Also, the only way to call mtk_probe() is to match an entry in
of_mtk_match[], so match cannot be NULL.
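For illustration, the usual shape of this kind of cleanup in probe()
is sketched below (mtk_soc_data and the surrounding body are
assumptions; of_device_get_match_data() is the point):

  /* Sketch of the simplified probe path; not the exact mtk_probe() hunk. */
  static int mtk_probe_sketch(struct platform_device *pdev)
  {
          const struct mtk_soc_data *soc;

          /* probe() only runs for a matched of_mtk_match[] entry, so the
           * returned match data cannot be NULL and needs no check. */
          soc = of_device_get_match_data(&pdev->dev);

          /* ... use the SoC-specific data ... */
          return 0;
  }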
Signed-off-by: Ryder Lee <ryder.lee@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull networking fixes from David Miller:
1) In ip_gre tunnel, handle the conflict between TUNNEL_{SEQ,CSUM} and
GSO/LLTX properly. From Sabrina Dubroca.
2) Stop properly on error in lan78xx_read_otp(), from Phil Elwell.
3) Don't uncompress in slip before rstate is initialized, from Tejaswi
Tanikella.
4) When using 1.x firmware on aquantia, issue a deinit before we
hardware reset the chip, otherwise we break dirty wake WOL. From
Igor Russkikh.
5) Correct log check in vhost_vq_access_ok(), from Stefan Hajnoczi.
6) Fix ethtool -x crashes in bnxt_en, from Michael Chan.
7) Fix races in l2tp tunnel creation and duplicate tunnel detection,
from Guillaume Nault.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (22 commits)
l2tp: fix race in duplicate tunnel detection
l2tp: fix races in tunnel creation
tun: send netlink notification when the device is modified
tun: set the flags before registering the netdevice
lan78xx: Don't reset the interface on open
bnxt_en: Fix NULL pointer dereference at bnxt_free_irq().
bnxt_en: Need to include RDMA rings in bnxt_check_rings().
bnxt_en: Support max-mtu with VF-reps
bnxt_en: Ignore src port field in decap filter nodes
bnxt_en: do not allow wildcard matches for L2 flows
bnxt_en: Fix ethtool -x crash when device is down.
vhost: return bool from *_access_ok() functions
vhost: fix vhost_vq_access_ok() log check
vhost: Fix vhost_copy_to_user()
net: aquantia: oops when shutdown on already stopped device
net: aquantia: Regression on reset with 1.x firmware
cdc_ether: flag the Cinterion AHS8 modem by gemalto as WWAN
slip: Check if rstate is initialized before uncompressing
lan78xx: Avoid spurious kevent 4 "error"
lan78xx: Correctly indicate invalid OTP
...
Merge tag 'for-linus-4.17-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen fixes from Juergen Gross:
"A few fixes of Xen related core code and drivers"
* tag 'for-linus-4.17-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/pvh: Indicate XENFEAT_linux_rsdp_unrestricted to Xen
xen/acpi: off by one in read_acpi_id()
xen/acpi: upload _PSD info for non Dom0 CPUs too
x86/xen: Delay get_cpu_cap until stack canary is established
xen: xenbus_dev_frontend: Verify body of XS_TRANSACTION_END
xen: xenbus: Catch closing of non existent transactions
xen: xenbus_dev_frontend: Fix XS_TRANSACTION_END handling
Merge tag 'dma-mapping-4.17-2' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fix from Christoph Hellwig:
"Fix for one swiotlb regression in 2.16 from Takashi"
* tag 'dma-mapping-4.17-2' of git://git.infradead.org/users/hch/dma-mapping:
swiotlb: fix unexpected swiotlb_alloc_coherent failures
Merge tag 'for_linus-4.16' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb
Pull kdb updates from Jason Wessel:
- fix 2038 time access issues and new compiler warnings
- minor regression test cleanup
- formatting fixes for end user use of kdb
* tag 'for_linus-4.16' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb:
kdb: use memmove instead of overlapping memcpy
kdb: use ktime_get_mono_fast_ns() instead of ktime_get_ts()
kdb: bl: don't use tab character in output
kdb: drop newline in unknown command output
kdb: make "mdr" command repeat
kdb: use __ktime_get_real_seconds instead of __current_kernel_time
misc: kgdbts: Display progress of asynchronous tests
Merge tag 'microblaze-4.17-rc1' of git://git.monstr.eu/linux-2.6-microblaze
Pull microblaze updates from Michal Simek:
"Use generic pci_mmap_resource_range()"
* tag 'microblaze-4.17-rc1' of git://git.monstr.eu/linux-2.6-microblaze:
microblaze: Use generic pci_mmap_resource_range()
microblaze: Provide pgprot_device/writecombine macros for nommu
Merge tag 'asm-generic' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic
Pull asm-generic fixes from Arnd Bergmann:
"I have one regression fix for a minor build problem after the
architecture removal series, plus a rework of the barriers in the
readl/writel functions, thanks to work by Sinan Kaya:
This started from a discussion on the linuxppc and rdma mailing
lists[1]. To summarize, we decided that architectures are responsible
to serialize readl() and writel() accesses on a device MMIO space
relative to DMA performed by that device.
This series provides a pessimistic implementation of that behavior for
asm-generic/io.h, which is in turn used by a number of architectures
(h8300, microblaze, nios2, openrisc, s390, sparc, um, unicore32, and
xtensa). Some of those presumably need no extra barriers, or something
weaker than rmb()/wmb(), and they are advised to override the new
default for better performance.
For inb()/outb(), the same barriers are used, but architectures might
want to add another barrier to outb() here if that can guarantee
non-posted behavior (some architectures can, others cannot do that).
The readl_relaxed()/writel_relaxed() family of functions retains the
existing behavior with no extra barriers"
[1] https://lists.ozlabs.org/pipermail/linuxppc-dev/2018-March/170481.html
* tag 'asm-generic' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
io: change writeX_relaxed() to remove barriers
io: change readX_relaxed() to remove barriers
dts: remove cris & metag dts hard link file
io: change inX() to have their own IO barrier overrides
io: change outX() to have their own IO barrier overrides
io: define stronger ordering for the default writeX() implementation
io: define stronger ordering for the default readX() implementation
io: define several IO & PIO barrier types for the asm-generic version
Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
Pull virtio update from Michael Tsirkin:
"This adds reporting hugepage stats to virtio-balloon"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
virtio_balloon: export hugetlb page allocation counts
Merge tag 'iommu-updates-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
- OF_IOMMU support for the Rockchip iommu driver so that it can use
generic DT bindings
- rework of locking in the AMD IOMMU interrupt remapping code to make
it work better in RT kernels
- support for improved iotlb flushing in the AMD IOMMU driver
- support for 52-bit physical and virtual addressing in the ARM-SMMU
- various other small fixes and cleanups
* tag 'iommu-updates-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (53 commits)
iommu/io-pgtable-arm: Avoid warning with 32-bit phys_addr_t
iommu/rockchip: Support sharing IOMMU between masters
iommu/rockchip: Add runtime PM support
iommu/rockchip: Fix error handling in init
iommu/rockchip: Use OF_IOMMU to attach devices automatically
iommu/rockchip: Use IOMMU device for dma mapping operations
dt-bindings: iommu/rockchip: Add clock property
iommu/rockchip: Control clocks needed to access the IOMMU
iommu/rockchip: Fix TLB flush of secondary IOMMUs
iommu/rockchip: Use iopoll helpers to wait for hardware
iommu/rockchip: Fix error handling in attach
iommu/rockchip: Request irqs in rk_iommu_probe()
iommu/rockchip: Fix error handling in probe
iommu/rockchip: Prohibit unbind and remove
iommu/amd: Return proper error code in irq_remapping_alloc()
iommu/amd: Make amd_iommu_devtable_lock a spin_lock
iommu/amd: Drop the lock while allocating new irq remap table
iommu/amd: Factor out setting the remap table for a devid
iommu/amd: Use `table' instead `irt' as variable name in amd_iommu_update_ga()
iommu/amd: Remove the special case from alloc_irq_table()
...
Merge tag 'pm-4.17-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull more power management updates from Rafael Wysocki:
"These include one big-ticket item which is the rework of the idle loop
in order to prevent CPUs from spending too much time in shallow idle
states. It reduces idle power on some systems by 10% or more and may
improve performance of workloads in which the idle loop overhead
matters. This has been in the works for several weeks and it has been
tested and reviewed quite thoroughly.
Also included are changes that finalize the cpufreq cleanup moving
frequency table validation from drivers to the core, a few fixes and
cleanups of cpufreq drivers, a cpuidle documentation update and a PM
QoS core update to mark the expected switch fall-throughs in it.
Specifics:
- Rework the idle loop in order to prevent CPUs from spending too
much time in shallow idle states by making it stop the scheduler
tick before putting the CPU into an idle state only if the idle
duration predicted by the idle governor is long enough.
That required the code to be reordered to invoke the idle governor
before stopping the tick, among other things (Rafael Wysocki,
Frederic Weisbecker, Arnd Bergmann).
- Add the missing description of the residency sysfs attribute to the
cpuidle documentation (Prashanth Prakash).
- Finalize the cpufreq cleanup moving frequency table validation from
drivers to the core (Viresh Kumar).
- Fix a clock leak regression in the armada-37xx cpufreq driver
(Gregory Clement).
- Fix the initialization of the CPU performance data structures for
shared policies in the CPPC cpufreq driver (Shunyong Yang).
- Clean up the ti-cpufreq, intel_pstate and CPPC cpufreq drivers a
bit (Viresh Kumar, Rafael Wysocki).
- Mark the expected switch fall-throughs in the PM QoS core (Gustavo
Silva)"
* tag 'pm-4.17-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (23 commits)
tick-sched: avoid a maybe-uninitialized warning
cpufreq: Drop cpufreq_table_validate_and_show()
cpufreq: SCMI: Don't validate the frequency table twice
cpufreq: CPPC: Initialize shared perf capabilities of CPUs
cpufreq: armada-37xx: Fix clock leak
cpufreq: CPPC: Don't set transition_latency
cpufreq: ti-cpufreq: Use builtin_platform_driver()
cpufreq: intel_pstate: Do not include debugfs.h
PM / QoS: mark expected switch fall-throughs
cpuidle: Add definition of residency to sysfs documentation
time: hrtimer: Use timerqueue_iterate_next() to get to the next timer
nohz: Avoid duplication of code related to got_idle_tick
nohz: Gather tick_sched booleans under a common flag field
cpuidle: menu: Avoid selecting shallow states with stopped tick
cpuidle: menu: Refine idle state selection for running tick
sched: idle: Select idle state before stopping the tick
time: hrtimer: Introduce hrtimer_next_event_without()
time: tick-sched: Split tick_nohz_stop_sched_tick()
cpuidle: Return nohz hint from cpuidle_select()
jiffies: Introduce USER_TICK_USEC and redefine TICK_USEC
...
Merge tag 'ktest-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest
Pull ktest updates from Steven Rostedt:
"These commits have either been sitting in my INBOX or have been in my
local tree for some time. I need to push them upstream:
- Separate out config-bisect.pl from ktest.pl.
This allows users to do config bisects without full ktest setup.
- Email on status change.
Allow the user to be emailed on test start, finish, failure, etc.
- Other small fixes and enhancements"
* tag 'ktest-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest: (24 commits)
ktest: Take submenu into account for grub2 menus
ktest.pl: Add MAIL_COMMAND option to define how to send email
ktest.pl: Use run_command to execute sending mail
ktest.pl: Allow dodie be recursive
ktest.pl: Kill test if mailer is not supported
ktest.pl: Add MAIL_PATH option to define where to find the mailer
ktest.pl: No need to print no mailer is specified when mailto is not
Ktest: add email options to sample.config
Ktest: Use dodie for critical falures
Ktest: Add SigInt handling
Ktest: Add email support
ktest.pl: Detect if a config-bisect was interrupted
ktest.pl: Make finding config-bisect.pl dynamic
ktest.pl: Have ktest.pl pass -r to config-bisect.pl to reset bisect
ktest.pl: Use diffconfig if available for failed config bisects
ktest.pl: Allow for the config-bisect.pl output to display to console
ktest: Use config-bisect.pl in ktest.pl
ktest: Add standalone config-bisect.pl program
ktest: Set do_not_reboot=y for CONFIG_BISECT_TYPE=build
ktest: Set buildonly=1 for CONFIG_BISECT_TYPE=build
...
Pull UML updates from Richard Weinberger:
- a new and faster epoll based IRQ controller and NIC driver
- misc fixes and janitorial updates
* git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
Fix vector raw initialization logic
Migrate vector timers to new timer API
um: Compile with modern headers
um: vector: Fix an error handling path in 'vector_parse()'
um: vector: Fix a memory allocation check
um: vector: fix missing unlock on error in vector_net_open()
um: Add missing EXPORT for free_irq_by_fd()
High Performance UML Vector Network Driver
Epoll based IRQ controller
um: Use POSIX ucontext_t instead of struct ucontext
um: time: Use timespec64 for persistent clock
um: Restore symbol versions for __memcpy and memcpy
Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Arnd Bergmann:
"Here is a very small set of fixes for inclusion in linux-4.17-rc1: Two
changes for the maintainer file, and one more fix for the newly added
npcm platform, to enable the level 2 cache controller"
* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
MAINTAINERS: Update ASPEED entry with details
MAINTAINERS: Migrate oxnas list to groups.io
arm: npcm: enable L2 cache in NPCM7xx architecture
Guillaume Nault says:
====================
l2tp: tunnel creation fixes
L2TP tunnel creation is racy. We need to make sure that the tunnel
returned by l2tp_tunnel_create() isn't going to be freed while the
caller is using it. This is done in patch #1, by separating tunnel
creation from tunnel registration.
With the tunnel registration code in place, we can now check for
duplicate tunnels in a race-free way. This is done in patch #2, which
incidentally removes the last use of l2tp_tunnel_find().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
We can't use l2tp_tunnel_find() to prevent l2tp_nl_cmd_tunnel_create()
from creating a duplicate tunnel. A tunnel can be concurrently
registered after l2tp_tunnel_find() returns. Therefore, searching for
duplicates must be done at registration time.
Finally, remove l2tp_tunnel_find() entirely as it isn't used anywhere
anymore.
Fixes: 309795f4be ("l2tp: Add netlink control API for L2TP")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
l2tp_tunnel_create() inserts the new tunnel into the namespace's tunnel
list and sets the socket's ->sk_user_data field, before returning it to
the caller. Therefore, there are two ways the tunnel can be accessed
and freed, before the caller even had the opportunity to take a
reference. In practice, syzbot could crash the module by closing the
socket right after a new tunnel was returned to pppol2tp_create().
This patch moves tunnel registration out of l2tp_tunnel_create(), so
that the caller can safely hold a reference before publishing the
tunnel. This second step is done with the new l2tp_tunnel_register()
function, which is now responsible for associating the tunnel to its
socket and for inserting it into the namespace's list.
While moving the code to l2tp_tunnel_register(), a few modifications
were made. First, the socket validation tests are done in a helper
function, for clarity. Also, modifying the socket is now done after
inserting the tunnel into the namespace's tunnel list. This allows
insertion to fail without having to revert these modifications
in the error path (a followup patch will check for duplicate tunnels
before insertion). Either the socket is a kernel socket which we
control, or it is a user-space socket for which we have a reference on
the file descriptor. In any case, the socket isn't going to be closed
from under us.
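A simplified sketch of the resulting calling pattern, with error
handling trimmed (based on the description above; treat the details as
illustrative rather than the exact upstream code):

  struct l2tp_tunnel *tunnel;
  int err;

  err = l2tp_tunnel_create(net, fd, version, tunnel_id, peer_tunnel_id,
                           cfg, &tunnel);
  if (err < 0)
          return err;

  /* The caller holds the only reference here, so the tunnel cannot be
   * freed behind its back.  Registration associates the tunnel with its
   * socket and inserts it into the per-namespace list (where the
   * follow-up patch also rejects duplicate tunnel IDs). */
  err = l2tp_tunnel_register(tunnel, net, cfg);
  if (err < 0) {
          kfree(tunnel);
          return err;
  }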
Reported-by: syzbot+fbeeb5c3b538e8545644@syzkaller.appspotmail.com
Fixes: fd558d186d ("l2tp: Split pppol2tp patch into separate l2tp and ppp parts")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
I added dumping of link information about tun devices over netlink in
commit 1ec010e705 ("tun: export flags, uid, gid, queue information
over netlink"), but didn't add the missing netlink notifications when
the device's exported properties change.
This patch adds notifications when owner/group or flags are modified,
when queues are attached/detached, and when a tun fd is closed.
Reported-by: Thomas Haller <thaller@redhat.com>
Fixes: 1ec010e705 ("tun: export flags, uid, gid, queue information over netlink")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Otherwise, register_netdevice advertises the creation of the device with
the default flags, instead of what the user requested.
Reported-by: Thomas Haller <thaller@redhat.com>
Fixes: 1ec010e705 ("tun: export flags, uid, gid, queue information over netlink")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 92571a1aae ("lan78xx: Connect phy early") moves the PHY
initialisation into lan78xx_probe, but lan78xx_open subsequently calls
lan78xx_reset. As well as forcing a second round of link negotiation,
this reset frequently prevents the phy interrupt from being generated
(even though the link is up), rendering the interface unusable.
Fix this issue by removing the lan78xx_reset call from lan78xx_open.
Fixes: 92571a1aae ("lan78xx: Connect phy early")
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan says:
====================
bnxt_en: Fixes for net.
This bug fix series includes NULL pointer fixes in the ethtool -x code
path and in the error cleanup path when freeing IRQs, a fix for a ring
accounting bug that missed rings used by the RDMA driver, and 3 bug
fixes related to TC Flower and VF-reps.
v2: Fixed commit message of patch 4. Changed the pound sign to $ sign
in front of the ip command.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
When open fails during ethtool -L ring change, for example, the driver
may crash at bnxt_free_irq() because bp->bnapi is NULL.
If we fail to allocate all the new rings, bnxt_open_nic() will free
all the memory including bp->bnapi. Subsequent call to bnxt_close_nic()
will try to dereference bp->bnapi in bnxt_free_irq().
Fix it by checking for !bp->bnapi in bnxt_free_irq().
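The fix boils down to an early-return guard at the top of
bnxt_free_irq() (a sketch, not the verbatim hunk):

  static void bnxt_free_irq(struct bnxt *bp)
  {
          /* bnxt_open_nic() may already have freed bp->bnapi after a failed
           * ring allocation; there is nothing to free in that case. */
          if (!bp->bnapi)
                  return;

          /* ... existing per-ring IRQ teardown continues here ... */
  }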
Fixes: e5811b8c09 ("bnxt_en: Add IRQ remapping logic.")
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With recent changes to reserve both L2 and RDMA rings, we need to include
the RDMA rings in bnxt_check_rings(). Otherwise we will under-estimate
the rings we need during ethtool -L, which may lead to failure.
Fixes: fbcfc8e467 ("bnxt_en: Reserve completion rings and MSIX for bnxt_re RDMA driver.")
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
While a VF is configured with a bigger mtu (> 1500), any packets that
are punted to the VF-rep (slow-path) get dropped by OVS kernel-datapath
with the following message: "dropped over-mtu packet". Fix this by
returning the max-mtu value for a VF-rep derived from its corresponding VF.
VF-rep's mtu can be changed using 'ip' command as shown in this example:
$ ip link set bnxt0_pf0vf0 mtu 9000
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver currently uses src port field (along with other fields) in the
decap tunnel key, while looking up and adding tunnel nodes. This leads to
redundant cfa_decap_filter_alloc() requests to the FW and flow-miss in the
flow engine. Fix this by ignoring the src port field in decap tunnel nodes.
Fixes: f484f6782e ("bnxt_en: add hwrm FW cmds for cfa_encap_record and decap_filter")
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Before this patch the following commands would succeed as far as the
user was concerned:
$ tc qdisc add dev p1p1 ingress
$ tc filter add dev p1p1 parent ffff: protocol all \
flower skip_sw action drop
$ tc filter add dev p1p1 parent ffff: protocol ipv4 \
flower skip_sw src_mac 00:02:00:00:00:01/44 action drop
The flow offload infrastructure currently used does not support wildcard
matching for Ethernet headers, so do not allow the second or third
command to succeed. If a user wants to drop traffic on that interface,
the protocol and MAC addresses need to be specified explicitly:
$ tc qdisc add dev p1p1 ingress
$ tc filter add dev p1p1 parent ffff: protocol arp \
flower skip_sw action drop
$ tc filter add dev p1p1 parent ffff: protocol ipv4 \
flower skip_sw action drop
...
$ tc filter add dev p1p1 parent ffff: protocol ipv4 \
flower skip_sw src_mac 00:02:00:00:00:01 action drop
$ tc filter add dev p1p1 parent ffff: protocol ipv4 \
flower skip_sw src_mac 00:02:00:00:00:02 action drop
...
There are also checks for VLAN parameters in this patch as other callers
may wildcard those parameters even if tc does not. Using different
flow infrastructure could allow this to work in the future for L2 flows,
but for now it does not.
Fixes: 2ae7408fed ("bnxt_en: bnxt: add TC flower filter offload support")
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix ethtool .get_rxfh() crash by checking for valid indirection table
address before copying the data.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge more updates from Andrew Morton:
- almost all of the rest of MM
- kasan updates
- lots of procfs work
- misc things
- lib/ updates
- checkpatch
- rapidio
- ipc/shm updates
- the start of willy's XArray conversion
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (140 commits)
page cache: use xa_lock
xarray: add the xa_lock to the radix_tree_root
fscache: use appropriate radix tree accessors
export __set_page_dirty
unicore32: turn flush_dcache_mmap_lock into a no-op
arm64: turn flush_dcache_mmap_lock into a no-op
mac80211_hwsim: use DEFINE_IDA
radix tree: use GFP_ZONEMASK bits of gfp_t for flags
linux/const.h: refactor _BITUL and _BITULL a bit
linux/const.h: move UL() macro to include/linux/const.h
linux/const.h: prefix include guard of uapi/linux/const.h with _UAPI
xen, mm: allow deferred page initialization for xen pv domains
elf: enforce MAP_FIXED on overlaying elf segments
fs, elf: drop MAP_FIXED usage from elf_map
mm: introduce MAP_FIXED_NOREPLACE
MAINTAINERS: update bouncing aacraid@adaptec.com addresses
fs/dcache.c: add cond_resched() in shrink_dentry_list()
include/linux/kfifo.h: fix comment
ipc/shm.c: shm_split(): remove unneeded test for NULL shm_file_data.vm_ops
kernel/sysctl.c: add kdoc comments to do_proc_do{u}intvec_minmax_conv_param
...
Remove the address_space ->tree_lock and use the xa_lock newly added to
the radix_tree_root. Rename the address_space ->page_tree to ->i_pages,
since we don't really care that it's a tree.
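Most call sites convert mechanically; a representative before/after
(illustrative, not a specific upstream hunk):

  /* before */
  spin_lock_irq(&mapping->tree_lock);
  radix_tree_delete(&mapping->page_tree, page->index);
  spin_unlock_irq(&mapping->tree_lock);

  /* after: same tree, renamed field, and the xa_lock embedded in it */
  xa_lock_irq(&mapping->i_pages);
  radix_tree_delete(&mapping->i_pages, page->index);
  xa_unlock_irq(&mapping->i_pages);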
[willy@infradead.org: fix nds32, fs/dax.c]
Link: http://lkml.kernel.org/r/20180406145415.GB20605@bombadil.infradead.org
Link: http://lkml.kernel.org/r/20180313132639.17387-9-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Jeff Layton <jlayton@redhat.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This results in no change in structure size on 64-bit machines as it
fits in the padding between the gfp_t and the void *. 32-bit machines
will grow the structure from 8 to 12 bytes. Almost all radix trees are
protected with (at least) a spinlock, so as they are converted from
radix trees to xarrays, the data structures will shrink again.
Initialising the spinlock requires a name for the benefit of lockdep, so
RADIX_TREE_INIT() now needs to know the name of the radix tree it's
initialising, and so do IDR_INIT() and IDA_INIT().
Also add the xa_lock() and xa_unlock() family of wrappers to make it
easier to use the lock. If we could rely on -fplan9-extensions in the
compiler, we could avoid all of this syntactic sugar, but that wasn't
added until gcc 4.6.
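The wrappers are thin aliases over the embedded spinlock, roughly as
follows (a sketch of the idea rather than the exact header text):

  #define xa_lock(xa)             spin_lock(&(xa)->xa_lock)
  #define xa_unlock(xa)           spin_unlock(&(xa)->xa_lock)
  #define xa_lock_irq(xa)         spin_lock_irq(&(xa)->xa_lock)
  #define xa_unlock_irq(xa)       spin_unlock_irq(&(xa)->xa_lock)
  #define xa_lock_irqsave(xa, flags) \
                                  spin_lock_irqsave(&(xa)->xa_lock, flags)
  #define xa_unlock_irqrestore(xa, flags) \
                                  spin_unlock_irqrestore(&(xa)->xa_lock, flags)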
Link: http://lkml.kernel.org/r/20180313132639.17387-8-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Don't open-code accesses to data structure internals.
Link: http://lkml.kernel.org/r/20180313132639.17387-7-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
XFS currently contains a copy-and-paste of __set_page_dirty(). Export
it from buffer.c instead.
Link: http://lkml.kernel.org/r/20180313132639.17387-6-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Unicore doesn't walk the VMA tree in its flush_dcache_page()
implementation, so has no need to take the tree_lock.
Link: http://lkml.kernel.org/r/20180313132639.17387-5-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ARM64 doesn't walk the VMA tree in its flush_dcache_page()
implementation, so has no need to take the tree_lock.
Link: http://lkml.kernel.org/r/20180313132639.17387-4-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is preferred to opencoding an IDA_INIT.
Link: http://lkml.kernel.org/r/20180313132639.17387-2-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "XArray", v9. (First part thereof).
This patchset is, I believe, appropriate for merging for 4.17. It
contains the XArray implementation, to eventually replace the radix
tree, and converts the page cache to use it.
This conversion keeps the radix tree and XArray data structures in sync
at all times. That allows us to convert the page cache one function at
a time and should allow for easier bisection. Other than renaming some
elements of the structures, the data structures are fundamentally
unchanged; a radix tree walk and an XArray walk will touch the same
number of cachelines. I have changes planned to the XArray data
structure, but those will happen in future patches.
Improvements the XArray has over the radix tree (a short usage sketch
follows the lists below):
- The radix tree provides operations like other trees do; 'insert' and
'delete'. But what most users really want is an automatically
resizing array, and so it makes more sense to give users an API that
is like an array -- 'load' and 'store'. We still have an 'insert'
operation for users that really want that semantic.
- The XArray considers locking as part of its API. This simplifies a
lot of users who formerly had to manage their own locking just for
the radix tree. It also improves code generation as we can now tell
RCU that we're holding a lock and it doesn't need to generate as much
fencing code. The other advantage is that tree nodes can be moved
(not yet implemented).
- GFP flags are now parameters to calls which may need to allocate
memory. The radix tree forced users to decide what the allocation
flags would be at creation time. It's much clearer to specify them at
allocation time.
- Memory is not preloaded; we don't tie up dozens of pages on the off
chance that the slab allocator fails. Instead, we drop the lock,
allocate a new node and retry the operation. We have to convert all
the radix tree, IDA and IDR preload users before we can realise this
benefit, but I have not yet found a user which cannot be converted.
- The XArray provides a cmpxchg operation. The radix tree forces users
to roll their own (and at least four have).
- Iterators take a 'max' parameter. That simplifies many users and will
reduce the amount of iteration done.
- Iteration can proceed backwards. We only have one user for this, but
since it's called as part of the pagefault readahead algorithm, that
seemed worth mentioning.
- RCU-protected pointers are not exposed as part of the API. There are
some fun bugs where the page cache forgets to use rcu_dereference()
in the current codebase.
- Value entries gain an extra bit compared to radix tree exceptional
entries. That gives us the extra bit we need to put huge page swap
entries in the page cache.
- Some iterators now take a 'filter' argument instead of having
separate iterators for tagged/untagged iterations.
The page cache is improved by this:
- Shorter, easier to read code
- More efficient iterations
- Reduction in size of struct address_space
- Fewer walks from the top of the data structure; the XArray API
encourages staying at the leaf node and conducting operations there.
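As a minimal illustration of the 'load'/'store' style API mentioned in
the lists above (assuming the xa_init()/xa_store()/xa_load()/xa_erase()
entry points from the series; exact signatures may differ in detail):

  static struct page *xarray_usage_sketch(struct page *my_page)
  {
          struct xarray pages;
          struct page *p;

          xa_init(&pages);

          /* GFP flags are passed at the call that may allocate, not at
           * creation time. */
          xa_store(&pages, 42, my_page, GFP_KERNEL);

          /* Plain load; the XArray takes care of its own locking. */
          p = xa_load(&pages, 42);

          xa_erase(&pages, 42);
          return p;
  }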
This patch (of 8):
None of these bits may be used for slab allocations, so we can use them
as radix tree flags as long as we mask them off before passing them to
the slab allocator. Move the IDR flag from the high bits to the
GFP_ZONEMASK bits.
Link: http://lkml.kernel.org/r/20180313132639.17387-3-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Jeff Layton <jlayton@kernel.org>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minor cleanups made possible by _UL() and _ULL().
Link: http://lkml.kernel.org/r/1519301715-31798-5-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ARM, ARM64 and UniCore32 duplicate the definition of UL():
#define UL(x) _AC(x, UL)
This is not actually arch-specific, so it will be useful to move it to a
common header. Currently, we only have the uapi variant for
linux/const.h, so I am creating include/linux/const.h.
I also added _UL(), _ULL() and ULL() because _AC() is mostly used in
the form either _AC(..., UL) or _AC(..., ULL). I expect they will be
replaced in follow-up cleanups. The underscore-prefixed ones should
be used for exported headers.
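A sketch of the resulting definitions described above (the placement
and exact formatting are illustrative):

  /* include/uapi/linux/const.h (exported, underscore-prefixed) */
  #define _UL(x)          (_AC(x, UL))
  #define _ULL(x)         (_AC(x, ULL))

  /* include/linux/const.h (kernel-internal convenience wrappers) */
  #define UL(x)           (_UL(x))
  #define ULL(x)          (_ULL(x))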
Link: http://lkml.kernel.org/r/1519301715-31798-4-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "linux/const.h: cleanups of macros such as UL(), _BITUL(),
BIT() etc", v3.
ARM, ARM64, UniCore32 define UL() as a shorthand of _AC(..., UL). More
architectures may introduce it in the future.
UL() is arch-agnostic, and useful. So let's move it to
include/linux/const.h.
Currently, <asm/memory.h> must be included to use UL(). It pulls in more
bloat just for defining some bit macros.
I posted V2 one year ago.
The previous posts are:
https://patchwork.kernel.org/patch/9498273/
https://patchwork.kernel.org/patch/9498275/
https://patchwork.kernel.org/patch/9498269/
https://patchwork.kernel.org/patch/9498271/
At that time, what blocked this series was a comment from
David Howells:
You need to be very careful doing this. Some userspace stuff
depends on the guard macro names on the kernel header files.
(https://patchwork.kernel.org/patch/9498275/)
Looking at the code closer, I noticed this is not a problem.
See the following line.
https://github.com/torvalds/linux/blob/v4.16-rc2/scripts/headers_install.sh#L40
scripts/headers_install.sh rips off _UAPI prefix from guard macro names.
I ran "make headers_install" and confirmed the result is what I expect.
So, we can prefix the include guard of include/uapi/linux/const.h,
and add a new include/linux/const.h.
This patch (of 4):
I am going to add include/linux/const.h for the kernel space.
Add _UAPI to the include guard of include/uapi/linux/const.h to
prepare for that.
Please notice the guard name of the exported one will be kept as-is.
So, this commit has no impact to the userspace even if some userspace
stuff depends on the guard macro names.
scripts/headers_install.sh processes exported headers by SED, and
rips off "_UAPI" from guard macro names.
#ifndef _UAPI_LINUX_CONST_H
#define _UAPI_LINUX_CONST_H
will be turned into
#ifndef _LINUX_CONST_H
#define _LINUX_CONST_H
Link: http://lkml.kernel.org/r/1519301715-31798-2-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Juergen Gross noticed that commit f7f99100d8 ("mm: stop zeroing memory
during allocation in vmemmap") broke XEN PV domains when deferred struct
page initialization is enabled.
This is because the xen's PagePinned() flag is getting erased from
struct pages when they are initialized later in boot.
Juergen fixed this problem by disabling deferred pages on xen pv
domains. It is desirable, however, to have this feature available as it
reduces boot time. This fix re-enables the feature for PV domains, and
fixes the problem the following way:
The fix is to delay setting PagePinned flag until struct pages for all
allocated memory are initialized, i.e. until after free_all_bootmem().
A new x86_init.hyper op init_after_bootmem() is called to let xen know
that boot allocator is done, and hence struct pages for all the
allocated memory are now initialized. If deferred page initialization
is enabled, the rest of struct pages are going to be initialized later
in boot once page_alloc_init_late() is called.
xen_after_bootmem() walks page table's pages and marks them pinned.
Link: http://lkml.kernel.org/r/20180226160112.24724-2-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
Tested-by: Juergen Gross <jgross@suse.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Jinbum Park <jinb.park7@gmail.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Jia Zhang <zhang.jia@linux.alibaba.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Anshuman has reported that with "fs, elf: drop MAP_FIXED usage from
elf_map" applied, some ELF binaries in his environment fail to start
with
[ 23.423642] 9148 (sed): Uhuuh, elf segment at 0000000010030000 requested but the memory is mapped already
[ 23.423706] requested [10030000, 10040000] mapped [10030000, 10040000] 100073 anon
The reason is that the above binary has overlapping elf segments:
LOAD 0x0000000000000000 0x0000000010000000 0x0000000010000000
0x0000000000013a8c 0x0000000000013a8c R E 10000
LOAD 0x000000000001fd40 0x000000001002fd40 0x000000001002fd40
0x00000000000002c0 0x00000000000005e8 RW 10000
LOAD 0x0000000000020328 0x0000000010030328 0x0000000010030328
0x0000000000000384 0x00000000000094a0 RW 10000
That binary has two RW LOAD segments, the first crosses a page border
into the second
0x1002fd40 (LOAD2-vaddr) + 0x5e8 (LOAD2-memlen) == 0x10030328 (LOAD3-vaddr)
Handle this situation by enforcing MAP_FIXED when we establish a
temporary brk VMA to handle overlapping segments. All other mappings
will still use MAP_FIXED_NOREPLACE.
Link: http://lkml.kernel.org/r/20180213100440.GM3443@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Andrei Vagin <avagin@openvz.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Kees Cook <keescook@chromium.org>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Both load_elf_interp and load_elf_binary rely on elf_map to map segments
at a controlled address and they use MAP_FIXED to enforce that. This is,
however, a dangerous thing prone to silent data corruption, which can
even be exploitable.
Let's take CVE-2017-1000253 as an example. At the time (before commit
eab09532d400: "binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
ELF_ET_DYN_BASE was at TASK_SIZE / 3 * 2 which is not that far away from
the stack top on 32b (legacy) memory layout (only 1GB away). Therefore
we could end up mapping over the existing stack with some luck.
The issue has been fixed since then (a87938b2e246: "fs/binfmt_elf.c: fix
bug in loading of PIE binaries"), ELF_ET_DYN_BASE moved much
further from the stack (eab09532d4 and later by c715b72c1ba4: "mm:
revert x86_64 and arm64 ELF_ET_DYN_BASE base changes") and excessive
stack consumption early during execve fully stopped by da029c11e6
("exec: Limit arg stack to at most 75% of _STK_LIM"). So we should be
safe and any attack should be impractical. On the other hand this is
just too subtle assumption so it can break quite easily and hard to
spot.
I believe that the MAP_FIXED usage in load_elf_binary (et al.) is still
fundamentally dangerous. Moreover, it shouldn't even be needed. We are
at an early stage of the process, so there shouldn't be any unrelated
mappings (except for the stack and loader), and mmap for a given address
should succeed even without MAP_FIXED. Something is terribly wrong if this is
not the case and we should rather fail than silently corrupt the
underlying mapping.
Address this issue by changing MAP_FIXED to the newly added
MAP_FIXED_NOREPLACE. This will mean that mmap will fail if there is an
existing mapping clashing with the requested one without clobbering it.
[mhocko@suse.com: fix build]
[akpm@linux-foundation.org: coding-style fixes]
[avagin@openvz.org: don't use the same value for MAP_FIXED_NOREPLACE and MAP_SYNC]
Link: http://lkml.kernel.org/r/20171218184916.24445-1-avagin@openvz.org
Link: http://lkml.kernel.org/r/20171213092550.2774-3-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "mm: introduce MAP_FIXED_NOREPLACE", v2.
This started as a follow-up discussion [3][4] about the runtime failure
caused by the hardening patch [5], which removes MAP_FIXED from the ELF
loader because MAP_FIXED is inherently dangerous: it might silently
clobber an existing underlying mapping (e.g. the stack).
The reason for the failure is that some architectures enforce an
alignment for the given address hint without MAP_FIXED used (e.g. for
shared or file backed mappings).
One way around this would be excluding those archs which do alignment
tricks from the hardening [6]. The patch is really trivial but it has
been objected, rightfully so, that this screams for a more generic
solution. We basically want a non-destructive MAP_FIXED.
The first patch introduced MAP_FIXED_NOREPLACE which enforces the given
address but unlike MAP_FIXED it fails with EEXIST if the given range
conflicts with an existing one. The flag is introduced as a completely
new one rather than a MAP_FIXED extension because of the backward
compatibility. We really want a never-clobber semantic even on older
kernels which do not recognize the flag. Unfortunately mmap sucks
wrt flags evaluation because we do not EINVAL on unknown flags. On
those kernels we would simply use the traditional hint based semantic so
the caller can still get a different address (which sucks) but at least
not silently corrupt an existing mapping. I do not see a good way
around that. Except we won't expose the new semantic to the
userspace at all.
It seems there are users who would like to have something like that.
Jemalloc has been mentioned by Michael Ellerman [7]
Florian Weimer has mentioned the following:
: glibc ld.so currently maps DSOs without hints. This means that the kernel
: will map them right next to each other, and the offsets between them are completely
: predictable. We would like to change that and supply a random address in a
: window of the address space. If there is a conflict, we do not want the
: kernel to pick a non-random address. Instead, we would try again with a
: random address.
John Hubbard has mentioned CUDA example
: a) Searches /proc/<pid>/maps for a "suitable" region of available
: VA space. "Suitable" generally means it has to have a base address
: within a certain limited range (a particular device model might
: have odd limitations, for example), it has to be large enough, and
: alignment has to be large enough (again, various devices may have
: constraints that lead us to do this).
:
: This is of course subject to races with other threads in the process.
:
: Let's say it finds a region starting at va.
:
: b) Next it does:
: p = mmap(va, ...)
:
: *without* setting MAP_FIXED, of course (so va is just a hint), to
: attempt to safely reserve that region. If p != va, then in most cases,
: this is a failure (almost certainly due to another thread getting a
: mapping from that region before we did), and so this layer now has to
: call munmap(), before returning a "failure: retry" to upper layers.
:
: IMPROVEMENT: --> if instead, we could call this:
:
: p = mmap(va, ... MAP_FIXED_NOREPLACE ...)
:
: , then we could skip the munmap() call upon failure. This
: is a small thing, but it is useful here. (Thanks to Piotr
: Jaroszynski and Mark Hairgrove for helping me get that detail
: exactly right, btw.)
:
: c) After that, CUDA suballocates from p, via:
:
: q = mmap(sub_region_start, ... MAP_FIXED ...)
:
: Interestingly enough, "freeing" is also done via MAP_FIXED, and
: setting PROT_NONE to the subregion. Anyway, I just included (c) for
: general interest.
Atomic address range probing in multithreaded programs in general
sounds like an interesting thing to me.
The second patch simply replaces the MAP_FIXED use in the ELF loader
with MAP_FIXED_NOREPLACE. I believe other places which rely on
MAP_FIXED should follow. Actually, real MAP_FIXED usages should be
documented properly and they should be more of an exception.
[1] http://lkml.kernel.org/r/20171116101900.13621-1-mhocko@kernel.org
[2] http://lkml.kernel.org/r/20171129144219.22867-1-mhocko@kernel.org
[3] http://lkml.kernel.org/r/20171107162217.382cd754@canb.auug.org.au
[4] http://lkml.kernel.org/r/1510048229.12079.7.camel@abdul.in.ibm.com
[5] http://lkml.kernel.org/r/20171023082608.6167-1-mhocko@kernel.org
[6] http://lkml.kernel.org/r/20171113094203.aofz2e7kueitk55y@dhcp22.suse.cz
[7] http://lkml.kernel.org/r/87efp1w7vy.fsf@concordia.ellerman.id.au
This patch (of 2):
MAP_FIXED is used quite often to enforce mapping at a particular range.
The main problem of this flag is, however, that it is inherently dangerous
because it unmaps existing mappings covered by the requested range. This
can cause silent memory corruptions, some of them even with serious
security implications. While the current semantic might be really
desirable in many cases, there are others which would want to enforce the
given range but rather see a failure than a silent memory corruption on a
clashing range. Please note that there is no guarantee that a given range
is obeyed by the mmap even when it is free - e.g. arch specific code is
allowed to apply an alignment.
Introduce a new MAP_FIXED_NOREPLACE flag for mmap to achieve this
behavior. It has the same semantic as MAP_FIXED wrt. the given address
request with a single exception that it fails with EEXIST if the requested
address is already covered by an existing mapping. We still do rely on
get_unmapped_area to handle all the arch specific MAP_FIXED treatment and
check for a conflicting vma after it returns.
The flag is introduced as a completely new one rather than a MAP_FIXED
extension because of the backward compatibility. We really want a
never-clobber semantic even on older kernels which do not recognize the
flag. Unfortunately mmap sucks wrt. flags evaluation because we do not
EINVAL on unknown flags. On those kernels we would simply use the
traditional hint based semantic so the caller can still get a different
address (which sucks) but at least not silently corrupt an existing
mapping. I do not see a good way around that.
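A minimal userspace sketch of the intended usage (assuming a kernel and
libc that expose MAP_FIXED_NOREPLACE; on older kernels the unknown flag
is silently ignored, which the address check below catches):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <stdio.h>
  #include <sys/mman.h>

  #ifndef MAP_FIXED_NOREPLACE
  #define MAP_FIXED_NOREPLACE 0x100000    /* assumed value, for older headers */
  #endif

  int main(void)
  {
          void *hint = (void *)0x700000000000UL;
          void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                         -1, 0);

          if (p == MAP_FAILED) {
                  if (errno == EEXIST)
                          printf("range already mapped, nothing clobbered\n");
                  else
                          perror("mmap");
                  return 1;
          }

          /* An older kernel ignores the unknown flag and may fall back to
           * hint semantics, returning a different address. */
          if (p != hint) {
                  printf("fell back to hint semantics: got %p\n", p);
                  munmap(p, 4096);
                  return 1;
          }

          printf("mapped exactly at %p\n", p);
          munmap(p, 4096);
          return 0;
  }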
[mpe@ellerman.id.au: fix whitespace]
[fail on clashing range with EEXIST as per Florian Weimer]
[set MAP_FIXED before round_hint_to_min as per Khalid Aziz]
Link: http://lkml.kernel.org/r/20171213092550.2774-2-mhocko@kernel.org
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Kees Cook <keescook@chromium.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Jason Evans <jasone@google.com>
Cc: David Goldblatt <davidtgoldblatt@gmail.com>
Cc: Edward Tomasz Napierała <trasz@FreeBSD.org>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Adaptec is now part of Microsemi.
Commit 2a81ffdd9d ("MAINTAINERS: Update email address for aacraid")
updated only one of the driver maintainer addresses.
Update the other two sections as the aacraid@adaptec.com address
bounces.
Link: http://lkml.kernel.org/r/1522103936.12357.27.camel@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Dave Carroll <david.carroll@microsemi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>