Commit Graph

40955 Commits

Author SHA1 Message Date
Linus Torvalds f648372dfe Thermal control updates for 5.18-rc1
- Add a new thermal driver for the Intel Hardware Feedback Interface
    (HFI) including the HFI initialization, HFI notification interrupt
    handling and sending CPU capabilities change messages to user
    space via the thermal netlink interface (Ricardo Neri, Srinivas
    Pandruvada, Nathan Chancellor, Randy Dunlap).
 
  - Extend the intel-speed-select utility to handle out-of-band CPU
    configuration changes and add support to it for the CPU capabilities
    change messages sent over the thermal netlink interface by the new
    HFI thermal driver (Srinivas Pandruvada).
 
  - Convert the DT bindings to yaml format for the Exynos platform
    and fix and update the MAINTAINERS file for this driver (Krzysztof
    Kozlowski).
 
  - Register the thermal zones as HWmon sensors for the QCom
    Tsens driver and TI thermal platforms (Dmitry Baryshkov, Romain
    Naour).
 
  - Add the msm8953 compatible documentation in the bindings (Luca
    Weiss).
 
  - Add the sm8150 platform support to the QCom LMh driver's DT
    binding (Thara Gopinath).
 
  - Check the command result from the IPC command to the BPMP in the
    Tegra driver (Mikko Perttunen).
 
  - Silence the error for the normal configuration where the interrupt
    is optional in the Broadcom thermal driver (Florian Fainelli).
 
  - Remove remaining dead code from the TI thermal driver (Yue
    Haibing).
 
  - Don't use bitmap_weight() in end_power_clamp() in the powerclamp
    driver (Yury Norov).
 
  - Update the OS policy capabilities handshake in the int340x thermal
    driver (Srinivas Pandruvada).
 
  - Increase the policies bitmap size in int340x (Srinivas Pandruvada).
 
  - Replace acpi_bus_get_device() with acpi_fetch_acpi_dev() in the
    int340x thermal driver (Rafael Wysocki).
 
  - Check for NULL after calling kmemdup() in int340x (Jiasheng Jiang).
 
  - Add Intel Dynamic Power and Thermal Framework (DPTF) kernel interface
    documentation (Srinivas Pandruvada).
 
  - Fix bullet list warning in the thermal documentation (Randy Dunlap).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAmI4pU8SHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRx47oP+gJMvi3IT/yaN4wyxoT6OeM8A8qPNQIw
 A6olZeL5/t1tp3jPU5qJ498q9W6vokovdqklAya4eqChmPboVk11A3TJ+dhflIRU
 NxaXIKTueNh5AwD08O9jhJJCJEejsb2i7lzWkJKMM/S3eZCciZU9ac4C5WVi3DqM
 F7WL62vhzsknsuTtCw9KLufmI3+NUFW98nS/B2EmesZs1WLfEnrEajYTvzgJRXQH
 qiO6x6fK4HJWP8D7XYxNwGpRObfRFOIkZYt40iXsV8s1fsdcEcKUnXpCviOg3tQ8
 mLE+xqnpAKxaGmrI8QZr6863/NyG5dN8A3hk6ZbTN7vWnyVLmRIzs8XZ8hoPycmH
 LeEGn/LV1td1qrJykRemCYzJCfmXF2k0b6MjJGxgUQ7UItlBXr2pVRWXCFlY+Ekh
 9ahZ7/2BSwdaW5DHbseZIIvF/rsCq0/i4+xV2JizM7ufnlFRx+6jP68KLDQxjwgp
 ZparKMYI/8zEgMq3x3tlvh5JsK4M0kA95NC+bsov4gNh0jbrm+CL92g5PuDLXAby
 RlW8Fmvx1px1n6IEoeLAtbTdQVJwqyNWUyVIhrXkJVGBkCcupCAfuMY9s6woKemf
 IXr1n/tjKG3hxuh/NTgAKYvIKaWSNF1ZIdNGbvgpzEGL+p26y96qhJYFlNBthXy9
 v/4V8qFn0w6R
 =6PYL
 -----END PGP SIGNATURE-----

Merge tag 'thermal-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull thermal control updates from Rafael Wysocki:
 "As far as new functionality is concerned, there is a new thermal
  driver for the Intel Hardware Feedback Interface (HFI) along with some
  intel-speed-select utility changes to support it. There are also new
  DT compatible strings for a couple of platforms, and thermal zones on
  some platforms will be registered as HWmon sensors now.

  Apart from the above, some drivers are updated (fixes mostly) and
  there is a new piece of documentation for the Intel DPTF (Dynamic
  Power and Thermal Framework) sysfs interface.

  Specifics:

   - Add a new thermal driver for the Intel Hardware Feedback Interface
     (HFI) including the HFI initialization, HFI notification interrupt
     handling and sending CPU capabilities change messages to user space
     via the thermal netlink interface (Ricardo Neri, Srinivas
     Pandruvada, Nathan Chancellor, Randy Dunlap).

   - Extend the intel-speed-select utility to handle out-of-band CPU
     configuration changes and add support to it for the CPU capabilities
     change messages sent over the thermal netlink interface by the new
     HFI thermal driver (Srinivas Pandruvada).

   - Convert the DT bindings to yaml format for the Exynos platform and
     fix and update the MAINTAINERS file for this driver (Krzysztof
     Kozlowski).

   - Register the thermal zones as HWmon sensors for the QCom Tsens
     driver and TI thermal platforms (Dmitry Baryshkov, Romain Naour).

   - Add the msm8953 compatible documentation in the bindings (Luca
     Weiss).

   - Add the sm8150 platform support to the QCom LMh driver's DT binding
     (Thara Gopinath).

   - Check the command result from the IPC command to the BPMP in the
     Tegra driver (Mikko Perttunen).

   - Silence the error for the normal configuration where the interrupt is
     optional in the Broadcom thermal driver (Florian Fainelli).

   - Remove remaining dead code from the TI thermal driver (Yue
     Haibing).

   - Don't use bitmap_weight() in end_power_clamp() in the powerclamp
     driver (Yury Norov).

   - Update the OS policy capabilities handshake in the int340x thermal
     driver (Srinivas Pandruvada).

   - Increase the policies bitmap size in int340x (Srinivas Pandruvada).

   - Replace acpi_bus_get_device() with acpi_fetch_acpi_dev() in the
     int340x thermal driver (Rafael Wysocki).

   - Check for NULL after calling kmemdup() in int340x (Jiasheng Jiang).

   - Add Intel Dynamic Power and Thermal Framework (DPTF) kernel
     interface documentation (Srinivas Pandruvada).

   - Fix bullet list warning in the thermal documentation (Randy
     Dunlap)"

* tag 'thermal-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (30 commits)
  thermal: int340x: Update OS policy capability handshake
  thermal: int340x: Increase bitmap size
  Documentation: thermal: DPTF Documentation
  MAINTAINERS: thermal: samsung: update Krzysztof Kozlowski's email
  thermal/drivers/ti-soc-thermal: Remove unused function ti_thermal_get_temp()
  thermal/drivers/brcmstb_thermal: Interrupt is optional
  thermal: tegra-bpmp: Handle errors in BPMP response
  drivers/thermal/ti-soc-thermal: Add hwmon support
  dt-bindings: thermal: tsens: Add msm8953 compatible
  dt-bindings: thermal: Add sm8150 compatible string for LMh
  thermal/drivers/qcom/lmh: Add support for sm8150
  thermal/drivers/tsens: register thermal zones as hwmon sensors
  MAINTAINERS: thermal: samsung: Drop obsolete properties
  dt-bindings: thermal: samsung: Convert to dtschema
  tools/power/x86/intel-speed-select: v1.12 release
  tools/power/x86/intel-speed-select: HFI support
  tools/power/x86/intel-speed-select: OOB daemon mode
  thermal: intel: hfi: INTEL_HFI_THERMAL depends on NET
  thermal: netlink: Fix parameter type of thermal_genl_cpu_capability_event() stub
  thermal: Replace acpi_bus_get_device()
  ...
2022-03-21 14:35:11 -07:00
Linus Torvalds 02b82b02c3 Power management updates for 5.18-rc1
- Allow device_pm_check_callbacks() to be called from interrupt
    context without issues (Dmitry Baryshkov).
 
  - Modify devm_pm_runtime_enable() to automatically handle
    pm_runtime_dont_use_autosuspend() at driver exit time (Douglas
    Anderson).
 
  - Make the schedutil cpufreq governor use to_gov_attr_set() instead
    of open coding it (Kevin Hao).
 
  - Replace acpi_bus_get_device() with acpi_fetch_acpi_dev() in the
    cpufreq longhaul driver (Rafael Wysocki).
 
  - Unify show() and store() naming in cpufreq and make it use
    __ATTR_XX (Lianjie Zhang).
 
  - Make the intel_pstate driver use the EPP value set by the firmware
    by default (Srinivas Pandruvada).
 
  - Re-order the init checks in the powernow-k8 cpufreq driver (Mario
    Limonciello).
 
  - Make the ACPI processor idle driver check for architectural
    support for LPI to avoid using it on x86 by mistake (Mario
    Limonciello).
 
  - Add Sapphire Rapids Xeon support to the intel_idle driver (Artem
    Bityutskiy).
 
  - Add 'preferred_cstates' module argument to the intel_idle driver
    to work around C1 and C1E handling issue on Sapphire Rapids (Artem
    Bityutskiy).
 
  - Add core C6 optimization on Sapphire Rapids to the intel_idle
    driver (Artem Bityutskiy).
 
  - Optimize the haltpoll cpuidle driver a bit (Li RongQing).
 
  - Remove leftover text from intel_idle() kerneldoc comment and fix
    up white space in intel_idle (Rafael Wysocki).
 
  - Fix load_image_and_restore() error path (Ye Bin).
 
  - Fix typos in comments in the system wakeup handling code (Tom Rix).
 
  - Clean up non-kernel-doc comments in hibernation code (Jiapeng
    Chong).
 
  - Fix __setup handler error handling in system-wide suspend and
    hibernation core code (Randy Dunlap).
 
  - Add device name to suspend_report_result() (Youngjin Jang).
 
  - Make virtual guests honour ACPI S4 hardware signature by
    default (David Woodhouse).
 
  - Block power off of a parent PM domain unless child is in deepest
    state (Ulf Hansson).
 
  - Use dev_err_probe() to simplify error handling for generic PM
    domains (Ahmad Fatoum).
 
  - Fix sleep-in-atomic bug caused by genpd_debug_remove() (Shawn Guo).
 
  - Document Intel uncore frequency scaling (Srinivas Pandruvada).
 
  - Add DTPM hierarchy description (Daniel Lezcano).
 
  - Change the locking scheme in DTPM (Daniel Lezcano).
 
  - Fix dtpm_cpu cleanup at exit time and missing virtual DTPM pointer
    release (Daniel Lezcano).
 
  - Make dtpm_node_callback[] static (kernel test robot).
 
  - Fix spelling mistake "initialze" -> "initialize" in
    dtpm_create_hierarchy() (Colin Ian King).
 
  - Add tracer tool for the amd-pstate driver (Jinzhou Su).
 
  - Fix PC6 displaying in turbostat on some systems (Artem Bityutskiy).
 
  - Add AMD P-State support to the cpupower utility (Huang Rui).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAmI4pM4SHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxh5wQAJEz3u55wIHzeov30obtXaD3SxxnvRzR
 p96gRcmNoR2so/Q9D+h+JHZKQkVklbnbqExMXQn1qarceAUN7KPjVMRvagjZsC/f
 J3LtQmx96yqGTCzOTu5n+Ol2ojKLMCMo++no/2873BYhd60TV6oQxRzkNiZx215n
 tT6MKY5ZMX448VKWAWh9vt5rdvbBj9z6cfvpchK/3bziE21lfLz/1iXeFnwqjPGU
 XuA7NYbVAHOfsdHZk19+4qAgm8EYkmjd4/J8HDlb7XouyLuUGy8KJZYhSrJKiQ1C
 f9f2Zw0925/YpBmFXOwxuYWP9KjFKlq7Cdr3SSgVGDOvgyRtpeV4fU8Y6WPFCtEV
 fQdKr9/4KQP6hwUpxJZucSf49wcnyh7hFDMxrwVVcL96yXZef1OqG3ITihJY/n4J
 +wDnpR2VqBeiG5NyECjk3mPROZGFfUlHRsqMd3JOswMpGF5phpEI9nNFcayB262S
 Rkgcb3MacFVsuo/ZBdzCUTZ6ECvjxZn4FGZPxumkp65SJO18gOPbqs8qfGCZ3Tgb
 GDy0CWEOv/KuGnks1CkBGok2Z4q8s2GcZmaOp9BiPjxKJD71i4uPtiGA/5Ahb6cm
 Cu0G7Ub/t2Vc93E7mnTE4hh2IuiAN73yB5teM4YNllHw6f+aqVGlvJktIMpShajo
 eEBNFlkwljyz
 =WlR9
 -----END PGP SIGNATURE-----

Merge tag 'pm-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These are mostly fixes and cleanups all over the code and a new piece
  of documentation for Intel uncore frequency scaling.

  Functionality-wise, the intel_idle driver will support Sapphire Rapids
  Xeons natively now (with some extra facilities for controlling
  C-states more precisely on those systems), virtual guests will take
  the ACPI S4 hardware signature into account by default, the
   intel_pstate driver will take the default EPP value from the firmware,
   the cpupower utility will support the AMD P-state driver added in the
  previous cycle, and there is a new tracer utility for that driver.

  Specifics:

   - Allow device_pm_check_callbacks() to be called from interrupt
     context without issues (Dmitry Baryshkov).

   - Modify devm_pm_runtime_enable() to automatically handle
     pm_runtime_dont_use_autosuspend() at driver exit time (Douglas
     Anderson).

   - Make the schedutil cpufreq governor use to_gov_attr_set() instead
     of open coding it (Kevin Hao).

   - Replace acpi_bus_get_device() with acpi_fetch_acpi_dev() in the
     cpufreq longhaul driver (Rafael Wysocki).

   - Unify show() and store() naming in cpufreq and make it use
     __ATTR_XX (Lianjie Zhang).

   - Make the intel_pstate driver use the EPP value set by the firmware
     by default (Srinivas Pandruvada).

   - Re-order the init checks in the powernow-k8 cpufreq driver (Mario
     Limonciello).

   - Make the ACPI processor idle driver check for architectural support
     for LPI to avoid using it on x86 by mistake (Mario Limonciello).

   - Add Sapphire Rapids Xeon support to the intel_idle driver (Artem
     Bityutskiy).

   - Add 'preferred_cstates' module argument to the intel_idle driver to
     work around C1 and C1E handling issue on Sapphire Rapids (Artem
     Bityutskiy).

   - Add core C6 optimization on Sapphire Rapids to the intel_idle
     driver (Artem Bityutskiy).

   - Optimize the haltpoll cpuidle driver a bit (Li RongQing).

   - Remove leftover text from intel_idle() kerneldoc comment and fix up
     white space in intel_idle (Rafael Wysocki).

   - Fix load_image_and_restore() error path (Ye Bin).

   - Fix typos in comments in the system wakeup handling code (Tom Rix).

   - Clean up non-kernel-doc comments in hibernation code (Jiapeng
     Chong).

   - Fix __setup handler error handling in system-wide suspend and
     hibernation core code (Randy Dunlap).

   - Add device name to suspend_report_result() (Youngjin Jang).

   - Make virtual guests honour ACPI S4 hardware signature by default
     (David Woodhouse).

   - Block power off of a parent PM domain unless child is in deepest
     state (Ulf Hansson).

   - Use dev_err_probe() to simplify error handling for generic PM
     domains (Ahmad Fatoum).

   - Fix sleep-in-atomic bug caused by genpd_debug_remove() (Shawn Guo).

   - Document Intel uncore frequency scaling (Srinivas Pandruvada).

   - Add DTPM hierarchy description (Daniel Lezcano).

   - Change the locking scheme in DTPM (Daniel Lezcano).

   - Fix dtpm_cpu cleanup at exit time and missing virtual DTPM pointer
     release (Daniel Lezcano).

   - Make dtpm_node_callback[] static (kernel test robot).

   - Fix spelling mistake "initialze" -> "initialize" in
     dtpm_create_hierarchy() (Colin Ian King).

   - Add tracer tool for the amd-pstate driver (Jinzhou Su).

   - Fix PC6 displaying in turbostat on some systems (Artem Bityutskiy).

   - Add AMD P-State support to the cpupower utility (Huang Rui)"

* tag 'pm-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (58 commits)
  cpufreq: powernow-k8: Re-order the init checks
  cpuidle: intel_idle: Drop redundant backslash at line end
  cpuidle: intel_idle: Update intel_idle() kerneldoc comment
  PM: hibernate: Honour ACPI hardware signature by default for virtual guests
  cpufreq: intel_pstate: Use firmware default EPP
  cpufreq: unify show() and store() naming and use __ATTR_XX
  PM: core: keep irq flags in device_pm_check_callbacks()
  cpuidle: haltpoll: Call cpuidle_poll_state_init() later
  Documentation: amd-pstate: add tracer tool introduction
  tools/power/x86/amd_pstate_tracer: Add tracer tool for AMD P-state
  tools/power/x86/intel_pstate_tracer: make tracer as a module
  cpufreq: amd-pstate: Add more tracepoint for AMD P-State module
  PM: sleep: Add device name to suspend_report_result()
  turbostat: fix PC6 displaying on some systems
  intel_idle: add core C6 optimization for SPR
  intel_idle: add 'preferred_cstates' module argument
  intel_idle: add SPR support
  PM: runtime: Have devm_pm_runtime_enable() handle pm_runtime_dont_use_autosuspend()
  ACPI: processor idle: Check for architectural support for LPI
  cpuidle: PSCI: Move the `has_lpi` check to the beginning of the function
  ...
2022-03-21 14:26:28 -07:00
Linus Torvalds 242ba6656d ACPI updates for 5.18-rc1
- Use uintptr_t and offsetof() in the ACPICA code to avoid compiler
    warnings regarding NULL pointer arithmetic (Rafael Wysocki).
 
  - Fix possible NULL pointer dereference in acpi_ns_walk_namespace()
    when passed "acpi=off" in the command line (Rafael Wysocki).
 
  - Fix and clean up acpi_os_read/write_port() (Rafael Wysocki).
 
  - Introduce acpi_bus_for_each_dev() and use it for walking all ACPI
    device objects in the Type C code (Rafael Wysocki).
 
  - Fix the _OSC platform capabilities negotiation and prevent CPPC
    from being used if the platform firmware indicates that it is not
    supported via _OSC (Rafael Wysocki).
 
  - Use ida_alloc() instead of ida_simple_get() for ACPI enumeration
    of devices (Rafael Wysocki).
 
  - Add AGDI and CEDT to the list of known ACPI table signatures (Ilkka
    Koskinen, Robert Kiraly).
 
  - Add power management debug messages related to suspend-to-idle in
    two places (Rafael Wysocki).
 
  - Fix __acpi_node_get_property_reference() return value and clean up
    that function (Andy Shevchenko, Sakari Ailus).
 
  - Fix return value of the __setup handler in the ACPI PM timer clock
    source driver (Randy Dunlap).
 
  - Clean up double words in two comments (Tom Rix).
 
  - Add "skip i2c clients" quirks for Lenovo Yoga Tablet 1050F/L and
    Nextbook Ares 8 (Hans de Goede).
 
  - Clean up frequency invariance handling on x86 in the ACPI CPPC
    library (Huang Rui).
 
  - Work around broken XSDT on the Advantech DAC-BJ01 board (Mark
    Cilissen).
 
  - Make wakeup events checks in the ACPI EC driver more
    straightforward and clean up acpi_ec_submit_event() (Rafael
    Wysocki).
 
  - Make it possible to obtain the CPU capacity with the help of CPPC
    information (Ionela Voinescu).
 
  - Improve fine grained fan control in the ACPI fan driver and
    document it (Srinivas Pandruvada).
 
  - Add device HID and quirk for Microsoft Surface Go 3 to the ACPI
    battery driver (Maximilian Luz).
 
  - Make the ACPI driver for Intel SoCs (LPSS) let the SPI driver know
    the exact type of the controller (Andy Shevchenko).
 
  - Force native backlight mode on Clevo NL5xRU and NL5xNU (Werner
    Sembach).
 
  - Fix return value of __setup handlers in the APEI code (Randy
    Dunlap).
 
  - Add Arm Generic Diagnostic Dump and Reset device driver (Ilkka
    Koskinen).
 
  - Limit printable size of BERT table data (Darren Hart).
 
  - Fix up HEST and GHES initialization (Shuai Xue).
 
  - Update the ACPI device enumeration documentation and unify the ASL
    style in GPIO-related examples (Andy Shevchenko).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAmI4pF0SHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxrPMP/A8kkgzJegS4CtUCtUpLcCufaggdpQTd
 I9GQJeo73wGdmaelCQuXFJ9NUhuA1KHIU0WYqneWX+wifht+wl+KAZYvswPm0/wt
 TiypiyRMf8Il0Q9tTTmWKSokK80O7ks8OZEe1HmiJimdEn+F1XUzLLgbQKFqhbbV
 NHkVix3xR/7htgSb0ksaijH3XLyStuwPvc4WFueO14Pp5Bkr2Of33Xdd0UYeTCi4
 RUqL3qJ4DT5gvgKipg43y6D2igRq/xMKx1bgnBjtwKChtjK23GGR6UB/jAIitIMv
 XpxLw7kceY65zjJmmJ1+OKeM6CNAcIbTeyCyffSAH/MYRObj93XpMjnhxXILzjYB
 Pz2U/lJy0kgw0PUkFzTdPkuuJlDn5GLY8F2cytvtlQAIhtFVFFcnHZYfhhLRWpoN
 Sta2NHpGRejR/jixkQ4JtsjQ/Og02zQ9N344enaC64h3JYPBSyM8mpLH/YoXnuSx
 jDPQK1KE/QVXRixKFjrPSXYq2p7w/CH7yZXX7TOo+ScnLhapiSUpyh7wiFslZ729
 v11yzjsgBQk27qf1EGSImsh+YoRck9qOTb9tkVXGxcifTUPYzyXGn4T5i/ZwpN9v
 nL6imYuiRJjFNAksbWo72hjYfhNwWAoCIXgUuxroCPLGGT394j5djisHYMjDNAsG
 x43D1Fd4vEgT
 =uB8P
 -----END PGP SIGNATURE-----

Merge tag 'acpi-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI updates from Rafael Wysocki:
 "From the new functionality perspective, the most significant items
  here are the new driver for the 'ARM Generic Diagnostic Dump and
  Reset' device, the extension of fine grain fan control in the ACPI fan
  driver, and the change making it possible to use CPPC information to
  obtain CPU capacity.

  There are also a few new quirks, a bunch of fixes, including the
  platform-level _OSC handling change to make it actually take the
  platform firmware response into account, some code and documentation
  cleanups, and a notable update of the ACPI device enumeration
  documentation.

  Specifics:

   - Use uintptr_t and offsetof() in the ACPICA code to avoid compiler
     warnings regarding NULL pointer arithmetic (Rafael Wysocki).

   - Fix possible NULL pointer dereference in acpi_ns_walk_namespace()
     when passed "acpi=off" in the command line (Rafael Wysocki).

   - Fix and clean up acpi_os_read/write_port() (Rafael Wysocki).

   - Introduce acpi_bus_for_each_dev() and use it for walking all ACPI
     device objects in the Type C code (Rafael Wysocki).

   - Fix the _OSC platform capabilities negotiation and prevent CPPC
     from being used if the platform firmware indicates that it is not
     supported via _OSC (Rafael Wysocki).

   - Use ida_alloc() instead of ida_simple_get() for ACPI enumeration of
     devices (Rafael Wysocki).

   - Add AGDI and CEDT to the list of known ACPI table signatures (Ilkka
     Koskinen, Robert Kiraly).

   - Add power management debug messages related to suspend-to-idle in
     two places (Rafael Wysocki).

   - Fix __acpi_node_get_property_reference() return value and clean up
     that function (Andy Shevchenko, Sakari Ailus).

   - Fix return value of the __setup handler in the ACPI PM timer clock
     source driver (Randy Dunlap).

   - Clean up double words in two comments (Tom Rix).

   - Add "skip i2c clients" quirks for Lenovo Yoga Tablet 1050F/L and
     Nextbook Ares 8 (Hans de Goede).

   - Clean up frequency invariance handling on x86 in the ACPI CPPC
     library (Huang Rui).

   - Work around broken XSDT on the Advantech DAC-BJ01 board (Mark
     Cilissen).

   - Make wakeup events checks in the ACPI EC driver more
     straightforward and clean up acpi_ec_submit_event() (Rafael
     Wysocki).

   - Make it possible to obtain the CPU capacity with the help of CPPC
     information (Ionela Voinescu).

   - Improve fine grained fan control in the ACPI fan driver and
     document it (Srinivas Pandruvada).

   - Add device HID and quirk for Microsoft Surface Go 3 to the ACPI
     battery driver (Maximilian Luz).

   - Make the ACPI driver for Intel SoCs (LPSS) let the SPI driver know
     the exact type of the controller (Andy Shevchenko).

   - Force native backlight mode on Clevo NL5xRU and NL5xNU (Werner
     Sembach).

   - Fix return value of __setup handlers in the APEI code (Randy
     Dunlap).

   - Add Arm Generic Diagnostic Dump and Reset device driver (Ilkka
     Koskinen).

   - Limit printable size of BERT table data (Darren Hart).

   - Fix up HEST and GHES initialization (Shuai Xue).

   - Update the ACPI device enumeration documentation and unify the ASL
     style in GPIO-related examples (Andy Shevchenko)"

* tag 'acpi-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (52 commits)
  clocksource: acpi_pm: fix return value of __setup handler
  ACPI: bus: Avoid using CPPC if not supported by firmware
  Revert "ACPI: Pass the same capabilities to the _OSC regardless of the query flag"
  ACPI: video: Force backlight native for Clevo NL5xRU and NL5xNU
  arm64, topology: enable use of init_cpu_capacity_cppc()
  arch_topology: obtain cpu capacity using information from CPPC
  x86, ACPI: rename init_freq_invariance_cppc() to arch_init_invariance_cppc()
  ACPI: AGDI: Add driver for Arm Generic Diagnostic Dump and Reset device
  ACPI: tables: Add AGDI to the list of known table signatures
  ACPI/APEI: Limit printable size of BERT table data
  ACPI: docs: gpio-properties: Unify ASL style for GPIO examples
  ACPI / x86: Work around broken XSDT on Advantech DAC-BJ01 board
  ACPI: APEI: fix return value of __setup handlers
  x86/ACPI: CPPC: Move init_freq_invariance_cppc() into x86 CPPC
  x86: Expose init_freq_invariance() to topology header
  x86/ACPI: CPPC: Move AMD maximum frequency ratio setting function into x86 CPPC
  x86/ACPI: CPPC: Rename cppc_msr.c to cppc.c
  ACPI / x86: Add skip i2c clients quirk for Lenovo Yoga Tablet 1050F/L
  ACPI / x86: Add skip i2c clients quirk for Nextbook Ares 8
  ACPICA: Avoid walking the ACPI Namespace if it is not there
  ...
2022-03-21 14:17:20 -07:00
Linus Torvalds bba90e0964 Core code updates:
- Reduce the amount of work to release a task stack in context
     switch. There is no real reason to do cgroup accounting and memory
     freeing in this performance sensitive context. Aside of this the
     invoked functions cannot be called from this preemption disabled
     context on PREEMPT_RT enabled kernels. Solve this by moving the
     accounting into do_exit() and delaying the freeing of the stack unless
     the vmap stack can be cached.
 
   - Provide a mechanism to delay raising signals from atomic context on
     PREEMPT_RT enabled kernels as sighand::lock cannot be acquired.  Store
     the information in the task struct and raise it in the exit path.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmI4U6gTHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYoSpkEACwgaaQUbqVrpw5yb6LbwzUPnjEdFNN
 uUQCv0XZD8LWbfhcQQVSPWGho7S/w2Mkpdhi0DkVb2K0dkB7EvITSNEC4KoS/yez
 8iQBpv6Lm00quHdNLjkQySSZ4NYB8M1GasBI7zSBjROK/+sRqioTPQsM0oDemGmD
 uMvw0dgDJRlB8X4LZv0xuJbYLdSzu2VOlWd5aJG9BUgHkd7PfUWMlHsa29FP0hkP
 A5yziOnr9kMsmCAsgmiyDW/GmefrEealby5M/jgnxTruF/OLnDsP+PYMlws47fPx
 g6xpHkT5H0zQJ/nMJtK2JAlxpnbIl4cLuUnpn7wX316yjBpP2s3Pw04AVdzPPoBa
 ufAoOLFtnrKN6enIqLWaJHGAsBHEULw6d3/7HoAEQOVWChnQSuWOob8z0QDbvM14
 kKtz+LTrO+P5a15fd4g5+9lFBXJUTnF74SYQNwxIm2cV9hxrf15NhAr8yg+RtUvF
 /ilNNAFtXkASLqs9moEi7U+GyBYwemG+gduVZ3Dw8FBxK/vHmDrhlItcZdKom+UJ
 k4VFDVhzd2GYRHMrcaLfkCYew6ou+LD/rjdPhIU9OQHgILIMLY5aLqxDuyPtHqDz
 TEyF5qsL4wYLIUdsWlqyHISqQQ6LfnpIyko5kb2Zt56sYtrcZr8swDy+yimiEOdL
 G4BzQu0nVbCLhw==
 =uGTc
 -----END PGP SIGNATURE-----

Merge tag 'core-core-2022-03-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core process handling RT latency updates from Thomas Gleixner:

 - Reduce the amount of work to release a task stack in context switch.
   There is no real reason to do cgroup accounting and memory freeing in
   this performance sensitive context.

   Aside from this, the invoked functions cannot be called from this
   preemption disabled context on PREEMPT_RT enabled kernels. Solve this
   by moving the accounting into do_exit() and delaying the freeing of
   the stack unless the vmap stack can be cached.

 - Provide a mechanism to delay raising signals from atomic context on
   PREEMPT_RT enabled kernels as sighand::lock cannot be acquired. Store
   the information in the task struct and raise it in the exit path.

* tag 'core-core-2022-03-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  signal, x86: Delay calling signals in atomic on RT enabled kernels
  fork: Use IS_ENABLED() in account_kernel_stack()
  fork: Only cache the VMAP stack in finish_task_switch()
  fork: Move task stack accounting to do_exit()
  fork: Move memcg_charge_kernel_stack() into CONFIG_VMAP_STACK
  fork: Don't assign the stack pointer in dup_task_struct()
  fork, IA64: Provide alloc_thread_stack_node() for IA64
  fork: Duplicate task_struct before stack allocation
  fork: Redo ifdefs around task stack handling
2022-03-21 12:37:33 -07:00
Linus Torvalds 3fd33273a4 Reenable ENQCMD/PASID support:
- Simplify the PASID handling to allocate the PASID once, associate it with
    the mm of a process and free it on mm_exit(). The previous attempt of
    refcounted PASIDs and dynamic alloc()/free() turned out to be error
    prone and too complex. The PASID space is 20 bits, so the case of
    resource exhaustion is a pure academic concern.
 
  - Populate the PASID MSR on demand via #GP to avoid racy updates via IPIs.
 
  - Reenable ENQCMD and let objtool check for the forbidden usage of ENQCMD
    in the kernel.
 
  - Update the documentation for Shared Virtual Addressing accordingly.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmI4WpETHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYoUfnD/0bY94rgEX4Uuy/mFQ1W8X8XlcyKrha
 0/cRATb+4QV/pwJgGr2nClKhGlFMYPdJLvKMC1TCUPCVrLD1RNmluIZoFzeqXwhm
 jDdCcFOuGZ2D4ujDPWwOOpKBT1ytovnQa7+lH6QJyKkEqdcC2ncOvGJQoiRxRQIG
 8wTVs/OUvQJ5ZhSZQMKQN4uMWMyHEjhbroYS30/uNi/598jTPgzlEoa14XocQ9Os
 nS6ALvjuc9MsJ34F61etMaJU1ZMI3Wx75u9QjEvX6hmJs87YdvgwE7lzJUKFDEuh
 gewM0wp2fTa8/azzP0eMiHTin56PqFdmllzRqXmilbZMEPOeI29dZVArCdpKcAn0
 r9p1kJUT3Xl2G3Oir/OdCaaQHcznD1Y5ZFOyh12wgEucZ/rdeSr7nq7n5HoOL5Bw
 Q2o6YvTkE9DOL0nTN1lSXGiPspou7fzX0uUcRBrbJUS3sBv4zGIlaJXUaTVnSdAt
 VZj4LeOK7v2BjyeiOY0iaaIQd3xjmLUF0UjozXS5M13SoVcToZRbyWqhDzPvNuKA
 imQb/dnFpXhABgmuqAiJLeqM0VtGMFNc780OURkcsBSPng+iSEdV4DzuhK0jpU8x
 Uk1RuGMd/vgmrlDFBrw+orQQiiKR1ixpI0LiHfcOBycfJhqTwcnrNZvAN5/do28Z
 E23+QzlUbZF0cw==
 =Dy8V
 -----END PGP SIGNATURE-----

Merge tag 'x86-pasid-2022-03-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 PASID support from Thomas Gleixner:
 "Reenable ENQCMD/PASID support:

   - Simplify the PASID handling to allocate the PASID once, associate
     it with the mm of a process and free it on mm_exit().

     The previous attempt of refcounted PASIDs and dynamic
     alloc()/free() turned out to be error prone and too complex. The
     PASID space is 20 bits, so the case of resource exhaustion is a pure
     academic concern.

   - Populate the PASID MSR on demand via #GP to avoid racy updates via
     IPIs (see the sketch after this summary).

   - Reenable ENQCMD and let objtool check for the forbidden usage of
     ENQCMD in the kernel.

   - Update the documentation for Shared Virtual Addressing accordingly"
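
A rough sketch of the demand-populate idea described above (illustrative
only; the helper name, the exact checks and the constants used here are
assumptions, not the merged code):

  /* Called from the #GP handler before a signal would be raised. */
  static bool try_fixup_pasid_gp(void)
  {
          u64 pasid;

          if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
                  return false;                   /* #GP has another cause */

          pasid = current->mm->pasid;
          if (!pasid_valid(pasid))
                  return false;                   /* no PASID bound to this mm */

          /* Populate the per-task MSR lazily instead of broadcasting IPIs. */
          wrmsrl(MSR_IA32_PASID, pasid | MSR_IA32_PASID_VALID);
          return true;                            /* retry the faulting ENQCMD */
  }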

* tag 'x86-pasid-2022-03-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  Documentation/x86: Update documentation for SVA (Shared Virtual Addressing)
  tools/objtool: Check for use of the ENQCMD instruction in the kernel
  x86/cpufeatures: Re-enable ENQCMD
  x86/traps: Demand-populate PASID MSR via #GP
  sched: Define and initialize a flag to identify valid PASID in the task
  x86/fpu: Clear PASID when copying fpstate
  iommu/sva: Assign a PASID to mm on PASID allocation and free it on mm exit
  kernel/fork: Initialize mm's PASID
  iommu/ioasid: Introduce a helper to check for valid PASIDs
  mm: Change CONFIG option for mm->pasid field
  iommu/sva: Rename CONFIG_IOMMU_SVA_LIB to CONFIG_IOMMU_SVA
2022-03-21 12:28:13 -07:00
Linus Torvalds eaa54b1458 - Remove a misleading message and an unused function
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmI4l6UACgkQEsHwGGHe
 VUonRw/7BQUx2A8O6mMtQDGzDVJlKobKb/6t+VhmdrC7YHUbMjgl8+Q9EjZIGkE4
 EF3Wb/5gUmnVhWzB2y5veE1mX4wlcXs0HyGL5iSn7X4UYxpPLdCSTcAriyj6I/0F
 fId9aTcb9dbiXGnK4F+tBRVphoe2uLCJUAdbNoprOBWUsmIjKaMewl+sLg8m1+f5
 mtTY83koAtblSCtkUP/sLx/1RM+LO2uep11W20m44eRXkc8CJ3mOiMoLwCZGyyaM
 66y/6i2QqPCWEUE8VUklrEM+gI/vGn1yb6DJo465aauHFabteYaiTa0kAiZ+xyOn
 UlybAhL2nJlkCJ+mt2+FNujdvJu2z8MGNTuQPgI0CeYlRllGuvvNfkpP6PLhqE+c
 HUfNbmJgB163c3w8QOlppkmCImiup+wtm8r3w4h0brD68sqnRv1AkXp83hC4MBNP
 k1/S/3GCLOFjq6LHvZJycq8r1NpbNPKGNq81kjNKobfWZHX3fEclVGJGjiDkJQhC
 VA4hCtIUnpagpMHwPHZ9fdHROHWCDJjLaEY5L/qiGnrBJPfwVpbmRv86k8kE+hJu
 IgoqRF1DSWMlhg3lNKGnoodTvgrWJM/HZgp/exrY0/N83AMatcWmaAlwWkrGQpGY
 HdnSKzXSHIwLFlf7WiVCoUDpRU4zRZzUUFm3mMqgdAJV8mwufsA=
 =EF36
 -----END PGP SIGNATURE-----

Merge tag 'x86_cleanups_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 cleanups from Borislav Petkov:

 - Remove a misleading message and an unused function

* tag 'x86_cleanups_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/nmi: Remove the 'strange power saving mode' hint from unknown NMI handler
  x86/pat: Remove the unused set_pages_array_wt() function
2022-03-21 11:49:16 -07:00
Linus Torvalds 6b9bfb1365 - Add shared confidential computing code which will be used by both
vendors instead of proliferating home-grown solutions for technologies
 which are pretty similar
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmI4kiQACgkQEsHwGGHe
 VUojvA//QD5VxsqPq+RAQWAFiWGHCpFed2szc2Q5eAZj6CEmXcqBOdTqaoHpJpVl
 L1uvB6oLq8WTOea0V3xGu1kfiLRuq1fo0mqZeTxe3iZ3kUk/SU0wGfTLDECB58mI
 P5A+CZFiAk4XJ/kRqJWNxmd5kIDjhlCx4ysVbPl1vm/qfS6FEGb5HUr317kbOYwK
 zw5cEajnYu2KA6bI8nGuy30vmvn97gpy98vCiCzKrcBPggO8WHiJ+kqD72BhP5em
 z7mh4aFrAPVbIMqd/Xb5La3zvP7Vii4Tz9mSUsKy/Ige+ghFZQ18LPk2yANvmWeN
 hIFDqSsESR2go0tKvSrzPln8h93hKx/TPbiF9jVMISBZFdWCGQvzCrYDHqzHFQJ1
 zHw0lxdFQimfhs5YlEumZCqq2Dc7w3OGCVfP22+t7pNhnixPT3Dlie0Ya6z/aXV3
 VNcqckDDZLijQlf0iPhbw2fBs9ErTcB3OXHKmX78Zxb4hP4WJx8QK4lMPzFkPd9H
 bTEquYQWIPsjdRTlMl50nCpNHtAzo56H01G6ZPPx/5Y7Lt38UXJERfdqBhQjNF6F
 ILPMrOn/BHU9snlqSCh7SxhRiRdafThIJHsi5zQrDC4rPvlwi5kinIzGnPyOuDbO
 qwwnPOzx855/Zw0swKrQRXaxU7lwGKo529yKZWt7r8WB12tSOao=
 =zWVD
 -----END PGP SIGNATURE-----

Merge tag 'x86_cc_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 confidential computing updates from Borislav Petkov:

 - Add shared confidential computing code which will be used by both
   vendors instead of proliferating home-grown solutions for
   technologies (SEV/SNP and TDX) which are pretty similar

* tag 'x86_cc_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mm/cpa: Generalize __set_memory_enc_pgtable()
  x86/coco: Add API to handle encryption mask
  x86/coco: Explicitly declare type of confidential computing platform
  x86/cc: Move arch/x86/{kernel/cc_platform.c => coco/core.c}
2022-03-21 11:38:53 -07:00
Linus Torvalds 88f30ac227 - Add a missing function section annotation
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmI4VSgACgkQEsHwGGHe
 VUr2ew/9HyXJu2Yp39Gd/UVz/pAGpnlkw+d57lGTeQ2MilvSmJ6t5pE+F6XQ6qX9
 jRzi5VOpWRSNQ6YzZHiVXCdgL49kNvJQTpnDXDsdcYn7zxynSp1d1G7YfJpI0eC2
 9dho37eLnoXDY0RsATYiZsN0v1tD92WktYVgUPMZBUj6jysNZK8FeBOkyEhJLaUu
 LiFwET7A+Odg3ioek34v4mbGJ0pTobHOxF125b+Rq/wHKu3LjKGvj/tiYdTh32oo
 R0HgSszEQBfPn5lfjw1YodDTS3N4BvhWQZwlOdMr8GwlgynVnBC381PSObRVpPT1
 nKQizDnyE0YKHPn0Mq58vN92D+ulmfS58vDCotRYHSUwwuUXDt2klr/y62e4Vh90
 cWlWxzojrj8Dta44ofw87oESJfu5jJ7zQmeMKSjuud3CLsaqkPKcwXsiBwgV5glu
 MA9QBfnbRjoXr9CkDcii17K3S5udY3pfkNSE9P6J+GwBgfOLVeE91WLaGDc3wChf
 tC9bLlV5F71gCpoZYSBdCKsVaV+8ZHeHjH9UTmkGH9zxKEkP0HEdxBSkL4uLhrCt
 NcU/Xt57O/SrzVu23XPufbI52PanGYHs7kNl1NNLthaIXgg43Mtx+c5rI0uWOFK7
 4vFbJEtCvqOBEnVHpPjiUHH30ihSnx6D8N13trOdu67NtUXrxPc=
 =PVnz
 -----END PGP SIGNATURE-----

Merge tag 'x86_sev_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 SEV fix from Borislav Petkov:

 - Add a missing function section annotation

* tag 'x86_sev_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/head64: Add missing __head annotation to sme_postprocess_startup()
2022-03-21 11:35:10 -07:00
Linus Torvalds 35cbdaf753 - Shorten CALL insns to pvops by a byte by using rip-relative addressing
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmI4VFoACgkQEsHwGGHe
 VUr8Sg//V1SAfIOwOxveO9nPsyGwopeAHOm/6x87SSs18qDzYUGAGiTwfzXBqKqn
 gan8wSBVnsbK2gqF8ecCFUzYnJhjqzvnXvFwUmWpdN5zePz0YDSgm6LQy9FPj6Fq
 l7X8GV4zG1FiDw+4dGaus2FqVnpwyjTY9oT0/X9GBxCGXXDFvcZOm8++N09mo0rZ
 K4Ot6vYzY426HpBv3RWdIutSbBDDX4yMUPi1X4PQVotmb5aZljyQrmn5kllg+9uI
 R6ScXT616pbyGh/iDMaQ2L36Y/TYSg5sAFtkGaYQUAYLrCnwuEKEuFETwm393xxH
 ki/bf9F+kuimp4tjMIBJLZFCeHQ3aC7yuX+lLrrMuTAz5hnARWX+Xxf05+TwbRLI
 RaMx7HzoiaB5vQHJqXeNiJgVZPYR7rIvJNm7PLs1o3EK0kFTSpjL1ZTUg91H2LsF
 FZQobm46LxVORU3B9YSRxv3c0BB7idA4uvRNt55b6MxcfU75f7hHzXoFGiXtL/4G
 nfG/RAuMHUAKpH/FHyt5tVA7Tc+jRB3YizTKjIeUjPAJMnEtBnkC4yggVINOYewO
 uIXMb3gq5jwICuklY8nTvBClmzsdOTL9ZCogFKzifBFokvpDVcoGQ6lQZn7y3Dya
 p0KHxBwJ4a84jkR18fwW6kb7jW6lPYi88Yn0a7ReqDoabbDUxps=
 =djLR
 -----END PGP SIGNATURE-----

Merge tag 'x86_paravirt_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 paravirt improvement from Borislav Petkov:

 - Shorten CALL insns to pvops by a byte by using rip-relative
   addressing

* tag 'x86_paravirt_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/paravirt: Use %rip-relative addressing in hook calls
2022-03-21 11:28:24 -07:00
Linus Torvalds e10821b8a0 - Correct Kconfig symbol visibility on x86
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmI4U6oACgkQEsHwGGHe
 VUoYzA/9Fat1tkrMWLrL8YiOG3QSLaPJ3BBP5Lc65HF5G9ACdnkhApEs+eXVVg50
 +Pt70ZNYHIe2lpcju7GuXvVub6yT4ZV2t/uedtpa1jSd1XcQ73PZy6XYczw5v/ZX
 8z6dMQ4ULcXYv6Z5B/bMry5vZnq3w0sGJlhszO0NPjbJeYC8/8/8c750Ze60rcU6
 jyeX7maP84FeTNefDpsrBUWbIGunYNHuQ1M5XTZ9W118T300YB4zoMkaqYcFmKX+
 Pv0xqE5Ns5IUt+pUpOQWS0GuleUDcsFOGmL4ndV7/4Uz0saT403x1LZL0tv4uEYZ
 LSJTRjXP/cgReUm+KaZOTIBWjh4w5+0Cs4etM583+6m0TtJ6UAAkbAfz/3TsWBKo
 E93XdZqoa+B0Jppw7QgmY5t4+gxx9OkPt+DQofXrGFBzRuBUUa7NxOaoPnqKDM3q
 DX2xe4BJOSoNX6SwvCNir+A6L1HGgnqBAjzmuYgr6akmg67Q8msGHj1ZH3BUYyB6
 EVJA02EkcpNlfCflSJMZt5YzD3FpiP93O9Wzz8G5WtxrbO5xbR+VIlNZChIdHqUZ
 tq/AqbLcldySiD3Z1gOkrDSvAf8t9I3kmkyQZgaKTSTkw5y5fmpFsU9o4fap9yLb
 hPi0ujH9lYsA8Rz65eW8L9PLm4AsOv+CiGjzU0JhGuIgOyuonTE=
 =3AfB
 -----END PGP SIGNATURE-----

Merge tag 'x86_build_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 Kconfig fix from Borislav Petkov:

 - Correct Kconfig symbol visibility on x86

* tag 'x86_build_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/Kconfig: Select ARCH_SELECT_MEMORY_MODEL only if FLATMEM and SPARSEMEM are possible
2022-03-21 11:25:41 -07:00
Linus Torvalds 2268735045 - Add support for a couple of new insn sets to the insn decoder: AVX512-FP16,
 AMX, and other misc insns.
 
 - Update VMware-specific MAINTAINERS entries
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmI4URIACgkQEsHwGGHe
 VUob3A/9GFyqt9bBKrSaq9Rt1UVkq6dQhG3kO7dW5d0YDvy8JmR9is4rNDV9GGx6
 A1OAue/gDlZFIz/829oS1qwjB7GZ4Rfb0gRo33bytDLLmd0BRXW7ioZ54jBRnWvy
 8dZ2WruMmazK6uJxoHvtOA+Pt3ukb074CZZ1SfW344clWK6FJZeptyRclWaT1Py2
 QOIJOxMraCdNAay/1ZvOdIqqdIPx5+JyzbHIYOWUFzwT4y+Q8kFNbigrJnqxe5Ij
 aqRjzMIvt6MeLwbq9CfLsPFA3gaSzYeOkuXQPcqRgd5LU5ZyXBLStUrGEv1fsMvd
 9Kh7VFycZPS7MKzxoEcbuJTTOR4cBsINOlbo9iWr7UD5pm5h7c3vc+nCyia+U+Xo
 5XRpf8nitt4a3r1f6HxwXJS0OlBkS4CqexE2OejY4yhWRlxhMcIvRyquU+Z0J4Bp
 mgDJuXSzfJfFcBzp4jjOBxGPNEjXXOdy/qc/1jR97eMmTKrk3gk/74NWUx9hw4oN
 5RGeC+khAD13TL0yVQfKBe5HuLK5tHppAzXAnT2xi6qUn+VJjLxNWgg3iV9tbShM
 4q5vJp3BmvNOY8HQv1R3IDFfN0IAL09Q9v6EzEroNuVUhEOzBdH7JSzWkvBBveZb
 FVgD3I+wNBE1nQD3cP/6DGbRe1JG3ULDF95WJshB8gNJwavlZGs=
 =f7VZ
 -----END PGP SIGNATURE-----

Merge tag 'x86_misc_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull misc x86 updates from Borislav Petkov:

 - Add support for a couple of new insn sets to the insn decoder:
   AVX512-FP16, AMX, and other misc insns.

 - Update VMware-specific MAINTAINERS entries

* tag 'x86_misc_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  MAINTAINERS: Mark VMware mailing list entries as email aliases
  MAINTAINERS: Add Zack as maintainer of vmmouse driver
  MAINTAINERS: Update maintainers for paravirt ops and VMware hypervisor interface
  x86/insn: Add AVX512-FP16 instructions to the x86 instruction decoder
  perf/tests: Add AVX512-FP16 instructions to x86 instruction decoder test
  x86/insn: Add misc instructions to x86 instruction decoder
  perf/tests: Add misc instructions to the x86 instruction decoder test
  x86/insn: Add AMX instructions to the x86 instruction decoder
  perf/tests: Add AMX instructions to x86 instruction decoder test
2022-03-21 11:19:00 -07:00
Linus Torvalds d752e21114 - Merge the AMD and Intel PPIN code into shared code used by both vendors.
Add the PPIN number to sysfs so that sockets can be identified when
 replacement is needed
 
 - Minor fixes and cleanups
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmI4UBcACgkQEsHwGGHe
 VUoDQw/9E2WVsLS+iVngYI5hY+LQbbeOLPt9sqgdf8U/4tQPwJfAKYRALQ3FvjbJ
 XKEqcHoCIBH0XQSC0TPpBF96VABr5Vrc/knV8x2OWJ82p54beB0rXh5mdsnsrVMQ
 7Gi00iEgZ1kw7TwRN7rMzUpudNOr7C/SSQL535hyZA4NT9QkNBObZWHMnojVnEmr
 AW1TY54xXSpu7xDY5ari5NGeSgvu1PIsCy0EKMK/SFLEpKQW4lt+lUe1aSieMarj
 xgnfIyGW3SUbadwlvLbIqVpR0RQBDLTabx8nyXnJAVZlwuAfioRUGL+Z4GFA0Y7q
 uDofxuScBAea3sPPFAAIoh13y595TjowBX7pHA1sqjWLmFKt6Qqz5dq1uBVEvIYw
 uTAQ/igJ4N2jq03jwnAw1LUAES5azSseCsiQxR7oqzK9KaRlptxHTAqhjqsgpIp4
 VLdYtgkzOEFiOsWsHWP1Dd+vzpMvTh5gtTXZuVcldo2D6scdcj+oaloHQ5XMiFu1
 GKuyiY4EbkRcp9ZQ847xOn4knEg+aq9zL0tJoWWEMKfRQn6425TEOLqkIdc9QfeU
 t63yqJ1q3NTjjzxzy/FdKwdoyOOQxeDl5YGPX3gZnj9X/0wgs+dHRmKp0o74SIg9
 4h2kB69wRwn6rC09P2UkQVGpDL0mnif4ZAh61vRE+mS0zSNCkEA=
 =MWVZ
 -----END PGP SIGNATURE-----

Merge tag 'x86_cpu_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 cpu feature updates from Borislav Petkov:

 - Merge the AMD and Intel PPIN code into shared code used by both vendors.
   Add the PPIN number to sysfs so that sockets can be identified when
   replacement is needed

 - Minor fixes and cleanups

* tag 'x86_cpu_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpu: Clear SME feature flag when not in use
  x86/cpufeatures: Put the AMX macros in the word 18 block
  topology/sysfs: Add PPIN in sysfs under cpu topology
  topology/sysfs: Add format parameter to macro defining "show" functions for proc
  x86/cpu: Read/save PPIN MSR during initialization
  x86/cpu: X86_FEATURE_INTEL_PPIN finally has a CPUID bit
  x86/cpu: Merge Intel and AMD ppin_init() functions
  x86/CPU/AMD: Use default_groups in kobj_type
2022-03-21 11:11:48 -07:00
Linus Torvalds 356a1adca8 arm64 updates for 5.18
- Support for including MTE tags in ELF coredumps
 
 - Instruction encoder updates, including fixes to 64-bit immediate
   generation and support for the LSE atomic instructions
 
 - Improvements to kselftests for MTE and fpsimd
 
 - Symbol aliasing and linker script cleanups
 
 - Reduce instruction cache maintenance performed for user mappings
   created using contiguous PTEs
 
 - Support for the new "asymmetric" MTE mode, where stores are checked
   asynchronously but loads are checked synchronously
 
 - Support for the latest pointer authentication algorithm ("QARMA3")
 
 - Support for the DDR PMU present in the Marvell CN10K platform
 
 - Support for the CPU PMU present in the Apple M1 platform
 
 - Use the RNDR instruction for arch_get_random_{int,long}()
 
 - Update our copy of the Arm optimised string routines for str{n}cmp()
 
 - Fix signal frame generation for CPUs which have foolishly elected to
   avoid building in support for the fpsimd instructions
 
 - Workaround for Marvell GICv3 erratum #38545
 
 - Clarification to our Documentation (booting reqs. and MTE prctl())
 
 - Miscellaneous cleanups and minor fixes
 -----BEGIN PGP SIGNATURE-----
 
 iQFEBAABCgAuFiEEPxTL6PPUbjXGY88ct6xw3ITBYzQFAmIvta8QHHdpbGxAa2Vy
 bmVsLm9yZwAKCRC3rHDchMFjNAIhB/oDSva5FryAFExVuIB+mqRkbZO9kj6fy/5J
 ctN9LEVO2GI/U1TVAUWop1lXmP8Kbq5UCZOAuY8sz7dAZs7NRUWkwTrXVhaTpi6L
 oxCfu5Afu76d/TGgivNz+G7/ewIJRFj5zCPmHezLF9iiWPUkcAsP0XCp4a0iOjU4
 04O4d7TL/ap9ujEes+U0oEXHnyDTPrVB2OVE316FKD1fgztcjVJ2U+TxX5O4xitT
 PPIfeQCjQBq1B2OC1cptE3wpP+YEr9OZJbx+Ieweidy1CSInEy0nZ13tLoUnGPGU
 KPhsvO9daUCbhbd5IDRBuXmTi/sHU4NIB8LNEVzT1mUPnU8pCizv
 =ziGg
 -----END PGP SIGNATURE-----

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:

 - Support for including MTE tags in ELF coredumps

 - Instruction encoder updates, including fixes to 64-bit immediate
   generation and support for the LSE atomic instructions

 - Improvements to kselftests for MTE and fpsimd

 - Symbol aliasing and linker script cleanups

 - Reduce instruction cache maintenance performed for user mappings
   created using contiguous PTEs

 - Support for the new "asymmetric" MTE mode, where stores are checked
   asynchronously but loads are checked synchronously

 - Support for the latest pointer authentication algorithm ("QARMA3")

 - Support for the DDR PMU present in the Marvell CN10K platform

 - Support for the CPU PMU present in the Apple M1 platform

 - Use the RNDR instruction for arch_get_random_{int,long}()

 - Update our copy of the Arm optimised string routines for str{n}cmp()

 - Fix signal frame generation for CPUs which have foolishly elected to
   avoid building in support for the fpsimd instructions

 - Workaround for Marvell GICv3 erratum #38545

 - Clarification to our Documentation (booting reqs. and MTE prctl())

 - Miscellaneous cleanups and minor fixes

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (90 commits)
  docs: sysfs-devices-system-cpu: document "asymm" value for mte_tcf_preferred
  arm64/mte: Remove asymmetric mode from the prctl() interface
  arm64: Add cavium_erratum_23154_cpus missing sentinel
  perf/marvell: Fix !CONFIG_OF build for CN10K DDR PMU driver
  arm64: mm: Drop 'const' from conditional arm64_dma_phys_limit definition
  Documentation: vmcoreinfo: Fix htmldocs warning
  kasan: fix a missing header include of static_keys.h
  drivers/perf: Add Apple icestorm/firestorm CPU PMU driver
  drivers/perf: arm_pmu: Handle 47 bit counters
  arm64: perf: Consistently make all event numbers as 16-bits
  arm64: perf: Expose some Armv9 common events under sysfs
  perf/marvell: cn10k DDR perf event core ownership
  perf/marvell: cn10k DDR perfmon event overflow handling
  perf/marvell: CN10k DDR performance monitor support
  dt-bindings: perf: marvell: cn10k ddr performance monitor
  arm64: clean up tools Makefile
  perf/arm-cmn: Update watchpoint format
  perf/arm-cmn: Hide XP PUB events for CMN-600
  arm64: drop unused includes of <linux/personality.h>
  arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones
  ...
2022-03-21 10:46:39 -07:00
Kees Cook c7500c1b53 um: Allow builds with Clang
Add a SUBARCH target for Clang+um (which must go last, rather than in
alphabetical order, so the other SUBARCHes are assigned). Remove the
open-coded "DEFINE" macro, instead using linux/kbuild.h's version, which
was updated to use Clang-friendly assembly in commit cf0c3e68aa ("kbuild:
fix asm-offset generation to work with clang"). Redefine "DEFINE_LONGS"
in terms of "COMMENT" and "DEFINE" so that the intended comment actually
has useful content. Add a missed "break" to avoid implicit fall-through
warnings.
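
For reference, the linux/kbuild.h helpers mentioned above look roughly
like this, and the DEFINE_LONGS redefinition is a sketch of the idea
rather than the exact hunk:

  /* From include/linux/kbuild.h (Clang-friendly asm-offsets helpers): */
  #define DEFINE(sym, val) \
          asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))
  #define COMMENT(x) \
          asm volatile("\n.ascii \"->#" x "\"")

  /* Sketch: express DEFINE_LONGS via the two helpers above, so the
   * generated comment carries the original expression: */
  #define DEFINE_LONGS(sym, val) \
          COMMENT(#val " as longs"); \
          DEFINE(sym, val / sizeof(unsigned long))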

This lets me run KUnit tests with Clang:

$ ./tools/testing/kunit/kunit.py run --make_options LLVM=1
...

Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: David Gow <davidgow@google.com>
Cc: linux-um@lists.infradead.org
Cc: linux-kbuild@vger.kernel.org
Cc: linux-kselftest@vger.kernel.org
Cc: kunit-dev@googlegroups.com
Cc: llvm@lists.linux.dev
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/lkml/Yg2YubZxvYvx7%2Fnm@dev-arch.archlinux-ax161/
Tested-by: David Gow <davidgow@google.com>
Link: https://lore.kernel.org/lkml/CABVgOSk=oFxsbSbQE-v65VwR2+mXeGXDDjzq8t7FShwjJ3+kUg@mail.gmail.com/
Signed-off-by: Kees Cook <keescook@chromium.org>
---
v1: https://lore.kernel.org/lkml/20220217002843.2312603-1-keescook@chromium.org
v2: https://lore.kernel.org/lkml/20220224055831.1854786-1-keescook@chromium.org
v3:
 - use kbuild.h to avoid duplication (Masahiro)
 - fix intended comments (Masahiro)
 - use SUBARCH (Nathan)
2022-03-21 08:13:03 -07:00
Paolo Bonzini c9b8fecddb KVM: use kvcalloc for array allocations
Instead of using array_size, use a function that takes care of the
multiplication.  While at it, switch to kvcalloc since this allocation
should not be very large.
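
The pattern being replaced, as a minimal sketch (illustrative variable
names, not the exact hunk):

  /* Before: overflow-safe multiplication done by the caller. */
  slots = kvmalloc(array_size(nr_slots, sizeof(*slots)), GFP_KERNEL_ACCOUNT);

  /* After: kvcalloc() does the multiplication (and zeroing) itself. */
  slots = kvcalloc(nr_slots, sizeof(*slots), GFP_KERNEL_ACCOUNT);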

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 09:28:41 -04:00
Oliver Upton 6d8491910f KVM: x86: Introduce KVM_CAP_DISABLE_QUIRKS2
KVM_CAP_DISABLE_QUIRKS is irrevocably broken. The capability does not
advertise the set of quirks which may be disabled to userspace, so it is
impossible to predict the behavior of KVM. Worse yet,
KVM_CAP_DISABLE_QUIRKS will tolerate any value for cap->args[0], meaning
it fails to reject attempts to set invalid quirk bits.

The only valid workaround for the quirky quirks API is to add a new CAP.
Actually advertise the set of quirks that can be disabled to userspace
so it can predict KVM's behavior. Reject values for cap->args[0] that
contain invalid bits.

Finally, add documentation for the new capability and describe the
existing quirks.
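
A hypothetical userspace sequence built on the description above (a
fragment: vm_fd is assumed to be an existing VM file descriptor, the
quirk chosen is just an example, and includes/error handling are
omitted):

  struct kvm_enable_cap cap = { .cap = KVM_CAP_DISABLE_QUIRKS2 };
  uint64_t supported;

  /* KVM_CHECK_EXTENSION now reports which quirk bits may be disabled. */
  supported = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_DISABLE_QUIRKS2);

  /* Only pass bits KVM advertised; unknown bits are rejected. */
  cap.args[0] = supported & KVM_X86_QUIRK_LAPIC_MMIO_HOLE;
  if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
          perror("KVM_ENABLE_CAP");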

Signed-off-by: Oliver Upton <oupton@google.com>
Message-Id: <20220301060351.442881-5-oupton@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 09:28:41 -04:00
Thomas Gleixner 5e17b2ee45 kvm: x86: Require const tsc for RT
A non-constant TSC is a nightmare on bare metal already, but with
virtualization it becomes a complete disaster because the workarounds
are horrible latency-wise. This is also a prerequisite for running RT in
a guest on top of an RT host.
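
The shape of such a check might look like the following (a sketch; the
exact placement and message are not taken from the patch):

  /* Refuse to load on PREEMPT_RT hosts whose TSC is not constant. */
  if (IS_ENABLED(CONFIG_PREEMPT_RT) && !boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) {
          pr_err("PREEMPT_RT requires X86_FEATURE_CONSTANT_TSC\n");
          return -EOPNOTSUPP;
  }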

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Message-Id: <Yh5eJSG19S2sjZfy@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 09:28:40 -04:00
Paolo Bonzini f144c49e8c KVM: x86: synthesize CPUID leaf 0x80000021h if useful
Guests have X86_BUG_NULL_SEG if and only if the host has it.  Use the info
from static_cpu_has_bug to form the 0x80000021 CPUID leaf that was
defined for Zen3.  Userspace can then set the bit even on older CPUs
that do not have the bug, such as Zen2.

Do the same for X86_FEATURE_LFENCE_RDTSC as well, since various processors
have had very different ways of detecting it and not all of them are
available to userspace.
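
A sketch of the synthesis described above (the EAX bit positions are
assumptions taken from the AMD definition of the leaf, not copied from
the patch):

  case 0x80000021:
          entry->ebx = entry->ecx = entry->edx = 0;
          entry->eax = 0;
          /* "NULL selector clears base": host does not have the bug. */
          if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
                  entry->eax |= BIT(6);           /* assumed bit position */
          /* LFENCE is always serializing on this host. */
          if (static_cpu_has(X86_FEATURE_LFENCE_RDTSC))
                  entry->eax |= BIT(2);           /* assumed bit position */
          break;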

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 09:28:40 -04:00
Paolo Bonzini 58b3d12c0a KVM: x86: add support for CPUID leaf 0x80000021
CPUID leaf 0x80000021 defines some features (or lack of bugs) of AMD
processors.  Expose the ones that make sense via KVM_GET_SUPPORTED_CPUID.
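
A hypothetical userspace probe for the new leaf (a fragment: kvm_fd is an
open /dev/kvm descriptor, and the usual includes and error handling are
omitted):

  struct kvm_cpuid2 *cpuid;
  int i, nent = 128;

  cpuid = calloc(1, sizeof(*cpuid) + nent * sizeof(cpuid->entries[0]));
  cpuid->nent = nent;
  if (!ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid))
          for (i = 0; i < cpuid->nent; i++)
                  if (cpuid->entries[i].function == 0x80000021)
                          printf("0x80000021: eax=%#x\n", cpuid->entries[i].eax);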

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 09:28:40 -04:00
Maxim Levitsky bf07be36cd KVM: x86: do not use KVM_X86_OP_OPTIONAL_RET0 for get_mt_mask
KVM_X86_OP_OPTIONAL_RET0 can only be used with 32-bit return values on 32-bit
systems, because unsigned long is only 32 bits wide there and 64-bit values
are returned in edx:eax.
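
A contrived illustration of the constraint (not from the patch):

  /* The RET0 default behaves like a stub returning 'unsigned long',
   * which is 32 bits wide here, so only %eax is written: */
  static unsigned long ret0_stub(void) { return 0; }

  /* A hook with a u64 return type, such as get_mt_mask, is read back
   * from %edx:%eax, so the upper half would be whatever %edx happened
   * to contain -- hence no RET0 default for 64-bit returns on 32-bit. */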

Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 09:28:25 -04:00
Paolo Bonzini 873dd12217 Revert "KVM: x86/mmu: Zap only TDP MMU leafs in kvm_zap_gfn_range()"
This reverts commit cf3e26427c.

Multi-vCPU Hyper-V guests started crashing randomly on boot with the
latest kvm/queue and the problem can be bisected the problem to this
particular patch. Basically, I'm not able to boot e.g. 16-vCPU guest
successfully anymore. Both Intel and AMD seem to be affected. Reverting
the commit saves the day.

Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 05:11:51 -04:00
Paolo Bonzini fcb93eb6d0 kvm: x86/mmu: Flush TLB before zap_gfn_range releases RCU
Since "KVM: x86/mmu: Zap only TDP MMU leafs in kvm_zap_gfn_range()"
is going to be reverted, it's not going to be true anymore that
the zap-page flow does not free any 'struct kvm_mmu_page'.  Introduce
an early flush before tdp_mmu_zap_leafs() returns, to preserve
bisectability.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-21 05:11:51 -04:00
Borislav Petkov fe83f5eae4 kvm/emulate: Fix SETcc emulation function offsets with SLS
The commit in Fixes started adding INT3 after RETs as a mitigation
against straight-line speculation.

The fastop SETcc implementation in kvm's insn emulator uses macro magic
to generate all possible SETcc functions and to jump to them when
emulating the respective instruction.

However, it hardcodes the size and alignment of those functions to 4: a
three-byte SETcc insn and a single-byte RET. BUT, with SLS, there's an
INT3 that gets slapped after the RET, which brings the whole scheme out
of alignment:

  15:   0f 90 c0                seto   %al
  18:   c3                      ret
  19:   cc                      int3
  1a:   0f 1f 00                nopl   (%rax)
  1d:   0f 91 c0                setno  %al
  20:   c3                      ret
  21:   cc                      int3
  22:   0f 1f 00                nopl   (%rax)
  25:   0f 92 c0                setb   %al
  28:   c3                      ret
  29:   cc                      int3

and this explodes like this:

  int3: 0000 [#1] PREEMPT SMP PTI
  CPU: 0 PID: 2435 Comm: qemu-system-x86 Not tainted 5.17.0-rc8-sls #1
  Hardware name: Dell Inc. Precision WorkStation T3400  /0TP412, BIOS A14 04/30/2012
  RIP: 0010:setc+0x5/0x8 [kvm]
  Code: 00 00 0f 1f 00 0f b6 05 43 24 06 00 c3 cc 0f 1f 80 00 00 00 00 0f 90 c0 c3 cc 0f \
	  1f 00 0f 91 c0 c3 cc 0f 1f 00 0f 92 c0 c3 cc <0f> 1f 00 0f 93 c0 c3 cc 0f 1f 00 \
	  0f 94 c0 c3 cc 0f 1f 00 0f 95 c0
  Call Trace:
   <TASK>
   ? x86_emulate_insn [kvm]
   ? x86_emulate_instruction [kvm]
   ? vmx_handle_exit [kvm_intel]
   ? kvm_arch_vcpu_ioctl_run [kvm]
   ? kvm_vcpu_ioctl [kvm]
   ? __x64_sys_ioctl
   ? do_syscall_64
   ? entry_SYSCALL_64_after_hwframe
   </TASK>

Raise the alignment value when SLS is enabled and use a macro for that
instead of hard-coding naked numbers.
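
The fix amounts to something like the following (a sketch; the macro name
and values are illustrative, not the exact patch):

  /* One SETcc stub is a 3-byte setcc + 1-byte RET, plus a 1-byte INT3
   * when SLS padding is enabled, so size the per-stub slot accordingly
   * instead of hard-coding 4 everywhere. */
  #define SETCC_ALIGN     (4 << IS_ENABLED(CONFIG_SLS))   /* 4 or 8 bytes */

  /* Both the stub emitter and the jump-target computation then use the
   * macro, e.g.:
   *     ".align " __stringify(SETCC_ALIGN) " \n\t"
   *     target = setcc_stubs + (cond * SETCC_ALIGN);
   */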

Fixes: e463a09af2 ("x86: Add straight-line-speculation mitigation")
Reported-by: Jamie Heilman <jamie@audible.transient.net>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Jamie Heilman <jamie@audible.transient.net>
Link: https://lore.kernel.org/r/YjGzJwjrvxg5YZ0Z@audible.transient.net
[Add a comment and a bit of safety checking, since this is going to be changed
 again for IBT support. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-20 14:55:46 +01:00
Rafael J. Wysocki 31035f3e20 Merge branch 'thermal-hfi'
Merge Intel Hardware Feedback Interface (HFI) thermal driver for
5.18-rc1 and update the intel-speed-select utility to support that
driver.

* thermal-hfi:
  tools/power/x86/intel-speed-select: v1.12 release
  tools/power/x86/intel-speed-select: HFI support
  tools/power/x86/intel-speed-select: OOB daemon mode
  thermal: intel: hfi: INTEL_HFI_THERMAL depends on NET
  thermal: netlink: Fix parameter type of thermal_genl_cpu_capability_event() stub
  thermal: intel: hfi: Notify user space for HFI events
  thermal: netlink: Add a new event to notify CPU capabilities change
  thermal: intel: hfi: Enable notification interrupt
  thermal: intel: hfi: Handle CPU hotplug events
  thermal: intel: hfi: Minimally initialize the Hardware Feedback Interface
  x86/cpu: Add definitions for the Intel Hardware Feedback Interface
  x86/Documentation: Describe the Intel Hardware Feedback Interface
2022-03-18 19:00:26 +01:00
Rafael J. Wysocki dfad78e07e Merge branches 'pm-sleep', 'pm-domains' and 'pm-docs'
Merge changes related to system sleep, PM domains changes and power
management documentation changes for 5.18-rc1:

 - Fix load_image_and_restore() error path (Ye Bin).

 - Fix typos in comments in the system wakeup handling code (Tom Rix).

 - Clean up non-kernel-doc comments in hibernation code (Jiapeng
   Chong).

 - Fix __setup handler error handling in system-wide suspend and
   hibernation core code (Randy Dunlap).

 - Add device name to suspend_report_result() (Youngjin Jang).

 - Make virtual guests honour ACPI S4 hardware signature by
   default (David Woodhouse).

 - Block power off of a parent PM domain unless child is in deepest
   state (Ulf Hansson).

 - Use dev_err_probe() to simplify error handling for generic PM
   domains (Ahmad Fatoum).

 - Fix sleep-in-atomic bug caused by genpd_debug_remove() (Shawn Guo).

 - Document Intel uncore frequency scaling (Srinivas Pandruvada).

* pm-sleep:
  PM: hibernate: Honour ACPI hardware signature by default for virtual guests
  PM: sleep: Add device name to suspend_report_result()
  PM: suspend: fix return value of __setup handler
  PM: hibernate: fix __setup handler error handling
  PM: hibernate: Clean up non-kernel-doc comments
  PM: sleep: wakeup: Fix typos in comments
  PM: hibernate: fix load_image_and_restore() error path

* pm-domains:
  PM: domains: Fix sleep-in-atomic bug caused by genpd_debug_remove()
  PM: domains: use dev_err_probe() to simplify error handling
  PM: domains: Prevent power off for parent unless child is in deepest state

* pm-docs:
  Documentation: admin-guide: pm: Document uncore frequency scaling
2022-03-18 18:29:21 +01:00
Rafael J. Wysocki 24b2b094b5 Merge branches 'acpi-ec', 'acpi-cppc', 'acpi-fan' and 'acpi-battery'
Merge ACPI EC driver changes, CPPC-related changes, ACPI fan driver
changes and ACPI battery driver changes for 5.18-rc1:

 - Make wakeup events checks in the ACPI EC driver more
   straightforward and clean up acpi_ec_submit_event() (Rafael
   Wysocki).

 - Make it possible to obtain the CPU capacity with the help of CPPC
   information (Ionela Voinescu).

 - Improve fine grained fan control in the ACPI fan driver and
   document it (Srinivas Pandruvada).

 - Add device HID and quirk for Microsoft Surface Go 3 to the ACPI
   battery driver (Maximilian Luz).

* acpi-ec:
  ACPI: EC: Rearrange code in acpi_ec_submit_event()
  ACPI: EC: Reduce indentation level in acpi_ec_submit_event()
  ACPI: EC: Do not return result from advance_transaction()

* acpi-cppc:
  arm64, topology: enable use of init_cpu_capacity_cppc()
  arch_topology: obtain cpu capacity using information from CPPC
  x86, ACPI: rename init_freq_invariance_cppc() to arch_init_invariance_cppc()

* acpi-fan:
  Documentation/admin-guide/acpi: Add documentation for fine grain control
  ACPI: fan: Add additional attributes for fine grain control
  ACPI: fan: Properly handle fine grain control
  ACPI: fan: Optimize struct acpi_fan_fif
  ACPI: fan: Separate file for attributes creation
  ACPI: fan: Fix error reporting to user space

* acpi-battery:
  ACPI: battery: Add device HID and quirk for Microsoft Surface Go 3
2022-03-18 17:36:54 +01:00
Rafael J. Wysocki 03d5c98d91 Merge branches 'acpi-pm', 'acpi-properties', 'acpi-misc' and 'acpi-x86'
Merge ACPI power management changes, ACPI device properties handling
changes, x86-specific ACPI changes and miscellaneous ACPI changes for
5.18-rc1:

 - Add power management debug messages related to suspend-to-idle in
   two places (Rafael Wysocki).

 - Fix __acpi_node_get_property_reference() return value and clean up
   that function (Andy Shevchenko, Sakari Ailus).

 - Fix return value of the __setup handler in the ACPI PM timer clock
   source driver (Randy Dunlap).

 - Clean up double words in two comments (Tom Rix).

 - Add "skip i2c clients" quirks for Lenovo Yoga Tablet 1050F/L and
   Nextbook Ares 8 (Hans de Goede).

 - Clean up frequency invariance handling on x86 in the ACPI CPPC
   library (Huang Rui).

 - Work around broken XSDT on the Advantech DAC-BJ01 board (Mark
   Cilissen).

* acpi-pm:
  ACPI: EC / PM: Print additional debug message in acpi_ec_dispatch_gpe()
  ACPI: PM: Print additional debug message in acpi_s2idle_wake()

* acpi-properties:
  ACPI: property: Get rid of redundant 'else'
  ACPI: properties: Consistently return -ENOENT if there are no more references

* acpi-misc:
  clocksource: acpi_pm: fix return value of __setup handler
  ACPI: clean up double words in two comments

* acpi-x86:
  ACPI / x86: Work around broken XSDT on Advantech DAC-BJ01 board
  x86/ACPI: CPPC: Move init_freq_invariance_cppc() into x86 CPPC
  x86: Expose init_freq_invariance() to topology header
  x86/ACPI: CPPC: Move AMD maximum frequency ratio setting function into x86 CPPC
  x86/ACPI: CPPC: Rename cppc_msr.c to cppc.c
  ACPI / x86: Add skip i2c clients quirk for Lenovo Yoga Tablet 1050F/L
  ACPI / x86: Add skip i2c clients quirk for Nextbook Ares 8
2022-03-18 17:23:05 +01:00
Masami Hiramatsu 75caf33eda rethook: x86: Add rethook x86 implementation
Add rethook for x86 implementation. Most of the code has been copied from
kretprobes on x86.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735286243.1084943.7477055110527046644.stgit@devnote2
2022-03-17 20:16:35 -07:00
Jakub Kicinski e243f39685 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-17 13:56:58 -07:00
Hou Tao 73e14451f3 bpf, x86: Fall back to interpreter mode when extra pass fails
Extra pass for subprog jit may fail (e.g. due to bpf_jit_harden race),
but bpf_func is not cleared for the subprog and jit_subprogs will
succeed. Running the bpf program may then lead to an oops because the
memory for the jited subprog image has already been freed.

So fall back to interpreter mode by clearing bpf_func/jited/jited_len
when extra pass fails.
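
As a rough illustration of the fallback described above, a stand-in struct
mirroring the fields named in this commit (not the kernel's struct bpf_prog)
might be reset like this; treat it as a sketch only:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  struct subprog_sketch {
          void     *bpf_func;   /* JIT image entry point; NULL => interpreter */
          bool      jited;
          uint32_t  jited_len;
  };

  /* Undo the JIT state after a failed extra pass so the interpreter runs. */
  static void fall_back_to_interpreter(struct subprog_sketch *sp)
  {
          sp->bpf_func  = NULL;
          sp->jited     = false;
          sp->jited_len = 0;
  }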

Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220309123321.2400262-2-houtao1@huawei.com
2022-03-16 15:12:18 -07:00
David Woodhouse f6c46b1d62 PM: hibernate: Honour ACPI hardware signature by default for virtual guests
The ACPI specification says that OSPM should refuse to restore from
hibernate if the hardware signature changes, and should boot from
scratch. However, real BIOSes often vary the hardware signature in cases
where we *do* want to resume from hibernate, so Linux doesn't follow the
spec by default.

However, in a virtual environment there's no reason for the VMM to vary
the hardware signature *unless* it wants to trigger a clean reboot as
defined by the ACPI spec. So enable the check by default if a hypervisor
is detected.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-03-16 19:29:32 +01:00
Jiri Kosina d4c9df20a3 x86/nmi: Remove the 'strange power saving mode' hint from unknown NMI handler
The

  Do you have a strange power saving mode enabled?

hint when unknown NMI happens dates back to i386 stone age, and isn't
currently really helpful.

Unknown NMIs arrive for many different reasons (broken firmware,
faulty hardware, ...) and rarely have anything to do with a 'strange power
saving mode' (whatever that even is).

Just remove it, as it's largely misleading.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/nycvar.YFH.7.76.2203140924120.24795@cbobk.fhfr.pm
2022-03-16 11:02:41 +01:00
Greg Kroah-Hartman 7f220d4a38 Linux 5.17-rc8
-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAmIuUskeHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGCFkH/2n3mpGXuITp0ZXE
 TNrpbdZOof5SgLw+w7THswXuo6m5yRGNKQs9fvIvDD8Vf7/OdQQfPOmF1cIE5+nk
 wcz6aHKbdrok8Jql2qjJqWXZ5xbGj6qywg3zZrwOUsCKFP5p+AjBJcmZOsvQHjSp
 ASODy1moOlK+nO52TrMaJw74a8xQPmQiNa+T2P+FedEYjlcRH/c7hLJ7GEnL6+cC
 /R4bATZq3tiInbTBlkC0hR0iVNgRXwXNyv9PEXrYYYHnekh8G1mgSNf06iejLcsG
 aAYsW9NyPxu8zPhhHNx79K9o8BMtxGD4YQpsfdfIEnf9Q3euqAKe2evRWqHHlDms
 RuSCtsc=
 =M9Nc
 -----END PGP SIGNATURE-----

Merge tag 'v5.17-rc8' into usb-next

We need the Xen USB fixes as other patches depend on those changes.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-16 09:04:22 +01:00
jianchunfu 309b517276 arch:x86:xen: Remove unnecessary assignment in xen_apic_read()
In the function xen_apic_read(), the initial value of 'ret' is never used
because it is immediately overwritten with the return value of
HYPERVISOR_platform_op(), so remove the redundant initialization.

Signed-off-by: jianchunfu <jianchunfu@cmss.chinamobile.com>
Link: https://lore.kernel.org/r/20220314070514.2602-1-jianchunfu@cmss.chinamobile.com
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2022-03-15 20:35:35 -05:00
Peter Zijlstra b0ae33a2d2 usb: early: xhci-dbc: Remove duplicate keep parsing
The generic earlyprintk= parsing already parses the optional ",keep",
no need to duplicate that in the xdbc driver.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220304152135.975568860@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-15 18:20:34 +01:00
Peter Zijlstra 69f8aeab43 x86/tsc: Be consistent about use_tsc_delay()
Currently loops_per_jiffy is set in tsc_early_init(), but the code doesn't
then switch to delay_tsc, with the result that delay_loop is used with a
loops_per_jiffy value calibrated for delay_tsc.

Then in (late) tsc_init(), lpj_fine is set (which is mostly unused), after
which use_tsc_delay() is finally called.

Move both the loops_per_jiffy assignment and the use_tsc_delay() call into
tsc_enable_sched_clock(), which is called the moment tsc_khz is
determined, be it early or late, keeping the lot consistent.
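
A minimal sketch of the intended ordering, with simplified stand-in names
rather than the real tsc.c internals:

  #include <stdbool.h>

  struct delay_state {
          unsigned long loops_per_jiffy;  /* calibration used by the delay loop */
          bool          tsc_delay;        /* which delay implementation is active */
  };

  /* Calibrate and switch in one place, the moment tsc_khz is known, so the
   * TSC-based delay is never paired with a delay_loop-style calibration. */
  static void enable_tsc_sched_clock_sketch(struct delay_state *s,
                                            unsigned long tsc_khz, unsigned long hz)
  {
          s->loops_per_jiffy = (tsc_khz * 1000UL) / hz;  /* TSC cycles per jiffy */
          s->tsc_delay       = true;
  }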

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220304152135.914397165@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-15 18:20:33 +01:00
Ingo Molnar 9cea0d46f5 Merge branch 'x86/cpu' into x86/core, to resolve conflicts
Conflicts:
	arch/x86/include/asm/cpufeatures.h

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2022-03-15 12:52:51 +01:00
Ingo Molnar 8c490b42fe Merge branch 'x86/pasid' into x86/core, to resolve conflicts
Conflicts:
	tools/objtool/arch/x86/decode.c

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2022-03-15 12:50:59 +01:00
Nathan Chancellor aaeed6ecc1 x86/Kconfig: Do not allow CONFIG_X86_X32_ABI=y with llvm-objcopy
There are two outstanding issues with CONFIG_X86_X32_ABI and
llvm-objcopy, with similar root causes:

1. llvm-objcopy does not properly convert .note.gnu.property when going
   from x86_64 to x86_x32, resulting in a corrupted section when
   linking:

   https://github.com/ClangBuiltLinux/linux/issues/1141

2. llvm-objcopy produces corrupted compressed debug sections when going
   from x86_64 to x86_x32, also resulting in an error when linking:

   https://github.com/ClangBuiltLinux/linux/issues/514

After commit 41c5ef31ad71 ("x86/ibt: Base IBT bits"), the
.note.gnu.property section is always generated when
CONFIG_X86_KERNEL_IBT is enabled, which causes the first issue to become
visible with an allmodconfig build:

  ld.lld: error: arch/x86/entry/vdso/vclock_gettime-x32.o:(.note.gnu.property+0x1c): program property is too short

To avoid this error, do not allow CONFIG_X86_X32_ABI to be selected when
using llvm-objcopy. If the two issues ever get fixed in llvm-objcopy,
this can be turned into a feature check.

Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220314194842.3452-3-nathan@kernel.org
2022-03-15 10:32:48 +01:00
Masahiro Yamada 83a44a4f47 x86: Remove toolchain check for X32 ABI capability
Commit 0bf6276392 ("x32: Warn and disable rather than error if
binutils too old") added a small test in arch/x86/Makefile because
binutils 2.22 or newer is needed to properly support elf32-x86-64. This
check is no longer necessary, as the minimum supported version of
binutils is 2.23, which is enforced at configuration time with
scripts/min-tool-version.sh.

Remove this check and replace all uses of CONFIG_X86_X32 with
CONFIG_X86_X32_ABI, as two separate symbols are no longer necessary.

[nathan: Rebase, fix up a few places where CONFIG_X86_X32 was still
         used, and simplify commit message to satisfy -tip requirements]

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220314194842.3452-2-nathan@kernel.org
2022-03-15 10:32:48 +01:00
Peter Zijlstra ed53a0d971 x86/alternative: Use .ibt_endbr_seal to seal indirect calls
Objtool's --ibt option generates .ibt_endbr_seal, which lists
superfluous ENDBR instructions; that is, those instructions for which
the function is never indirectly called.

Overwrite these ENDBR instructions with a NOP4 such that these
functions can never be indirectly called, reducing the number of viable
ENDBR targets in the kernel.
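
A hedged, userspace-style sketch of the sealing step; the byte patterns are
standard x86 encodings, but the patching helper and the walk over
.ibt_endbr_seal are stand-ins, not kernel APIs:

  #include <stdint.h>
  #include <string.h>

  static const uint8_t endbr64_insn[4] = { 0xf3, 0x0f, 0x1e, 0xfa };  /* ENDBR64 */
  static const uint8_t nop4_insn[4]    = { 0x0f, 0x1f, 0x40, 0x00 };  /* 4-byte NOP */

  /* Overwrite an unreferenced ENDBR with a NOP so the function can no longer
   * serve as an indirect-call landing pad. */
  static void seal_endbr_sketch(uint8_t *site)
  {
          if (memcmp(site, endbr64_insn, sizeof(endbr64_insn)) == 0)
                  memcpy(site, nop4_insn, sizeof(nop4_insn));
  }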

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.822545231@infradead.org
2022-03-15 10:32:47 +01:00
Peter Zijlstra 89bc853eae objtool: Find unused ENDBR instructions
Find all ENDBR instructions which are never referenced and stick them
in a section such that the kernel can poison them, sealing the
functions from ever being an indirect call target.

This removes about 1-in-4 ENDBR instructions.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.763643193@infradead.org
2022-03-15 10:32:47 +01:00
Peter Zijlstra 3515899bef x86: Annotate idtentry_df()
Without CONFIG_X86_ESPFIX64 exc_double_fault() is noreturn and objtool
is clever enough to figure that out.

vmlinux.o: warning: objtool: asm_exc_double_fault()+0x22: unreachable instruction

0000000000001260 <asm_exc_double_fault>:
1260:       f3 0f 1e fa             endbr64
1264:       90                      nop
1265:       90                      nop
1266:       90                      nop
1267:       e8 84 03 00 00          call   15f0 <paranoid_entry>
126c:       48 89 e7                mov    %rsp,%rdi
126f:       48 8b 74 24 78          mov    0x78(%rsp),%rsi
1274:       48 c7 44 24 78 ff ff ff ff      movq   $0xffffffffffffffff,0x78(%rsp)
127d:       e8 00 00 00 00          call   1282 <asm_exc_double_fault+0x22> 127e: R_X86_64_PLT32    exc_double_fault-0x4
1282:       e9 09 04 00 00          jmp    1690 <paranoid_exit>

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/Yi9gOW9f1GGwwUD6@hirez.programming.kicks-ass.net
2022-03-15 10:32:45 +01:00
Peter Zijlstra dca5da2abe x86,objtool: Move the ASM_REACHABLE annotation to objtool.h
Because we need a variant for .S files too.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/Yi9gOW9f1GGwwUD6@hirez.programming.kicks-ass.net
2022-03-15 10:32:45 +01:00
Peter Zijlstra be0075951f x86: Annotate call_on_stack()
vmlinux.o: warning: objtool: page_fault_oops()+0x13c: unreachable instruction

0000 000000000005b460 <page_fault_oops>:
...
0128    5b588:  49 89 23                mov    %rsp,(%r11)
012b    5b58b:  4c 89 dc                mov    %r11,%rsp
012e    5b58e:  4c 89 f2                mov    %r14,%rdx
0131    5b591:  48 89 ee                mov    %rbp,%rsi
0134    5b594:  4c 89 e7                mov    %r12,%rdi
0137    5b597:  e8 00 00 00 00          call   5b59c <page_fault_oops+0x13c>    5b598: R_X86_64_PLT32   handle_stack_overflow-0x4
013c    5b59c:  5c                      pop    %rsp

vmlinux.o: warning: objtool: sysvec_reboot()+0x6d: unreachable instruction

0000 00000000000033f0 <sysvec_reboot>:
...
005d     344d:  4c 89 dc                mov    %r11,%rsp
0060     3450:  e8 00 00 00 00          call   3455 <sysvec_reboot+0x65>        3451: R_X86_64_PLT32    irq_enter_rcu-0x4
0065     3455:  48 89 ef                mov    %rbp,%rdi
0068     3458:  e8 00 00 00 00          call   345d <sysvec_reboot+0x6d>        3459: R_X86_64_PC32     .text+0x47d0c
006d     345d:  e8 00 00 00 00          call   3462 <sysvec_reboot+0x72>        345e: R_X86_64_PLT32    irq_exit_rcu-0x4
0072     3462:  5c                      pop    %rsp

Both cases are due to a call_on_stack() calling a __noreturn function.
Since that's an inline asm, GCC can't do anything about the
instructions after the CALL. Therefore put in an explicit
ASM_REACHABLE annotation to make sure objtool and gcc are consistently
confused about control flow.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.468805622@infradead.org
2022-03-15 10:32:44 +01:00
Peter Zijlstra f9cdf7ca57 x86: Mark stop_this_cpu() __noreturn
vmlinux.o: warning: objtool: smp_stop_nmi_callback()+0x2b: unreachable instruction

0000 0000000000047cf0 <smp_stop_nmi_callback>:
...
0026    47d16:  e8 00 00 00 00          call   47d1b <smp_stop_nmi_callback+0x2b>       47d17: R_X86_64_PLT32   stop_this_cpu-0x4
002b    47d1b:  b8 01 00 00 00          mov    $0x1,%eax

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.290905453@infradead.org
2022-03-15 10:32:43 +01:00
Peter Zijlstra 2b6ff7dea6 x86/ibt: Dont generate ENDBR in .discard.text
Having ENDBR in discarded sections can easily lead to relocations into
discarded sections which the linkers aren't really fond of. Objtool
also shouldn't generate them, but why tempt fate.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.054842742@infradead.org
2022-03-15 10:32:42 +01:00
Peter Zijlstra e8d61bdf0f x86/ibt,sev: Annotations
No IBT on AMD so far.. probably correct, who knows.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.995109889@infradead.org
2022-03-15 10:32:41 +01:00
Peter Zijlstra 3215de84c0 x86/ibt,ftrace: Annotate ftrace code patching
These are code patching sites, not indirect targets.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.936599479@infradead.org
2022-03-15 10:32:41 +01:00
Peter Zijlstra 3e3f069504 x86/ibt: Annotate text references
Annotate away some of the generic code references. This is things
where we take the address of a symbol for exception handling or return
addresses (eg. context switch).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.877758523@infradead.org
2022-03-15 10:32:40 +01:00
Peter Zijlstra fe379fa4d1 x86/ibt: Disable IBT around firmware
Assume firmware isn't IBT clean and disable it across calls.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.759989383@infradead.org
2022-03-15 10:32:40 +01:00
Peter Zijlstra 99c95c5d4f x86/alternative: Simplify int3_selftest_ip
Similar to ibt_selftest_ip, apply the same pattern.

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.700456643@infradead.org
2022-03-15 10:32:40 +01:00
Peter Zijlstra af22700390 x86/ibt,kexec: Disable CET on kexec
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.641454603@infradead.org
2022-03-15 10:32:39 +01:00
Peter Zijlstra 991625f3dd x86/ibt: Add IBT feature, MSR and #CP handling
The bits required to make the hardware go.. Of note is that, provided
the syscall entry points are covered with ENDBR, #CP doesn't need to
be an IST because we'll never hit the syscall gap.
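
For illustration only: enabling kernel IBT amounts to setting the ENDBR-enable
bit in the supervisor CET MSR. The MSR number and bit below are quoted from
memory of the SDM, and the helper is a bare ring-0 sketch, not the kernel's
MSR API:

  #include <stdint.h>

  #define MSR_IA32_S_CET_SKETCH  0x6a2          /* supervisor CET control (assumed) */
  #define CET_ENDBR_EN_SKETCH    (1ULL << 2)    /* enforce ENDBR at branch targets */

  /* WRMSR takes the MSR index in ECX and the value in EDX:EAX (ring 0 only). */
  static inline void wrmsr_sketch(uint32_t msr, uint64_t val)
  {
          asm volatile("wrmsr"
                       :: "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
  }

  static void enable_ibt_sketch(void)
  {
          wrmsr_sketch(MSR_IA32_S_CET_SKETCH, CET_ENDBR_EN_SKETCH);
  }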

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.582331711@infradead.org
2022-03-15 10:32:39 +01:00
Peter Zijlstra 5891271055 x86/ibt,bpf: Add ENDBR instructions to prologue and trampoline
With IBT enabled builds we need ENDBR instructions at indirect jump
target sites, since we start execution of the JIT'ed code through an
indirect jump, the very first instruction needs to be ENDBR.

Similarly, since eBPF tail-calls use indirect branches, their landing
site needs to be an ENDBR too.

The trampolines need similar adjustment.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Fixed-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.464998838@infradead.org
2022-03-15 10:32:38 +01:00
Peter Zijlstra cc66bb9145 x86/ibt,kprobes: Cure sym+0 equals fentry woes
In order to allow kprobes to skip the ENDBR instructions at sym+0 for
X86_KERNEL_IBT builds, change _kprobe_addr() to take an architecture
callback to inspect the function at hand and modify the offset if
needed.

This streamlines the existing interface to cover more cases and
require fewer hooks. Once PowerPC gets fully converted there will only
be the one arch hook.
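
A hedged sketch of the offset fixup such an arch callback performs; the
predicate is illustrative, while the real code inspects the instruction bytes
at sym+0:

  #include <stdbool.h>

  #define ENDBR_INSN_SIZE 4   /* ENDBR64 is a 4-byte instruction */

  /* If a probe was requested at sym+0 but the function begins with ENDBR,
   * nudge the probe past it so sym+0 and __fentry__ behave the same. */
  static unsigned long adjust_kprobe_offset_sketch(unsigned long offset,
                                                   bool starts_with_endbr)
  {
          if (offset == 0 && starts_with_endbr)
                  return ENDBR_INSN_SIZE;
          return offset;
  }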

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.405947704@infradead.org
2022-03-15 10:32:38 +01:00
Peter Zijlstra e52fc2cf3f x86/ibt,ftrace: Make function-graph play nice
Return trampoline must not use indirect branch to return; while this
preserves the RSB, it is fundamentally incompatible with IBT. Instead
use a retpoline-like ROP gadget that defeats IBT while not unbalancing
the RSB.

And since ftrace_stub is no longer a plain RET, don't use it to copy
from. Since RET is a trivial instruction, poke it directly.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.347296408@infradead.org
2022-03-15 10:32:37 +01:00
Peter Zijlstra aebfd12521 x86/ibt,ftrace: Search for __fentry__ location
Currently a lot of ftrace code assumes __fentry__ is at sym+0. However
with Intel IBT enabled the first instruction of a function will most
likely be ENDBR.

Change ftrace_location() to not only return the __fentry__ location
when called for the __fentry__ location, but also when called for the
sym+0 location.

Then audit/update all callsites of this function to consistently use
these new semantics.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.227581603@infradead.org
2022-03-15 10:32:37 +01:00
Peter Zijlstra 6649fa876d x86/ibt,kvm: Add ENDBR to fastops
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.168850084@infradead.org
2022-03-15 10:32:37 +01:00
Peter Zijlstra 214b9a83b6 x86/ibt,crypto: Add ENDBR for the jump-table entries
The code does:

	## branch into array
	mov     jump_table(,%rax,8), %bufp
	JMP_NOSPEC bufp

resulting in needing to mark the jump-table entries with ENDBR.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.110500806@infradead.org
2022-03-15 10:32:36 +01:00
Peter Zijlstra c3b037917c x86/ibt,paravirt: Sprinkle ENDBR
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.051635891@infradead.org
2022-03-15 10:32:36 +01:00
Peter Zijlstra c4691712b5 x86/linkage: Add ENDBR to SYM_FUNC_START*()
Ensure the ASM functions have ENDBR on for IBT builds; this follows
the ARM64 example. Unlike ARM64, we'll likely end up overwriting them
with poison.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.992708941@infradead.org
2022-03-15 10:32:36 +01:00
Peter Zijlstra 8f93402b92 x86/ibt,entry: Sprinkle ENDBR dust
Kernel entry points should have ENDBR on for IBT configs.

The SYSCALL entry points are found through taking their respective
address in order to program them in the MSRs, while the exception
entry points are found through UNWIND_HINT_IRET_REGS.

The rule is that any UNWIND_HINT_IRET_REGS at sym+0 should have an
ENDBR, see the later objtool ibt validation patch.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.933157479@infradead.org
2022-03-15 10:32:35 +01:00
Peter Zijlstra 5b2fc51576 x86/ibt,xen: Sprinkle the ENDBR
Even though Xen currently doesn't advertise IBT, prepare for when it
will eventually do so and sprinkle the ENDBR dust accordingly.

Even though most of the entry points are IRET like, the CPL0
Hypervisor can set WAIT-FOR-ENDBR and demand ENDBR at these sites.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.873919996@infradead.org
2022-03-15 10:32:35 +01:00
Peter Zijlstra 8b87d8cec1 x86/entry,xen: Early rewrite of restore_regs_and_return_to_kernel()
By doing an early rewrite of 'jmp native_iret' in
restore_regs_and_return_to_kernel() we can get rid of the last
INTERRUPT_RETURN user and paravirt_iret.

Suggested-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.815039833@infradead.org
2022-03-15 10:32:34 +01:00
Peter Zijlstra 6cf3e4c0d2 x86/entry: Cleanup PARAVIRT
Since commit 5c8f6a2e31 ("x86/xen: Add
xenpv_restore_regs_and_return_to_usermode()") Xen will no longer reach
this code and we can do away with the paravirt
SWAPGS/INTERRUPT_RETURN.

Suggested-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.756014488@infradead.org
2022-03-15 10:32:34 +01:00
Peter Zijlstra ba27d1a808 x86/ibt,paravirt: Use text_gen_insn() for paravirt_patch()
Less duplication is more better.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.697253958@infradead.org
2022-03-15 10:32:34 +01:00
Peter Zijlstra bbf92368b0 x86/text-patching: Make text_gen_insn() play nice with ANNOTATE_NOENDBR
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.638561109@infradead.org
2022-03-15 10:32:33 +01:00
Peter Zijlstra 156ff4a544 x86/ibt: Base IBT bits
Add Kconfig, Makefile and basic instruction support for x86 IBT.

(Ab)use __DISABLE_EXPORTS to disable IBT since it's already employed
to mark compressed and purgatory. Additionally mark realmode with it
as well to avoid inserting ENDBR instructions there. While ENDBR is
technically a NOP, inserting them was causing some grief due to code
growth. There's also a problem with using __noendbr in code compiled
without -fcf-protection=branch.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.519875203@infradead.org
2022-03-15 10:32:33 +01:00
Peter Zijlstra 537da1ed54 objtool,efi: Update __efi64_thunk annotation
The current annotation relies on not running objtool on the file; this
won't work when running objtool on vmlinux.o. Instead explicitly mark
__efi64_thunk() to be ignored.

This preserves the status quo, which is somewhat unfortunate. Luckily
this code is hardly ever used.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.402118218@infradead.org
2022-03-15 10:32:32 +01:00
Peter Zijlstra 599d66b847 Merge branch 'arm64/for-next/linkage'
Enjoy the cleanups and avoid conflicts vs linkage

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
2022-03-15 10:32:31 +01:00
Ingo Molnar ccdbf33c23 Linux 5.17-rc8
-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAmIuUskeHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGCFkH/2n3mpGXuITp0ZXE
 TNrpbdZOof5SgLw+w7THswXuo6m5yRGNKQs9fvIvDD8Vf7/OdQQfPOmF1cIE5+nk
 wcz6aHKbdrok8Jql2qjJqWXZ5xbGj6qywg3zZrwOUsCKFP5p+AjBJcmZOsvQHjSp
 ASODy1moOlK+nO52TrMaJw74a8xQPmQiNa+T2P+FedEYjlcRH/c7hLJ7GEnL6+cC
 /R4bATZq3tiInbTBlkC0hR0iVNgRXwXNyv9PEXrYYYHnekh8G1mgSNf06iejLcsG
 aAYsW9NyPxu8zPhhHNx79K9o8BMtxGD4YQpsfdfIEnf9Q3euqAKe2evRWqHHlDms
 RuSCtsc=
 =M9Nc
 -----END PGP SIGNATURE-----

Merge tag 'v5.17-rc8' into sched/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2022-03-15 10:28:12 +01:00
Will Deacon 563c463595 Merge branch 'for-next/linkage' into for-next/core
* for-next/linkage:
  arm64: module: remove (NOLOAD) from linker script
  linkage: remove SYM_FUNC_{START,END}_ALIAS()
  x86: clean up symbol aliasing
  arm64: clean up symbol aliasing
  linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}()
2022-03-14 19:01:05 +00:00
Linus Torvalds f0e18b03fc - Free shmem backing storage for SGX enclave pages when those are
swapped back into EPC memory
 
 - Prevent do_int3() from being kprobed, to avoid recursion
 
 - Remap setup_data and setup_indirect structures properly when accessing
 their members
 
 - Correct the alternatives patching order for modules too
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmItzJgACgkQEsHwGGHe
 VUqaow/8C115xuZEBn+iT+adcQxbqrg3S2en/Hq0aJOEhkNkbOhgAW0OWHvj7Gs3
 +2taD35MqzneEOfa0Gv46600V4+SV5K5NAFndr4PA2FVgIw01rEQios2oc4QSQBP
 PVJgvGyIMpN71ODKTiZ8w4ihp3J7MWDkCP1z4hbO/lfM4tOXcYzh2Lv1fE8hHr5b
 qFtPDyYgfEKUVFa+sv2sE1cJw670UFDcFqGAIjxUUm0r78GKmPz08gZm9YiBTJgV
 jrxySdpAh/eaPeHNfFH9RzAD2ZGZppgIkPCp33ZdrMEhnZmwLz7vc76BMbkD2P6w
 1fBmBZ5F8yOMaaLHSGh4Ek5Gs3p9DjmaZdEWwz+yiIe1RFLKyOQu6gsmGbAyuQx4
 KSfPFnfkOfw/7cz6BSp3Sh6zgrGPqloIVcHkWRth/LJZSV/fVgM8bPg3VLJP6WFi
 o4WTcNAq/fNMAmGwtIVpTUW/QJafXvOauKkDGQkMQ87U68QSh6uDrvrvMHPF8W+Y
 SPcYrdsAPagLxq0GCCQ6doSvBjWNTolXfTnfAoATZpae0URmrvu9ddgUbIlgeQWY
 n/rK+cKk+iuLTEZC55+v5OALwEMOM3Tuz4Ghko8re0pkD/kE61m3Az6w5sKN3Inc
 c21tvO/dxHhAnHV+34d2LM27PU4qoFdVO2mPup702x68XT+X0/g=
 =YLph
 -----END PGP SIGNATURE-----

Merge tag 'x86_urgent_for_v5.17_rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Borislav Petkov:

 - Free shmem backing storage for SGX enclave pages when those are
   swapped back into EPC memory

 - Prevent do_int3() from being kprobed, to avoid recursion

 - Remap setup_data and setup_indirect structures properly when
   accessing their members

 - Correct the alternatives patching order for modules too

* tag 'x86_urgent_for_v5.17_rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sgx: Free backing memory after faulting the enclave page
  x86/traps: Mark do_int3() NOKPROBE_SYMBOL
  x86/boot: Add setup_indirect support in early_memremap_is_setup_data()
  x86/boot: Fix memremap of setup_indirect structures
  x86/module: Fix the paravirt vs alternative order
2022-03-13 10:36:38 -07:00
Jarkko Sakkinen 08999b2489 x86/sgx: Free backing memory after faulting the enclave page
There is a limited amount of SGX memory (EPC) on each system.  When that
memory is used up, SGX has its own swapping mechanism which is similar
in concept but totally separate from the core mm/* code.  Instead of
swapping to disk, SGX swaps from EPC to normal RAM.  That normal RAM
comes from a shared memory pseudo-file and can itself be swapped by the
core mm code.  There is a hierarchy like this:

	EPC <-> shmem <-> disk

After data is swapped back in from shmem to EPC, the shmem backing
storage needs to be freed.  Currently, the backing shmem is not freed.
This effectively wastes the shmem while the enclave is running.  The
memory is recovered when the enclave is destroyed and the backing
storage freed.

Sort this out by freeing memory with shmem_truncate_range(), as soon as
a page is faulted back to the EPC.  In addition, free the memory for
PCMD pages as soon as all PCMD's in a page have been marked as unused
by zeroing its contents.
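
As a rough userspace analogue of "drop the shmem backing for a range once the
data lives elsewhere" (illustration only, not the SGX code): punching a hole
in a memfd releases the pages backing that range, much like truncating the
enclave's backing storage after the page is resident in EPC again:

  #define _GNU_SOURCE
  #include <fcntl.h>

  /* Release the shmem pages backing one range of a memfd-backed file. */
  static int drop_backing_range(int memfd, off_t offset, off_t len)
  {
          return fallocate(memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                           offset, len);
  }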

Cc: stable@vger.kernel.org
Fixes: 1728ab54b4 ("x86/sgx: Add a page reclaimer")
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20220303223859.273187-1-jarkko@kernel.org
2022-03-11 10:31:06 -08:00
Li Huafei a365a65f9c x86/traps: Mark do_int3() NOKPROBE_SYMBOL
Since kprobe_int3_handler() is called in do_int3(), probing do_int3()
can cause a breakpoint recursion and crash the kernel. Therefore,
do_int3() should be marked as NOKPROBE_SYMBOL.

Fixes: 21e28290b3 ("x86/traps: Split int3 handler up")
Signed-off-by: Li Huafei <lihuafei1@huawei.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20220310120915.63349-1-lihuafei1@huawei.com
2022-03-11 19:19:30 +01:00
David Gow f4f03f299a um: Cleanup syscall_handler_t definition/cast, fix warning
The syscall_handler_t type for x86_64 was defined as 'long (*)(void)',
but always cast to 'long (*)(long, long, long, long, long, long)' before
use. This now triggers a warning (see below).

Define syscall_handler_t as the latter instead, and remove the cast.
This simplifies the code, and fixes the warning.

Warning:
In file included from ../arch/um/include/asm/processor-generic.h:13
                 from ../arch/x86/um/asm/processor.h:41
                 from ../include/linux/rcupdate.h:30
                 from ../include/linux/rculist.h:11
                 from ../include/linux/pid.h:5
                 from ../include/linux/sched.h:14
                 from ../include/linux/ptrace.h:6
                 from ../arch/um/kernel/skas/syscall.c:7:
../arch/um/kernel/skas/syscall.c: In function ‘handle_syscall’:
../arch/x86/um/shared/sysdep/syscalls_64.h:18:11: warning: cast between incompatible function types from ‘long int (*)(void)’ to ‘long int (*)(long int,  long int,  long int,  long int,  long int,  long int)’ [
-Wcast-function-type]
   18 |         (((long (*)(long, long, long, long, long, long)) \
      |           ^
../arch/x86/um/asm/ptrace.h:36:62: note: in definition of macro ‘PT_REGS_SET_SYSCALL_RETURN’
   36 | #define PT_REGS_SET_SYSCALL_RETURN(r, res) (PT_REGS_AX(r) = (res))
      |                                                              ^~~
../arch/um/kernel/skas/syscall.c:46:33: note: in expansion of macro ‘EXECUTE_SYSCALL’
   46 |                                 EXECUTE_SYSCALL(syscall, regs));
      |                                 ^~~~~~~~~~~~~~~
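
In essence the change is this (names simplified from the commit text, not the
exact UML header): declare the handler with its real signature instead of
casting at every call site:

  /* Before: declared with no parameters and cast at each call site. */
  typedef long (*syscall_handler_old_sketch_t)(void);

  /* After: declared with the six syscall arguments, so no cast is needed. */
  typedef long (*syscall_handler_sketch_t)(long, long, long, long, long, long);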

Signed-off-by: David Gow <davidgow@google.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2022-03-11 10:48:03 +01:00
Yang Li 3bdd271bc8 um: Remove duplicated include in syscalls_64.c
Fix the following includecheck warning:
./arch/x86/um/syscalls_64.c: registers.h is included more than once.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Fixes: dbba7f704a ("um: stop polluting the namespace with registers.h contents")
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2022-03-11 10:41:08 +01:00
Jakub Kicinski 1e8a3f0d2a Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
net/dsa/dsa2.c
  commit afb3cc1a39 ("net: dsa: unlock the rtnl_mutex when dsa_master_setup() fails")
  commit e83d565378 ("net: dsa: replay master state events in dsa_tree_{setup,teardown}_master")
https://lore.kernel.org/all/20220307101436.7ae87da0@canb.auug.org.au/

drivers/net/ethernet/intel/ice/ice.h
  commit 97b0129146 ("ice: Fix error with handling of bonding MTU")
  commit 43113ff734 ("ice: add TTY for GNSS module for E810T device")
https://lore.kernel.org/all/20220310112843.3233bcf1@canb.auug.org.au/

drivers/staging/gdm724x/gdm_lte.c
  commit fc7f750dc9 ("staging: gdm724x: fix use after free in gdm_lte_rx()")
  commit 4bcc4249b4 ("staging: Use netif_rx().")
https://lore.kernel.org/all/20220308111043.1018a59d@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 17:16:56 -08:00
Eric W. Biederman 355f841a3f tracehook: Remove tracehook.h
Now that all of the definitions have moved out of tracehook.h into
ptrace.h, sched/signal.h, resume_user_mode.h there is nothing left in
tracehook.h so remove it.

Update the few files that were depending upon tracehook.h to bring in
definitions to use the headers they need directly.

Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20220309162454.123006-13-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-03-10 16:51:51 -06:00
Eric W. Biederman 8ba62d3794 task_work: Call tracehook_notify_signal from get_signal on all architectures
Always handle TIF_NOTIFY_SIGNAL in get_signal.  With commit 35d0b389f3
("task_work: unconditionally run task_work from get_signal()") always
calling task_work_run all of the work of tracehook_notify_signal is
already happening except clearing TIF_NOTIFY_SIGNAL.

Factor clear_notify_signal out of tracehook_notify_signal and use it in
get_signal so that get_signal only needs one call of task_work_run.

To keep the semantics in sync update xfer_to_guest_mode_work (which
does not call get_signal) to call tracehook_notify_signal if either
_TIF_SIGPENDING or _TIF_NOTIFY_SIGNAL.

Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20220309162454.123006-8-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-03-10 16:51:36 -06:00
Eric W. Biederman 8ca07e17c9 task_work: Remove unnecessary include from posix_timers.h
Break a header file circular dependency by removing the unnecessary
include of task_work.h from posix_timers.h.

sched.h -> posix-timers.h
posix-timers.h -> task_work.h
task_work.h -> sched.h

Add missing includes of task_work.h to:
arch/x86/mm/tlb.c
kernel/time/posix-cpu-timers.c

Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20220309162454.123006-6-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-03-10 13:38:01 -06:00
Ionela Voinescu 1132e6de11 x86, ACPI: rename init_freq_invariance_cppc() to arch_init_invariance_cppc()
init_freq_invariance_cppc() was called in acpi_cppc_processor_probe(),
after CPU performance information and controls were populated from the
per-cpu _CPC objects.

But these _CPC objects provide information that helps with both CPU
(u-arch) and frequency invariance. Therefore, change the function name
to a more generic one, while adding the arch_ prefix, as this function
is expected to be defined differently by different architectures.

Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-03-10 20:21:58 +01:00
Jiapeng Chong b359b3a029 x86/xen: Fix kerneldoc warning
Fix the following W=1 kernel warnings:

arch/x86/xen/setup.c:725: warning: expecting prototype for
machine_specific_memory_setup(). Prototype was for xen_memory_setup()
instead.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20220307062554.8334-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2022-03-10 09:27:55 -06:00
Dongli Zhang eed0574432 xen: delay xen_hvm_init_time_ops() if kdump is boot on vcpu>=32
The sched_clock() can be used very early since commit 857baa87b6
("sched/clock: Enable sched clock early"). In addition, with commit
38669ba205 ("x86/xen/time: Output xen sched_clock time from 0"), kdump
kernel in Xen HVM guest may panic at very early stage when accessing
&__this_cpu_read(xen_vcpu)->time as in below:

setup_arch()
 -> init_hypervisor_platform()
     -> x86_init.hyper.init_platform = xen_hvm_guest_init()
         -> xen_hvm_init_time_ops()
             -> xen_clocksource_read()
                 -> src = &__this_cpu_read(xen_vcpu)->time;

This is because Xen HVM supports at most MAX_VIRT_CPUS=32 'vcpu_info'
structures embedded inside 'shared_info' during the early stage, until
xen_vcpu_setup() is used to allocate/relocate 'vcpu_info' for the boot cpu
at an arbitrary address.

However, when Xen HVM guest panic on vcpu >= 32, since
xen_vcpu_info_reset(0) would set per_cpu(xen_vcpu, cpu) = NULL when
vcpu >= 32, xen_clocksource_read() on vcpu >= 32 would panic.

This patch calls xen_hvm_init_time_ops() again later in
xen_hvm_smp_prepare_boot_cpu() after the 'vcpu_info' for boot vcpu is
registered when the boot vcpu is >= 32.

This issue can be reproduced on purpose via below command at the guest
side when kdump/kexec is enabled:

"taskset -c 33 echo c > /proc/sysrq-trigger"

The bugfix for PVM is not implemented due to the lack of a testing
environment.

[boris: xen_hvm_init_time_ops() returns on errors instead of jumping to end]

Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20220302164032.14569-3-dongli.zhang@oracle.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2022-03-10 09:27:55 -06:00
Ross Philipson 445c1470b6 x86/boot: Add setup_indirect support in early_memremap_is_setup_data()
The x86 boot documentation describes the setup_indirect structures and
how they are used. Only one of the two functions in ioremap.c that needed
to be made aware of the setup_indirect functionality was updated. Add
comparable support to the other function, where it was missing.

Fixes: b3c72fc9a7 ("x86/boot: Introduce setup_indirect")
Signed-off-by: Ross Philipson <ross.philipson@oracle.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/1645668456-22036-3-git-send-email-ross.philipson@oracle.com
2022-03-09 12:49:46 +01:00
Ross Philipson 7228918b34 x86/boot: Fix memremap of setup_indirect structures
As documented, the setup_indirect structure is nested inside
the setup_data structures in the setup_data list. The code currently
accesses the fields inside the setup_indirect structure but only
the sizeof(struct setup_data) is being memremapped. No crash
occurred but this is just due to how the area is remapped under the
covers.

Properly memremap both the setup_data and setup_indirect structures
in these cases before accessing them.
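
A simplified sketch of the sizing issue (the struct below is a stand-in
modelled on the commit text, not the authoritative boot-protocol layout): the
mapping has to cover the header plus the payload it describes, not just the
header:

  #include <stddef.h>
  #include <stdint.h>

  /* Stand-in: fixed setup_data header followed by 'len' payload bytes. */
  struct setup_data_sketch {
          uint64_t next;
          uint32_t type;
          uint32_t len;
          uint8_t  data[];
  };

  /* Length that must be mapped before the payload (e.g. a nested
   * setup_indirect structure in data[]) can be safely accessed. */
  static size_t full_mapping_len(const struct setup_data_sketch *hdr)
  {
          return sizeof(*hdr) + hdr->len;
  }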

Fixes: b3c72fc9a7 ("x86/boot: Introduce setup_indirect")
Signed-off-by: Ross Philipson <ross.philipson@oracle.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/1645668456-22036-2-git-send-email-ross.philipson@oracle.com
2022-03-09 12:49:44 +01:00
Michael Kelley eeda29db98 x86/hyperv: Output host build info as normal Windows version number
Hyper-V provides host version number information that is output in
text form by a Linux guest when it boots. For whatever reason, the
formatting has historically been non-standard. Change it to output
in normal Windows version format for better readability.

Similar code for ARM64 guests already outputs in normal Windows
version format.

Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/1646767364-2234-1-git-send-email-mikelley@microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
2022-03-08 20:44:50 +00:00
Mark Cilissen e702196bf8 ACPI / x86: Work around broken XSDT on Advantech DAC-BJ01 board
On this board the ACPI RSDP structure points to both a RSDT and an XSDT,
but the XSDT points to a truncated FADT. This causes all sorts of trouble
and usually a complete failure to boot after the following error occurs:

  ACPI Error: Unsupported address space: 0x20 (*/hwregs-*)
  ACPI Error: AE_SUPPORT, Unable to initialize fixed events (*/evevent-*)
  ACPI: Unable to start ACPI Interpreter

This leaves the ACPI implementation in such a broken state that subsequent
kernel subsystem initialisations go wrong, resulting in among others
mismapped PCI memory, SATA and USB enumeration failures, and freezes.

As this is an older embedded platform that will likely never see any BIOS
updates to address this issue and its default shipping OS only complies with
ACPI 1.0, work around this by forcing `acpi=rsdt`. This patch, applied on
top of Linux 5.10.102, was confirmed on real hardware to fix the issue.

Signed-off-by: Mark Cilissen <mark@yotsuba.nl>
Cc: All applicable <stable@vger.kernel.org>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-03-08 19:52:22 +01:00
Huang Rui eb5616d4ad x86/ACPI: CPPC: Move init_freq_invariance_cppc() into x86 CPPC
The init_freq_invariance_cppc code doesn't actually need the SMP
functionality, so using CONFIG_SMP as the check condition for
init_freq_invariance_cppc only invites confusion about CPPC. The x86 CPPC
file is also a better place to keep the CPPC-related functions; with
init_freq_invariance_cppc moved out of smpboot, CONFIG_SMP is no longer a
mandatory condition, which makes things clearer than before.

Signed-off-by: Huang Rui <ray.huang@amd.com>
[ rjw: Subject adjustment ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-03-08 19:16:43 +01:00
Huang Rui 666f6ecf35 x86: Expose init_freq_invariance() to topology header
The function init_freq_invariance() will be used by the x86 CPPC code, so
expose it in the topology header.

Signed-off-by: Huang Rui <ray.huang@amd.com>
[ rjw: Subject adjustment ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-03-08 19:16:43 +01:00
Huang Rui 82d8936914 x86/ACPI: CPPC: Move AMD maximum frequency ratio setting function into x86 CPPC
The AMD maximum frequency ratio setting function depends on CPPC, so the
x86 CPPC implementation file is a better place for this function.

Signed-off-by: Huang Rui <ray.huang@amd.com>
[ rjw: Subject adjustment ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-03-08 19:16:43 +01:00
Huang Rui fd8af343a2 x86/ACPI: CPPC: Rename cppc_msr.c to cppc.c
Rename cppc_msr.c to cppc.c in the x86 ACPI code, since this file is
expected to cover more ACPI CPPC function implementations beyond the MSR
helpers. Naming it "cppc" is more straightforward for one of the
functionalities under the ACPI subsystem.

Signed-off-by: Huang Rui <ray.huang@amd.com>
[ rjw: Subject ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-03-08 19:16:43 +01:00
Suravee Suthikulpanit 4a204f7895 KVM: SVM: Allow AVIC support on system w/ physical APIC ID > 255
Expand KVM's mask for the AVIC host physical ID to the full 12 bits defined
by the architecture.  The number of bits consumed by hardware is model
specific, e.g. early CPUs ignored bits 11:8, but there is no way for KVM
to enumerate the "true" size.  So, KVM must allow using all bits, else it
risks rejecting completely legal x2APIC IDs on newer CPUs.

This means KVM relies on hardware to not assign x2APIC IDs that exceed the
"true" width of the field, but presumably hardware is smart enough to tie
the width to the max x2APIC ID.  KVM also relies on hardware to support at
least 8 bits, as the legacy xAPIC ID is writable by software.  But, those
assumptions are unavoidable due to the lack of any way to enumerate the
"true" width.

Cc: stable@vger.kernel.org
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Fixes: 44a95dae1d ("KVM: x86: Detect and Initialize AVIC support")
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Message-Id: <20220211000851.185799-1-suravee.suthikulpanit@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 10:59:12 -05:00
Sean Christopherson 396fd74d61 KVM: x86/mmu: WARN on any attempt to atomically update REMOVED SPTE
Disallow calling tdp_mmu_set_spte_atomic() with a REMOVED "old" SPTE.
This solves a conundrum introduced by commit 3255530ab1 ("KVM: x86/mmu:
Automatically update iter->old_spte if cmpxchg fails"); if the helper
doesn't update old_spte in the REMOVED case, then theoretically the
caller could get stuck in an infinite loop as it will fail indefinitely
on the REMOVED SPTE.  E.g. until recently, clear_dirty_gfn_range() didn't
check for a present SPTE and would have spun until getting rescheduled.

In practice, only the page fault path should "create" a new SPTE, all
other paths should only operate on existing, a.k.a. shadow present,
SPTEs.  Now that the page fault path pre-checks for a REMOVED SPTE in all
cases, require all other paths to indirectly pre-check by verifying the
target SPTE is a shadow-present SPTE.

Note, this does not guarantee the actual SPTE isn't REMOVED, nor is that
scenario disallowed.  The invariant is only that the caller mustn't
invoke tdp_mmu_set_spte_atomic() if the SPTE was REMOVED when last
observed by the caller.
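
A self-contained illustration of the invariant (sentinel value and names are
invented for the sketch; this is not KVM's tdp_mmu code): the atomic-update
helper refuses a REMOVED "old" value, so callers must pre-check for it:

  #include <assert.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define REMOVED_SPTE_SKETCH ((uint64_t)0x5a0)  /* stand-in sentinel value */

  /* cmpxchg-style update that enforces "old must never be REMOVED". */
  static bool set_spte_atomic_sketch(_Atomic uint64_t *sptep,
                                     uint64_t old_spte, uint64_t new_spte)
  {
          assert(old_spte != REMOVED_SPTE_SKETCH);  /* the WARN described above */
          return atomic_compare_exchange_strong(sptep, &old_spte, new_spte);
  }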

Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220226001546.360188-25-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 10:59:10 -05:00
Sean Christopherson 58298b0681 KVM: x86/mmu: Check for a REMOVED leaf SPTE before making the SPTE
Explicitly check for a REMOVED leaf SPTE prior to attempting to map
the final SPTE when handling a TDP MMU fault.  Functionally, this is a
nop as tdp_mmu_set_spte_atomic() will eventually detect the frozen SPTE.
Pre-checking for a REMOVED SPTE is a minor optimization, but the real goal
is to allow tdp_mmu_set_spte_atomic() to have an invariant that the "old"
SPTE is never a REMOVED SPTE.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-24-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 10:59:09 -05:00
Paolo Bonzini efd995dae5 KVM: x86/mmu: Zap defunct roots via asynchronous worker
Zap defunct roots, a.k.a. roots that have been invalidated after their
last reference was initially dropped, asynchronously via the existing work
queue instead of forcing the work upon the unfortunate task that happened
to drop the last reference.

If a vCPU task drops the last reference, the vCPU is effectively blocked
by the host for the entire duration of the zap.  If the root being zapped
happens be fully populated with 4kb leaf SPTEs, e.g. due to dirty logging
being active, the zap can take several hundred seconds.  Unsurprisingly,
most guests are unhappy if a vCPU disappears for hundreds of seconds.

E.g. running a synthetic selftest that triggers a vCPU root zap with
~64tb of guest memory and 4kb SPTEs blocks the vCPU for 900+ seconds.
Offloading the zap to a worker drops the block time to <100ms.

There is an important nuance to this change.  If the same work item
was queued twice before the work function has run, it would only
execute once and one reference would be leaked.  Therefore, now that
queueing and flushing items is not anymore protected by kvm->slots_lock,
kvm_tdp_mmu_invalidate_all_roots() has to check root->role.invalid and
skip already invalid roots.  On the other hand, kvm_mmu_zap_all_fast()
must return only after those skipped roots have been zapped as well.
These two requirements can be satisfied only if _all_ places that
change invalid to true now schedule the worker before releasing the
mmu_lock.  There are just two, kvm_tdp_mmu_put_root() and
kvm_tdp_mmu_invalidate_all_roots().

Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-23-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 10:57:11 -05:00
Sean Christopherson 1b6043e8e5 KVM: x86/mmu: Zap roots in two passes to avoid inducing RCU stalls
When zapping a TDP MMU root, perform the zap in two passes to avoid
zapping an entire top-level SPTE while holding RCU, which can induce RCU
stalls.  In the first pass, zap SPTEs at PG_LEVEL_1G, and then
zap top-level entries in the second pass.

With 4-level paging, zapping a PGD that is fully populated with 4kb leaf
SPTEs takes up to ~7 or so seconds (time varies based on kernel config,
number of (v)CPUs, etc...).  With 5-level paging, that time can balloon
well into hundreds of seconds.

Before remote TLB flushes were omitted, the problem was even worse as
waiting for all active vCPUs to respond to the IPI introduced significant
overhead for VMs with large numbers of vCPUs.

By zapping 1gb SPTEs (both shadow pages and hugepages) in the first pass,
the amount of work that is done without dropping RCU protection is
strictly bounded, with the worst case latency for a single operation
being less than 100ms.

Zapping at 1gb in the first pass is not arbitrary.  First and foremost,
KVM relies on being able to zap 1gb shadow pages in a single shot when
replacing a shadow page with a hugepage.  Zapping a 1gb shadow page
that is fully populated with 4kb dirty SPTEs also triggers the worst case
latency due to writing back the struct page accessed/dirty bits for each
4kb page, i.e. the two-pass approach is guaranteed to work so long as KVM
can cleanly zap a 1gb shadow page.

  rcu: INFO: rcu_sched self-detected stall on CPU
  rcu:     52-....: (20999 ticks this GP) idle=7be/1/0x4000000000000000
                                          softirq=15759/15759 fqs=5058
   (t=21016 jiffies g=66453 q=238577)
  NMI backtrace for cpu 52
  Call Trace:
   ...
   mark_page_accessed+0x266/0x2f0
   kvm_set_pfn_accessed+0x31/0x40
   handle_removed_tdp_mmu_page+0x259/0x2e0
   __handle_changed_spte+0x223/0x2c0
   handle_removed_tdp_mmu_page+0x1c1/0x2e0
   __handle_changed_spte+0x223/0x2c0
   handle_removed_tdp_mmu_page+0x1c1/0x2e0
   __handle_changed_spte+0x223/0x2c0
   zap_gfn_range+0x141/0x3b0
   kvm_tdp_mmu_zap_invalidated_roots+0xc8/0x130
   kvm_mmu_zap_all_fast+0x121/0x190
   kvm_mmu_invalidate_zap_pages_in_memslot+0xe/0x10
   kvm_page_track_flush_slot+0x5c/0x80
   kvm_arch_flush_shadow_memslot+0xe/0x10
   kvm_set_memslot+0x172/0x4e0
   __kvm_set_memory_region+0x337/0x590
   kvm_vm_ioctl+0x49c/0xf80
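
The underlying idea, sketched in a self-contained way (invented names, not the
KVM iterator): tear the structure down in bounded chunks so no single
RCU-protected critical section runs for an unbounded time:

  #include <stddef.h>

  #define ZAP_CHUNK 512   /* arbitrary bound for the sketch */

  /* Zap nr_entries items, pausing at chunk boundaries so the caller can drop
   * RCU protection and/or reschedule before continuing. */
  static void zap_in_bounded_chunks(size_t nr_entries,
                                    void (*zap_one)(size_t idx),
                                    void (*breathe)(void))
  {
          for (size_t i = 0; i < nr_entries; i++) {
                  zap_one(i);
                  if ((i + 1) % ZAP_CHUNK == 0)
                          breathe();
          }
  }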

Reported-by: David Matlack <dmatlack@google.com>
Cc: Ben Gardon <bgardon@google.com>
Cc: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-22-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 10:57:09 -05:00
Paolo Bonzini 8351779ce6 KVM: x86/mmu: Allow yielding when zapping GFNs for defunct TDP MMU root
Allow yielding when zapping SPTEs after the last reference to a valid
root is put.  Because KVM must drop all SPTEs in response to relevant
mmu_notifier events, mark defunct roots invalid and reset their refcount
prior to zapping the root.  Keeping the refcount elevated while the zap
is in-progress ensures the root is reachable via mmu_notifier until the
zap completes and the last reference to the invalid, defunct root is put.

Allowing kvm_tdp_mmu_put_root() to yield fixes soft lockup issues if the
root being put has a massive paging structure, e.g. zapping a root
that is backed entirely by 4kb pages for a guest with 32tb of memory can
take hundreds of seconds to complete.

  watchdog: BUG: soft lockup - CPU#49 stuck for 485s! [max_guest_memor:52368]
  RIP: 0010:kvm_set_pfn_dirty+0x30/0x50 [kvm]
   __handle_changed_spte+0x1b2/0x2f0 [kvm]
   handle_removed_tdp_mmu_page+0x1a7/0x2b8 [kvm]
   __handle_changed_spte+0x1f4/0x2f0 [kvm]
   handle_removed_tdp_mmu_page+0x1a7/0x2b8 [kvm]
   __handle_changed_spte+0x1f4/0x2f0 [kvm]
   tdp_mmu_zap_root+0x307/0x4d0 [kvm]
   kvm_tdp_mmu_put_root+0x7c/0xc0 [kvm]
   kvm_mmu_free_roots+0x22d/0x350 [kvm]
   kvm_mmu_reset_context+0x20/0x60 [kvm]
   kvm_arch_vcpu_ioctl_set_sregs+0x5a/0xc0 [kvm]
   kvm_vcpu_ioctl+0x5bd/0x710 [kvm]
   __se_sys_ioctl+0x77/0xc0
   __x64_sys_ioctl+0x1d/0x20
   do_syscall_64+0x44/0xa0
   entry_SYSCALL_64_after_hwframe+0x44/0xae

KVM currently doesn't put a root from a non-preemptible context, so other
than the mmu_notifier wrinkle, yielding when putting a root is safe.

Yield-unfriendly iteration uses for_each_tdp_mmu_root(), which doesn't
take a reference to each root (it requires mmu_lock be held for the
entire duration of the walk).

tdp_mmu_next_root() is used only by the yield-friendly iterator.

tdp_mmu_zap_root_work() is explicitly yield friendly.

kvm_mmu_free_roots() => mmu_free_root_page() is a much bigger fan-out,
but is still yield-friendly in all call sites, as all callers can be
traced back to some combination of vcpu_run(), kvm_destroy_vm(), and/or
kvm_create_vm().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220226001546.360188-21-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 10:57:09 -05:00
Paolo Bonzini 22b94c4b63 KVM: x86/mmu: Zap invalidated roots via asynchronous worker
Use the system worker threads to zap the roots invalidated
by the TDP MMU's "fast zap" mechanism, implemented by
kvm_tdp_mmu_invalidate_all_roots().

At this point, apart from allowing some parallelism in the zapping of
roots, the workqueue is a glorified linked list: work items are added and
flushed entirely within a single kvm->slots_lock critical section.  However,
the workqueue fixes a latent issue where kvm_mmu_zap_all_invalidated_roots()
assumes that it owns a reference to all invalid roots; therefore, no
one can set the invalid bit outside kvm_mmu_zap_all_fast().  Putting the
invalidated roots on a linked list... erm, on a workqueue ensures that
tdp_mmu_zap_root_work() only puts back those extra references that
kvm_mmu_zap_all_invalidated_roots() had gifted to it.
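
A rough sketch of the pattern (illustrative only; the work/workqueue
fields are assumed): one work item is queued per invalidated root, and
everything is flushed while kvm->slots_lock is still held.

  static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
  {
          root->tdp_mmu_async_data = kvm;
          INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
          queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
  }

  /* ... later, still under kvm->slots_lock ... */
  flush_workqueue(kvm->arch.tdp_mmu_zap_wq);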

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 10:55:27 -05:00
Sean Christopherson bb95dfb9e2 KVM: x86/mmu: Defer TLB flush to caller when freeing TDP MMU shadow pages
Defer TLB flushes to the caller when freeing TDP MMU shadow pages instead
of immediately flushing.  Because the shadow pages are freed in an RCU
callback, so long as at least one CPU holds RCU, all CPUs are protected.
For vCPUs running in the guest, i.e. consuming TLB entries, KVM only
needs to ensure the caller services the pending TLB flush before dropping
its RCU protections.  I.e. use the caller's RCU as a proxy for all vCPUs
running in the guest.

Deferring the flushes allows batching flushes, e.g. when installing a
1gb hugepage and zapping a pile of SPs.  And when zapping an entire root,
deferring flushes allows skipping the flush entirely (because flushes are
not needed in that case).

Avoiding flushes when zapping an entire root is especially important as
synchronizing with other CPUs via IPI after zapping every shadow page can
cause significant performance issues for large VMs.  The issue is
exacerbated by KVM zapping entire top-level entries without dropping
RCU protection, which can lead to RCU stalls even when zapping roots
backing relatively "small" amounts of guest memory, e.g. 2tb.  Removing
the IPI bottleneck largely mitigates the RCU issues, though it's likely
still a problem for 5-level paging.  A future patch will further address
the problem by zapping roots in multiple passes to avoid holding RCU for
an extended duration.
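
The batching pattern, as a sketch (kvm_flush_remote_tlbs() is the real
helper; the zap helper below is a placeholder):

  bool flush = false;

  /* zap helpers report whether a flush is needed instead of flushing */
  flush |= tdp_mmu_zap_sp(kvm, sp);       /* placeholder helper */
  flush |= tdp_mmu_zap_sp(kvm, other_sp); /* ...batch as many as needed */

  if (flush)
          kvm_flush_remote_tlbs(kvm);     /* one flush for the whole batch */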

Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220226001546.360188-20-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:57 -05:00
Sean Christopherson bd29677952 KVM: x86/mmu: Do remote TLB flush before dropping RCU in TDP MMU resched
When yielding in the TDP MMU iterator, service any pending TLB flush
before dropping RCU protections in anticipation of using the caller's RCU
"lock" as a proxy for vCPUs in the guest.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-19-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:56 -05:00
Sean Christopherson cf3e26427c KVM: x86/mmu: Zap only TDP MMU leafs in kvm_zap_gfn_range()
Zap only leaf SPTEs in the TDP MMU's zap_gfn_range(), and rename various
functions accordingly.  When removing mappings for functional correctness
(except for the stupid VFIO GPU passthrough memslots bug), zapping the
leaf SPTEs is sufficient as the paging structures themselves do not point
at guest memory and do not directly impact the final translation (in the
TDP MMU).

Note, this aligns the TDP MMU with the legacy/full MMU, which zaps only
the rmaps, a.k.a. leaf SPTEs, in kvm_zap_gfn_range() and
kvm_unmap_gfn_range().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-18-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:56 -05:00
Sean Christopherson acbda82a81 KVM: x86/mmu: Require mmu_lock be held for write to zap TDP MMU range
Now that all callers of zap_gfn_range() hold mmu_lock for write, drop
support for zapping with mmu_lock held for read.  That all callers hold
mmu_lock for write isn't a random coincidence; now that the paths that
need to zap _everything_ have their own path, the only callers left are
those that need to zap for functional correctness.  And when zapping is
required for functional correctness, mmu_lock must be held for write,
otherwise the caller has no guarantees about the state of the TDP MMU
page tables after it has run, e.g. the SPTE(s) it zapped can be
immediately replaced by a vCPU faulting in a page.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-17-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:55 -05:00
Sean Christopherson e2b5b21d3a KVM: x86/mmu: Add dedicated helper to zap TDP MMU root shadow page
Add a dedicated helper for zapping a TDP MMU root, and use it in the three
flows that do "zap_all" and intentionally do not do a TLB flush if SPTEs
are zapped (zapping an entire root is safe if and only if it cannot be in
use by any vCPU).  Because a TLB flush is never required, unconditionally
pass "false" to tdp_mmu_iter_cond_resched() when potentially yielding.

Opportunistically document why KVM must not yield when zapping roots that
are being zapped by kvm_tdp_mmu_put_root(), i.e. roots whose refcount has
reached zero, and further harden the flow to detect improper KVM behavior
with respect to roots that are supposed to be unreachable.

In addition to hardening zapping of roots, isolating zapping of roots
will allow future simplification of zap_gfn_range() by having it zap only
leaf SPTEs, and by removing its tricky "zap all" heuristic.  By having
all paths that truly need to free _all_ SPs flow through the dedicated
root zapper, the generic zapper can be freed of those concerns.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-16-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:55 -05:00
Sean Christopherson 77c8cd6b85 KVM: x86/mmu: Skip remote TLB flush when zapping all of TDP MMU
Don't flush the TLBs when zapping all TDP MMU pages, as the only time KVM
uses the slow version of "zap everything" is when the VM is being
destroyed or the owning mm has exited.  In either case, KVM_RUN is
unreachable for the VM, i.e. the guest TLB entries cannot be consumed.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-15-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:54 -05:00
Sean Christopherson c10743a182 KVM: x86/mmu: Zap only the target TDP MMU shadow page in NX recovery
When recovering a potential hugepage that was shattered for the iTLB
multihit workaround, precisely zap only the target page instead of
iterating over the TDP MMU to find the SP that was passed in.  This will
allow future simplification of zap_gfn_range() by having it zap only
leaf SPTEs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220226001546.360188-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:54 -05:00
Sean Christopherson 626808d137 KVM: x86/mmu: Refactor low-level TDP MMU set SPTE helper to take raw values
Refactor __tdp_mmu_set_spte() to work with raw values instead of a
tdp_iter objects so that a future patch can modify SPTEs without doing a
walk, and without having to synthesize a tdp_iter.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-13-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:53 -05:00
Sean Christopherson 966da62ada KVM: x86/mmu: WARN if old _or_ new SPTE is REMOVED in non-atomic path
WARN if the new_spte being set by __tdp_mmu_set_spte() is a REMOVED_SPTE,
which is called out by the comment as being disallowed but not actually
checked.  Keep the WARN on the old_spte as well, because overwriting a
REMOVED_SPTE in the non-atomic path is also disallowed (as evidenced by
the lack of splats with the existing WARN).
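
Sketch of the strengthened assertion (illustrative; treat the helper name
as an assumption):

  WARN_ON_ONCE(is_removed_spte(old_spte) || is_removed_spte(new_spte));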

Fixes: 08f07c800e ("KVM: x86/mmu: Flush TLBs after zap in TDP MMU PF handler")
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-12-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:53 -05:00
Sean Christopherson 0e587aa733 KVM: x86/mmu: Add helpers to read/write TDP MMU SPTEs and document RCU
Add helpers to read and write TDP MMU SPTEs instead of open coding
rcu_dereference() all over the place, and to provide a convenient
location to document why KVM doesn't exempt holding mmu_lock for write
from having to hold RCU (and any future changes to the rules).
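
A minimal sketch of such accessors (the names and the tdp_ptep_t typedef
are assumptions, not necessarily the exact helpers added by the patch):

  typedef u64 __rcu *tdp_ptep_t;

  static inline u64 tdp_mmu_read_spte(tdp_ptep_t sptep)
  {
          return READ_ONCE(*rcu_dereference(sptep));
  }

  static inline void tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
  {
          WRITE_ONCE(*rcu_dereference(sptep), new_spte);
  }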

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-11-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:52 -05:00
Sean Christopherson a151aceca1 KVM: x86/mmu: Drop RCU after processing each root in MMU notifier hooks
Drop RCU protection after processing each root when handling MMU notifier
hooks that aren't the "unmap" path, i.e. aren't zapping.  Temporarily
drop RCU to let RCU do its thing between roots, and to make it clear that
there's no special behavior that relies on holding RCU across all roots.

Currently, the RCU protection is completely superficial; it's necessary
only to make rcu_dereference() of SPTE pointers happy.  A future patch
will rely on holding RCU as a proxy for vCPUs in the guest, e.g. to
ensure shadow pages aren't freed before all vCPUs do a TLB flush (or
rather, acknowledge the need for a flush), but in that case RCU needs to
be held until the flush is complete if and only if the flush is needed
because a shadow page may have been removed.  And except for the "unmap"
path, MMU notifier events cannot remove SPs (don't toggle PRESENT bit,
and can't change the PFN for a SP).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-10-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:52 -05:00
Sean Christopherson 93fa50f644 KVM: x86/mmu: Batch TLB flushes from TDP MMU for MMU notifier change_spte
Batch TLB flushes (with other MMUs) when handling ->change_spte()
notifications in the TDP MMU.  The MMU notifier path in question doesn't
allow yielding and correctly flushes before dropping mmu_lock.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-9-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:51 -05:00
Sean Christopherson c8e5a0d0e9 KVM: x86/mmu: Check for !leaf=>leaf, not PFN change, in TDP MMU SP removal
Look for a !leaf=>leaf conversion instead of a PFN change when checking
if a SPTE change removed a TDP MMU shadow page.  Convert the PFN check
into a WARN, as KVM should never change the PFN of a shadow page (except
when it's being zapped or replaced).

From a purely theoretical perspective, it's not illegal to replace a SP
with a hugepage pointing at the same PFN.  In practice, it's impossible
as that would require mapping guest memory on top of a kernel-allocated SP.
Either way, the check is odd.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-8-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:51 -05:00
Paolo Bonzini 614f6970aa KVM: x86/mmu: do not allow readers to acquire references to invalid roots
Remove the "shared" argument of for_each_tdp_mmu_root_yield_safe, thus ensuring
that readers do not ever acquire a reference to an invalid root.  After this
patch, all readers except kvm_tdp_mmu_zap_invalidated_roots() treat
refcount=0/valid, refcount=0/invalid and refcount=1/invalid in exactly the
same way.  kvm_tdp_mmu_zap_invalidated_roots() is different but it also
does not acquire a reference to the invalid root, and it cannot see
refcount=0/invalid because it is guaranteed to run after
kvm_tdp_mmu_invalidate_all_roots().

Opportunistically add a lockdep assertion to the yield-safe iterator.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:50 -05:00
Paolo Bonzini 7c554d8e51 KVM: x86/mmu: only perform eager page splitting on valid roots
Eager page splitting is an optimization; it does not have to be performed on
invalid roots.  It is also the only case in which a reader might acquire
a reference to an invalid root, so after this change we know that readers
will skip both dying and invalid roots.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:50 -05:00
Sean Christopherson 226b8c8f85 KVM: x86/mmu: Require mmu_lock be held for write in unyielding root iter
Assert that mmu_lock is held for write by users of the yield-unfriendly
TDP iterator.  The nature of a shared walk means that the caller needs to
play nice with other tasks modifying the page tables, which is more or
less the same thing as playing nice with yielding.  Theoretically, KVM
could gain a flow where it could legitimately take mmu_lock for read in
a non-preemptible context, but that's highly unlikely and any such case
should be viewed with a fair amount of scrutiny.
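
The assertion itself is a one-liner; a sketch of what lands in the
yield-unfriendly iterator (placement is illustrative):

  lockdep_assert_held_write(&kvm->mmu_lock);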

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:47 -05:00
Sean Christopherson 7ae5840e6f KVM: x86/mmu: Document that zapping invalidated roots doesn't need to flush
Remove the misleading flush "handling" when zapping invalidated TDP MMU
roots, and document that flushing is unnecessary for all flavors of MMUs
when zapping invalid/obsolete roots/pages.  The "handling" in the TDP MMU
is dead code, as zap_gfn_range() is called with shared=true, in which
case it will never return true due to the flushing being handled by
tdp_mmu_zap_spte_atomic().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:36 -05:00
Sean Christopherson db01416b22 KVM: x86/mmu: Formalize TDP MMU's (unintended?) deferred TLB flush logic
Explicitly ignore the result of zap_gfn_range() when putting the last
reference to a TDP MMU root, and add a pile of comments to formalize the
TDP MMU's behavior of deferring TLB flushes to alloc/reuse.  Note, this
only affects the !shared case, as zap_gfn_range() subtly never returns
true for "flush" as the flush is handled by tdp_mmu_zap_spte_atomic().

Putting the root without a flush is ok because even if there are stale
references to the root in the TLB, they are unreachable because KVM will
not run the guest with the same ASID without first flushing (where ASID
in this context refers to both SVM's explicit ASID and Intel's implicit
ASID that is constructed from VPID+PCID+EPT4A+etc...).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220226001546.360188-5-seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:23 -05:00
Sean Christopherson f28e9c7fce KVM: x86/mmu: Fix wrong/misleading comments in TDP MMU fast zap
Fix misleading and arguably wrong comments in the TDP MMU's fast zap
flow.  The comments, and the fact that actually zapping invalid roots was
added separately, strongly suggests that zapping invalid roots is an
optimization and not required for correctness.  That is a lie.

KVM _must_ zap invalid roots before returning from kvm_mmu_zap_all_fast(),
because when it's called from kvm_mmu_invalidate_zap_pages_in_memslot(),
KVM is relying on it to fully remove all references to the memslot.  Once
the memslot is gone, KVM's mmu_notifier hooks will be unable to find the
stale references as the hva=>gfn translation is done via the memslots.
If KVM doesn't immediately zap SPTEs and userspace unmaps a range after
deleting a memslot, KVM will fail to zap in response to the mmu_notifier
due to not finding a memslot corresponding to the notifier's range, which
leads to a variation of use-after-free.

The other misleading comment (and code) explicitly states that roots
without a reference should be skipped.  While that's technically true,
it's also extremely misleading as it should be impossible for KVM to
encounter a defunct root on the list while holding mmu_lock for write.
Opportunistically add a WARN to enforce that invariant.

Fixes: b7cccd397f ("KVM: x86/mmu: Fast invalidation for TDP MMU")
Fixes: 4c6654bd16 ("KVM: x86/mmu: Tear down roots before kvm_mmu_zap_all_fast returns")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:18 -05:00
Sean Christopherson 3354ef5a59 KVM: x86/mmu: Check for present SPTE when clearing dirty bit in TDP MMU
Explicitly check for present SPTEs when clearing dirty bits in the TDP
MMU.  This isn't strictly required for correctness, as setting the dirty
bit in a defunct SPTE will not change the SPTE from !PRESENT to PRESENT.
However, the guarded MMU_WARN_ON() in spte_ad_need_write_protect() would
complain if anyone actually turned on KVM's MMU debugging.
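
Sketch of the added check (helper name assumed; "iter" is the TDP MMU
iterator in the dirty-logging loop):

  if (!is_shadow_present_pte(iter.old_spte))
          continue;   /* don't touch A/D bits in a non-present SPTE */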

Fixes: a6a0b05da9 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-3-seanjc@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:31:17 -05:00
Paolo Bonzini 37b2a6510a KVM: use __vcalloc for very large allocations
Allocations whose size is related to the memslot size can be arbitrarily
large.  Do not use kvzalloc/kvcalloc, as those are limited to "not crazy"
sizes that fit in 32 bits.
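
Sketch of the replacement pattern (variable names are illustrative):

  /* One entry per guest page; kvcalloc() would reject "crazy" sizes. */
  array = __vcalloc(npages, sizeof(*array), GFP_KERNEL_ACCOUNT);
  if (!array)
          return -ENOMEM;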

Cc: stable@vger.kernel.org
Fixes: 7661809d49 ("mm: don't allow oversized kvmalloc() calls")
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-08 09:30:57 -05:00
Peter Zijlstra 5adf349439 x86/module: Fix the paravirt vs alternative order
Ever since commit

  4e6292114c ("x86/paravirt: Add new features for paravirt patching")

there is an ordering dependency between patching paravirt ops and
patching alternatives, but the module loader still violates this.

Fixes: 4e6292114c ("x86/paravirt: Add new features for paravirt patching")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20220303112825.068773913@infradead.org
2022-03-08 14:15:25 +01:00
Linus Torvalds 4a01e748a5 - Mitigate Spectre v2-type Branch History Buffer attacks on machines
which support eIBRS, i.e., the hardware-assisted speculation restriction
 after it has been shown that such machines are vulnerable even with the
 hardware mitigation.
 
 - Do not use the default LFENCE-based Spectre v2 mitigation on AMD as it
 is insufficient to mitigate such attacks. Instead, switch to retpolines
 on all AMD by default.
 
 - Update the docs and add some warnings for the obviously vulnerable
 cmdline configurations.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmIkktUACgkQEsHwGGHe
 VUo7ZQ/+O4hzL/tHY0V/ekkDxCrJ3q3Hp+DcxUl2ee5PC3Qgxv1Z1waH6ppK8jQs
 marAGr7FYbvzY039ON7irxhpSIckBCpx9tM2F43zsPxxY8EdxGojkHbmaqso5HtW
 l3/O28AcZYoKN/fF8rRAIJy4hrTVascKrNJ2fOiYWYBT62ZIoPm0FusgXbKTZPD+
 gT7iUMoyPjBnKdWDT9L6kKOxDF9TivX1Y6JdDHbnnBsgRkeFatkeq9BJ93M73q63
 Ziq9c8ZcEXyKez+cGFCfXM7+pNYmfsiL48lilTyf+v+GXahDJQOkFw39j5zXEALm
 Nk6yB3PRQ74pEwm5WbK7KO8iwPpblmnDB978mfUcpk+9xWJD8pyoUcItAmCBsXh1
 LjIImYPqL6YihUb9udh+PEDISsfzWNzr4T+kgW9/yXXG4ZmGy3TLInhTK+rNAxJa
 EshWZExEZj6yJvt83Vu08W9fppYJq976tJvl8LWOYthaxqY7IQz0q7mYd799yxk0
 MLPqvZP1+4pHzqn2c9yeHgrwHwMmoqcyMx6B3EA5maYQPdlT7Fk9RCBeCdIA/ieF
 OgGxy1WwMH+cvUa5MaBy3Y32LeYU3bUJh0yPFq/7BxEYGG9PJtLhg2xTo1Ui8F1d
 fKrcSFcjZKVJ9UE5HaqOcp4ka+Q220I9IDGURXkAFQlnOU7X7CE=
 =Athd
 -----END PGP SIGNATURE-----

Merge tag 'x86_bugs_for_v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 spectre fixes from Borislav Petkov:

 - Mitigate Spectre v2-type Branch History Buffer attacks on machines
   which support eIBRS, i.e., the hardware-assisted speculation
   restriction after it has been shown that such machines are vulnerable
   even with the hardware mitigation.

 - Do not use the default LFENCE-based Spectre v2 mitigation on AMD as
   it is insufficient to mitigate such attacks. Instead, switch to
   retpolines on all AMD by default.

 - Update the docs and add some warnings for the obviously vulnerable
   cmdline configurations.

* tag 'x86_bugs_for_v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT
  x86/speculation: Warn about Spectre v2 LFENCE mitigation
  x86/speculation: Update link to AMD speculation whitepaper
  x86/speculation: Use generic retpoline by default on AMD
  x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting
  Documentation/hw-vuln: Update spectre doc
  x86/speculation: Add eIBRS + Retpoline options
  x86/speculation: Rename RETPOLINE_AMD to RETPOLINE_LFENCE
2022-03-07 17:29:47 -08:00
Linus Torvalds f81664f760 x86 guest:
* Tweaks to the paravirtualization code, to avoid using them
 when they're pointless or harmful
 
 x86 host:
 
 * Fix for SRCU lockdep splat
 
 * Brown paper bag fix for the propagation of errno
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmIkkdsUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroP15Qf7B8BXNMlNkret5WN/4pGf06gNdIY6
 ZqC8t/Lx1+fCkzGk+VtAw0bxRscOF4z1XzvfywO5ZI5bxQB/b2xTyBkVY90SqhsB
 shug5QpikejpmvVZJXxwD3+loCUah2T6FUT6QJa0sKVhW+XiqOva8fAmYLG5agaa
 VGvqFXTXiVmbiw/O9ZI/CfUC0WNrn+I1iDO+oGWyhv/22tePxGCizVczRFJn6DAD
 Vh5P6AfOqXjmzdpUeOiU544FQZPHAZehb7/xYc0T9GSW4fPnTmHwRzwhUqgJnx7d
 3E+eWGwny+Q/OrpKf7SbxtB65yn7lHRmdN/YtCHygl4sjs6CdjSPY8/9jQ==
 =PPz1
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "x86 guest:

   - Tweaks to the paravirtualization code, to avoid using them when
     they're pointless or harmful

  x86 host:

   - Fix for SRCU lockdep splat

   - Brown paper bag fix for the propagation of errno"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86: pull kvm->srcu read-side to kvm_arch_vcpu_ioctl_run
  KVM: x86/mmu: Passing up the error state of mmu_alloc_shadow_roots()
  KVM: x86: Yield to IPI target vCPU only if it is busy
  x86/kvmclock: Fix Hyper-V Isolated VM's boot issue when vCPUs > 64
  x86/kvm: Don't waste memory if kvmclock is disabled
  x86/kvm: Don't use PV TLB/yield when mwait is advertised
2022-03-06 12:08:42 -08:00
Josh Poimboeuf 0de05d056a x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT
The commit

   44a3918c82 ("x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting")

added a warning for the "eIBRS + unprivileged eBPF" combination, which
has been shown to be vulnerable against Spectre v2 BHB-based attacks.

However, there's no warning about the "eIBRS + LFENCE retpoline +
unprivileged eBPF" combo. The LFENCE adds more protection by shortening
the speculation window after a mispredicted branch. That makes an attack
significantly more difficult, even with unprivileged eBPF. So at least
for now the logic doesn't warn about that combination.

But if you then add SMT into the mix, the SMT attack angle weakens the
effectiveness of the LFENCE considerably.

So extend the "eIBRS + unprivileged eBPF" warning to also include the
"eIBRS + LFENCE + unprivileged eBPF + SMT" case.

  [ bp: Massage commit message. ]

Suggested-by: Alyssa Milburn <alyssa.milburn@linux.intel.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-03-05 09:30:47 +01:00
Josh Poimboeuf eafd987d4a x86/speculation: Warn about Spectre v2 LFENCE mitigation
With:

  f8a66d608a ("x86,bugs: Unconditionally allow spectre_v2=retpoline,amd")

it became possible to enable the LFENCE "retpoline" on Intel. However,
Intel doesn't recommend it, as it has some weaknesses compared to
retpoline.

Now AMD doesn't recommend it either.

It can still be left available as a cmdline option. It's faster than
retpoline but is weaker in certain scenarios -- particularly SMT, but
even non-SMT may be vulnerable in some cases.

So just unconditionally warn if the user requests it on the cmdline.

  [ bp: Massage commit message. ]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-03-05 09:16:24 +01:00
Jakub Kicinski 6646dc241d Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-03-04

We've added 32 non-merge commits during the last 14 day(s) which contain
a total of 59 files changed, 1038 insertions(+), 473 deletions(-).

The main changes are:

1) Optimize BPF stackmap's build_id retrieval by caching last valid build_id,
   as consecutive stack frames are likely to be in the same VMA and therefore
   have the same build id, from Hao Luo.

2) Several improvements to arm64 BPF JIT, that is, support for JITing
   the atomic[64]_fetch_add, atomic[64]_[fetch_]{and,or,xor} and lastly
   atomic[64]_{xchg|cmpxchg}. Also fix the BTF line info dump for JITed
   programs, from Hou Tao.

3) Optimize generic BPF map batch deletion by only enforcing synchronize_rcu()
   barrier once upon return to user space, from Eric Dumazet.

4) For kernel build parse DWARF and generate BTF through pahole with enabled
   multithreading, from Kui-Feng Lee.

5) BPF verifier usability improvements by making log info more concise and
   replacing inv with scalar type name, from Mykola Lysenko.

6) Two follow-up fixes for BPF prog JIT pack allocator, from Song Liu.

7) Add a new Kconfig to allow for loading kernel modules with non-matching
   BTF type info; their BTF info is then removed on load, from Connor O'Brien.

8) Remove reallocarray() usage from bpftool and switch to libbpf_reallocarray()
   in order to fix compilation errors for older glibc, from Mauricio Vásquez.

9) Fix libbpf to error on conflicting name in BTF when type declaration
   appears before the definition, from Xu Kuohai.

10) Fix issue in BPF preload for in-kernel light skeleton where loaded BPF
    program fds prevent init process from setting up fd 0-2, from Yucong Sun.

11) Fix libbpf reuse of pinned perf RB map when max_entries is auto-determined
    by libbpf, from Stijn Tintel.

12) Several cleanups for libbpf and a fix to enforce perf RB map #pages to be
    non-zero, from Yuntao Wang.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (32 commits)
  bpf: Small BPF verifier log improvements
  libbpf: Add a check to ensure that page_cnt is non-zero
  bpf, x86: Set header->size properly before freeing it
  x86: Disable HAVE_ARCH_HUGE_VMALLOC on 32-bit x86
  bpf, test_run: Fix overflow in XDP frags bpf_test_finish
  selftests/bpf: Update btf_dump case for conflicting names
  libbpf: Skip forward declaration when counting duplicated type names
  bpf: Add some description about BPF_JIT_ALWAYS_ON in Kconfig
  bpf, docs: Add a missing colon in verifier.rst
  bpf: Cache the last valid build_id
  libbpf: Fix BPF_MAP_TYPE_PERF_EVENT_ARRAY auto-pinning
  bpf, selftests: Use raw_tp program for atomic test
  bpf, arm64: Support more atomic operations
  bpftool: Remove redundant slashes
  bpf: Add config to allow loading modules with BTF mismatches
  bpf, arm64: Feed byte-offset into bpf line info
  bpf, arm64: Call build_prologue() first in first JIT pass
  bpf: Fix issue with bpf preload module taking over stdout/stdin of kernel.
  bpftool: Bpf skeletons assert type sizes
  bpf: Cleanup comments
  ...
====================

Link: https://lore.kernel.org/r/20220304164313.31675-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-04 19:28:17 -08:00
Paolo Bonzini 0564eeb71b Merge branch 'kvm-bugfixes' into HEAD
Merge bugfixes from 5.17 before merging more tricky work.
2022-03-04 18:39:29 -05:00
Oleg Nesterov bf9ad37dc8 signal, x86: Delay calling signals in atomic on RT enabled kernels
On x86_64 we must disable preemption before we enable interrupts
for stack faults, int3 and debugging, because the current task is using
a per CPU debug stack defined by the IST. If we schedule out, another task
can come in and use the same stack and cause the stack to be corrupted
and crash the kernel on return.

When CONFIG_PREEMPT_RT is enabled, spinlock_t locks become sleeping, and
one of these is the spin lock used in signal handling.

Some of the debug code (int3) causes do_trap() to send a signal.
This function takes a spinlock_t lock that has been converted to a
sleeping lock. If this happens, the above issues with the corrupted
stack are possible.

Instead of delivering the signal right away, for PREEMPT_RT and x86,
the signal information is stored in the task's task_struct and
TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume
code will send the signal when preemption is enabled.

[ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT to
  ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ]
[bigeasy: Add on 32bit as per Yang Shi, minor rewording. ]
[ tglx: Use a config option ]

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/Ygq5aBB/qMQw6aP5@linutronix.de
2022-03-04 14:58:54 +01:00
Jakub Kicinski 80901bff81 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
net/batman-adv/hard-interface.c
  commit 690bb6fb64 ("batman-adv: Request iflink once in batadv-on-batadv check")
  commit 6ee3c393ee ("batman-adv: Demote batadv-on-batadv skip error message")
https://lore.kernel.org/all/20220302163049.101957-1-sw@simonwunderlich.de/

net/smc/af_smc.c
  commit 4d08b7b57e ("net/smc: Fix cleanup when register ULP fails")
  commit 462791bbfa ("net/smc: add sysctl interface for SMC")
https://lore.kernel.org/all/20220302112209.355def40@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-03 11:55:12 -08:00
Ingo Molnar 02a08d78f5 perf/x86/intel/uncore: Fix the build on !CONFIG_PHYS_ADDR_T_64BIT
'val2' is unused if !CONFIG_PHYS_ADDR_T_64BIT:

  arch/x86/events/intel/uncore_discovery.c:213:18: error: unused variable ‘val2’ [-Werror=unused-variable]

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2022-03-03 08:58:22 +01:00
Song Liu 676b2daaba bpf, x86: Set header->size properly before freeing it
On do_jit failure path, the header is freed by bpf_jit_binary_pack_free.
While bpf_jit_binary_pack_free doesn't require proper ro_header->size,
bpf_prog_pack_free still uses it. Set header->size in bpf_int_jit_compile
before calling bpf_jit_binary_pack_free.

Fixes: 1022a5498f ("bpf, x86_64: Use bpf_jit_binary_pack_alloc")
Fixes: 33c9805860 ("bpf: Introduce bpf_jit_binary_pack_[alloc|finalize|free]")
Reported-by: Kui-Feng Lee <kuifeng@fb.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220302175126.247459-3-song@kernel.org
2022-03-02 13:24:37 -08:00
Song Liu eed1fcee55 x86: Disable HAVE_ARCH_HUGE_VMALLOC on 32-bit x86
kernel test robot reported kernel BUG like:

[ 44.587744][ T1] kernel BUG at arch/x86/mm/physaddr.c:76!
[ 44.590151][ T1] __vmalloc_area_node (mm/vmalloc.c:622 mm/vmalloc.c:2995)
[ 44.590151][ T1] __vmalloc_node_range (mm/vmalloc.c:3108)
[ 44.590151][ T1] __vmalloc_node (mm/vmalloc.c:3157)

which is triggered with HAVE_ARCH_HUGE_VMALLOC on 32-bit x86. Since BPF
only uses HAVE_ARCH_HUGE_VMALLOC for x86_64, turn it off for 32-bit x86.

Fixes: fac54e2bfb ("x86/Kconfig: Select HAVE_ARCH_HUGE_VMALLOC with HAVE_ARCH_HUGE_VMAP")
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220302175126.247459-2-song@kernel.org
2022-03-02 13:24:37 -08:00
Paolo Bonzini 8d25b7beca KVM: x86: pull kvm->srcu read-side to kvm_arch_vcpu_ioctl_run
kvm_arch_vcpu_ioctl_run is already doing srcu_read_lock/unlock in two
places, namely vcpu_run and post_kvm_run_save, and a third is actually
needed around the call to vcpu->arch.complete_userspace_io to avoid
the following splat:

  WARNING: suspicious RCU usage
  arch/x86/kvm/pmu.c:190 suspicious rcu_dereference_check() usage!
  other info that might help us debug this:
  rcu_scheduler_active = 2, debug_locks = 1
  1 lock held by CPU 28/KVM/370841:
  #0: ff11004089f280b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x87/0x730 [kvm]
  Call Trace:
   <TASK>
   dump_stack_lvl+0x59/0x73
   reprogram_fixed_counter+0x15d/0x1a0 [kvm]
   kvm_pmu_trigger_event+0x1a3/0x260 [kvm]
   ? free_moved_vector+0x1b4/0x1e0
   complete_fast_pio_in+0x8a/0xd0 [kvm]

This splat is not at all unexpected, since complete_userspace_io callbacks
can execute similar code to vmexits.  For example, SVM with nrips=false
will call into the emulator from svm_skip_emulated_instruction().

While it's tempting to never acquire kvm->srcu for an uninitialized vCPU,
practically speaking there's no penalty to acquiring kvm->srcu "early"
as the KVM_MP_STATE_UNINITIALIZED path is a one-time thing per vCPU.  On
the other hand, seemingly innocuous helpers like kvm_apic_accept_events()
and sync_regs() can theoretically reach code that might access
SRCU-protected data structures, e.g. sync_regs() can trigger forced
exiting of nested mode via kvm_vcpu_ioctl_x86_set_vcpu_events().
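
Illustrative shape of the third read-side section (not the exact diff):

  int idx = srcu_read_lock(&vcpu->kvm->srcu);

  if (vcpu->arch.complete_userspace_io) {
          int (*cui)(struct kvm_vcpu *) = vcpu->arch.complete_userspace_io;

          vcpu->arch.complete_userspace_io = NULL;
          r = cui(vcpu);                  /* may reach SRCU-protected state */
  }

  srcu_read_unlock(&vcpu->kvm->srcu, idx);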

Reported-by: Like Xu <likexu@tencent.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-02 10:55:58 -05:00
Like Xu c6c937d673 KVM: x86/mmu: Passing up the error state of mmu_alloc_shadow_roots()
Just like on the optional mmu_alloc_direct_roots() path, once the shadow
path reaches "r = -EIO" somewhere, the caller needs to know the actual
state in order to enter error handling and avoid something worse.

Fixes: 4a38162ee9 ("KVM: MMU: load PDPTRs outside mmu_lock")
Signed-off-by: Like Xu <likexu@tencent.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220301124941.48412-1-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-02 10:55:58 -05:00
Suma Hegde 91f410aa67 platform/x86: Add AMD system management interface
The recent Fam19h EPYC server line of processors from AMD supports system
management functionality via the HSMP (Host System Management Port) interface.

The Host System Management Port (HSMP) is an interface to provide
OS-level software with access to system management functions via a
set of mailbox registers.

More details on the interface can be found in chapter
"7 Host System Management Port (HSMP)" of the following PPR
https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip

This patch adds a new amd_hsmp module under drivers/platform/x86/
which creates a miscdevice with an IOCTL interface to user space.
/dev/hsmp is used for running the HSMP mailbox commands.

Signed-off-by: Suma Hegde <suma.hegde@amd.com>
Signed-off-by: Naveen Krishna Chatradhi <nchatrad@amd.com>
Reviewed-by: Carlos Bilbao <carlos.bilbao@amd.com>
Acked-by: Song Liu <song@kernel.org>
Reviewed-by: Nathan Fontenot <nathan.fontenot@amd.com>
Link: https://lore.kernel.org/r/20220222050501.18789-1-nchatrad@amd.com
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2022-03-02 11:42:36 +01:00
Linus Torvalds fb184c4af9 The bigger part of the change is a revert for x86 hosts. Here the
second patch was supposed to fix the first, but in reality it was
 just as broken, so both have to go.
 
 x86 host:
 
 * Revert incorrect assumption that cr3 changes come with preempt notifier
   callbacks (they don't when static branches are changed, for example)
 
 ARM host:
 
 * Correctly synchronise PMR and co on PSCI CPU_SUSPEND
 
 * Skip tests that depend on GICv3 when the HW isn't available
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmIeGnUUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroMCYgf9GPVUOQUbHVxVqvKB1ABnY3ZIZuS3
 +/XLgVifSmXb2sPQmcKIPk7eQkxlzpnVdbznJO5qFMtVKRv/ppj+/ly2wwF8l+rR
 m1XvyYo2sukK5vTpBrQiRm3aWY7vpx0ds4DStLnrnBuPF/U7x6WlHSL/BqXaNcSJ
 e+SXd/UFhkg7dEQaU3eqXyf2/mMfR2ZLdUb4v+/UiV7kfzzvRqNERd8HUoVk2FcM
 VYBr07ChaV4XB/dZsCDVSz2Z7f7rH3sMMW82ZHKjuFUEW4Dij9NiX2ycaeRvkSLG
 tnliTuROCY2bOQeIVCTHf5XqCAAm7sA1AoClFaUy30+UW9s9j45NuhUQbA==
 =nuHK
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "The bigger part of the change is a revert for x86 hosts. Here the
  second patch was supposed to fix the first, but in reality it was just
  as broken, so both have to go.

  x86 host:

   - Revert incorrect assumption that cr3 changes come with preempt
     notifier callbacks (they don't when static branches are changed,
     for example)

  ARM host:

   - Correctly synchronise PMR and co on PSCI CPU_SUSPEND

   - Skip tests that depend on GICv3 when the HW isn't available"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: selftests: aarch64: Skip tests if we can't create a vgic-v3
  Revert "KVM: VMX: Save HOST_CR3 in vmx_prepare_switch_to_guest()"
  Revert "KVM: VMX: Save HOST_CR3 in vmx_set_host_fs_gs()"
  KVM: arm64: Don't miss pending interrupts for suspended vCPU
2022-03-01 12:01:18 -08:00
Sean Christopherson b652de1e3d KVM: SVM: Disable preemption across AVIC load/put during APICv refresh
Disable preemption when loading/putting the AVIC during an APICv refresh.
If the vCPU task is preempted and migrated to a different pCPU, the
unprotected avic_vcpu_load() could set the wrong pCPU in the physical ID
cache/table.

Pull the necessary code out of avic_vcpu_{,un}blocking() and into a new
helper to reduce the probability of introducing this exact bug a third
time.
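
Roughly the shape of the new helper (illustrative; based on the existing
avic_vcpu_load()/avic_vcpu_put() and kvm_vcpu_apicv_active() helpers):

  int cpu = get_cpu();          /* disables preemption */

  if (kvm_vcpu_apicv_active(vcpu))
          avic_vcpu_load(vcpu, cpu);
  else
          avic_vcpu_put(vcpu);

  put_cpu();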

Fixes: df7e4827c5 ("KVM: SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC")
Cc: stable@vger.kernel.org
Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 12:21:23 -05:00
Anshuman Khandual cedd3614e5 perf: Add irq and exception return branch types
This expands generic branch type classification by adding two more entries
therein, i.e. irq and exception return. Also updates the x86 implementation
to process X86_BR_IRET and X86_BR_IRQ records as appropriate. This changes
the branch types reported to user space on the x86 platform, but it should
not be a problem. The possible scenarios and impacts are enumerated here.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/1645681014-3346-1-git-send-email-anshuman.khandual@arm.com
2022-03-01 16:19:01 +01:00
Steve Wahl 71a412ed4c perf/x86/intel/uncore: Make uncore_discovery clean for 64 bit addresses
Support 64-bit BAR size for discovery, and do not truncate return from
generic_uncore_mmio_box_ctl() to 32 bits.

Signed-off-by: Steve Wahl <steve.wahl@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Link: https://lore.kernel.org/r/20220218175418.421268-1-steve.wahl@hpe.com
2022-03-01 16:19:01 +01:00
Sean Christopherson aa9f58415a KVM: SVM: Exit to userspace on ENOMEM/EFAULT GHCB errors
Exit to userspace if setup_vmgexit_scratch() fails due to OOM or because
copying data from guest (userspace) memory failed/faulted.  The OOM
scenario is clearcut, it's userspace's decision as to whether it should
terminate the guest, free memory, etc...

As for -EFAULT, arguably, any guest issue is a violation of the guest's
contract with userspace, and thus userspace needs to decide how to
proceed.  E.g. userspace defines what is RAM vs. MMIO and communicates
that directly to the guest, KVM is not involved in deciding what is/isn't
RAM nor in communicating that information to the guest.  If the scratch
GPA doesn't resolve to a memslot, then the guest is not honoring the
memory configuration as defined by userspace.

And if userspace unmaps an hva for whatever reason, then exiting to
userspace with -EFAULT is absolutely the right thing to do.  KVM's ABI
currently sucks and doesn't provide enough information to act on the
-EFAULT, but that will hopefully be remedied in the future as there are
multiple use cases, e.g. uffd and virtiofs truncation, that shouldn't
require any work in KVM beyond returning -EFAULT with a small amount of
metadata.

KVM could define its ABI such that failure to access the scratch area is
reflected into the guest, i.e. establish a contract with userspace, but
that's undesirable as it limits KVM's options in the future, e.g. in the
potential uffd case any failure on a uaccess needs to kick out to
userspace.  KVM does have several cases where it reflects these errors
into the guest, e.g. kvm_pv_clock_pairing() and Hyper-V emulation, but
KVM would preferably "fix" those instead of propagating the falsehood
that any memory failure is the guest's fault.

Lastly, returning a boolean as an "error" from a helper that isn't
named accordingly never works out well.

Fixes: ad5b353240 ("KVM: SVM: Do not terminate SEV-ES guests on GHCB validation failure")
Cc: Alper Gun <alpergun@google.com>
Cc: Peter Gonda <pgonda@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220225205209.3881130-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 10:04:03 -05:00
Sean Christopherson 5d6a322156 KVM: WARN if is_unsync_root() is called on a root without a shadow page
WARN and bail if is_unsync_root() is passed a root for which there is no
shadow page, i.e. is passed the physical address of one of the special
roots, which do not have an associated shadow page.  The current usage
squeaks by without bug reports because neither kvm_mmu_sync_roots() nor
kvm_mmu_sync_prev_roots() calls the helper with pae_root or pml4_root,
and 5-level AMD CPUs are not generally available, i.e. no one can coerce
KVM into calling is_unsync_root() on pml5_root.

Note, this doesn't fix the mess with 5-level nNPT, it just (hopefully)
prevents KVM from crashing.

Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220225182248.3812651-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:58:26 -05:00
Sean Christopherson 527d5cd7ee KVM: x86/mmu: Zap only obsolete roots if a root shadow page is zapped
Zap only obsolete roots when responding to zapping a single root shadow
page.  Because KVM keeps root_count elevated when stuffing a previous
root into its PGD cache, shadowing a 64-bit guest means that zapping any
root causes all vCPUs to reload all roots, even if their current root is
not affected by the zap.

For many kernels, zapping a single root is a frequent operation, e.g. in
Linux it happens whenever an mm is dropped, e.g. process exits, etc...

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220225182248.3812651-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:58:25 -05:00
Sean Christopherson 2f6f66ccd2 KVM: Drop kvm_reload_remote_mmus(), open code request in x86 users
Remove the generic kvm_reload_remote_mmus() and open code its
functionality into the two x86 callers.  x86 is (obviously) the only
architecture that uses the hook, and is also the only architecture that
uses KVM_REQ_MMU_RELOAD in a way that's consistent with the name.  That
will change in a future patch: when zapping a single shadow page, x86
doesn't actually _need_ to reload all vCPUs' MMUs; only MMUs whose root
is being zapped actually need to be reloaded.

s390 also uses KVM_REQ_MMU_RELOAD, but for a slightly different purpose.

Drop the generic code in anticipation of implementing s390 and x86 arch
specific requests, which will allow dropping KVM_REQ_MMU_RELOAD entirely.

Opportunistically reword the x86 TDP MMU comment to avoid making
references to functions (and requests!) when possible, and to remove the
rather ambiguous "this".

No functional change intended.

Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220225182248.3812651-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:58:25 -05:00
Sean Christopherson f6d0a2521c KVM: x86: Invoke kvm_mmu_unload() directly on CR4.PCIDE change
Replace a KVM_REQ_MMU_RELOAD request with a direct kvm_mmu_unload() call
when the guest's CR4.PCIDE changes.  This will allow tweaking the logic
of KVM_REQ_MMU_RELOAD to free only obsolete/invalid roots, which is the
historical intent of KVM_REQ_MMU_RELOAD.  The recent PCIDE behavior is
the only user of KVM_REQ_MMU_RELOAD that doesn't mark affected roots as
obsolete, needs to unconditionally unload the entire MMU, _and_ affects
only the current vCPU.
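
The resulting call site is small; a sketch (the CR4 delta check is
illustrative):

  if ((cr4 ^ old_cr4) & X86_CR4_PCIDE)
          kvm_mmu_unload(vcpu);   /* previously a KVM_REQ_MMU_RELOAD request */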

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220225182248.3812651-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:58:24 -05:00
Hou Wenlong 1e326ad429 KVM: x86/emulator: Move the unhandled outer privilege level logic of far return into __load_segment_descriptor()
Outer-privilege-level return is not implemented in the emulator;
move the unhandled logic into __load_segment_descriptor() to
make it easier to understand why the checks for RET are
incomplete.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Message-Id: <5b7188e6388ac9f4567d14eab32db9adf3e00119.1644292363.git.houwenlong.hwl@antgroup.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:50:49 -05:00
Hou Wenlong 31c66dabaa KVM: x86/emulator: Fix wrong privilege check for code segment in __load_segment_descriptor()
A code segment descriptor can be loaded by jmp/call/ret, iret
and int. The privilege checks differ between those instructions
outside of real mode. Although the emulator has the
x86_transfer_type enumeration to differentiate them, it is not
really used in __load_segment_descriptor(). Note, far jump/call
to a call gate, task gate or task state segment is not
implemented in the emulator.

As for far jump/call to code segment, if DPL > CPL for conforming
code or (RPL > CPL or DPL != CPL) for non-conforming code, it
should trigger #GP. The current checks are ok.

As for far return, if RPL < CPL or DPL > RPL for conforming
code or DPL != RPL for non-conforming code, it should trigger #GP.
Outer-level return is not implemented above virtual-8086 mode in the
emulator. So it implies that RPL <= CPL, but the current checks
wouldn't trigger #GP if RPL < CPL.

As for code segment loading in task switch, if DPL > RPL for conforming
code or DPL != RPL for non-conforming code, it should trigger #TS. Since
segment selector is loaded before segment descriptor when load state from
tss, it implies that RPL = CPL, so the current checks are ok.

The only problem in the current implementation is the missing RPL < CPL check
for far return. However, changing the code to follow the manual is better.
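
The #GP conditions for far return spelled out above, as a sketch
(variable names are assumed; emulate_gp() is the emulator's usual way to
raise #GP):

  if (rpl < cpl ||
      ( conforming && dpl > rpl) ||
      (!conforming && dpl != rpl))
          return emulate_gp(ctxt, selector & 0xfffc);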

Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Message-Id: <e01f5ea70fc1f18f23da1182acdbc5c97c0e5886.1644292363.git.houwenlong.hwl@antgroup.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:50:49 -05:00
Hou Wenlong ca85f00225 KVM: x86/emulator: Defer not-present segment check in __load_segment_descriptor()
Per Intel's SDM on the "Instruction Set Reference", when
loading a segment descriptor, the not-present segment check should
come after all type and privilege checks. But the emulator checks
it first, so #NP is triggered instead of #GP if a privilege check fails
and the segment is not present. Put the not-present segment check after
type and privilege checks in __load_segment_descriptor().

Fixes: 38ba30ba51 (KVM: x86 emulator: Emulate task switch in emulator.c)
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Message-Id: <52573c01d369f506cadcf7233812427cf7db81a7.1644292363.git.houwenlong.hwl@antgroup.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:50:49 -05:00
Sean Christopherson b9964ee36b KVM: x86: Make kvm_lapic_set_reg() a "private" xAPIC helper
Hide the lapic's "raw" write helper inside lapic.c to force non-APIC code
to go through proper helpers when modifying the vAPIC state.  Keep the
read helper visible to outsiders for now; refactoring KVM to hide it too
is possible, it will just take more work to do so.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220204214205.3306634-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:50:48 -05:00
Sean Christopherson a57a31684d KVM: x86: Treat x2APIC's ICR as a 64-bit register, not two 32-bit regs
Emulate the x2APIC ICR as a single 64-bit register, as opposed to forking
it across ICR and ICR2 as two 32-bit registers.  This mirrors hardware
behavior for Intel's upcoming IPI virtualization support, which does not
split the access.

Previous versions of Intel's SDM and AMD's APM don't explicitly state
exactly how ICR is reflected in the vAPIC page for x2APIC; KVM just
happened to speculate incorrectly.

Handling the upcoming behavior is necessary in order to maintain
backwards compatibility with KVM_{G,S}ET_LAPIC, e.g. failure to shuffle
the 64-bit ICR to ICR+ICR2 and vice versa would break live migration if
IPI virtualization support isn't symmetrical across the source and dest.
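
The shuffling itself is simple; a sketch of both directions (variable
names are illustrative):

  u64 icr = (u64)icr2_val << 32 | icr_val;   /* ICR2:ICR -> 64-bit ICR */

  u32 icr_lo = (u32)icr;                     /* 64-bit ICR -> ICR  */
  u32 icr_hi = (u32)(icr >> 32);             /*               ICR2 */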

Cc: Zeng Guang <guang.zeng@intel.com>
Cc: Chao Gao <chao.gao@intel.com>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220204214205.3306634-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-03-01 08:50:48 -05:00