linux-sg2042/arch
Linus Torvalds dcc5c6f013 Three interrupt related fixes for X86:

Merge tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
 "Three interrupt related fixes for X86:

   - Move disabling of the local APIC to after the invocation of
     fixup_irqs(), to ensure that incoming interrupts are noted in the
     IRR and not ignored.

   - Unbreak affinity setting.

     The rework of the entry code reused the regular exception entry
     code for device interrupts. The vector number is pushed into the
     error-code slot on the stack, which is then lifted into an argument
     and set to -1 because that slot is regs->orig_ax, which is used in
     quite a few places to check whether the entry came from a syscall.

     But it was overlooked that orig_ax is used in the affinity cleanup
     code to validate whether the interrupt has arrived on the new
     target. It turned out that this vector check is pointless because
     interrupts are never moved from one vector to another on the same
     CPU. That check is a historical leftover from the time when x86
     supported multi-CPU affinities, but it is no longer needed with the
     now strict single-CPU affinity. Famous last words ...

   - Add a missing check for an empty cpumask into the matrix allocator.

     The affinity change added a warning to catch the case where an
     interrupt is moved to a different vector on the same CPU. The
     warning triggers because the allocator hands out an assignment even
     when it is passed an empty cpumask: it iterates with for_each_cpu()
     without first checking that the mask is non-empty. The historically
     inconsistent for_each_cpu() behaviour of ignoring the cpumask and
     unconditionally claiming that CPU0 is in the mask struck again.
     Sigh.

  plus a new entry in the MAINTAINERS file for the HPE/UV platform"

* tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
  x86/irq: Unbreak interrupt affinity setting
  x86/hotplug: Silence APIC only after all interrupts are migrated
  MAINTAINERS: Add entry for HPE Superdome Flex (UV) maintainers
2020-08-30 12:01:23 -07:00
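
For the first fix above, a minimal ordering sketch in C of the idea the
changelog describes. The wrapper cpu_teardown_local() is hypothetical;
only fixup_irqs() and disable_local_APIC() are real x86 helpers, and the
actual call sites in arch/x86 are more involved than this simplified view.

#include <asm/apic.h>	/* disable_local_APIC() */
#include <asm/irq.h>	/* fixup_irqs() */

/* Hypothetical teardown helper: the point is only the ordering. */
static void cpu_teardown_local(void)
{
	/*
	 * Migrate all interrupts away from this CPU first.  While this
	 * runs, the local APIC is still enabled, so anything that fires
	 * in the meantime is noted in the IRR instead of being ignored.
	 */
	fixup_irqs();

	/* Only now is it safe to silence the local APIC. */
	disable_local_APIC();
}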
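The second fix is about the two conflicting uses of regs->orig_ax. A
minimal sketch, assuming simplified helpers: entered_via_syscall() and
arrived_on_expected_vector() are illustrative names, not the real entry
or vector-cleanup code.

#include <linux/ptrace.h>	/* struct pt_regs; orig_ax on x86 */
#include <linux/types.h>

/* Widespread convention: orig_ax holds the syscall number, or -1 if the
 * entry did not come from a syscall.  The reworked interrupt entry keeps
 * that invariant by storing -1 after lifting the vector number out of
 * the error-code slot. */
static bool entered_via_syscall(struct pt_regs *regs)
{
	return regs->orig_ax != -1UL;
}

/* The overlooked user: the affinity cleanup validated the arrival vector
 * against orig_ax.  Once orig_ax is always -1, this can never match, so
 * affinity changes appeared to fail.  The fix drops the check entirely,
 * since an interrupt is never moved to a different vector on the same
 * CPU anyway.  This helper is purely illustrative. */
static bool arrived_on_expected_vector(struct pt_regs *regs,
				       unsigned int expected)
{
	return regs->orig_ax == expected;	/* always false now */
}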
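The third fix is the empty-cpumask guard in the matrix allocator. A
minimal sketch of that kind of guard, assuming a hypothetical
pick_target_cpu() helper rather than the actual kernel/irq/matrix.c code.

#include <linux/cpumask.h>
#include <linux/errno.h>

/* Hypothetical selection helper built on the kernel's cpumask API. */
static int pick_target_cpu(const struct cpumask *msk)
{
	unsigned int cpu, best = nr_cpu_ids;

	/*
	 * The missing check: on uniprocessor builds for_each_cpu() used
	 * to ignore the mask and unconditionally visit CPU 0, so an
	 * empty mask would still yield an "assignment".
	 */
	if (cpumask_empty(msk))
		return -EINVAL;

	for_each_cpu(cpu, msk) {
		/* Selection policy omitted; take the first eligible CPU. */
		best = cpu;
		break;
	}

	if (best >= nr_cpu_ids)
		return -ENOSPC;
	return best;
}
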
alpha treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
arc treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
arm A set of fixes for lockdep, tracing and RCU: 2020-08-30 11:43:50 -07:00
arm64 A set of fixes for interrupt chip drivers: 2020-08-30 11:56:54 -07:00
c6x treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
csky treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
h8300 treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
hexagon treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
ia64 treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
m68k treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
microblaze treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
mips A set of fixes for lockdep, tracing and RCU: 2020-08-30 11:43:50 -07:00
nds32 A set of fixes for lockdep, tracing and RCU: 2020-08-30 11:43:50 -07:00
nios2 mm/nios2: use general page fault accounting 2020-08-12 10:58:03 -07:00
openrisc treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
parisc treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
powerpc A set of fixes for lockdep, tracing and RCU: 2020-08-30 11:43:50 -07:00
riscv treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
s390 A set of fixes for lockdep, tracing and RCU: 2020-08-30 11:43:50 -07:00
sh treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
sparc treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
um treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
x86 Three interrupt related fixes for X86: 2020-08-30 12:01:23 -07:00
xtensa treewide: Use fallthrough pseudo-keyword 2020-08-23 17:36:59 -05:00
.gitignore
Kconfig A set of timekeeping/VDSO updates: 2020-08-14 14:26:08 -07:00