Merge tag 'sunxi-dt64-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux into next/dt64
Pull "Allwinner arm64 changes for 4.11" from Maxime Ripard:
Some patches related to the arm64 Allwinner SoCs, most notably:
- Support for the MMC
- Support for the USB and MUSB controllers
- New boards: Bananapi M64
* tag 'sunxi-dt64-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux:
arm64: allwinner: add BananaPi-M64 support
arm64: allwinner: a64: add UART1 pin nodes
arm64: allwinner: pine64: add MMC support
arm64: allwinner: a64: Increase the MMC max frequency
arm64: allwinner: a64: Add MMC pinctrl nodes
arm64: allwinner: a64: Add MMC nodes
arm64: dts: allwinner: Remove no longer used pinctrl/sun4i-a10.h header
arm64: dts: enable the MUSB controller of Pine64 in host-only mode
arm64: dts: add MUSB node to Allwinner A64 dtsi
arm64: dts: allwinner: enable EHCI1, OHCI1 and USB PHY nodes in Pine64
arm64: dts: allwinner: sort the nodes in sun50i-a64-pine64.dts
arm64: dts: allwinner: add USB1-related nodes of Allwinner A64
Use is_vmalloc_addr() to check if an address is a vmalloc address
instead of checking VMALLOC_START and VMALLOC_END manually.
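A minimal illustration of the pattern being replaced (do_vmalloc_case() is
just a placeholder for whatever the real call site does):

  /* before: open-coded range check */
  if (addr >= VMALLOC_START && addr < VMALLOC_END)
          do_vmalloc_case();

  /* after: let the helper decide */
  if (is_vmalloc_addr((void *)addr))
          do_vmalloc_case();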
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Delete an assignment to the local variable "ret" in an if branch
because it was already initialised to the same value.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Mac interrupt code has been debugged. The Penguin deficiencies that
still cause unhandled interrupts aren't fixable here. Besides,
interrupts are fast and frequent and these printk statements
were never really useful IMO. Remove them.
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
In macints.c there is some startup code which disables the SONIC interrupt
in an attempt to avoid an unhandled slot interrupt, which would be fatal.
This only works on those machines where the SONIC device is on-board.
When the mac_sonic driver is built-in, there's little point in doing this,
because the device will be initialized a few seconds later anyway. But
when mac_sonic is a module, the window for an unhandled interrupt is
longer.
Either way, we've already run the gauntlet for 5 or 10 seconds by the time
we get around to disabling this particular device. It's only by sheer luck
that we got this far.
Really, this is too little too late. The general problem of unhandled
early interrupts also affects other devices on other models. There are
better ways to resolve this problem.
1) When using the Penguin bootloader, boot Mac OS with extensions disabled
(by holding down the shift key at startup or by use of the Extensions
Manager control panel). The Penguin docs already contain this advice,
as it is always effective.
2) Have the Penguin bootloader disable the device. It already attempts
to disable slot interrupts. But since some hardware cannot mask slot
interrupts, Penguin should probably close the relevant device
drivers.
3) Use Emile instead of Penguin. AFAIK the boot ROM never enables network
device interrupts and hence they don't need to be disabled.
Remove this hack. It requires maintenance and it doesn't solve the
problem. It improves the odds for a few models, but so does setting
CONFIG_MAC_SONIC=y.
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
mac_nmi_handler() is useless in its present form and locks up my PowerBook
180. Let's throw out the dead code and make it do something useful: print
a register dump and a stack trace.
mac_debug_handler() is also dead code. Remove it along with its static
data.
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
arch/m68k/mac/misc.c does not use any miscdevice so the
inclusion of linux/miscdevice.h is unnecessary.
This patch removes this unnecessary include.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Return the error code -ENOMEM from the memory allocation error handling
case instead of 0, as done elsewhere in this function.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds the hypercall numbers and wrapper functions for the hash page
table resizing hypercalls.
These hypercall numbers are defined in the PAPR ACR "HPT resizing
option".
It also adds a new firmware feature flag to track the presence of the
HPT resizing calls.
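A hedged sketch of the shape of such a wrapper on pseries (the hypercall
numbers shown are placeholders, not the values assigned by the PAPR ACR):

  #define H_RESIZE_HPT_PREPARE  0x36C   /* placeholder value */
  #define H_RESIZE_HPT_COMMIT   0x370   /* placeholder value */

  static inline long plpar_resize_hpt_prepare(unsigned long flags,
                                              unsigned long shift)
  {
          return plpar_hcall_norets(H_RESIZE_HPT_PREPARE, flags, shift);
  }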
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Otherwise KVM will fail to pass them through to the host.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The IPIs come in as HVI not EE, so we need to test the appropriate
SRR1 bits. The encoding is such that it won't have false positives
on P7 and P8 so we can just test it like that. We also need to handle
the icp-opal variant of the flush.
Fixes: d74361881f ("powerpc/xics: Add ICP OPAL backend")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Three tiny changes to the ERAT flushing logic: First don't make
it depend on DD1. It hasn't been decided yet but we might run
DD2 in a mode that also requires explicit flushes for performance
reasons so make it unconditional. We also add a missing isync, and
finally remove the flush from _tlbiel_va as it is only necessary
for congruence-class invalidations (PID, LPID and full TLB), not
targeted invalidations.
Fixes: 96ed1fe511 ("powerpc/mm/radix: Invalidate ERAT on tlbiel for POWER9 DD1")
Cc: stable@vger.kernel.org # v4.9+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This reverts commit 020eb3daab.
Gabriel C reports that it causes his machine to not boot, and we haven't
tracked down the reason for it yet. Since the bug it fixes has been
around for a longish time, we're better off reverting the fix for now.
Gabriel says:
"It hangs early and freezes with a lot RCU warnings.
I bisected it down to :
> Ruslan Ruslichenko (1):
> x86/ioapic: Restore IO-APIC irq_chip retrigger callback
Reverting this one fixes the problem for me..
The box is a PRIMERGY TX200 S5 , 2 socket , 2 x E5520 CPU(s) installed"
and Ruslan and Thomas are currently stumped.
Reported-and-bisected-by: Gabriel C <nix.or.die@gmail.com>
Cc: Ruslan Ruslichenko <rruslich@cisco.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@kernel.org # for the backport of the original commit
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Enable all applicable CPUfreq options.
Signed-off-by: Markus Mayer <mmayer@broadcom.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Turn on CPU_SUPPORTS_CPUFREQ and MIPS_EXTERNAL_TIMER for BMIPS.
Signed-off-by: Markus Mayer <mmayer@broadcom.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Ran "make savedefconfig" to bring bmips_stb_defconfig up to date.
Signed-off-by: Markus Mayer <mmayer@broadcom.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Recent versions of OPAL can provide names for the various OPAL interrupts,
so let's use them. This also modernises the code that fetches the
interrupt array to use the helpers provided by the generic code instead
of hand-parsing the property.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Free irqs on error, check allocation of names, consolidate error
handling, whitespace.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
On some CAPP errors we see console messages that print unknown HMIs for
which CAPI recovery is in progress. This patch fixes this by printing
the correct error info for HMIs generated due to CAPP recovery.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
These are common on bare metal machines, so put them in the defconfig.
This adds 216KB to the vmlinux size.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Currently in arm64's copy_{to,from}_user, we only check the
source/destination object size if access_ok() tells us the user access
is permissible.
However, in copy_from_user() we'll subsequently zero any remainder on
the destination object. If we failed the access_ok() check, that applies
to the whole object size, which we didn't check.
To ensure that we catch that case, this patch hoists check_object_size()
to the start of copy_from_user(), matching __copy_from_user() and
__copy_to_user(). To make all of our uaccess copy primitives consistent,
the same is done to copy_to_user().
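A simplified sketch of the resulting ordering (not the literal arm64 source):

  static inline unsigned long __must_check
  copy_from_user(void *to, const void __user *from, unsigned long n)
  {
          unsigned long res = n;

          check_object_size(to, n, false);  /* hoisted: runs even if access_ok() fails */
          if (access_ok(VERIFY_READ, from, n))
                  res = __arch_copy_from_user(to, from, n);
          if (res)
                  memset(to + (n - res), 0, res);  /* zero the uncopied remainder */
          return res;
  }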
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Arnd Bergmann:
- A relatively large patch restores booting on i.MX platforms that
failed to boot after a cleanup was merged for v4.10.
- A quirk for USB needs to be enabled on the STi platform
- On the Meson platform, we saw memory corruption with part of the
memory used by the secure monitor, so we have to stay out of that
area.
- The same platform also has a problem with ethernet under load, which
is fixed by disabling EEE negotiation.
- imx6dl has an incorrect pin configuration, which prevents SPI from
working.
- Two maintainers have lost their access to their email addresses, so
we should update the MAINTAINERS file before the release
- Renaming one of the orion5x linkstation models to help simplify the
debian install.
- A couple of fixes for build warnings that were introduced during
v4.10-rc.
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: defconfigs: make NF_CT_PROTO_SCTP and NF_CT_PROTO_UDPLITE built-in
MAINTAINERS: socfpga: update email for Dinh Nguyen
ARM: orion5x: fix Makefile for linkstation-lschl.dtb
ARM: dts: orion5x-lschl: More consistent naming on linkstation series
ARM: dts: orion5x-lschl: Fix model name
MAINTAINERS: change email address from atmel to microchip
MAINTAINERS: at91: change email address
ARM64: dts: meson-gx: Add firmware reserved memory zones
ARM64: dts: meson-gxbb-odroidc2: fix GbE tx link breakage
ARM: dts: STiH407-family: set snps,dis_u3_susphy_quirk
ARM: dts: imx: Pass 'chosen' and 'memory' nodes
ARM: dts: imx6dl: fix GPIO4 range
ARM: imx: hide unused variable in #ifdef
Emulate read and write operations to CNTP_TVAL, CNTP_CVAL and CNTP_CTL.
Now VMs are able to use the EL1 physical timer.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM traps EL1 phys timer accesses from VMs, but it doesn't handle
those traps. This results in the VM being terminated. Instead, set a handler for
the EL1 phys timer access, and inject an undefined exception as an
intermediate step.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
When scheduling a background timer, consider both the virtual and the
physical timer and pick the earliest expiration time.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Initialize the emulated EL1 physical timer with the default irq number.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Now that we have a separate structure for timer context, make functions
generic so that they can work with any timer context, not just the
virtual timer context. This does not change the virtual timer
functionality.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Make cntvoff part of each timer context. This is helpful to abstract the
kvm timer functions to work with a timer context without considering the
timer type (e.g. physical timer or virtual timer).
This would also pave the way for doing adjustments of cntvoff on a
per-CPU basis if that should ever make sense.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The sysfs cpu_capacity entry for each CPU has nothing to do with
PROC_FS, nor is it in the /proc/sys path.
Remove the ifdef.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-and-suggested-by: Sudeep Holla <sudeep.holla@arm.com>
Fixes: be8f185d8a ('arm64: add sysfs cpu_capacity attribute')
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bit 0x100 of a page table, segment table or region table entry
can be used to disallow code execution for the virtual addresses
associated with the entry.
There is one tricky bit: the system call to return from a signal
is part of the signal frame written to the user stack. With a
non-executable stack this would stop working. To avoid breaking
things the protection fault handler checks the opcode that caused
the fault for 0x0a77 (sys_sigreturn) and 0x0aad (sys_rt_sigreturn)
and injects a system call. This is preferable to the alternative
solution with a stub function in the vdso because it works for
vdso=off and statically linked binaries as well.
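An illustrative sketch of the opcode check (not the actual s390 fault
handler code):

  u16 opcode;

  /* 0x0a77 = svc for sys_sigreturn, 0x0aad = svc for sys_rt_sigreturn */
  if (get_user(opcode, (u16 __user *) regs->psw.addr) == 0 &&
      (opcode == 0x0a77 || opcode == 0x0aad)) {
          /* turn the protection fault into a system call */
  }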
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add hardware capability bits and feature tags to /proc/cpuinfo for
the "Vector Packed Decimal Facility" (tag "vxd" / hwcap bit 2^12)
and the "Vector Enhancements Facility 1" (tag "vxe" / hwcap bit 2^13).
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The current implementation of setup_randomness uses the stack address
and therefore the pointer to the SYSIB 3.2.2 block as input data
address. Furthermore the length of the input data is the number of
virtual-machine description blocks which is typically one.
This means that typically a single zero byte is fed to
add_device_randomness.
Fix both of these and use the address of the first virtual machine
description block as input data address and also use the correct
length.
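In effect the call ends up looking roughly like this (a sketch; the sysinfo
field names are assumptions):

  /* feed the VM description blocks, not the stack address */
  if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
          add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);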
Fixes: bcfcbb6bae ("s390: add system information as device randomness")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The early vt220 sclp printk code added an extra new line to each
printed multi-line text. If used for the early sclp console this will
lead to numerous extra new lines. Therefore get rid of this semantic
and require that each string to be printed contains a line feed
character if a new line is wanted.
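With the new semantics a caller that wants a newline simply includes it in
the string (the message text here is only an example):

  sclp_early_printk("example early boot message\n");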
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This patch
- unifies the old sclp early code and the sclp early printk code, so
they can use common functions
- makes sure all sclp early functions and variables have the same
"sclp_early" prefix
- converts the sclp early printk code into readable code by using
existing data structures instead of hard coded magic arrays
- splits the early sclp code into two files: sclp_early.c and
sclp_early_core.c. The core file contains everything that is
required by the kernel decompressor and may not call functions not
contained within the core file. Otherwise the result would be a
link error.
- changes interrupt handling to be completely synchronous. The old
early sclp code had a small window which allowed several interrupts
to be received instead of exactly the single expected interrupt. This
hid a subtle potential bug, which is fixed with this large
rework.
- contains a couple of small cleanups.
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Move the early sclp printk code to the drivers folder where the
rest of the sclp code can also be found. This way it is possible to use the
sclp private header files for further cleanups.
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When autonuma (Automatic NUMA balancing) marks a PTE inaccessible it
clears all the protection bits but leaves the PTE valid.
With the Radix MMU, an attempt at executing from such a PTE will
take a fault with bit 35 of SRR1 set "SRR1_ISI_N_OR_G".
It is thus incorrect to treat all such faults as errors. We should
pass them to handle_mm_fault() for autonuma to deal with. The case
of pages that are really not executable is handled by the existing
test for VM_EXEC further down.
That leaves us with catching the kernel attempts at executing user
pages. We can catch that earlier, even before we do find_vma.
It is never valid on powerpc for the kernel to take an exec fault
to begin with. So fold that test with the existing test for the
kernel faulting on kernel addresses to bail out early.
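A simplified sketch of the early bail-out being described (the real
do_page_fault() has more cases around this):

  /* the kernel never legitimately takes an exec fault, nor should it
   * be faulting on a user address from this path */
  if (!user_mode(regs) && (is_exec || address >= TASK_SIZE))
          return SIGSEGV;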
Fixes: 1d18ad0268 ("powerpc/mm: Detect instruction fetch denied and report")
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Fix rebase breakage from commit 55dd00a73a ("KVM: x86: add
KVM_HC_CLOCK_PAIRING hypercall", 2017-01-24), courtesy of the
"I could have sworn I had pushed the right branch" department.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This merges in a fix which touches both PPC and KVM code,
which was therefore put into a topic branch in the powerpc
tree.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
On mt8173 the reference clock comes directly from the 26M oscillator
and is a fixed clock in the DTS which is always turned on, so we
ignored it before. But on some platforms it comes from a PLL and needs
to be controlled, so add it here, whether it is a fixed clock or not.
Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently we have code inline in the arch timer probe path to cater for
Freescale erratum A-008585, complete with ifdeffery. This is a little
ugly, and will get worse as we try to add more errata handling.
This patch refactors the handling of Freescale erratum A-008585. Now the
erratum is described in a generic arch_timer_erratum_workaround
structure, and the probe path can iterate over these to detect errata
and enable workarounds.
This will simplify the addition and maintenance of code handling
Hisilicon erratum 161010101.
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
[Mark: split patch, correct Kconfig, reword commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
The conflict was an interaction between a bug fix in the
netvsc driver in 'net' and an optimization of the RX path
in 'net-next'.
Signed-off-by: David S. Miller <davem@davemloft.net>
Both of these options are poorly named. The features they provide are
necessary for system security and should not be considered debug only.
Change the names to CONFIG_STRICT_KERNEL_RWX and
CONFIG_STRICT_MODULE_RWX to better describe what these options do.
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Jessica Yu <jeyu@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
There are multiple architectures that support CONFIG_DEBUG_RODATA and
CONFIG_SET_MODULE_RONX. These options also now have the ability to be
turned off at runtime. Move these to an architecture independent
location and make these options def_bool y for almost all of those
arches.
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
This patch adds an OSTM driver for the Renesas architecture.
The OS Timer (OSTM) has independent channels that can be
used as free-running or interval timers.
This driver uses the first probed device as a clocksource
and then any additional devices as clock events.
Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
The cleanup removed the file, but the Makefile was overlooked. Purge it there too.
Fixes: 4b483ed0be ('ARM: ux500: cut some platform data')
Signed-off-by: Olof Johansson <olof@lixom.net>
Merge tag 'kvm-s390-next-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD
KVM: s390: Fixes and features for 4.11 (via kvm/next)
- enable some simd extensions for guests
- enable nx for guests
- debug log for cpu model
- PER fixes
- remove bitwise annotation from ar_t
- detect guests in operation exception program check loops
- fix potential null-pointer dereference for ucontrol guests
- also contains merge for fix that went into 4.10 to avoid conflicts
Merge tag 'kvm_mips_4.11_1' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/kvm-mips into HEAD
KVM: MIPS: GVA/GPA page tables, dirty logging, SYNC_MMU etc
Numerous MIPS KVM fixes, improvements, and features for 4.11, many of
which continue to pave the way for VZ support, the most interesting of
which are:
- Add GVA->HPA page tables for T&E, to cache GVA mappings.
- Generate fast-path TLB refill exception handler which loads host TLB
entries from GVA page table, avoiding repeated guest memory
translation and guest TLB lookups.
- Use uaccess macros when T&E needs to access guest memory, which with
GVA page tables and the Linux TLB refill handler improves robustness
against TLB faults and fixes EVA hosts.
- Use BadInstr/BadInstrP registers when available to obtain instruction
encodings after a synchronous trap.
- Add GPA->HPA page tables to replace the inflexible linear array,
allowing for multiple sparsely arranged memory regions.
- Properly implement dirty page logging.
- Add KVM_CAP_SYNC_MMU support so that changes in GPA mappings become
effective in guests even if they are already running, allowing for
copy-on-write, KSM, idle page tracking, swapping, and guest memory
ballooning.
- Add KVM_CAP_READONLY_MEM support, so writes to specified memory
regions are treated as MMIO.
- Implement proper CP0_EBase support in T&E.
- Expose a few more missing CP0 registers to userland.
- Add KVM_CAP_NR_VCPUS and KVM_CAP_MAX_VCPUS support, and allow up to 8
VCPUs to be created in a VM.
- Various cleanups and dropping of dead and duplicated code.
The big feature this time is support for POWER9 using the radix-tree
MMU for host and guest. This required some changes to arch/powerpc
code, so I talked with Michael Ellerman and he created a topic branch
with this patchset, which I merged into kvm-ppc-next and which Michael
will pull into his tree. Michael also put in some patches from Nick
Piggin which fix bugs in the interrupt vector code in relocatable
kernels when coming from a KVM guest.
Other notable changes include:
* Add the ability to change the size of the hashed page table,
from David Gibson.
* XICS (interrupt controller) emulation fixes and improvements,
from Li Zhong.
* Bug fixes from myself and Thomas Huth.
These patches define some new KVM capabilities and ioctls, but there
should be no conflicts with anything else currently upstream, as far
as I am aware.
Add a hypercall to retrieve the host realtime clock and the TSC value
used to calculate that clock read.
Used to implement clock synchronization between host and guest.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
vmx_complete_nested_posted_interrupt() can't fail, let's turn it into
a void function.
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kmap() can't fail, therefore it will always return a valid pointer. Let's
just get rid of the unnecessary checks.
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'sunxi-defconfig-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux into next/defconfig
Pull "Allwinner defconfig changes for 4.11" from Maxime Ripard:
A single patch to enable the thermal DT support in sunxi_defconfig
* tag 'sunxi-defconfig-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux:
ARM: sunxi: Add CONFIG_THERMAL_OF
Utilize the ability to pass board specific MDIO bus information towards a
particular MDIO device, thus allowing us to provide the per-port switch layout
to the Marvell 88E6XXX switch driver.
Since we would end up with conflicting registration paths, do not register the
"dsa" platform device anymore.
Note that the MDIO devices registered by the code in net/dsa/dsa2.c do not
parse a dsa_platform_data, but directly take a dsa_chip_data (specific
to a single switch chip), so we update the different call sites to pass
this structure down to orion_ge00_switch_init().
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'sunxi-core-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux into next/soc
Pull "Allwinner core changes for 4.11" from Maxime Ripard:
Some patches to support two new SoCs: the H2+ and the V3s.
* tag 'sunxi-core-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux:
arm: sunxi: add support for V3s SoC
ARM: sunxi: add support for H2+ SoC
Merge tag 'mvebu-soc-4.11-2' of git://git.infradead.org/linux-mvebu into next/soc
Pull "mvebu soc for 4.11 (part 2)" from Gregory CLEMENT:
SoC part of the support for Marvell switches with integrated
CPUs based on Armada XP: the SMP support is slightly different.
* tag 'mvebu-soc-4.11-2' of git://git.infradead.org/linux-mvebu:
arm: mvebu: support for SMP on 98DX3336 SoC
Merge tag 'sunxi-dt-for-4.11-2' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux into next/dt
Pull "Allwinner DT changes for 4.11, part 2" from Maxime Ripard:
Support for the audio codec and Mali GPU for the A33
* tag 'sunxi-dt-for-4.11-2' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux:
ARM: dts: sun8i: sinlinx: Enable audio nodes
ARM: dts: sun8i: parrot: Enable audio nodes
ARM: dts: sun8i: Add audio codec, dai and card for A33
ARM: sun8i: dt: Add mali node
dt-bindings: gpu: Add Mali Utgard bindings
Merge tag 'sunxi-dt-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux into next/dt
Pull "Allwinner DT changes for 4.11" from Maxime Ripard:
The usual chunk of DT changes, most notably:
- Support for the H2+ and the V3s
- CPUFreq support for the A33
- SPDIF support for the A31 and H3
- New boards: Beelink X2, Lichee Pi One, Lichee Pi Zero,
Orange Pi Zero
* tag 'sunxi-dt-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux: (42 commits)
ARM: dts: sun8i-h3: Add SPDIF to the Beelink X2
ARM: dts: sun8i-h3: Add the SPDIF block to the H3
ARM: dts: sun8i-h3: Add SPDIF TX pin to the H3
ARM: dts: sun8i-h3: Add dts for the Beelink X2 STB
ARM: sun8i: sina33: Enable display
ARM: sun8i: a23/a33: Add the oscillators accuracy
ARM: sun8i: a23/a33: Enable the real LOSC and use it
ARM: dts: sunxi: add support for Lichee Pi Zero board
ARM: dts: sunxi: add dtsi file for V3s SoC
ARM: dts: sun6i: sina31s: Enable USB OTG controller in peripheral mode
ARM: dts: sun8i: reference-design: use AXP223 DTSI
ARM: dts: sun8i: parrot: use AXP223 DTSI
ARM: dts: sun8i: sina33: use AXP223 DTSI
ARM: dts: sun8i: a33-olinuxino: use AXP223 DTSI
ARM: dts: add DTSI for AXP223
dt-bindings: power: axp20x-usb: add axp223 compatible
ARM: dts: sun7i: Add wifi dt node on Banana Pro
ARM: dts: sun6i: Add SPDIF to the Mele I7
devicetree: bindings: Add vendor prefix for Shenzhen Xunlong Software
ARM: dts: sun8i-h3: orange-pi-pc: Enable audio codec
...
The TS-72xx/73xx boards have a CPLD watchdog which is configured to
reset the board after 8 seconds. If the kernel is large enough that
decompressing it takes about that long, we will encounter a spurious
reboot.
Do not pull in ts72xx.h, but instead locally define what we need to
disable the watchdog.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
This platform data is obsolete: the drivers get the DMA
configuration from the device tree, it has been done like that
since the DMA support was merged, and this data has not been used
since. The remaining auxdata is also unused.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Merge tag 'samsung-dt-4.11-3' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux into next/dt
Pull "Samsung DeviceTree update for v4.11, third round" from Krzysztof Kozlowski
1. Add descriptive user-friendly label names for power domains. This
makes debugging easier.
* tag 'samsung-dt-4.11-3' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux:
ARM: dts: exynos: Add labels to all existing power domains
Merge tag 'ux500-dt-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson into next/dt
Pull "Ux500 Device Tree updates for v4.11" from Linus Walleij:
This cleans the device tree a bit and rectifies some
clocking bugs. They are in a sense regressions, but there
is no hurry for this platform.
* tag 'ux500-dt-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson:
ARM: dts: add the AB8500 sysclk to the device trees
ARM: dts: Ux500: move USB PHY pins to PHY device
ARM: dts: push MMC/SD to board and add comments
Merge tag 'mvebu-dt-4.11-3' of git://git.infradead.org/linux-mvebu into next/dt
Merge "mvebu dt for 4.11 (part 3)" from Gregory CLEMENT:
Add support for Marvell switches with integrated CPUs based on Armada XP
* tag 'mvebu-dt-4.11-3' of git://git.infradead.org/linux-mvebu:
ARM: dts: mvebu: Add device tree for db-dxbc2 and db-xc3-24g4xg boards
ARM: dts: mvebu: Add device tree for 98DX3236 SoCs
Since everybody copied my own mistake from the DT binding example,
let's address all the offenders in one swift go.
Most of them got the CPU interface size wrong (4kB, while it should
be 8kB), except for both keystone platforms which got the control
interface wrong (4kB instead of 8kB).
In a few cases where I knew for sure what implementation was used,
I've added the "arm,gic-400" compatible string. I'm 99% sure that
this is what everyone is using, but short of having the TRM for
all the other SoCs, I've left them alone.
Acked-by: Shawn Guo <shawnguo@kernel.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Acked-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Matthias Brugger <matthias.bgg@gmail.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Using native_machine_emergency_restart (called during reboot) will
lead PVH guests to machine_real_restart() where we try to use
real_mode_header which is not initialized.
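One way to express the fix, as a hedged sketch (the PVH-specific handler
name is illustrative, not taken from the patch):

  /* reboot PVH guests via a hypercall instead of letting them reach
   * machine_real_restart(); xen_pvh_emergency_restart is illustrative */
  if (xen_pvh_domain())
          machine_ops.emergency_restart = xen_pvh_emergency_restart;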
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Since we are not using PIC and (at least currently) don't have IOAPIC
we want to make sure that acpi_irq_model doesn't stay set to
ACPI_IRQ_MODEL_PIC (which is the default value). If we allowed it to
stay then acpi_os_install_interrupt_handler() would try (and fail) to
request_irq() for PIC.
Instead we set the model to ACPI_IRQ_MODEL_PLATFORM which will prevent
this from happening.
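The change itself boils down to a one-line assignment in the PVH setup path
(the exact location is an assumption):

  /* avoid the ACPI_IRQ_MODEL_PIC default described above */
  acpi_irq_model = ACPI_IRQ_MODEL_PLATFORM;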
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Start PVH guest at XEN_ELFNOTE_PHYS32_ENTRY address. Setup hypercall
page, initialize boot_params, enable early page tables.
Since this stub is executed before the kernel entry point we cannot use
variables in .bss, which is cleared by the kernel. We explicitly place
variables that are initialized here into .data.
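For instance, roughly (the variable names are illustrative):

  /* keep early-boot state out of .bss, which the kernel clears later */
  static struct boot_params pvh_bootparams __attribute__((section(".data")));
  static struct hvm_start_info pvh_start_info __attribute__((section(".data")));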
While adjusting xen_hvm_init_shared_info() make it use cpuid_e?x()
instead of cpuid() (wherever possible).
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
We are replacing existing PVH guests with new implementation.
We are keeping xen_pvh_domain() macro (for now set to zero) because
when we introduce new PVH implementation later in this series we will
reuse current PVH-specific code (xen_pvh_gnttab_setup()), and that
code is conditioned by 'if (xen_pvh_domain())'. (We will also need
a noop xen_pvh_domain() for !CONFIG_XEN_PVH).
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
The new Xen PVH entry point requires page tables to be setup by the
kernel since it is entered with paging disabled.
Pull the common code out of head_32.S so that mk_early_pgtbl_32() can be
invoked from both the new Xen entry point and the existing startup_32()
code.
Convert resulting common code to C.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: matt@codeblueprint.co.uk
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1481215471-9639-1-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We may or may not have all possible CPUs in MADT on boot but in any
case we're overwriting x86_cpu_to_acpiid mapping with U32_MAX when
acpi_register_lapic() is called again on the CPU hotplug path:
acpi_processor_hotadd_init()
-> acpi_map_cpu()
-> acpi_register_lapic()
As we have the required acpi_id information in acpi_processor_hotadd_init(),
propagate it to acpi_map_cpu() to always keep the x86_cpu_to_acpiid
mapping valid.
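In other words, hand the id down, roughly like this (the exact signature
change is illustrative):

  /* acpi_processor_hotadd_init() already knows pr->acpi_id; pass it to
   * acpi_map_cpu() so acpi_register_lapic() keeps x86_cpu_to_acpiid valid */
  ret = acpi_map_cpu(pr->handle, pr->phys_id, pr->acpi_id, &pr->id);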
Reported-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Print the secure boot status in the x86 setup_arch() function, but otherwise do
nothing more for now. More functionality will be added later, but this at
least allows for testing.
Signed-off-by: David Howells <dhowells@redhat.com>
[ Use efi_enabled() instead of IS_ENABLED(CONFIG_EFI). ]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1486380166-31868-7-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Get the firmware's secure-boot status in the kernel boot wrapper and stash
it somewhere that the main kernel image can find.
The efi_get_secureboot() function is extracted from the ARM stub and (a)
generalised so that it can be called from x86 and (b) made to use
efi_call_runtime() so that it can be run in mixed-mode.
For x86, it is stored in boot_params and can be overridden by the boot
loader or kexec. This allows secure-boot mode to be passed on to a new
kernel.
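A hedged sketch of the x86 side (the boot_params field name is an assumption):

  /* in the x86 EFI boot stub: stash the result for the main kernel */
  boot_params->secure_boot = efi_get_secureboot(sys_table);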
Suggested-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1486380166-31868-5-git-send-email-ard.biesheuvel@linaro.org
[ Small readability edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
efi_call_runtime() is provided for x86 to be able to abstract mixed-mode
support. Provide this for ARM also so that common code works in mixed mode
too.
Suggested-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1486380166-31868-3-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Provide the ability to perform mixed-mode runtime service calls for x86 in
the same way the following commit provided the ability to invoke arbitrary
boot services:
0a637ee612 ("x86/efi: Allow invocation of arbitrary boot services")
Suggested-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1486380166-31868-2-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
There used to be a restriction with KASLR and hibernation, but this is no
longer true, and since commit:
65fe935dd2 ("x86/KASLR, x86/power: Remove x86 hibernation restrictions")
the parameter "kaslr" does no longer exist.
Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Niklas Cassel <niklass@axis.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1486399429-23078-1-git-send-email-niklass@axis.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The debug feature currently uses absolute TOD time stamps for the
debug events. Given that the TOD clock can jump forward and backward
due to STP sync checks the order of debug events can get obfuscated.
Replace the absolute TOD time stamps with a delta to the IPL time
stamp. On a STP sync check the TOD clock correction is added to
the IPL time stamp as well to make the deltas unaffected by STP
sync check.
The readout of the debug feature entries will convert the deltas
back to absolute time stamps based on the Unix epoch.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The data stored by the STSI instruction can be up to a page in size
but the memblock_virt_alloc allocation for tl_info only specifies
16 bytes. The memory after the short allocation is overwritten
every time arch_update_cpu_topology is called.
Fixes: 8c91058022 ("s390/numa: establish cpu to node mapping early")
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Commit bcfcbb6bae ("s390: add system information as device
randomness") intended to add some virtual machine specific information
to the randomness pool.
Unfortunately it uses the page allocator before it is ready to use. As a
result the page allocator always returns NULL and the setup_randomness
function never adds anything to the randomness pool.
To fix this use memblock_alloc and memblock_free instead.
Fixes: bcfcbb6bae ("s390: add system information as device randomness")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Change the device probe test in the via-cuda.c driver so it will load on
Egret-based machines too. Remove the now redundant via-maciisi.c driver.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
All entry points already read the MSR so they can easily do
the right thing.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The branch from hmi_exception_early to hmi_exception_realmode must use
a "relocatable-style" branch, because it is branching from unrelocated
exception code to beyond __end_interrupts.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Without this we will always find the feature disabled.
Fixes: 984d7a1ec6 ("powerpc/mm: Fixup kernel read only mapping")
Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull crypto fixes from Herbert Xu:
- use-after-free in algif_aead
- modular aesni regression when pcbc is modular but absent
- bug causing IO page faults in ccp
- double list add in ccp
- NULL pointer dereference in qat (two patches)
- panic in chcr
- NULL pointer dereference in chcr
- out-of-bound access in chcr
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: chcr - Fix key length for RFC4106
crypto: algif_aead - Fix kernel panic on list_del
crypto: aesni - Fix failure when pcbc module is absent
crypto: ccp - Fix double add when creating new DMA command
crypto: ccp - Fix DMA operations when IOMMU is enabled
crypto: chcr - Check device is allocated before use
crypto: chcr - Fix panic on dma_unmap_sg
crypto: qat - zero esram only for DH85x devices
crypto: qat - fix bar discovery for c62x
start,size has the benefit of being easier to search for (start,end
usually gives you the preceding vector from the one you want, as first
result).
Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Somewhere along the line, search/replace left some naming garbled,
and untidy alignment (aka. mpe stuffed it up). Might as well fix them
all up now while git blame history doesn't extend too far.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Atomic operation function symbols are exported when
CONFIG_ARM64_LSE_ATOMICS is defined. Prefix them with notrace, so that
a user cannot trace these functions. Tracing these functions causes a
kernel crash.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The NUMA code may get confused by the presence of NOMAP regions within
zones, resulting in spurious BUG() checks where the node id deviates
from the containing zone's node id.
Since the kernel has no business reasoning about node ids of pages it
does not own in the first place, enable CONFIG_HOLES_IN_ZONE to ensure
that such pages are disregarded.
Acked-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
A typical SMP system expects cache coherency. Initial NPS platform
support was slated to be SMP w/o cache coherency.
However it seems the platform now selects that option, so there is no
point in keeping it around.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
There is no need to specify a list of interrupts in the declaration
of the IDU interrupt controller anymore, since the kernel can obtain
the number of supported interrupts from the build register.
Also delete support for the second parameter for devices which
are connected to the IDU because it is not used anywhere.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This enhancement is needed to allow masking all available common interrupts
in the IDU interrupt controller at boot time, since the kernel can
discover the number of them from the build register. There is also no
longer any need to specify in the device tree the list of core
interrupts used by the IDU. E.g. before:
  idu_intc: idu-interrupt-controller {
          compatible = "snps,archs-idu-intc";
          interrupt-controller;
          interrupt-parent = <&core_intc>;
          #interrupt-cells = <2>;
          interrupts = <24 25 26 27 28 29 30 31>;
  };
and after:
  idu_intc: idu-interrupt-controller {
          compatible = "snps,archs-idu-intc";
          interrupt-controller;
          interrupt-parent = <&core_intc>;
          #interrupt-cells = <2>;
  };
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
After reset, all interrupts in the core interrupt controller have
the highest priority, P0. If the platform supports Fast IRQs and
has more than one bank of registers, then the CPU automatically switches
register banks when a P0 interrupt comes in.
The problem is that the kernel expects that, by default, register bank
switching is not used by any interrupt. It is necessary to set a
default nonzero priority for all available interrupts to avoid
undefined behaviour.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Currently the Kconfig knob ARC_NUMBER_OF_INTERRUPTS is used as an indicator
of the hard IRQ count. But it is flawed in that it doesn't affect
- NR_IRQS : the number of virtual interrupts
- NR_CPU_IRQS : the number of hardware interrupts
Moreover, the actual hardware IRQ count might still not be the same as
ARC_NUMBER_OF_INTERRUPTS. So use the information available in the
Build Configuration Registers and get rid of the Kconfig option.
We still need "some" build time info about the IRQ count to set up a
sufficient number of vector table entries. This is done with a
sufficiently large NR_CPU_IRQS which will eventually be used solely for
that purpose (subsequent patches will remove its usage elsewhere).
So to summarize what this patch does:
* NR_CPU_IRQS defines the maximum number of hardware interrupts.
* Remove the ARC_NUMBER_OF_INTERRUPTS option and create the interrupt
table for all possible hardware interrupts.
* Increase the maximum number of virtual IRQs to 512. ARCv2 can
support 240 interrupts in the core interrupt controller
and 128 interrupts in the IDU. Thus 512 virtual IRQs must be
enough for most board configurations.
This patch leads to NR_CPU_IRQS being defined in 2 places, to reduce the
overall churn. The next patch will remove the 2nd definition anyway.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: reworked the changelog a bit]
It is better to use it instead of magic numbers.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
ARC VDK for EVSS uses UIO for communication with Embedded Vision
Subsystem.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
In recent VDKs, ARC cores are configured as "run on reset",
which made the existing kernel configuration outdated, to the effect that
slave cores never start execution of the code, keeping only the master
online.
With this fix we're again in sync with the VDK platform.
And while at it, we regenerate the defconfig via savedefconfig so default
options are now excluded.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The symbols can no longer be used as loadable modules, leading to a harmless Kconfig
warning:
arch/arm/configs/imote2_defconfig:60:warning: symbol value 'm' invalid for NF_CT_PROTO_UDPLITE
arch/arm/configs/imote2_defconfig:59:warning: symbol value 'm' invalid for NF_CT_PROTO_SCTP
arch/arm/configs/ezx_defconfig:68:warning: symbol value 'm' invalid for NF_CT_PROTO_UDPLITE
arch/arm/configs/ezx_defconfig:67:warning: symbol value 'm' invalid for NF_CT_PROTO_SCTP
Let's make them built-in.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
More consistent naming for some orion5x-based boards, helping the
switch to device tree for Debian users.
-----BEGIN PGP SIGNATURE-----
iIEEABECAEEWIQQYqXDMF3cvSLY+g9cLBhiOFHI71QUCWJNvuSMcZ3JlZ29yeS5j
bGVtZW50QGZyZWUtZWxlY3Ryb25zLmNvbQAKCRALBhiOFHI71bUAAJ9F2ae0LEvo
4Fu44238w1Kr6hC6LQCeO16PL46cLMTZj2E/hOy/9eytbM0=
=IcBG
-----END PGP SIGNATURE-----
Merge tag 'mvebu-fixes-4.10-1' of git://git.infradead.org/linux-mvebu into fixes
Pull "mvebu fixes for 4.10 (part 1)" from Gregory CLEMENT:
More consistent naming for some orion5x-based boards, helping the
switch to device tree for Debian users.
* tag 'mvebu-fixes-4.10-1' of git://git.infradead.org/linux-mvebu:
ARM: orion5x: fix Makefile for linkstation-lschl.dtb
ARM: dts: orion5x-lschl: More consistent naming on linkstation series
ARM: dts: orion5x-lschl: Fix model name
This adds an emulation layer for implementations
that lack the l.lwa and l.swa instructions.
It handles these instructions both in kernel space and
user space.
Signed-off-by: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
[shorne@gmail.com: Added delay slot pc adjust logic]
Signed-off-by: Stafford Horne <shorne@gmail.com>
This brings it in line with the other setup operations done, like the cache
enables _ic_enable and _dc_enable. Also, this is going to make it
easier to initialize additional CPUs when SMP is introduced.
Signed-off-by: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
[shorne@gmail.com: Added commit body]
Signed-off-by: Stafford Horne <shorne@gmail.com>
The stack size was hard coded to 0x2000, use the standard THREAD_SIZE
definition loaded from thread_info.h.
Signed-off-by: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
[shorne@gmail.com: Added body to the commit message]
Signed-off-by: Stafford Horne <shorne@gmail.com>
By slightly reorganizing the code, the number of registers
used in the tlb miss handlers can be reduced by two,
thus removing the need to save them to memory.
Also, some dead and commented out code is removed.
No functional change.
Signed-off-by: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Motivation for this is to be able to print the way information
properly in print_cpuinfo(), instead of hardcoding it to one.
Signed-off-by: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Signed-off-by: Jonas Bonn <jonas@southpole.se>
[shorne@gmail.com fixed conflict with show_cpuinfo change]
Signed-off-by: Stafford Horne <shorne@gmail.com>
The sparse IRQ framework is preferred nowadays so switch over to it.
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Back when this was first written, dma_supported() was somewhat of a
murky mess, with subtly different interpretations being relied upon in
various places. The "does device X support DMA to address range Y?"
uses assuming Y to be physical addresses, which motivated the current
iommu_dma_supported() implementation and are alluded to in the comment
therein, have since been cleaned up, leaving only the far less ambiguous
"can device X drive address bits Y" usage internal to DMA API mask
setting. As such, there is no reason to keep a slightly misleading
callback which does nothing but duplicate the current default behaviour;
we already constrain IOVA allocations to the iommu_domain aperture where
necessary, so let's leave DMA mask business to architecture-specific
code where it belongs.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Sometimes (e.g. early boot) a guest is broken in such ways that it loops
100% delivering operation exceptions (illegal operation) but the pgm new
PSW is not set properly. This will result in code being read from
address zero, which usually contains another illegal op. Let's detect
this case and return to userspace. Instead of only detecting
this for address zero, apply a heuristic that will work for any program
check new PSW.
We do not want guest problem state to be able to trigger a guest panic,
e.g. by faulting on an address that is the same as the program check
new PSW, so we check for the problem state bit being off.
With proper handling in userspace we
a: get rid of CPU consumption of such broken guests
b: keep the program old PSW. This allows finding out the original illegal
operation - making debugging such early boot issues much easier than
with single stepping
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
User controlled KVM guests do not support the dirty log, as they have
no single gmap that we can check for changes.
As they have no single gmap, kvm->arch.gmap is NULL and any further
reference to it for dirty checking will result in a NULL
dereference.
Let's return -EINVAL if a caller tries to sync dirty logs for a
UCONTROL guest.
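A minimal sketch of the check described above (placement inside
kvm_vm_ioctl_get_dirty_log() and use of the existing kvm_is_ucontrol()
helper are assumptions here):
    /* user controlled guests have no single gmap to derive a dirty log from */
    if (kvm_is_ucontrol(kvm))
            return -EINVAL;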
Fixes: 15f36eb ("KVM: s390: Add proper dirty bitmap support to S390 kvm.")
Cc: <stable@vger.kernel.org> # 3.16+
Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reported-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Use hlist_for_each_entry() in the first loop in the kretprobe
trampoline_handler() function, because it doesn't change the hlist.
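As a hedged illustration of the pattern (the kretprobe_instance field
names follow the existing kretprobe code; the surrounding logic is
elided):
    struct kretprobe_instance *ri;
    struct hlist_node *tmp;

    /* first loop: read-only traversal, the plain iterator is sufficient */
    hlist_for_each_entry(ri, head, hlist) {
            if (ri->task != current)
                    continue;
            /* ... inspect ri, nothing is removed here ... */
    }

    /* loops that unlink entries keep the _safe variant */
    hlist_for_each_entry_safe(ri, tmp, head, hlist) {
            /* ... may hlist_del() the current entry ... */
    }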
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/148637493309.19245.12546866092052500584.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Remove the 'HAVE_KPROBES' dependency from the HAVE_KRETPROBES line,
since HAVE_KPROBES is already selected unconditionally in the Kconfig
line above this one.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David A. Long <dave.long@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/148637486369.19245.316601692744886725.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Hardware flow-control capability must be specified at a platform
level in order to inform the ASC driver that the platform is capable
(i.e. whether the lines are wired up, etc.). STiH4{07,10} devices are indeed
capable, so let's provide the property.
Acked-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Having just defined some new Pinctrl groups for when HW flow-control
is {en,dis}abled, let's reference them for use within the driver.
Acked-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Each serial port which supports HW flow-control should have 2 Pinctrl
groups. One for when HW flow-control is in progress, where the IP
will take over controlling the lines and another group which enables
the lines to be toggled using GPIO mechanisms.
Acked-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When hardware flow-control is disabled, manual toggling of the UART's
reset line (RTS) using userland applications (e.g. stty) is not
possible, since the ASC IP does not provide this functionality in the
same way as some other IPs do. Thus, we have to do this manually.
This patch configures the UART RTS line as a GPIO for manipulation
within the UART driver when HW flow-control is not enabled.
Acked-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This adds AUX vectors for the L1I,D, L2 and L3 cache levels
providing for each cache level the size of the cache in bytes
and the geometry (line size and number of ways).
We chose to not use the existing alpha/sh definition which
packs all the information in a single entry per cache level as
it is too restricted to represent some of the geometries used
on POWER.
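A userspace sketch of consuming these vectors; the AT_* constant names
come from this series, and the exact packing of the geometry word (ways
in the upper half, line size in the lower half) is an assumption made
here for illustration:
    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
            unsigned long size = getauxval(AT_L1D_CACHESIZE);
            unsigned long geo  = getauxval(AT_L1D_CACHEGEOMETRY);

            /* assumed encoding: associativity << 16 | line size in bytes */
            printf("L1D: %lu bytes, %lu-way, %lu-byte lines\n",
                   size, geo >> 16, geo & 0xffff);
            return 0;
    }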
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
All shipping firmware versions have it wrong in the device-tree
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Retrieved from device-tree when available
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
We have two sets of identical struct members for the I and D sides,
and mostly identical bunches of code to parse the device-tree to
populate them. Instead, make a ppc_cache_info structure with one
copy for I and one for D.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
It will be used to calculate the associativity
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In a number of places we called "cache line size" what is actually
the cache block size, which in the powerpc architecture, means the
effective size to use with cache management instructions (it can
be different from the actual cache line size).
We fix the naming across the board and properly retrieve both
pieces of information when available in the device-tree.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
We don't patch instructions based on the cache lines or block
sizes these days.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The variables are defined twice, in setup_32.c and setup_64.c; define them
once in setup-common.c instead.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
It's a kernel-private macro; it doesn't belong there.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Enable the audio codec and the audio DAI for the sun8i A33 Sinlinx board.
Signed-off-by: Mylène Josserand <mylene.josserand@free-electrons.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Enable the audio codec and the audio DAI for the sun8i R16 Parrot board.
Signed-off-by: Mylène Josserand <mylene.josserand@free-electrons.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Add the audio codec, DAI and a simple card to be able to use the
audio stream of the built-in codec on the sun8i SoC.
This commit also adds audio routing for the sound card node to link
the analog DAPM widgets (Right/Left DAC) and the digital ones, as they
are created in different drivers.
Signed-off-by: Mylène Josserand <mylene.josserand@free-electrons.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
After:
a33d331761 ("x86/CPU/AMD: Fix Bulldozer topology")
our SMT scheduling topology for Fam17h systems is broken, because
the ThreadId is included in the ApicId when SMT is enabled.
So, without further decoding cpu_core_id is unique for each thread
rather than the same for threads on the same core. This didn't affect
systems with SMT disabled. Make cpu_core_id be what it is defined to be.
Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: <stable@vger.kernel.org> # 4.9
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170205105022.8705-2-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit:
a33d331761 ("x86/CPU/AMD: Fix Bulldozer topology")
restored the initial approach we had with the Fam15h topology of
enumerating CU (Compute Unit) threads as cores. And this is still
correct - they're beefier than HT threads but still have some
shared functionality.
Our current approach has a problem with the Mad Max Steam game, for
example. Yves Dionne reported a certain "choppiness" while playing on
v4.9.5.
That problem stems most likely from the fact that the CU threads share
resources within one CU and when we schedule to a thread of a different
compute unit, this incurs latency due to migrating the working set to a
different CU through the caches.
When the thread siblings mask mirrors that aspect of the CUs and
threads, the scheduler pays attention to it and tries to schedule within
one CU first, which takes care of the latency, of course.
Reported-by: Yves Dionne <yves.dionne@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: <stable@vger.kernel.org> # 4.9
Cc: Brice Goglin <Brice.Goglin@inria.fr>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yazen Ghannam <yazen.ghannam@amd.com>
Link: http://lkml.kernel.org/r/20170205105022.8705-1-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Enable ring 3 MONITOR/MWAIT for Intel Xeon Phi codenamed Knights Mill. We
can't guarantee that this (KNM) will be the last CPU model that needs this
hack. But, we do recognize that this is far from optimal, and there is an
effort to ensure we don't keep extending this hack forever.
Signed-off-by: Piotr Luc <piotr.luc@intel.com>
Cc: Piotr.Luc@intel.com
Cc: dave.hansen@linux.intel.com
Link: http://lkml.kernel.org/r/1484918557-15481-6-git-send-email-grzegorz.andrejczuk@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Pull irq fixes from Thomas Gleixner:
- Prevent double activation of interrupt lines, which causes problems
on certain interrupt controllers
- Handle the fallout of the above because x86 (ab)uses the activation
function to reconfigure interrupts under the hood.
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/irq: Make irq activate operations symmetric
irqdomain: Avoid activating interrupts more than once
Fix a regression that prevented migration between hosts with different
XSAVE features even if the missing features were not used by the guest
(for stable).
-----BEGIN PGP SIGNATURE-----
iQEcBAABCAAGBQJYlf83AAoJEED/6hsPKofoQI8H/2Y9v5FkIMUeLVPf5nskcomw
pV/IqqMJEQ0sEp0+fkGhk15nykrVpXfOdqgGD8FI9Xk8rlkTEcUSGMGvfXrIk0ir
fzX27ASWrHvyjso+6XZzarSUhMFiBljU+NDcqWgjAeYEA1H+fxtxcomx+KiC1D1H
Q3kYMWTDQ0q/QU0q/4ohVM0gfVIunmVjoJaMK3tlrPP+w4MgMu2WALi0BlZKyugZ
fcVxzgGxPKoxAfXoFHohS7jKhLX9rF8MJoSH2NxInguajpMtf76Jw+YOr10yWtR2
ESY/5JXb4KLE94cwM3XiDghYg2ak/zphTFxBbPHmSxY3nim7QahRyuiMQFr3VN8=
=0UcD
-----END PGP SIGNATURE-----
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM fix from Radim Krčmář:
"Fix a regression that prevented migration between hosts with different
XSAVE features even if the missing features were not used by the guest
(for stable)"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: do not save guest-unsupported XSAVE state
To make the code clearer, use rb_entry() instead of open coding it
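As an illustration (the struct name and its rb_node member below are
hypothetical, not taken from the patch), rb_entry() is simply
container_of() specialised for rbtree nodes:
    struct foo {
            struct rb_node node;
            int key;
    };

    /* open-coded form */
    struct foo *f = container_of(rb, struct foo, node);
    /* equivalent, clearer form */
    struct foo *f2 = rb_entry(rb, struct foo, node);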
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Link: http://lkml.kernel.org/r/974a91cd4ed2d04c92e4faa4765077e38f248d6b.1482157956.git.geliangtang@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Exception handlers which may run on IST stack call ist_enter() at the start
of execution and ist_exit() in the end. ist_enter() disables preemption
unconditionally and ist_exit() enables it.
So the extra preempt_disable/enable() pairs nested inside the
ist_enter/exit() regions are pointless and can be removed.
Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Jianyu Zhan <nasa4836@gmail.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/20161128075057.7724-1-kuleshovmail@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
commit 8fd524b355 ("x86: Kill bad_dma_address variable") killed the
bad_dma_address variable and used the macro DMA_ERROR_CODE instead,
which is always zero. Since dma_addr is unsigned, the statement
dma_addr >= DMA_ERROR_CODE
is always true, and not needed.
arch/x86/kernel/pci-calgary_64.c: In function ‘iommu_free’:
arch/x86/kernel/pci-calgary_64.c:299:2: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
if (unlikely((dma_addr >= DMA_ERROR_CODE) && (dma_addr < badend))) {
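One way to resolve the warning, shown here only as a sketch of the
simplification the text implies (not necessarily the exact patch): drop
the always-true lower bound and keep the check against badend.
    if (unlikely(dma_addr < badend)) {
            /* same emergency-page error handling as before */
    }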
Fixes: 8fd524b355 ("x86: Kill bad_dma_address variable")
Signed-off-by: Nikola Pajkovsky <npajkovsky@suse.cz>
Cc: iommu@lists.linux-foundation.org
Cc: Jon Mason <jdmason@kudzu.us>
Cc: Muli Ben-Yehuda <mulix@mulix.org>
Link: http://lkml.kernel.org/r/7612c0f9dd7c1290407dbf8e809def922006920b.1479161177.git.npajkovsky@suse.cz
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Enable ring 3 MONITOR/MWAIT for Intel Xeon Phi x200 codenamed Knights
Landing.
Presence of this feature cannot be detected automatically (by reading any
other MSR) therefore it is required to explicitly check for the family and
model of the CPU before attempting to enable it.
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Cc: Piotr.Luc@intel.com
Cc: dave.hansen@linux.intel.com
Link: http://lkml.kernel.org/r/1484918557-15481-5-git-send-email-grzegorz.andrejczuk@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Introduce ELF_HWCAP2 variable for x86 and reserve its bit 0 to expose the
ring 3 MONITOR/MWAIT.
HWCAP variables contain bitmasks which can be used by userspace
applications to detect which instruction sets are supported by the CPU. On
the x86 architecture, information about CPU capabilities can be checked via
CPUID instructions; unfortunately the presence of the ring 3 MONITOR/MWAIT
feature cannot be checked this way. ELF_HWCAP cannot be used either, because
on x86 it is set to CPUID[1].EDX, which means that all bits are reserved
there.
The HWCAP2 approach was chosen because it reuses an existing solution
present in other architectures, so only minor modifications are required to
the kernel and userspace applications. When ELF_HWCAP2 is defined, the
kernel maps it to AT_HWCAP2 during the start of the application.
This way the ring 3 MONITOR/MWAIT feature can be detected using the
getauxval() API in a simple and fast manner. The ELF_HWCAP2 type is u32 to
be consistent with the x86 ELF_HWCAP type.
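A self-contained userspace sketch of the detection path described above;
the bit position (bit 0) is the one reserved by this patch, and is spelled
out as a literal rather than assuming a feature-flag macro name:
    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
            unsigned long hwcap2 = getauxval(AT_HWCAP2);

            if (hwcap2 & 1UL)       /* bit 0: ring 3 MONITOR/MWAIT */
                    printf("ring 3 MONITOR/MWAIT available\n");
            return 0;
    }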
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Cc: Piotr.Luc@intel.com
Cc: dave.hansen@linux.intel.com
Link: http://lkml.kernel.org/r/1484918557-15481-3-git-send-email-grzegorz.andrejczuk@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Define a new MSR, MISC_FEATURE_ENABLES (0x140).
On supported CPUs, if bit 1 of this MSR is set, then calling the MONITOR and
MWAIT instructions outside of ring 0 will not cause an invalid-opcode
exception.
The MSR MISC_FEATURE_ENABLES is not yet documented in the SDM. Here is the
relevant documentation:
  Hex   Dec   Name                   Scope
  140H  320   MISC_FEATURE_ENABLES   Thread
  Bit 0      Reserved
  Bit 1      If set to 1, the MONITOR and MWAIT instructions do not
             cause invalid-opcode exceptions when executed with CPL > 0
             or in virtual-8086 mode. If MWAIT is executed when CPL > 0
             or in virtual-8086 mode, and if EAX indicates a C-state
             other than C0 or C1, the instruction operates as if EAX
             indicated the C-state C1.
  Bits 63:2  Reserved
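A hedged kernel-side sketch of what enabling the feature looks like; the
macro names below are assumptions based on the documentation quoted
above, and only bit 1 of MSR 0x140 matters here:
    #define MSR_MISC_FEATURE_ENABLES                0x00000140
    #define MSR_MISC_FEATURE_ENABLES_RING3MWAIT_BIT 1

    static void enable_ring3mwait(void)
    {
            u64 msr;

            rdmsrl(MSR_MISC_FEATURE_ENABLES, msr);
            msr |= BIT_ULL(MSR_MISC_FEATURE_ENABLES_RING3MWAIT_BIT);
            wrmsrl(MSR_MISC_FEATURE_ENABLES, msr);
    }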
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Cc: Piotr.Luc@intel.com
Cc: dave.hansen@linux.intel.com
Link: http://lkml.kernel.org/r/1484918557-15481-2-git-send-email-grzegorz.andrejczuk@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This doesn't have any benefit apart from saving a small amount of memory
when it is disabled. The ifdef hackery in the code makes it unnecessarily
dirty.
Clean it up by removing the Kconfig option completely. A few defconfigs
are also updated, replacing CONFIG_CPU_FREQ_STAT_DETAILS with
CONFIG_CPU_FREQ_STAT, as users wanted stats to be enabled.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com>
Acked-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
- Support for one new SoC, the V3s
- Conversion of two old SoCs to the new framework, the old sun5i family
and the A80
- A bunch of fixes
-----BEGIN PGP SIGNATURE-----
iQIcBAABCAAGBQJYkEtDAAoJEBx+YmzsjxAgLl0QAIt0GnG4tXyTsRqrregO8Qbt
UPK/Qs0hOW5VbpPzKDsWPwNGnTbvYy9I/RDnFiMJtEVU7Cj6ttEO4XvEKnaFF0N+
0ux0ATUIrz6p1pbSUTxV0dCfkqcI78+ECLV4Nsvs7pj5RoJVt4Z5gBMAdErrsB+6
q1VYLXIhs5atK1dTgjEWe4VTwaCpNCwoPMgbedOuZMLB0zFRVxtB/JSxlwGHpdoy
vYZqSUgqmDG1WNLKXAiqsa3k9BnSsX/surAfd6mVlk1MauyKzgHKzxkBrvbNjAMR
juaQcSXwkTOm8bRXj7E7NSFFBWiPUJt6dvXCxRJau79k+bYyPduv0UNu7T0a1IjM
lN0PyJcY22m9O9nsoXVSMvU9YJJd91R94UQRpRSQxN1sOLMGVwnCDhQnB5RJhV3p
gRZNZETTT/rP8E3plJJdce5xCsI6FXUT0jYTQW9cWVTcZbhKsntgNoEE4w/tqWIw
dgE/2UzReyBkWzi9OgjKyca+q8zEd6i1w9DqAg3MwwR8K03oS/aKkBaHjgwk/i8x
uiRXxhT5gLFhYOCwz3DmWw3OjwU7uCifyybGKCoxUE7U/3d2AalsTISCm3PX3dxl
RTDPJWKOSfCyzTJNZpHM5MtIKGh/jM4/yUQ3ARXTGyIa9Uxb4DXlEH6gXvrZ4BUA
UsyJ8d19owDqLIBYaE42
=Jkg2
-----END PGP SIGNATURE-----
Merge tag 'sunxi-clk-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux into clk-next
Pull Allwinner clock updates from Maxime Ripard:
- Support for one new SoC, the V3s
- Conversion of two old SoCs to the new framework, the old sun5i family
and the A80
- A bunch of fixes
* tag 'sunxi-clk-for-4.11' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux: (25 commits)
ARM: dts: sun9i: Switch to new clock bindings
clk: sunxi-ng: Add A80 Display Engine CCU
clk: sunxi-ng: Add A80 USB CCU
clk: sunxi-ng: Add A80 CCU
clk: sunxi-ng: Support separately grouped PLL lock status register
clk: sunxi-ng: mux: Get closest parent rate possible with CLK_SET_RATE_PARENT
clk: sunxi-ng: mux: honor CLK_SET_RATE_NO_REPARENT flag
clk: sunxi-ng: mux: Fix determine_rate for mux clocks with pre-dividers
clk: sunxi-ng: a33: Set CLK_SET_RATE_PARENT for the GPU
clk: sunxi-ng: Call divider_round_rate if we only have a single parent
ARM: gr8: Convert to CCU
ARM: sun5i: Convert to CCU
clk: sunxi-ng: Add sun5i CCU driver
clk: sunxi-ng: Implement global pre-divider
clk: sunxi-ng: Implement multiplier maximum
clk: sunxi-ng: mult: Fix minimum in round rate
clk: sunxi-ng: Implement factors offsets
clk: sunxi-ng: multiplier: Add fractional support
clk: sunxi-ng: add support for V3s CCU
dt-bindings: add device binding for the CCU of Allwinner V3s
...
The main change is we're reverting the initial stack protector support we
merged this cycle. It turns out to not work on toolchains built with libc
support, and fixing it will need to wait for another release.
And the rest are all fairly minor:
- Some pasemi machines were not booting due to a missing error check in
prom_find_boot_cpu().
- In EEH we were checking a pointer rather than the bool it pointed to.
- The clang build was broken by a BUILD_BUG_ON() we added.
- The radix (Power9 only) version of map_kernel_page() was broken if our
memory size was a multiple of 2MB, which it generally isn't.
Thanks to:
Darren Stevens, Gavin Shan, Reza Arbab.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJYlGHSAAoJEFHr6jzI4aWAVy4QALSpVLv0900hMtn+5MBhhgad
GxHI3ViJUV+K3gDpagl+Ya++WiPd9ffiK9FqS4Xe/r1r2pgSoNU8SC736Jomb1wL
cCx34E73SLMTfnhm/lVq+a4e+nRnk633C8gdsU4Py+FFOfz7FkeQX/gBGbv+ffng
oVO6Susqo/dKirVo6+BCgVjSLdYezzxw31SVNGYUYz4MFxWUHK+bi/l0FIDQGhjD
mTzV2cPhfjYTAHQtY43in6plQ91Pd2CR5HxL3Bw9DnFhS7zFMBnWPYYWh+kdT/vf
wNS1cFJoGHjXCaXt/eXsk3GPC696bDWGSwpenQewTQdb8yJ5MWP7Jv9KeBN8603x
LSE6o9/RATv7+pJJ9UPP+vLwSXtx2bi1FJlOU4Nbxi5YhfXZncCltMANCNuVix1l
2x5Qhs8GJNt7nqxhv5GwvAmEaKZet1Ucgo+p/9D08bfKk8loN+evRx6fjT5CymE7
s1INIbkzxppebTBjawDA7w6kWXjmgrskWvqjrPUfu91Ro8KmeBVS4t0pXY9D/i8N
hCEMbXNL5qrWTRScOxP3k5cQsOPnQEz6F7AkecYjWfwGG9ZvdcztuhTj1qYRjVaF
9Xk7OWu52/EamFdZ5MGuiVIEp8Vbqy8/0IJ/bdTOEMgjsaziD6LyRNZ6Qoyxg+bf
kFEqmzuKgmz3D4aTklnh
=XUAi
-----END PGP SIGNATURE-----
Merge tag 'powerpc-4.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
"The main change is we're reverting the initial stack protector support
we merged this cycle. It turns out to not work on toolchains built
with libc support, and fixing it will need to wait for another
release.
And the rest are all fairly minor:
- Some pasemi machines were not booting due to a missing error check
in prom_find_boot_cpu()
- In EEH we were checking a pointer rather than the bool it pointed
to
- The clang build was broken by a BUILD_BUG_ON() we added.
- The radix (Power9 only) version of map_kernel_page() was broken if
our memory size was a multiple of 2MB, which it generally isn't
Thanks to: Darren Stevens, Gavin Shan, Reza Arbab"
* tag 'powerpc-4.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/mm: Use the correct pointer when setting a 2MB pte
powerpc: Fix build failure with clang due to BUILD_BUG_ON()
powerpc: Revert the initial stack protector support
powerpc/eeh: Fix wrong flag passed to eeh_unfreeze_pe()
powerpc: Add missing error check to prom_find_boot_cpu()
Use builtin_platform_driver() helper to simplify the code.
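As a sketch with a hypothetical foo_driver, the helper replaces the
hand-rolled initcall boilerplate:
    /* before */
    static int __init foo_init(void)
    {
            return platform_driver_register(&foo_driver);
    }
    device_initcall(foo_init);

    /* after */
    builtin_platform_driver(foo_driver);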
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds a Qualcomm specific quirk to the arm_smccc_smc call.
On Qualcomm ARM64 platforms, the SMC call can return before it has
completed. If this occurs, the call can be restarted, but it requires
using the returned session ID value from the interrupted SMC call.
The quirk stores off the session ID from the interrupted call in the
quirk structure so that it can be used by the caller.
This patch folds in a fix given by Sricharan R:
https://lkml.org/lkml/2016/9/28/272
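A hedged usage sketch of the retry loop described above; the quirk id,
the state.a6 field and the QCOM_SCM_INTERRUPTED status value are named
here for illustration and should be checked against the actual headers:
    struct arm_smccc_res res;
    struct arm_smccc_quirk quirk = { .id = ARM_SMCCC_QUIRK_QCOM_A6 };

    do {
            /* a6 carries the session ID of an interrupted call, 0 initially */
            arm_smccc_smc_quirk(fn_id, a1, a2, a3, a4, a5,
                                quirk.state.a6, 0, &res, &quirk);
    } while (res.a0 == QCOM_SCM_INTERRUPTED);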
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds a quirk parameter to the arm_smccc_(smc/hvc) calls.
The quirk structure allows for specialized SMC operations due to SoC
specific requirements. The current arm_smccc_(smc/hvc) is renamed and
macros are used instead to specify the standard arm_smccc_(smc/hvc) or
the arm_smccc_(smc/hvc)_quirk function.
This patch and partial implementation was suggested by Will Deacon.
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Saving unsupported state prevents migration when the new host does not
support a XSAVE feature of the original host, even if the feature is not
exposed to the guest.
We've masked host features with guest-visible features before, with
4344ee981e ("KVM: x86: only copy XSAVE state for the supported
features") and dropped it when implementing XSAVES. Do it again.
Fixes: df1daba7d1 ("KVM: x86: support XSAVES usage in the host")
Cc: stable@vger.kernel.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
The modversion symbol CRCs are emitted as ELF symbols, which allows us
to easily populate the kcrctab sections by relying on the linker to
associate each kcrctab slot with the correct value.
This has a couple of downsides:
- Given that the CRCs are treated as memory addresses, we waste 4 bytes
for each CRC on 64 bit architectures,
- On architectures that support runtime relocation, a R_<arch>_RELATIVE
relocation entry is emitted for each CRC value, which identifies it
as a quantity that requires fixing up based on the actual runtime
load offset of the kernel. This results in corrupted CRCs unless we
explicitly undo the fixup (and this is currently being handled in the
core module code)
- Such runtime relocation entries take up 24 bytes of __init space
each, resulting in an 8x overhead in [uncompressed] kernel size for
CRCs.
Switching to explicit 32 bit values on 64 bit architectures fixes most
of these issues, given that 32 bit values are not treated as quantities
that require fixing up based on the actual runtime load offset. Note
that on some ELF64 architectures [such as PPC64], these 32-bit values
are still emitted as [absolute] runtime relocatable quantities, even if
the value resolves to a build time constant. Since relative relocations
are always resolved at build time, this patch enables MODULE_REL_CRCS on
powerpc when CONFIG_RELOCATABLE=y, which turns the absolute CRC
references into relative references into .rodata where the actual CRC
value is stored.
So redefine all CRC fields and variables as u32, and redefine the
__CRC_SYMBOL() macro for 64 bit builds to emit the CRC reference using
inline assembler (which is necessary since 64-bit C code cannot use
32-bit types to hold memory addresses, even if they are ultimately
resolved using values that do not exceed 0xffffffff). To avoid
potential problems with legacy 32-bit architectures using legacy
toolchains, the equivalent C definition of the kcrctab entry is retained
for 32-bit architectures.
Note that this mostly reverts commit d4703aefdb ("module: handle ppc64
relocating kcrctabs when CONFIG_RELOCATABLE=y")
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When building with debugging symbols, take the absolute path to the
vmlinux binary and add it to the special PE/COFF debug table entry.
This allows a debug EFI build to find the vmlinux binary, which is
very helpful in debugging, given that the offset where the Image is
first loaded by EFI is highly unpredictable.
On implementations of UEFI that choose to implement it, this
information is exposed via the EFI debug support table, which is a UEFI
configuration table that is accessible both by the firmware at boot time
and by the OS at runtime, and lists all PE/COFF images loaded by the
system.
The format of the NB10 Codeview entry is based on the definition used
by EDK2, which is our primary reference when it comes to the use of
PE/COFF in the context of UEFI firmware.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: use realpath instead of shell invocation, as discussed on list]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Increase the maximum number of MIPS KVM VCPUs to 8, and implement the
KVM_CAP_NR_VCPUS and KVM_CAP_MAX_VCPUS capabilities which expose the
recommended and maximum number of VCPUs to userland. The previous
maximum of 1 didn't allow for any form of SMP guests.
We calculate the values similarly to ARM, recommending as many VCPUs as
there are CPUs online in the system. This will allow userland to know
how many VCPUs it is possible to create.
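A minimal userspace sketch of querying these capabilities:
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
            int kvm = open("/dev/kvm", O_RDWR);
            int nr  = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
            int max = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);

            printf("recommended VCPUs: %d, maximum VCPUs: %d\n", nr, max);
            return 0;
    }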
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Expose the CP0_IntCtl register through the KVM register access API,
which is a required register since MIPS32r2. It is currently read-only
since the VS field isn't implemented due to lack of Config3.VInt or
Config3.VEIC.
It is implemented in trap_emul.c so that a VZ implementation can allow
writes.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Expose the CP0_EntryLo0 and CP0_EntryLo1 registers through the KVM
register access API. This is fairly straightforward for trap & emulate
since we don't support the RI and XI bits. For the sake of future
proofing (particularly for VZ) it is explicitly specified that the API
always exposes the 64-bit version of these registers (i.e. with the RI
and XI bits in bit positions 63 and 62 respectively), and they are
implemented in trap_emul.c rather than mips.c to allow them to be
implemented differently for VZ.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Set the default VCPU state closer to the architectural reset state, with
PC pointing at the reset vector (uncached PA 0x1fc00000, which for KVM
T&E is VA 0x5fc00000), and with CP0_Status.BEV and CP0_Status.ERL set to 1.
Although QEMU at least will overwrite this state, it makes sense to do
this now that CP0_EBase is properly implemented to check BEV, and now
that we support a sparse GPA layout potentially with a boot ROM at GPA
0x1fc00000.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
The CP0_EBase register is a standard feature of MIPS32r2, so we should
always have been implementing it properly. However the register value
was ignored and wasn't exposed to userland.
Fix the emulation of exceptions and interrupts to use the value stored
in guest CP0_EBase, and fix the masks so that the top 3 bits (rather
than the standard 2) are fixed, so that it is always in the guest KSeg0
segment.
Also add CP0_EBASE to the KVM one_reg interface so it can be accessed by
userland, also allowing the CPU number field to be written (which isn't
permitted by the guest).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Access to various CP0 registers via the KVM register access API needs to
be implementation specific to allow restrictions to be made on changes,
for example when VZ guest registers aren't present, so move them all
into trap_emul.c in preparation for VZ.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Now that load/store faults due to read only memory regions are treated
as MMIO accesses it is safe to claim support for read only memory
regions (KVM_CAP_READONLY_MEM).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Implement the SYNC_MMU capability for KVM MIPS, allowing changes in the
underlying user host virtual address (HVA) mappings to be promptly
reflected in the corresponding guest physical address (GPA) mappings.
This allows for several features to work with guest RAM which require
mappings to be altered or protected, such as copy-on-write, KSM (Kernel
Samepage Merging), idle page tracking, memory swapping, and guest memory
ballooning.
There are two main aspects of this change, described below.
The KVM MMU notifier architecture callbacks are implemented so we can be
notified of changes in the HVA mappings. These arrange for the guest
physical address (GPA) page tables to be modified and possibly for
derived mappings (GVA page tables and TLBs) to be flushed.
- kvm_unmap_hva[_range]() - These deal with HVA mappings being removed,
for example before a copy-on-write takes place, which requires the
corresponding GPA page table mappings to be removed too.
- kvm_set_spte_hva() - These update a GPA page table entry to match the
new HVA entry, but must be careful to respect KVM specific
configuration such as not dirtying a clean guest page which is dirty
to the host, and write protecting writable pages in read only
memslots (which will soon be supported).
- kvm[_test]_age_hva() - These update GPA page table entries to be old
(invalid) so that access can be tracked, making them young again.
The GPA page fault handling (kvm_mips_map_page) is updated to use
gfn_to_pfn_prot() (which may provide read-only pages), to handle
asynchronous page table invalidation from MMU notifier callbacks, and to
handle more cases in the fast path.
- mmu_notifier_seq is used to detect asynchronous page table
invalidations while we're holding a pfn from gfn_to_pfn_prot()
outside of kvm->mmu_lock, retrying if invalidations have taken place,
e.g. a COW or a KSM page merge.
- The fast path (_kvm_mips_map_page_fast) now handles marking old pages
as young / accessed, and disallowing dirtying of clean pages that
aren't actually writable (e.g. shared pages that should COW, and
read-only memory regions when they are enabled in a future patch).
- Due to the use of MMU notifications we no longer need to keep the
page references after we've updated the GPA page tables.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Propagate the GPA PTE protection bits on to the GVA PTEs on a mapped
fault (except _PAGE_WRITE, and filtered by the guest TLB entry), rather
than always overriding the protection. This allows dirty page tracking
to work in mapped guest segments as a clear dirty bit in the GPA PTE
will propagate to the GVA PTEs even when the guest TLB has the dirty bit
set.
Since the filtering of protection bits is now abstracted, if the buddy
GVA PTE is also valid, we obtain the corresponding GPA PTE using a
simple non-allocating walk and load that into the GVA PTE similarly
(which may itself be invalid).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Propagate the GPA PTE protection bits on to the GVA PTEs on a KSeg0
fault (except _PAGE_WRITE), rather than always overriding the
protection. This allows dirty page tracking to work in KSeg0 as a clear
dirty bit in the GPA PTE will propagate to the GVA PTEs.
This makes it simpler to use a single kvm_mips_map_page() to obtain both
the main GPA PTE and its buddy (which may be invalid), which also allows
memory regions to be fully accessible when they don't start and end on a
2*PAGE_SIZE boundary.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Update kvm_mips_map_page() to handle logging of dirty guest physical
pages. Upcoming patches will propagate the dirty bit to the GVA page
tables.
A fast path is added for handling protection bits that can be resolved
without calling into KVM, currently just dirtying of clean pages being
written to.
The slow path marks the GPA page table entry writable only on writes,
and at the same time marks the page dirty in the dirty page logging
bitmask.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
When an existing memory region has dirty page logging enabled, make the
entire slot clean (read only) so that writes will immediately start
logging dirty pages (once the dirty bit is transferred from GPA to GVA
page tables in an upcoming patch).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
MIPS hasn't up to this point properly supported dirty page logging, as
pages in slots with dirty logging enabled aren't made clean, and tlbmod
exceptions from writes to clean pages have been assumed to be due to
guest TLB protection and unconditionally passed to the guest.
Use the generic dirty logging helper kvm_get_dirty_log_protect() to
properly implement kvm_vm_ioctl_get_dirty_log(), similar to how ARM
does. This uses xchg to clear the dirty bits when reading them, rather
than wiping them out afterwards with a memset, which would potentially
wipe recently set bits that weren't caught by kvm_get_dirty_log(). It
also makes the pages clean again using the
kvm_arch_mmu_enable_log_dirty_pt_masked() architecture callback so that
further writes after the shadow memslot is flushed will trigger tlbmod
exceptions and dirty handling.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Add a helper function to make a range of guest physical address (GPA)
mappings in the GPA page table clean so that writes can be caught. This
will be used in a few places to manage dirty page logging.
Note that until the dirty bit is transferred from GPA page table entries
to GVA page table entries in an upcoming patch this won't trigger a TLB
modified exception on write.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Rewrite TLB modified exception handling to handle read only GPA memory
regions, instead of unconditionally passing the exception to the guest.
If the guest TLB is not the cause of the exception we call into the
normal TLB fault handling depending on the memory segment, which will
soon attempt to remap the physical page to be writable (handling dirty
page tracking or copy on write in the process).
Failing that we fall back to treating it as MMIO, due to a read only
memory region. Once the capability is enabled, this will allow read only
memory regions (such as the Malta boot flash as emulated by QEMU) to
have writes treated as MMIO, while still allowing reads to run
untrapped.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Treat unhandled accesses to guest KSeg0 as MMIO, rather than only host
KSeg0 addresses. This will allow read only memory regions (such as the
Malta boot flash as emulated by QEMU) to have writes (before reads)
treated as MMIO, and unallocated physical addresses to have all accesses
treated as MMIO.
The MMIO emulation uses the gva_to_gpa callback, so this is also updated
for trap & emulate to handle guest KSeg0 addresses.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Abstract the handling of bad guest loads and stores which may need to
trigger an MMIO, so that the same code can be used in a later patch for
guest KSeg0 addresses (TLB exception handling) as well as for host KSeg1
addresses (existing address error exception and TLB exception handling).
We now use kvm_mips_emulate_store() and kvm_mips_emulate_load() directly
rather than the more generic kvm_mips_emulate_inst(), as there is no
need to expose emulation of any other instructions.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
kvm_mips_map_page() will need to know whether the fault was due to a
read or a write in order to support dirty page tracking,
KVM_CAP_SYNC_MMU, and read only memory regions, so get that information
passed down to it via new bool write_fault arguments to various
functions.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Ignore userland writes to CP0_Config7 rather than reporting an error,
since we do allow reads of this register and it is claimed to exist in
the ioctl API.
This allows userland to blindly save and restore KVM registers without
having to special case certain registers as not being writable, for
example during live migration once dirty page logging is fixed.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Implement the kvm_arch_flush_shadow_all() and
kvm_arch_flush_shadow_memslot() KVM functions for MIPS to allow guest
physical mappings to be safely changed.
The general MIPS KVM code takes care of flushing of GPA page table
entries. kvm_arch_flush_shadow_all() flushes the whole GPA page table,
and is always called on the cleanup path so there is no need to acquire
the kvm->mmu_lock. kvm_arch_flush_shadow_memslot() flushes only the
range of mappings in the GPA page table corresponding to the slot being
flushed, and happens when memory regions are moved or deleted.
MIPS KVM implementation callbacks are added for handling the
implementation specific flushing of mappings derived from the GPA page
tables. These are implemented for trap_emul.c using
kvm_flush_remote_tlbs() which should now be functional, and will flush
the per-VCPU GVA page tables and ASIDS synchronously (before next
entering guest mode or directly accessing GVA space).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Use the lockless GVA helpers to implement the reading of guest
instructions for emulation. This will allow it to handle asynchronous
TLB flushes when they are implemented.
This is a little more complicated than the other two cases (get_inst()
and dynamic translation) due to the need to emulate the appropriate
guest TLB exception when the address isn't present or isn't valid in the
guest TLB.
Since there are several protected cache ops that may need to be
performed safely, this is abstracted by kvm_mips_guest_cache_op() which
is passed a protected cache op function pointer and takes care of the
lockless operation and fault handling / retry if the op should fail,
taking advantage of the new errors which the protected cache ops can now
return. This allows the existing advance fault handling which relied on
host TLB lookups to be removed, along with the now unused
kvm_mips_host_tlb_lookup().
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Use the lockless GVA helpers to implement the reading of guest
instructions for emulation. This will allow it to handle asynchronous
TLB flushes when they are implemented.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Use the lockless GVA helpers to implement the dynamic translation of
guest instructions. This will allow it to handle asynchronous TLB
flushes when they are implemented.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Add helpers to allow for lockless direct access to the GVA space, by
changing the VCPU mode to READING_SHADOW_PAGE_TABLES for the duration of
the access. This allows asynchronous TLB flush requests in future
patches to safely trigger either a TLB flush before the direct GVA space
access, or a delay until the in-progress lockless direct access is
complete.
The kvm_trap_emul_gva_lockless_begin() and
kvm_trap_emul_gva_lockless_end() helpers take care of guarding the
direct GVA accesses, and kvm_trap_emul_gva_fault() tries to handle a
uaccess fault resulting from a flush having taken place.
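A hedged sketch of the usage pattern described above; the exact
parameter lists of the helpers are assumptions here:
    kvm_trap_emul_gva_lockless_begin(vcpu);
    err = get_user(inst, (u32 __user *)gva);
    kvm_trap_emul_gva_lockless_end(vcpu);

    if (err) {
            /* a concurrent flush may have zapped the mapping; repair it */
            ret = kvm_trap_emul_gva_fault(vcpu, gva, false);
            /* then retry the access or inject a guest exception as appropriate */
    }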
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
The stale ASID checks taking place on VCPU load can be reduced:
- Now that we check for a stale ASID on guest re-entry, there is no need
to do so when loading the VCPU outside of guest context, since it will
happen before entering the guest. Note that a lot of KVM VCPU ioctls
will cause the VCPU to be loaded but guest context won't be entered.
- There is no need to check for a stale kernel_mm ASID when the guest is
in user mode and vice versa. In fact doing so can potentially be
problematic since the user_mm ASID regeneration may trigger a new ASID
cycle, which would cause the kern_mm ASID to become stale after it has
been checked for staleness.
Therefore only check the ASID for the mm corresponding to the current
guest mode, and only if we're already in guest context. We drop some of
the related kvm_debug() calls here too.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Add handling of TLB invalidation requests before entering guest mode.
This will allow asynchronous invalidation of the VCPU mappings when
physical memory regions are altered. Should the CPU running the VCPU
already be in guest mode an IPI will be sent to trigger a guest exit.
The reload_asid path will be used in a future patch for when GVA is
about to be directly accessed by KVM.
In the process, the stale user ASID check in the re-entry path (for lazy
user GVA flushing) is generalised to check the ASID for the current
guest mode, in case a TLB invalidation request was handled. This has the
side effect of making the ASID checks on vcpu_load too conservative,
which will be addressed in a later patch.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Keep the vcpu->mode and vcpu->cpu variables up to date so that
kvm_make_all_cpus_request() has a chance of functioning correctly. This
will soon need to be used for kvm_flush_remote_tlbs().
We can easily update vcpu->cpu when the VCPU context is loaded or saved,
which will happen when accessing guest context and when the guest is
scheduled in and out.
We need to be a little careful with vcpu->mode though, as we will in
future be checking for outstanding VCPU requests, and this must be done
after the value of IN_GUEST_MODE in vcpu->mode is visible to other CPUs.
Otherwise the other CPU could fail to trigger an IPI to wait for
completion despite the VCPU request not being seen.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Current guest physical memory is mapped to host physical addresses using
a single linear array (guest_pmap of length guest_pmap_npages). This was
only really meant to be temporary, and isn't sparse, so it's wasteful of
memory. A small amount of RAM at GPA 0 and a small boot exception vector
at GPA 0x1fc00000 cannot be represented without a full 128KiB guest_pmap
allocation (MIPS32 with 16KiB pages), which is one reason why QEMU
currently runs its boot code at the top of RAM instead of the usual boot
exception vector address.
Instead use the existing infrastructure for host virtual page table
management to allocate a page table for guest physical memory too. This
should be sufficient for now, assuming the size of physical memory
doesn't exceed the size of virtual memory. It may need extending in
future to handle XPA (eXtended Physical Addressing) in 32-bit guests, as
supported by VZ guests on P5600.
Some of this code is based loosely on Cavium's VZ KVM implementation.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
When exiting from the guest, store the values of the CP0_BadInstr and
CP0_BadInstrP registers if they exist, which contain the encodings of
the instructions which caused the last synchronous exception.
When the instruction is needed for emulation, kvm_get_badinstr() and
kvm_get_badinstrp() are used instead of calling kvm_get_inst() directly,
to decide whether to read the saved CP0_BadInstr/CP0_BadInstrP registers
(if they exist), or read the instruction from memory (if not).
The use of these registers should be more robust than using
kvm_get_inst(), as it actually gives the instruction encoding seen by
the hardware rather than relying on user accessors after the fact, which
can be fooled by incoherent icache or a racing code modification. It
will also work with VZ, where the guest virtual memory isn't directly
accessible by the host with user accessors.
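A hedged sketch of the preference order described above (field names and
the exact form of the feature check are illustrative):

  /* Prefer the hardware-captured encoding, fall back to reading memory. */
  static int kvm_get_badinstr(u32 *opc, struct kvm_vcpu *vcpu, u32 *out)
  {
      if (cpu_has_badinstr) {
          *out = vcpu->arch.host_cp0_badinstr;  /* saved on guest exit */
          return 0;
      }
      return kvm_get_inst(opc, vcpu, out);      /* may fault: 0 or -EFAULT */
  }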
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Currently kvm_get_inst() returns KVM_INVALID_INST in the event of a
fault reading the guest instruction. This has the rather arbitrary magic
value 0xdeadbeef. This API isn't very robust, and in fact 0xdeadbeef is
a valid MIPS64 instruction encoding, namely "ld t1,-16657(s5)".
Therefore change the kvm_get_inst() API to return 0 or -EFAULT, and to
return the instruction via a u32 *out argument. We can then drop the
KVM_INVALID_INST definition entirely.
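A hedged sketch of what the reworked API looks like to its callers (the
emulation step named below is a placeholder):

  /* Returns 0 with the instruction word in *out, or -EFAULT on fault. */
  int kvm_get_inst(u32 *opc, struct kvm_vcpu *vcpu, u32 *out);

  static int emulate_current_inst(u32 *opc, struct kvm_vcpu *vcpu)
  {
      u32 inst;
      int err;

      err = kvm_get_inst(opc, vcpu, &inst);
      if (err)
          return err;                /* no magic KVM_INVALID_INST to compare */

      return do_emulate(vcpu, inst); /* hypothetical emulation step */
  }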
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
In order to make use of the CP0_BadInstr & CP0_BadInstrP registers we
need to be a bit more careful not to treat code fetch faults as MMIO,
lest we hit an UNPREDICTABLE register value when we try to emulate the
MMIO load instruction but there was no valid instruction word available
to the hardware.
Add a kvm_is_ifetch_fault() helper to try to figure out whether a load
fault was due to a code fetch, and prevent MMIO instruction emulation in
that case.
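A rough sketch of the helper's idea, using the fault state already saved
in the VCPU arch struct; the real check also consults CP0_BadInstrP where
available:

  /* True if the faulting access was the instruction fetch itself. */
  static bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *vcpu)
  {
      unsigned long badvaddr = vcpu->host_cp0_badvaddr;
      unsigned long epc = vcpu->pc;
      u32 cause = vcpu->host_cp0_cause;

      if (cause & CAUSEF_BD)    /* fault taken in a branch delay slot */
          epc += 4;

      return badvaddr == epc;
  }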
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
MIPS KVM uses its own variation of get_new_mmu_context() which takes an
extra vcpu pointer (unused) and does exactly the same thing.
Switch to just using get_new_mmu_context() directly and drop KVM's
version of it as it doesn't really serve any purpose.
The nearby declarations of kvm_mips_alloc_new_mmu_context(),
kvm_mips_vcpu_load() and kvm_mips_vcpu_put() are also removed from
kvm_host.h, as no definitions or users exist.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
When exceptions are injected into the MIPS KVM guest, the whole host TLB
is flushed (except any entries in the guest KSeg0 range). This is
certainly not mandated by the architecture when exceptions are taken
(userland can't directly change TLB mappings anyway), and is a pretty
heavyweight operation:
- There may be hundreds of TLB entries especially when a 512 entry FTLB
is present. These are walked and read and conditionally invalidated,
so the TLBINV feature can't be used either.
- It'll indiscriminately wipe out entries belonging to other memory
spaces. A simple ASID regeneration would be much faster to perform,
although it'd wipe out the guest KSeg0 mappings too.
My suspicion is that this was simply to plaster over the fact that
kvm_mips_host_tlb_inv() incorrectly only invalidated TLB entries in the
ASID for guest usermode, and not the ASID for guest kernelmode.
Now that the recent commit "KVM: MIPS/TLB: Flush host TLB entry in
kernel ASID" fixes kvm_mips_host_tlb_inv() to flush TLB entries in the
kernelmode ASID when the guest TLB changes, let's drop these calls and
the otherwise unused kvm_mips_flush_host_tlb().
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Now that KVM no longer uses wired entries we can safely use
local_flush_tlb_all() when we need to flush the entire TLB (on the start
of a new ASID cycle). This doesn't flush wired entries, which allows
other code to use them without KVM clobbering them all the time. It is
also more up to date, knowing about the tlbinv architectural feature,
flushing the micro TLB on cores where that is necessary (Loongson, I
believe), and stopping the HTW while doing so.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Use protected_writeback_dcache_line() instead of flush_dcache_line(),
and protected_flush_icache_line() instead of flush_icache_line(), so
that CACHEE (the EVA variant) is used on EVA host kernels.
Without this, guest floating point branch delay slot emulation via a
trampoline on the user stack fails on EVA host kernels due to failure of
the icache sync, resulting in the break instruction getting skipped and
execution from the stack.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Now that we have GVA page tables, use standard user accesses with page
faults disabled to read & modify guest instructions. This should be more
robust (than the rather dodgy method of accessing guest mapped segments
by just directly addressing them) and will also work with Enhanced
Virtual Addressing (EVA) host kernel configurations where dedicated
instructions are needed for accessing user mode memory.
For simplicity and speed we do this regardless of the guest segment the
address resides in, rather than handling guest KSeg0 specially with
kmap_atomic() as before.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Now that the commpage doesn't use wired TLB entries, the per-CPU
vm_init() callback is the only work done by kvm_mips_init_vm_percpu().
The trap & emulate implementation doesn't actually need to do anything
from vm_init(), and the future VZ implementation would be better served
by a kvm_arch_hardware_enable callback anyway.
Therefore drop the vm_init() callback entirely, allowing the
kvm_mips_init_vm_percpu() function to also be dropped, along with the
kvm_mips_instance atomic counter.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Now that we have GVA page tables and an optimised TLB refill handler in
place, convert the handling of commpage faults from the guest kernel to
fill the GVA page table and invalidate the TLB entry, rather than
filling the wired TLB entry directly.
For simplicity we no longer use a wired entry for the commpage (refill
should be much cheaper with the fast-path handler anyway). Since we
don't need to manipulate the TLB directly any longer, move the function
from tlb.c to mmu.c. This puts it closer to the similar functions
handling KSeg0 and TLB mapped page faults from the guest.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Now that we have GVA page tables and an optimised TLB refill handler in
place, convert the handling of page faults in TLB mapped segment from
the guest to fill a single GVA page table entry and invalidate the TLB
entry, rather than filling a TLB entry pair directly.
Also remove the now unused kvm_mips_get_{kernel,user}_asid() functions
in mmu.c and kvm_mips_host_tlb_write() in tlb.c.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Now that we have GVA page tables and an optimised TLB refill handler in
place, convert the handling of KSeg0 page faults from the guest to fill
the GVA page tables and invalidate the TLB entry, rather than filling a
TLB entry directly.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Implement invalidation of specific pairs of GVA page table entries in
one or both of the GVA page tables. This is used when existing mappings
are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due
to the sharing of page tables in the host kernel range, we should be
careful not to allow host pages to be invalidated.
Add a helper kvm_mips_walk_pgd() which can be used when walking of
either GPA (future patches) or GVA page tables is needed, optionally
with allocation of page tables along the way when they don't exist.
GPA page table walking will need to be protected by the kvm->mmu_lock,
so we also add a small MMU page cache in each KVM VCPU, like that found
for other architectures but smaller. This allows enough pages to be
pre-allocated to handle a single fault without holding the lock,
allowing the helper to run with the lock held without having to handle
allocation failures.
Using the same mechanism for GVA allows the same code to be used, and
allows it to use the same cache of allocated pages if the GPA walk
didn't need to allocate any new tables.
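A hedged sketch of the walk-with-optional-allocation idea, assuming a
three-level layout; mmu_memory_cache_alloc() here stands for the
pre-filled cache allocator, and details differ from the real helper:

  static pte_t *kvm_mips_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
                                  unsigned long addr)
  {
      pud_t *pud;
      pmd_t *pmd;

      pgd += pgd_index(addr);
      pud = pud_offset(pgd, addr);
      if (pud_none(*pud)) {
          pmd_t *new_pmd;

          if (!cache)        /* caller only wants existing entries */
              return NULL;
          new_pmd = mmu_memory_cache_alloc(cache);  /* cannot sleep */
          pmd_init((unsigned long)new_pmd,
                   (unsigned long)invalid_pte_table);
          pud_populate(NULL, pud, new_pmd);
      }

      pmd = pmd_offset(pud, addr);
      if (pmd_none(*pmd)) {
          pte_t *new_pte;

          if (!cache)
              return NULL;
          new_pte = mmu_memory_cache_alloc(cache);
          clear_page(new_pte);
          pmd_populate_kernel(NULL, pmd, new_pte);
      }

      return pte_offset_kernel(pmd, addr);
  }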
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Implement invalidation of large ranges of virtual addresses from GVA
page tables in response to a guest ASID change (immediately for guest
kernel page table, lazily for guest user page table).
We iterate through a range of page tables invalidating entries and
freeing fully invalidated tables. To minimise overhead the exact ranges
invalidated depends on the flags argument to kvm_mips_flush_gva_pt(),
which also allows it to be used in future KVM_CAP_SYNC_MMU patches in
response to GPA changes, which unlike guest TLB mapping changes affects
guest KSeg0 mappings.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Refactor kvm_mips_host_tlb_inv() to also be able to invalidate any
matching TLB entry in the kernel ASID rather than assuming only the TLB
entries in the user ASID can change. Two new bool user/kernel arguments
allow the caller to indicate whether the mapping should affect each of
the ASIDs for guest user/kernel mode.
- kvm_mips_invalidate_guest_tlb() (used by TLBWI/TLBWR emulation) can
now invalidate any corresponding TLB entry in both the kernel ASID
(guest kernel may have accessed any guest mapping), and the user ASID
if the entry being replaced is in guest USeg (where guest user may
also have accessed it).
- The tlbmod fault handler (and the KSeg0 / TLB mapped / commpage fault
handlers in later patches) can now invalidate the corresponding TLB
entry in whichever ASID is currently active, since only a single page
table will have been updated anyway.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
kvm_mips_host_tlb_inv() uses the TLBP instruction to probe the host TLB
for an entry matching the given guest virtual address, and determines
whether a match was found based on whether CP0_Index > 0. This is
technically incorrect as an index of 0 (with the high bit clear) is a
perfectly valid TLB index.
This is harmless at the moment due to the use of at least 1 wired TLB
entry for the KVM commpage, however we will soon be ridding ourselves of
that particular wired entry, so let's fix the condition in case the entry
needing invalidation does land at TLB index 0.
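The fix amounts to a one-character change in the probe result test; a
hedged sketch of the pattern using the standard MIPS TLB access macros:

  static void invalidate_matching_entry(unsigned long entryhi)
  {
      int idx;

      write_c0_entryhi(entryhi);
      mtc0_tlbw_hazard();
      tlb_probe();
      tlb_probe_hazard();
      idx = read_c0_index();

      /*
       * The old test "idx > 0" silently missed a match at index 0; any
       * non-negative index (probe-fail bit clear) is a valid hit.
       */
      if (idx >= 0) {
          write_c0_entryhi(UNIQUE_ENTRYHI(idx));
          write_c0_entrylo0(0);
          write_c0_entrylo1(0);
          mtc0_tlbw_hazard();
          tlb_write_indexed();
          tlbw_use_hazard();
      }
  }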
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Use functions from the general MIPS TLB exception vector generation code
(tlbex.c) to construct a fast path TLB refill handler similar to the
general one, but cut down and capable of preserving K0 and K1.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
tlbex.c uses the implementation dependent $22 CP0 register group on
NetLogic cores, with the help of the c0_kscratch() helper. Allow these
registers to be allocated by the KVM entry code too instead of assuming
KScratch registers are all $31, which will also allow pgd_reg to be
handled since it is allocated that way.
We also drop the masking of kscratch_mask with 0xfc, as it is redundant
for the standard KScratch registers (Config4.KScrExist won't have the
low 2 bits set anyway), and apparently not necessary for NetLogic.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Activate the GVA page tables when in guest context. This will allow the
normal Linux TLB refill handler to fill from it when guest memory is
read, as well as preventing accidental reading from user memory.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Allocate GVA -> HPA page tables for guest kernel and guest user mode on
each VCPU, to allow for fast path TLB refill handling to be added later.
In the process kvm_arch_vcpu_init() needs updating to pass on any error
from the vcpu_init() callback.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Wire up a vcpu uninit implementation callback. This will be used for the
clean up of GVA->HPA page tables.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Set init_mm as the active_mm and update mm_cpumask(current->mm) to
reflect that it isn't active when in guest context. This prevents cache
management code from attempting cache flushes on host virtual addresses
while in guest context, for example due to cache management IPIs or
later when writes of dynamically translated code hit copy-on-write.
We do this using helpers in static kernel code to avoid having to export
init_mm to modules.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
We only need the guest ASID loaded while in guest context, i.e. while
running guest code and while handling guest exits. We load the guest
ASID when entering the guest, however we restore the host ASID later
than necessary, when the VCPU state is saved i.e. vcpu_put() or slightly
earlier if preempted after returning to the host.
This mismatch is both unpleasant and causes redundant host ASID restores
in kvm_trap_emul_vcpu_put(). Let's explicitly restore the host ASID when
returning to the host, and not bother restoring the host ASID on
context switch-in unless we're already in guest context.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Add implementation callbacks for entering the guest (vcpu_run()) and
reentering the guest (vcpu_reenter()), allowing implementation specific
operations to be performed before entering the guest or after returning
to the host without cluttering kvm_arch_vcpu_ioctl_run().
This allows the T&E specific lazy user GVA flush to be moved into
trap_emul.c, along with disabling of the HTW. We also move
kvm_mips_deliver_interrupts() as VZ will need to restore the guest timer
state prior to delivering interrupts.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
The kvm_vcpu_arch structure contains both mm_structs for allocating MMU
contexts (primarily the ASID) but it also copies the resulting ASIDs
into guest_{user,kernel}_asid[] arrays which are referenced from uasm
generated code.
This duplication doesn't seem to serve any purpose, and it gets in the
way of generalising the ASID handling across guest kernel/user modes, so
let's just extract the ASID straight out of the mm_struct on demand, and
in fact there are convenient cpu_context() and cpu_asid() macros for
doing so.
To reduce the verbosity of this code we do also add kern_mm and user_mm
local variables where the kernel and user mm_structs are used.
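A small illustration of reading the ASID on demand with the existing
macros instead of the duplicated arrays (guest-mode test shown with the
KVM_GUEST_KERNEL_MODE() macro; sketch only):

  static void kvm_set_guest_entryhi(struct kvm_vcpu *vcpu, int cpu)
  {
      struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
      struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;
      struct mm_struct *mm = KVM_GUEST_KERNEL_MODE(vcpu) ? kern_mm : user_mm;

      /* cpu_asid() pulls the ASID straight out of the mm_struct. */
      write_c0_entryhi(cpu_asid(cpu, mm));
  }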
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
The MIPS KVM host and guest GVA ASIDs may need regenerating when
scheduling a process in guest context, which is done from the
kvm_arch_vcpu_load() / kvm_arch_vcpu_put() functions in mmu.c.
However this is a fairly implementation specific detail. VZ for example
may use GuestIDs instead of normal ASIDs to distinguish mappings
belonging to different guests, and even on VZ without GuestID the root
TLB will be used differently to trap & emulate.
Trap & emulate GVA ASIDs only relate to the user part of the full
address space, so can be left active during guest exit handling (guest
context) to allow guest instructions to be easily read and translated.
VZ root ASIDs however are for GPA mappings so can't be left active
during normal kernel code. They also aren't useful for accessing guest
virtual memory, and we should have CP0_BadInstr[P] registers available
to provide encodings of trapping guest instructions anyway.
Therefore move the ASID preemption handling into the implementation
callback.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Convert the get_regs() and set_regs() callbacks to vcpu_load() and
vcpu_put(), which provide a cpu argument and more closely match the
kvm_arch_vcpu_load() / kvm_arch_vcpu_put() that they are called by.
This is in preparation for moving ASID management into the
implementations.
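A hedged sketch of the callback shape after the conversion (only the
relevant pair of members is shown):

  struct kvm_mips_callbacks {
      /* ... other implementation callbacks ... */
      int (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
      int (*vcpu_put)(struct kvm_vcpu *vcpu, int cpu);
  };

  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
  {
      /* ... common save/restore elided ... */
      kvm_mips_callbacks->vcpu_load(vcpu, cpu);  /* cpu passed straight through */
  }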
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
KVM T&E uses an ASID for guest kernel mode and an ASID for guest user
mode. The current ASID is saved when the guest is scheduled out, and
restored when scheduling back in, with checks for whether the ASID needs
to be regenerated.
This isn't really necessary as the ASID can be easily determined by the
current guest mode, so let's simplify it to just read the required ASID
from guest_kernel_asid or guest_user_asid even if the ASID hasn't been
regenerated.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
MIPS incompletely implements the KVM_NMI ioctl to supposedly perform a
CPU reset, but all it actually does is invalidate the ASIDs. It doesn't
expose the KVM_CAP_USER_NMI capability which is supposed to indicate the
presence of the KVM_NMI ioctl, and no user software actually uses it on
MIPS.
Since this is dead code that would technically need updating for GVA
page table handling in upcoming patches, remove it now. If we wanted to
implement NMI injection later it can always be done properly along with
the KVM_CAP_USER_NMI capability, and if we wanted to implement a proper
CPU reset it would be better done with a separate ioctl.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Merge in MIPS prerequisites from GVA page tables and GPA page tables
series. The same branch can also merge into the MIPS tree.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
The protected cache ops contain no out of line fixup code to return an
error code in the event of a fault, with the cache op being skipped in
that case. For KVM, however, we'd like to detect this case: page
faulting will be disabled, so a fault can happen during normal operation
if the GVA page tables were flushed, and it needs to be handled by the
caller.
Add the out-of-line fixup code to load the error value -EFAULT into the
return variable, and adapt the protected cache line functions to pass
the error back to the caller.
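From the caller's side the change looks roughly like this (illustrative
helper; the exact call sites differ):

  static int sync_icache_line(unsigned long addr)
  {
      int err;

      /* A fault used to be silently ignored here; now it surfaces. */
      err = protected_writeback_dcache_line(addr);
      if (err)
          return err;    /* -EFAULT: GVA mapping gone, caller must handle it */

      return protected_flush_icache_line(addr);
  }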
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Export the TLB exception code generating functions so that KVM can
construct a fast TLB refill handler for guest context without
reinventing the wheel quite so much.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Add include guards in asm/uasm.h to allow it to be safely used by a new
header asm/tlbex.h in the next patch to expose TLB exception building
functions for KVM to use.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
GPL kernel modules so that MIPS KVM can use the inline page table
management functions and switch between page tables:
- pmd_init() will be used directly by KVM to initialise newly allocated
pmd tables with invalid lower level table pointers.
- invalid_pmd_table is used by pud_present(), pud_none(), and
pud_clear(), which KVM will use to test and clear pud entries.
- tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
to the appropriate GVA page tables.
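In code terms the change amounts to exporting the symbols named above to
GPL modules, e.g.:

  EXPORT_SYMBOL_GPL(pmd_init);
  EXPORT_SYMBOL_GPL(invalid_pmd_table);
  EXPORT_SYMBOL_GPL(tlbmiss_handler_setup_pgd);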
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Enable support for GCC plugins on powerpc.
Add an additional version check in gcc-plugins-check to advise users to
upgrade to gcc 5.2+ on powerpc to avoid issues with header files (gcc <=
4.6) or missing copies of rs6000-cpus.def (4.8 to 5.1 on 64-bit
targets).
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Commit 38addce8b6 ("gcc-plugins: Add latent_entropy plugin") excludes
certain powerpc early boot code from the latent entropy plugin by adding
appropriate CFLAGS. It looks like this was supposed to cover
prom_init.o, but ended up saying init.o (which doesn't exist) instead.
Fix the typo.
Fixes: 38addce8b6 ("gcc-plugins: Add latent_entropy plugin")
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The ARM bit sliced AES core code uses the IV buffer to pass the final
keystream block back to the glue code if the input is not a multiple of
the block size, so that the asm code does not have to deal with anything
except 16 byte blocks. This is done under the assumption that the outgoing
IV is meaningless anyway in this case, given that chaining is no longer
possible under these circumstances.
However, as it turns out, the CCM driver does expect the IV to retain
a value that is equal to the original IV except for the counter value,
and even interprets byte zero as a length indicator, which may result
in memory corruption if the IV is overwritten with something else.
So use a separate buffer to return the final keystream block.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arm64 bit sliced AES core code uses the IV buffer to pass the final
keystream block back to the glue code if the input is not a multiple of
the block size, so that the asm code does not have to deal with anything
except 16 byte blocks. This is done under the assumption that the outgoing
IV is meaningless anyway in this case, given that chaining is no longer
possible under these circumstances.
However, as it turns out, the CCM driver does expect the IV to retain
a value that is equal to the original IV except for the counter value,
and even interprets byte zero as a length indicator, which may result
in memory corruption if the IV is overwritten with something else.
So use a separate buffer to return the final keystream block.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The new bitsliced NEON implementation of AES uses a fallback in two
places: CBC encryption (which is strictly sequential, whereas this
driver can only operate efficiently on 8 blocks at a time), and the
XTS tweak generation, which involves encrypting a single AES block
with a different key schedule.
The plain (i.e., non-bitsliced) NEON code is more suitable as a fallback,
given that it is faster than scalar on low end cores (which is what
the NEON implementations target, since high end cores have dedicated
instructions for AES), and shows similar behavior in terms of D-cache
footprint and sensitivity to cache timing attacks. So switch the fallback
handling to the plain NEON driver.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The non-bitsliced AES implementation using the NEON is highly sensitive
to micro-architectural details, and, as it turns out, the Cortex-A53 on
the Raspberry Pi 3 is a core that can benefit from this code, given that
its scalar AES performance is abysmal (32.9 cycles per byte).
The new bitsliced AES code manages 19.8 cycles per byte on this core,
but can only operate on 8 blocks at a time, which is not supported by
all chaining modes. With a bit of tweaking, we can get the plain NEON
code to run at 22.0 cycles per byte, making it useful for sequential
modes like CBC encryption. (Like bitsliced NEON, the plain NEON
implementation does not use any lookup tables, which makes it easy on
the D-cache, and invulnerable to cache timing attacks)
So tweak the plain NEON AES code to use tbl instructions rather than
shl/sri pairs, and to avoid the need to reload permutation vectors or
other constants from memory in every round. Also, improve the decryption
performance by switching to 16x8 pmul instructions for performing
the multiplications in GF(2^8).
To allow the ECB and CBC encrypt routines to be reused by the bitsliced
NEON code in a subsequent patch, export them from the module.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shuffle some instructions around in the __hround macro to shave off
0.1 cycles per byte on Cortex-A57.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Using simple adrp/add pairs to refer to the AES lookup tables exposed by
the generic AES driver (which could be loaded far away from this driver
when KASLR is in effect) was unreliable at module load time before commit
41c066f2c4 ("arm64: assembler: make adr_l work in modules under KASLR"),
which is why the AES code used literals instead.
So now we can get rid of the literals, and switch to the adr_l macro.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When aesni is built as a module together with pcbc, the pcbc module
must be present for aesni to load. However, the pcbc module may not
be present for reasons such as its absence on initramfs. This patch
allows aesni to function even if the pcbc module is enabled but
not present.
Reported-by: Arkadiusz Miśkiewicz <arekm@maven.pl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The L2 cache controller on the T2080 SoC has similar capabilities to the
others already supported by the mpc85xx_edac driver. Add it to the list
of compatible devices.
Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Acked-by: Johannes Thumshirn <jth@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: devicetree@vger.kernel.org
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/20170201231624.28843-1-chris.packham@alliedtelesis.co.nz
Signed-off-by: Borislav Petkov <bp@suse.de>
Add the device tree entries needed to support the EMAC AXI
bus settings on the Arria10 SoCFPGA chip.
Signed-off-by: Thor Thayer <thor.thayer@linux.intel.com>
Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>
Pull x86 fixes from Ingo Molnar:
"Misc fixes:
- two microcode loader fixes
- two FPU xstate handling fixes
- an MCE timer handling related crash fix"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mce: Make timer handling more robust
x86/microcode: Do not access the initrd after it has been freed
x86/fpu/xstate: Fix xcomp_bv in XSAVES header
x86/fpu: Set the xcomp_bv when we fake up a XSAVES area
x86/microcode/intel: Drop stashed AP patch pointer optimization
Pull perf fixes from Ingo Molnar:
"Five kernel fixes:
- an mmap tracing ABI fix for certain mappings
- a use-after-free fix, found via KASAN
- three CPU hotplug related x86 PMU driver fixes"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/intel/uncore: Make package handling more robust
perf/x86/intel/uncore: Clean up hotplug conversion fallout
perf/x86/intel/rapl: Make package handling more robust
perf/core: Fix PERF_RECORD_MMAP2 prot/flags for anonymous memory
perf/core: Fix use-after-free bug
Pull EFI fixes from Ingo Molnar:
"Two EFI boot fixes, one for arm64 and one for x86 systems with certain
firmware versions"
* 'efi-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
efi/fdt: Avoid FDT manipulation after ExitBootServices()
x86/efi: Always map the first physical page into the EFI pagetables
- fix noMMU build on cores with MMU.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJYk4VkAAoJEFH5zJH4P6BEdxUP/RQRkWmkoJCf5UI7MQOiU1LS
HuT9U4trDyUUPGRYi1oUgSZMjM2IqZZvmUqSQOq7i2Uo52UffTWMBCrniye+CPk9
XLEiAoiD5JCObuDKYQTtpG5BQUqpGsCdV1ZGtpcJofZIef8sxOVvBuyzD+Tr4ok+
OvA4/X20nEX3fp4R2EyEMZg9re+GeJ8vgY/71jAH3bw4SwJ6znsRgNDM2hP7ZaYq
yZfqcwpqzfj8Jrx6Nfq9ibM37iVEam3OoIGJbYFpvF+uLsOyRPMABc7d6oJk2cTo
hFKfu9TlH7vNZOhh812KLqcJSeLf4TxNAUZqKLrCsZ860QnCZoI63911FxwZX3sU
gXYzu4K+c+GhtB48w5UlRlKlHRTqu55MOC+5Pk/lYpm7DXXmgS0cjfRpBvYDFVfH
QIcZ6isGht0TKbLcE79nxnLf1miVajQtSYhtVjND93TwB7ryURLlMdriI7nSYF2M
RksL8m9di1fsZyIHQTaBOlNc9taxnxWJr+2oSJAzJ7f9QSYC4FdYTRwDujx4mPoe
u3w6ED466zPLZzuZeg5qZbZp4s4DGkh9IKHCwZfadmCVtneMlDRV3JRnPvhE740M
+syyJMR8HtLfRMto7fkrkUcK7+pV2jTRTwKk06QIoRlTIwR4uKjO6kVdizNwXQPD
AqmAMqT0YBntVmDgM6sR
=qIDo
-----END PGP SIGNATURE-----
Merge tag 'xtensa-20170202' of git://github.com/jcmvbkbc/linux-xtensa
Pull Xtensa fix from Max Filippov:
"A for an Xtensa build error introduced in reset code refactoring
series in v4.9:
- fix noMMU build on cores with MMU"
* tag 'xtensa-20170202' of git://github.com/jcmvbkbc/linux-xtensa:
xtensa: fix noMMU build on cores with MMU
We recently discovered that __raw_read_system_reg() erroneously mapped
sysreg IDs to the wrong registers.
To ensure that we don't get hit by a similar issue in future, this patch
makes __raw_read_system_reg() use a macro for each case statement,
ensuring that each case reads the correct register.
To ensure that this patch hasn't introduced an issue, I've binary-diffed
the object files before and after this patch. No code or data sections
differ (though some debug sections differ due to line numbering
changing).
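A hedged sketch of the per-case macro approach (the macro name and the
register list shown are illustrative; read_sysreg_s() is the arm64
accessor for encoded system registers):

  /*
   * The case label and the register it reads are generated from the same
   * token, so a copy-paste mismatch can no longer slip in.
   */
  #define read_sysreg_case(r)    \
      case r:    return read_sysreg_s(r)

  static u64 __raw_read_system_reg(u32 sys_id)
  {
      switch (sys_id) {
      read_sysreg_case(SYS_ID_PFR0_EL1);
      read_sysreg_case(SYS_ID_ISAR5_EL1);
      /* ... one entry per supported ID register ... */
      default:
          BUG();
          return 0;
      }
  }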
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since it was introduced in commit da8d02d19f ("arm64/capabilities:
Make use of system wide safe value"), __raw_read_system_reg() has
erroneously mapped some sysreg IDs to other registers.
For the fields in ID_ISAR5_EL1, our local feature detection will be
erroneous. We may spuriously detect that a feature is uniformly
supported, or may fail to detect when it actually is, meaning some
compat hwcaps may be erroneous (or not enforced upon hotplug).
This patch corrects the erroneous entries.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: da8d02d19f ("arm64/capabilities: Make use of system wide safe value")
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
The SPE buffer is virtually addressed, using the page tables of the CPU
MMU. Unusually, this means that the EL0/1 page table may be live whilst
we're executing at EL2 on non-VHE configurations. When VHE is in use,
we can use the same property to profile the guest behind its back.
This patch adds the relevant disabling and flushing code to KVM so that
the host can make use of SPE without corrupting guest memory, and any
attempts by a guest to use SPE will result in a trap.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Instead of open-coding the loop, let's use the canned macro.
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
chiliBoard has a battery connector, so support charging battery
through tps65217 chip.
Additionally, enabling tps65217 charger allows us to get status
of AC power.
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
On chiliBoard the power button is connected to TPS65217. It signals power
button presses by asserting its interrupt output and allows the state to
be read over the i2c bus.
Handle TPS65217 interrupts and enable notifications of power button
presses.
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
The rename of orion5x-lschl.dts needs to be reflected in the Makefile:
make[3]: *** No rule to make target 'arch/arm/boot/dts/orion5x-lschl.dtb', needed by '__build'.
Fixes: 6cfd3cd8d8 ("ARM: dts: orion5x-lschl: More consistent naming on linkstation series")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
pgd_alloc() references init_mm which is not exported to modules. In
order for KVM to be able to use pgd_alloc() to allocate GVA page tables,
move pgd_alloc() into a new pgtable.c file and export it to modules.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
* Return directly after a call of the function "copy_from_user" failed
in a case block.
* Delete the jump label "out" which became unnecessary with
this refactoring.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
The A23 and A33 have an ARM Mali 400 GPU. Now that we have a binding, add
it to our DT.
Acked-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
As we add the ability to do DLPAR of additional devices through
the sysfs interface we need to know which devices are supported.
This adds the reporting of supported devices with a comma separated
list reported in the existing /sys/kernel/dlpar.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Extend the existing PRRN infrastructure to perform the actual affinity
updating for cpus and memory in addition to the device tree updating.
For cpus, dynamic affinity updating already appears to exist in the
kernel in the form of arch_update_cpu_topology(). For memory, we must
place a READD operation on the hotplug queue for any phandle included in
the PRRN event that is determined to be an LMB.
Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Currently, memory must be hot removed and subsequently re-added in order
to dynamically update the affinity of LMBs specified by a PRRN event.
Earlier implementations of the PRRN event handler ran into issues in which
the hot remove would occur successfully, but a hotplug event would be
initiated from another source and grab the hotplug lock preventing the hot
add from occurring. To prevent this situation, this patch introduces the
notion of a hot "readd" action for memory which atomizes a hot remove and
a hot add into a single, serialized operation on the hotplug queue.
Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When adding and removing LMBs we should make the acquire/release of
the DRC a separate step to allow for a few improvements. First, this
will ensure that LMBs removed during a remove-by-count operation are all
available if an error occurs and we need to add them back: by removing
all the LMBs from the kernel before releasing their DRCs, the LMBs are
available to add back should an error occur.
Also, this will allow for faster re-add operations of memory for PRRN
event handling, since we can skip the unneeded step of having to release
the DRC and then acquire it back.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
CONFIG_PPC_PTDUMP currently selects CONFIG_DEBUG_FS. But CONFIG_DEBUG_FS
is user-selectable, so we shouldn't select it. Instead depend on it.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Commit db9112173b ("powerpc: Turn on BPF_JIT in ppc64_defconfig")
only added BPF_JIT to the ppc64 defconfig. Add it to our powernv
and pseries defconfigs too.
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
We added support for HAVE_CONTEXT_TRACKING, but placed the option inside
PPC_PSERIES.
This has the undesirable effect that NO_HZ_FULL can be enabled on a
kernel with both powernv and pseries support, but cannot on a kernel
with powernv only support.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In __get_user_nosleep, we create an intermediate pointer for the
user address we're about to fetch. We currently don't tag this
pointer as const. Make it const, as we are simply dereferencing
it, and its scope is limited to the __get_user_nosleep macro.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In __get_user_nocheck, we create an intermediate pointer for the
user address we're about to fetch. We currently don't tag this
pointer as const. Make it const, as we are simply dereferencing
it, and its scope is limited to the __get_user_nocheck macro.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In __get_user_check, we create an intermediate pointer for the
user address we're about to fetch. We currently don't tag this
pointer as const. Make it const, as we are simply dereferencing
it, and its scope is limited to the __get_user_check macro.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
opal_lpc_init() is called from an __init routine, and calls other __init
routines, so it should also be __init.
Fixes: 023b13a501 ("powerpc/powernv: Add support for direct mapped LPC on POWER9")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull crypto fixes from Herbert Xu:
"This fixes a bug in CBC/CTR on ARM64 that breaks chaining as well as a
bug in the core API that causes registration failures when a driver
unloads and then reloads an algorithm"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: arm64/aes-blk - honour iv_out requirement in CBC and CTR modes
crypto: api - Clear CRYPTO_ALG_DEAD bit before registering an alg
Define and enable pwm1 and pwm3 for stm32f469 discovery board
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Add timers and their sub-nodes into the DT for the stm32f429 family.
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
During a TLB invalidate sequence targeting the inner shareable domain,
Falkor may prematurely complete the DSB before all loads and stores using
the old translation are observed. Instruction fetches are not subject to
the conditions of this erratum. If the original code sequence includes
multiple TLB invalidate instructions followed by a single DSB, only one of
the TLB instructions needs to be repeated to work around this erratum.
While the erratum only applies to cases in which the TLBI specifies the
inner-shareable domain (*IS form of TLBI) and the DSB is ISH form or
stronger (OSH, SYS), this change applies the workaround overabundantly,
to local TLBI and DSB NSH sequences as well, for simplicity.
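A rough sketch of the worked-around flush sequence as inline assembly
(illustrative only; the real workaround lives in the arm64 TLB flush
macros and is gated on the erratum configuration):

  static inline void flush_tlb_all_workaround(void)
  {
      asm volatile(
      "dsb    ishst\n"        /* complete prior stores to page tables */
      "tlbi   vmalle1is\n"    /* the original invalidate */
      "dsb    ish\n"
      /* Workaround: repeat one TLBI and DSB so that loads/stores using
       * the old translation are really drained before we continue. */
      "tlbi   vmalle1is\n"
      "dsb    ish\n"
      "isb"
      ::: "memory");
  }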
Based on work by Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit:
8ee83b2ab3 ("perf/x86/intel/pt: Add support for PTWRITE and power event tracing")
forgot to add format strings to the PT driver. So one could enable these features
by setting corresponding bits in the event config, but not by their mnemonic names.
This patch adds the format strings.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Fixes: 8ee83b2ab3 ("perf/x86/intel/pt: Add support for PTWRITE...")
Link: http://lkml.kernel.org/r/20170127151644.8585-2-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Move the check of whether this is a UV system that needs initialization
from is_uv_system() to the internal uv_system_init() function. This is
because on a UV system without a HUB, is_uv_system() returns false.
But we still need some specific UV system initialization. See
uv_system_init() for the change to a quick check of whether UV is
applicable. This change should not increase overhead since
is_uv_system() also calls into this same area.
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Russ Anderson <rja@hpe.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dimitri Sivanich <sivanich@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20170125163518.256403963@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The PCH NMI I/O initialization function is separate and may be moved to BIOS
for security reasons. This function detects whether the PCH NMI config
has already been done and if not, it will then initialize the PCH here.
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Russ Anderson <rja@hpe.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dimitri Sivanich <sivanich@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20170125163518.089387859@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Verify that the NMI action being set is valid. The default NMI action
changes from the non-standard 'kdb' to the more standard 'dump'.
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Russ Anderson <rja@hpe.com>
Reviewed-by: Alex Thorlton <athorlton@sgi.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dimitri Sivanich <sivanich@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20170125163517.922751779@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add a low impact health check triggered by the system NMI command
that essentially checks which CPUs are responding to external NMIs.
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Russ Anderson <rja@hpe.com>
Reviewed-by: Alex Thorlton <athorlton@sgi.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dimitri Sivanich <sivanich@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20170125163517.756690240@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Make it more readable.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Dimitri Sivanich <sivanich@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20170114082612.GA27842@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
cputime_t is now only used by two architectures:
* powerpc (when CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y)
* s390
And since the core doesn't use it anymore, we don't need any arch support
from the others. So we can remove their stub implementations.
A final cleanup would be to provide an efficient pure arch
implementation of cputime_to_nsec() for s390 and powerpc and finally
remove include/linux/cputime.h.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-36-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Since the core doesn't deal with cputime_t anymore, most of these APIs
have been left unused. Let's remove these.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-34-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Since the core doesn't deal with cputime_t anymore, most of these APIs
have been left unused. Let's remove these.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-33-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This way we don't need to deal with cputime_t details from the core code.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-32-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This nsec based cputime_t implementation isn't used anymore. We can
remove it.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-31-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
There is no need anymore for this cputime_t midlayer. Let's use nsec
units directly.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-30-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Initially, nsec based cputime_t implementation belonged to IA64. It got
exported later for CONFIG_VIRT_CPU_ACCOUNTING_GEN but now it is again
only used by IA64. So let's move it back there.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-29-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This is one more step toward converting cputime accounting to pure nsecs.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-25-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This is one more step toward converting cputime accounting to pure nsecs.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-24-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This is one more step toward converting cputime accounting to pure nsecs.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-23-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This is one more step toward converting cputime accounting to pure nsecs.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-22-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Use the new nsec based cputime accessors as part of the whole cputime
conversion from cputime_t to nsecs.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-12-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Use the new nsec based cputime accessors as part of the whole cputime
conversion from cputime_t to nsecs.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-10-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Use the new nsec based cputime accessors as part of the whole cputime
conversion from cputime_t to nsecs.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-9-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Now that most cputime readers use the transition API which returns the
task cputime in old style cputime_t, we can safely store the cputime in
nsecs. This will eventually make cputime statistics less opaque and more
granular. Back and forth conversions between cputime_t and nsecs in order
to deal with cputime_t's random granularity won't be needed anymore.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-8-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This API returns a task's cputime in cputime_t in order to ease the
conversion of cputime internals to use nsecs units instead. Blindly
converting all cputime readers to use this API now will later let us
convert more smoothly and step by step all these places to use the
new nsec based cputime.
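As an illustration only, such a transition accessor could look roughly
like the sketch below, assuming the task's times are stored in nsecs
underneath (the helper name and exact placement are assumptions, not a
quote of the patch):

    static inline void task_cputime_t(struct task_struct *t,
                                      cputime_t *utime, cputime_t *stime)
    {
            u64 ut, st;

            task_cputime(t, &ut, &st);        /* nsec based internals */
            *utime = nsecs_to_cputime(ut);    /* hand back old units  */
            *stime = nsecs_to_cputime(st);
    }

Callers keep receiving cputime_t while the core switches to nsecs, and
can later be moved one by one to the nsec based accessor.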
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-7-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Kernel CPU stats are stored in cputime_t which is an architecture
defined type, and hence a bit opaque, requiring accessors and mutators
for any operation.
Converting them to nsecs simplifies the code and is one step toward
the removal of cputime_t in the core code.
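For reference, the direction of the conversion can be sketched with the
existing conversion macros (their exact definitions vary per
architecture; this is illustrative, not code from the patch):

    u64       ns  = cputime_to_nsecs(ct);   /* cputime_t -> nsecs, exact */
    cputime_t ct2 = nsecs_to_cputime(ns);   /* nsecs -> cputime_t, may lose
                                             * precision on jiffies based
                                             * configurations */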
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-4-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
It is not obvious whether the reserved boot areas are added correctly, so
add an efi_print_memmap() call to print the new memmap.
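A minimal sketch of the idea (the exact placement in the x86 EFI setup
path is an assumption):

    /* after inserting the boot services reservations into the map */
    if (efi_enabled(EFI_MEMMAP))
            efi_print_memmap();     /* dump the resulting memmap to dmesg */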
Tested-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Nicolai Stange <nicstange@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1485868902-20401-10-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Before invoking the arch specific handler, efi_mem_reserve() reserves
the given memory region through memblock.
efi_bgrt_init() will call efi_mem_reserve() after mm_init(), at which
time memblock is dead and should not be used anymore.
The EFI BGRT code depends on ACPI initialization to get the BGRT ACPI
table, so move parsing of the BGRT table to the ACPI early boot code to
ensure that efi_mem_reserve() in the EFI BGRT code can still use memblock
safely.
Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Len Brown <lenb@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-acpi@vger.kernel.org
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1485868902-20401-9-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
UEFI v2.6 introduces EFI_MEMORY_ATTRIBUTES_TABLE which describes memory
protections that may be applied to the EFI Runtime code and data regions by
the kernel. This enables the kernel to map these regions more strictly thereby
increasing security.
Presently, the only valid bits for the attribute field of a memory descriptor
are EFI_MEMORY_RO and EFI_MEMORY_XP, hence use these bits to update the
mappings in efi_pgd.
The UEFI specification recommends using this feature instead of
EFI_PROPERTIES_TABLE. Hence, while updating EFI mappings, we first
check for EFI_MEMORY_ATTRIBUTES_TABLE and, if it's present, update
the mappings according to this table, disregarding
EFI_PROPERTIES_TABLE even if it's published by the firmware. We consider
EFI_PROPERTIES_TABLE only when EFI_MEMORY_ATTRIBUTES_TABLE is absent.
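The preference can be sketched as below. efi_memattr_apply_permissions()
is the generic helper in drivers/firmware/efi/memattr.c, while the
callback name and the surrounding structure are assumptions, not the
exact efi_pgd update code:

    if (efi_enabled(EFI_MEM_ATTR)) {
            /* apply EFI_MEMORY_RO / EFI_MEMORY_XP from the table */
            efi_memattr_apply_permissions(&efi_mm, efi_update_mem_attr);
            return;
    }
    /* only if the memory attributes table is absent do we fall back
     * to the EFI_PROPERTIES_TABLE handling */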
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Lee, Chun-Yi <jlee@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Shankar <ravi.v.shankar@intel.com>
Cc: Ricardo Neri <ricardo.neri@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1485868902-20401-6-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Eliminate the separate 32-bit and 64-bit code paths by way of the shiny
new efi_call_proto() macro.
No functional change intended.
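For illustration, a call through the macro looks roughly like this (the
protocol, member and variables are an example, not taken from this
patch):

    status = efi_call_proto(efi_graphics_output_protocol, query_mode,
                            gop, mode, &size, &info);

The macro expands to the 32-bit or 64-bit protocol member access as
appropriate, so the caller no longer needs two code paths.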
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1485868902-20401-3-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
There's one ARM, one x86_32 and one x86_64 version which can be folded
into a single shared version by masking their differences with the shiny
new efi_call_proto() macro.
No functional change intended.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1485868902-20401-2-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The package management code in uncore relies on package mapping being
available before a CPU is started. This changed with:
9d85eb9119 ("x86/smpboot: Make logical package management more robust")
because the ACPI/BIOS information turned out to be unreliable, but that
left uncore in broken state. This was not noticed because on a regular boot
all CPUs are online before uncore is initialized.
Move the allocation to the CPU online callback and simplify the hotplug
handling. At this point the package mapping is established and correct.
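Roughly, the hotplug wiring then takes the shape below (a sketch; the
state name string and the callback names are assumptions about the
uncore code, not verified against the final patch):

    /* allocation now happens in the online callback, where the
     * logical package id is guaranteed to be valid */
    cpuhp_setup_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
                      "perf/x86/intel/uncore:online",
                      uncore_event_cpu_online,
                      uncore_event_cpu_offline);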
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Stephane Eranian <eranian@google.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
Fixes: 9d85eb9119 ("x86/smpboot: Make logical package management more robust")
Link: http://lkml.kernel.org/r/20170131230141.377156255@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The recent conversion to the hotplug state machine kept two mechanisms from
the original code:
1) The first_init logic which adds the number of online CPUs in a package
to the refcount. That's wrong because the callbacks are executed for
all online CPUs.
Remove it so the refcounting is correct.
2) The on_each_cpu() call to undo box->init() in the error handling
path. That's bogus because when the prepare callback fails no box has
been initialized yet.
Remove it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Stephane Eranian <eranian@google.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
Fixes: 1a246b9f58 ("perf/x86/intel/uncore: Convert to hotplug state machine")
Link: http://lkml.kernel.org/r/20170131230141.298032324@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The package management code in RAPL relies on package mapping being
available before a CPU is started. This changed with:
9d85eb9119 ("x86/smpboot: Make logical package management more robust")
because the ACPI/BIOS information turned out to be unreliable, but that
left RAPL in broken state. This was not noticed because on a regular boot
all CPUs are online before RAPL is initialized.
A possible fix would be to reintroduce the mess which allocates a package
data structure in CPU prepare and when it turns out to already exist in
starting throw it away later in the CPU online callback. But that's a
horrible hack and not required at all because RAPL becomes functional for
perf only in the CPU online callback. That's correct because user space is
not yet informed about the CPU being onlined, so nothing can rely on RAPL
being available on that particular CPU.
Move the allocation to the CPU online callback and simplify the hotplug
handling. At this point the package mapping is established and correct.
This also adds a missing check for available package data in the
event_init() function.
Reported-by: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Stephane Eranian <eranian@google.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 9d85eb9119 ("x86/smpboot: Make logical package management more robust")
Link: http://lkml.kernel.org/r/20170131230141.212593966@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit bf15f86b34 ("xtensa: initialize MMU before jumping to reset
vector") calls MMU management functions even when CONFIG_MMU is not
selected. That breaks noMMU build on cores with MMU.
Don't manage MMU when CONFIG_MMU is not selected.
Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Add initial set of CoreSight components found on Qualcomm msm8916 and
apq8016 based platforms, including the DragonBoard 410c board.
Signed-off-by: Ivan T. Ivanov <ivan.ivanov@linaro.org>
Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
This clock has been missing since some early stages of device tree
conversion. Adding the right clocks to the device tree makes USB
work again.
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
The physical pins from the SoC in a sense belong to the
PHY device (AB8500 USB) rather than the MUSB USB IP block.
The driver definitely assumes so: before this change it
complains that it cannot control the pins it is using:
abx5x0-usb ab8500-usb.0: could not get/set default pinstate
After this patch the warning goes away.
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
This moves the enable-active-high setting from the SoC to the
board for the VMMCQ regulators. It should at least be in the
vicinity of the GPIO line it is defined for.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Erik reported that on a preproduction hardware a CMCI storm triggers the
BUG_ON in add_timer_on(). The reason is that the per CPU MCE timer is
started by the CMCI logic before the MCE CPU hotplug callback starts the
timer with add_timer_on(). So the timer is already queued which triggers
the BUG.
Using add_timer_on() is pretty pointless in this code because the timer is
strictly per CPU, initialized as pinned and all operations which arm the
timer happen on the CPU to which the timer belongs.
Simplify the whole machinery by using mod_timer() instead of add_timer_on()
which avoids the problem because mod_timer() can handle already queued
timers. Use __start_timer() everywhere so the earliest armed expiry time is
preserved.
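The __start_timer() helper mentioned above has roughly this shape (a
close sketch, not necessarily the exact final code):

    static void __start_timer(struct timer_list *t, unsigned long interval)
    {
            unsigned long when = jiffies + interval;
            unsigned long flags;

            local_irq_save(flags);
            /* keep the earliest armed expiry; mod_timer() copes with an
             * already pending timer, unlike add_timer_on() */
            if (!timer_pending(t) || time_before(when, t->expires))
                    mod_timer(t, round_jiffies(when));
            local_irq_restore(flags);
    }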
Reported-by: Erik Veijola <erik.veijola@intel.com>
Tested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Cc: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701310936080.3457@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Provide human readable names for all power domains defined in Exynos SoCs.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
The recent commit which prevents double activation of interrupts unearthed
interesting code in x86. The code (ab)uses irq_domain_activate_irq() to
reconfigure an already activated interrupt. That trips over the prevention
code now.
Fix it by deactivating the interrupt before activating the new configuration.
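In other words, the affected x86 paths now do something along these
lines (a two-line sketch of the fix, not the exact call sites):

    irq_domain_deactivate_irq(irq_data);   /* drop the old configuration */
    irq_domain_activate_irq(irq_data);     /* activate the new one */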
Fixes: 08d85f3ea9 "irqdomain: Avoid activating interrupts more than once"
Reported-and-tested-by: Mike Galbraith <efault@gmx.de>
Reported-and-tested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701311901580.3457@nanos
Commit cab15ce604 ("arm64: Introduce execute-only page access
permissions") allowed a valid user PTE to have the PTE_USER bit clear.
As a consequence, the pte_valid_not_user() macro in set_pte() was
replaced with pte_valid_global() under the assumption that only user
pages have the nG bit set. EFI mappings, however, also have the nG bit
set and set_pte() wrongly ignores issuing the DSB+ISB.
This patch reinstates the pte_valid_not_user() macro and adds the
PTE_UXN bit check since all kernel mappings have this bit set. For
clarity, pte_exec() is renamed to pte_user_exec() as it only checks for
the absence of PTE_UXN. Consequently, the user executable check in
set_pte_at() drops the pte_ng() test since pte_user_exec() is
sufficient.
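The reinstated macro is along the lines of the following (the exact
arm64 definition may differ in detail):

    /* valid non-user mapping: PTE_VALID set, PTE_USER clear, PTE_UXN set */
    #define pte_valid_not_user(pte) \
            ((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == \
             (PTE_VALID | PTE_UXN))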
Fixes: cab15ce604 ("arm64: Introduce execute-only page access permissions")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ahci driver now supports other refclk clock rates.
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Register a fixed rate clock modelling the external SATA oscillator
for da850 (both DT and board file mode).
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
The ahci-da850 SATA driver is now capable of retrieving clocks by
con_id. Add the connection id for the sysclk2-derived SATA clock.
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
This merge is needed because patches in branch v4.11/soc conflict
with the cleanup done as part of 0a5011673a ("ARM: davinci:
da850: coding style fix"), which is already queued as a
non-critical fix.
These boards are Marvell's evaluation boards for the 98DX4251 and
98DX3336 SoCs.
[gregory.clement@free-electrons.com: fix topic and update Makefile]
Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
The Marvell 98DX3236, 98DX3336, 98DX4251 and variants are switch ASICs
with integrated CPUs. They are similar to the Armada XP SoCs but have
different I/O interfaces.
[gregory.clement@free-electrons.com: fix topic]
Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Enable the SATA node for da850-lcdk. We omit the pinctrl property on
purpose - the muxed SATA pins are not hooked up to anything
SATA-related on the lcdk.
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
DTS files which include orion5x-linkstation.dtsi are named:
orion5x-linkstation-*.dts
So we rename the file below:
arch/arm/boot/dts/orion5x-lschl.dts
to the new name:
arch/arm/boot/dts/orion5x-linkstation-lschl.dts
Because the DTS conversion of this device was only introduced in 4.9,
Debian is still using legacy device support, as are other distros,
so no actual impact is expected here.
Fixes: f94f268979 ("ARM: dts: orion5x: convert ls-chl to FDT")
Cc: Ashley Hughes <ashley.hughes@blueyonder.co.uk>
Signed-off-by: Roger Shimizu <rogershimizu@gmail.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
The model name should be consistent with the legacy device file, so that
users can migrate their system from legacy device support to device tree
safely.
The legacy device file has been removed, but it can be found in 4.8
or earlier versions of Linux:
arch/arm/mach-orion5x/ls-chl-setup.c
Fixes: f94f268979 ("ARM: dts: orion5x: convert ls-chl to FDT")
Cc: Ashley Hughes <ashley.hughes@blueyonder.co.uk>
Signed-off-by: Roger Shimizu <rogershimizu@gmail.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
This updates the KVM_CAP_SPAPR_RESIZE_HPT capability to advertise the
presence of in-kernel HPT resizing on KVM HV.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds the "guts" of the implementation for the HPT resizing PAPR
extension. It has the code to allocate and clear a new HPT, rehash an
existing HPT's entries into it, and accomplish the switchover for a
KVM guest from the old HPT to the new one.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds a not yet working outline of the HPT resizing PAPR
extension. Specifically it adds the necessary ioctl() functions,
their basic steps, the work function which will handle preparation for
the resize, and synchronization between these, the guest page fault
path and guest HPT update path.
The actual guts of the implementation aren't here yet, so for now the
calls will always fail.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The kvm_unmap_rmapp() function, called from certain MMU notifiers, is used
to force all guest mappings of a particular host page to be set ABSENT, and
removed from the reverse mappings.
For HPT resizing, we will have some cases where we want to set just a
single guest HPTE ABSENT and remove its reverse mappings. To prepare for
this, we split out the logic from kvm_unmap_rmapp() to evict a single HPTE,
moving it to a new helper function.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The KVM_PPC_ALLOCATE_HTAB ioctl() is used to set the size of hashed page
table (HPT) that userspace expects a guest VM to have, and is also used to
clear that HPT when necessary (e.g. guest reboot).
At present, once the ioctl() is called for the first time, the HPT size can
never be changed thereafter - it will be cleared but always sized as from
the first call.
With upcoming HPT resize implementation, we're going to need to allow
userspace to resize the HPT at reset (to change it back to the default size
if the guest changed it).
So, we need to allow this ioctl() to change the HPT size.
This patch also updates Documentation/virtual/kvm/api.txt to reflect
the new behaviour. In fact the documentation was already slightly
incorrect since 572abd5 "KVM: PPC: Book3S HV: Don't fall back to
smaller HPT size in allocation ioctl"
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This entry is needed for the ahci driver to get a functional clock.
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
[nsekhar@ti.com: subject line adjustment]
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Currently, kvmppc_alloc_hpt() both allocates a new hashed page table (HPT)
and sets it up as the active page table for a VM. For the upcoming HPT
resize implementation we're going to want to allocate HPTs separately from
activating them.
So, split the allocation itself out into kvmppc_allocate_hpt() and perform
the activation with a new kvmppc_set_hpt() function. Likewise we split
kvmppc_free_hpt(), which just frees the HPT, from kvmppc_release_hpt()
which unsets it as an active HPT, then frees it.
We also move the logic to fall back to smaller HPT sizes if the first try
fails into the single caller which used that behaviour,
kvmppc_hv_setup_htab_rma(). This introduces a slight semantic change, in
that previously if the initial attempt at CMA allocation failed, we would
fall back to attempting smaller sizes with the page allocator. Now, we
try first CMA, then the page allocator at each size. As far as I can tell
this change should be harmless.
To match, we make kvmppc_free_hpt() just free the actual HPT itself. The
call to kvmppc_free_lpid() that was there, we move to the single caller.
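The resulting split can be summarised by the new prototypes, sketched
from the description above (exact signatures are assumptions):

    int  kvmppc_allocate_hpt(struct kvm_hpt_info *info, u32 order);
    void kvmppc_set_hpt(struct kvm *kvm, struct kvm_hpt_info *info);
    void kvmppc_free_hpt(struct kvm_hpt_info *info);   /* just frees it */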
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Currently the kvm_hpt_info structure stores the hashed page table's order,
and also the number of HPTEs it contains and a mask for its size. The
last two can be easily derived from the order, so remove them and just
calculate them as necessary with a couple of helper inlines.
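The derived values can be computed along these lines (a sketch matching
the description; each HPTE is 16 bytes and each HPTE group is 128
bytes, helper names are assumptions):

    static inline unsigned long kvmppc_hpt_npte(struct kvm_hpt_info *hpt)
    {
            return 1UL << (hpt->order - 4);        /* 16 bytes per HPTE  */
    }

    static inline unsigned long kvmppc_hpt_mask(struct kvm_hpt_info *hpt)
    {
            return (1UL << (hpt->order - 7)) - 1;  /* 128 bytes per HPTEG */
    }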
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Currently, the powerpc kvm_arch structure contains a number of variables
tracking the state of the guest's hashed page table (HPT) in KVM HV. This
patch gathers them all together into a single kvm_hpt_info substructure.
This makes life more convenient for the upcoming HPT resizing
implementation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The difference between kvm_alloc_hpt() and kvmppc_alloc_hpt() is not at
all obvious from the name. In practice kvmppc_alloc_hpt() allocates an HPT
by whatever means, and calls kvm_alloc_hpt() which will attempt to allocate
it with CMA only.
To make this less confusing, rename kvm_alloc_hpt() to kvm_alloc_hpt_cma().
Similarly, kvm_release_hpt() is renamed kvm_free_hpt_cma().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Add the da850-ahci driver to davinci defconfig.
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
This commit adjusts the names of gatable clock #18 of the Marvell Armada
CP110 system controller. This clock not only controls SD/MMC, but also
the GOP (Group Of Ports) used for networking. So the clock is renamed to
{cpm,cps}-sd-mmc-gop instead of {cpm,cps}-sd-mmc.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Add a DT binding documentation for the Video Data Order Adapter (VDOA)
of the Freescale i.MX6 SoC.
Also, add the compatible property and correct clock to the device tree
to match the documentation.
Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com>
Signed-off-by: Michael Tretter <m.tretter@pengutronix.de>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
We need to cleanup the TSC page before doing kexec/kdump or the new kernel
may crash if it tries to use it.
Fixes: 63ed4e0c67 ("Drivers: hv: vmbus: Consolidate all Hyper-V specific clocksource code")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We need to cleanup the hypercall page before doing kexec/kdump or the new
kernel may crash if it tries to use it. Reuse the now-empty hv_cleanup
function renaming it to hyperv_cleanup and moving to the arch specific
code.
Fixes: 8730046c14 ("Drivers: hv vmbus: Move Hypercall page setup out of common code")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The principles of operations specifies that the breaking event address
is stored to the address 0x110 in the prefix page only for program checks.
The last branch in user space is lost as soon as a branch in kernel space
is executed after e.g. an svc. This makes it impossible to accurately
maintain the breaking event address for a user space process.
Simplify the code, just copy the current breaking event address from
0x110 to the task structure for program checks from user space.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The generate_entropy function used sha256 for compacting
together 256 bits of entropy into a 32 byte hash. However, it
is questionable whether a sha256 can really be used here, as
potential collisions may reduce the maximum entropy fitting into
a 32 byte hash value. So this patch introduces the use of
sha512 instead, together with the required buffer adjustments for
the calling functions.
Furthermore, the working buffer for the generate_entropy
function has been widened from one page to two pages. So now
1024 stckf invocations are used to gather 256 bits of
entropy. This has been done to be on the safe side in case the
jitter of the stckf values isn't as good as assumed.
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
In fips mode only xts keys with 128 bit or 256 bit are allowed.
This fix extends the xts_aes_set_key function to check for these
valid key lengths in fips mode.
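The added check amounts to something like this (a sketch; an xts key
carries two AES keys, so 128 bit and 256 bit AES correspond to 32 and
64 key bytes respectively):

    if (fips_enabled && key_len != 32 && key_len != 64)
            return -EINVAL;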
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Triple-DES implementations will soon be required to check
for uniqueness of keys with fips mode enabled. Add checks
to ensure none of the 3 keys match.
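The shape of such a check, using the constant-time comparison helper
from <crypto/algapi.h>, is roughly the following (illustrative, not the
exact driver code):

    if (fips_enabled &&
        !(crypto_memneq(key, key + DES_KEY_SIZE, DES_KEY_SIZE) &&
          crypto_memneq(key + DES_KEY_SIZE, key + 2 * DES_KEY_SIZE,
                        DES_KEY_SIZE)))
            return -EINVAL;           /* K1 == K2 or K2 == K3 */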
Signed-off-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
As reported by kernel build test:
In file included from arch/arm/mach-omap2/pdata-quirks.c:15:0:
>> arch/arm/mach-omap2/pdata-quirks.c:536:49: error: 'rx51_lirc_data' undeclared here (not in a function)
OF_DEV_AUXDATA("nokia,n900-ir", 0, "n900-ir", &rx51_lirc_data),
^
include/linux/of_platform.h:52:21: note: in definition of macro 'OF_DEV_AUXDATA'
.platform_data = _pdata }
^~~~~~
Since "a92def1 [media] ir-rx51: port to rc-core" the build fails on
some arm configurations.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
This merges in the POWER9 radix MMU host and guest support, which
was put into a topic branch because it touches both powerpc and
KVM code.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds a few last pieces of the support for radix guests:
* Implement the backends for the KVM_PPC_CONFIGURE_V3_MMU and
KVM_PPC_GET_RMMU_INFO ioctls for radix guests
* On POWER9, allow secondary threads to be on/off-lined while guests
are running.
* Set up LPCR and the partition table entry for radix guests.
* Don't allocate the rmap array in the kvm_memory_slot structure
on radix.
* Don't try to initialize the HPT for radix guests, since they don't
have an HPT.
* Take out the code that prevents the HV KVM module from
initializing on radix hosts.
At this stage, we only support radix guests if the host is running
in radix mode, and only support HPT guests if the host is running in
HPT mode. Thus a guest cannot switch from one mode to the other,
which enables some simplifications.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
On POWER9 DD1, we need to invalidate the ERAT (effective to real
address translation cache) when changing the PIDR register, which
we do as part of guest entry and exit.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
If we allow LPCR[AIL] to be set for radix guests, then interrupts from
the guest to the host can be delivered by the hardware with relocation
on, and thus the code path starting at kvmppc_interrupt_hv can be
executed in virtual mode (MMU on) for radix guests (previously it was
only ever executed in real mode).
Most of the code is indifferent to whether the MMU is on or off, but
the calls to OPAL that use the real-mode OPAL entry code need to
be switched to use the virtual-mode code instead. The affected
calls are the calls to the OPAL XICS emulation functions in
kvmppc_read_one_intr() and related functions. We test the MSR[IR]
bit to detect whether we are in real or virtual mode, and call the
opal_rm_* or opal_* function as appropriate.
The other place that depends on the MMU being off is the optimization
where the guest exit code jumps to the external interrupt vector or
hypervisor doorbell interrupt vector, or returns to its caller (which
is __kvmppc_vcore_entry). If the MMU is on and we are returning to
the caller, then we don't need to use an rfid instruction since the
MMU is already on; a simple blr suffices. If there is an external
or hypervisor doorbell interrupt to handle, we branch to the
relocation-on version of the interrupt vector.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
With radix, the guest can do TLB invalidations itself using the tlbie
(global) and tlbiel (local) TLB invalidation instructions. Linux guests
use local TLB invalidations for translations that have only ever been
accessed on one vcpu. However, that doesn't mean that the translations
have only been accessed on one physical cpu (pcpu) since vcpus can move
around from one pcpu to another. Thus a tlbiel might leave behind stale
TLB entries on a pcpu where the vcpu previously ran, and if that task
then moves back to that previous pcpu, it could see those stale TLB
entries and thus access memory incorrectly. The usual symptom of this
is random segfaults in userspace programs in the guest.
To cope with this, we detect when a vcpu is about to start executing on
a thread in a core that is a different core from the last time it
executed. If that is the case, then we mark the core as needing a
TLB flush and then send an interrupt to any thread in the core that is
currently running a vcpu from the same guest. This will get those vcpus
out of the guest, and the first one to re-enter the guest will do the
TLB flush. The reason for interrupting the vcpus executing on the old
core is to cope with the following scenario:
CPU 0 (core 0)           CPU 1 (core 0)           CPU 4 (core 1)

VCPU 0 runs task X       VCPU 1 runs
core 0 TLB gets
entries from task X
VCPU 0 moves to CPU 4
                                                  VCPU 0 runs task X
                                                  Unmap pages of task X
                                                  tlbiel

                         (still VCPU 1)
                         task X moves to VCPU 1
                         task X runs
                         task X sees stale TLB
                         entries
That is, as soon as the VCPU starts executing on the new core, it
could unmap and tlbiel some page table entries, and then the task
could migrate to one of the VCPUs running on the old core and
potentially see stale TLB entries.
Since the TLB is shared between all the threads in a core, we only
use the bit of kvm->arch.need_tlb_flush corresponding to the first
thread in the core. To ensure that we don't have a window where we
can miss a flush, this moves the clearing of the bit from before the
actual flush to after it. This way, two threads might both do the
flush, but we prevent the situation where one thread can enter the
guest before the flush is finished.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
If the guest is in radix mode, then it doesn't have a hashed page
table (HPT), so all of the hypercalls that manipulate the HPT can't
work and should return an error. This adds checks to make them
return H_FUNCTION ("function not supported").
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds code to keep track of dirty pages when requested (that is,
when memslot->dirty_bitmap is non-NULL) for radix guests. We use the
dirty bits in the PTEs in the second-level (partition-scoped) page
tables, together with a bitmap of pages that were dirty when their
PTE was invalidated (e.g., when the page was paged out). This bitmap
is stored in the first half of the memslot->dirty_bitmap area, and
kvm_vm_ioctl_get_dirty_log_hv() now uses the second half for the
bitmap that gets returned to userspace.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adapts our implementations of the MMU notifier callbacks
(unmap_hva, unmap_hva_range, age_hva, test_age_hva, set_spte_hva)
to call radix functions when the guest is using radix. These
implementations are much simpler than for HPT guests because we
have only one PTE to deal with, so we don't need to traverse
rmap chains.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds the code to construct the second-level ("partition-scoped" in
architecturese) page tables for guests using the radix MMU. Apart from
the PGD level, which is allocated when the guest is created, the rest
of the tree is all constructed in response to hypervisor page faults.
As well as hypervisor page faults for missing pages, we also get faults
for reference/change (RC) bits needing to be set, as well as various
other error conditions. For now, we only set the R or C bit in the
guest page table if the same bit is set in the host PTE for the
backing page.
This code can take advantage of the guest being backed with either
transparent or ordinary 2MB huge pages, and insert 2MB page entries
into the guest page tables. There is no support for 1GB huge pages
yet.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds code to branch around the parts that radix guests don't
need - clearing and loading the SLB with the guest SLB contents,
saving the guest SLB contents on exit, and restoring the host SLB
contents.
Since the host is now using radix, we need to save and restore the
host value for the PID register.
On hypervisor data/instruction storage interrupts, we don't do the
guest HPT lookup on radix, but just save the guest physical address
for the fault (from the ASDR register) in the vcpu struct.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds a field in struct kvm_arch and an inline helper to
indicate whether a guest is a radix guest or not, plus a new file
to contain the radix MMU code, which currently contains just a
translate function which knows how to traverse the guest page
tables to translate an address.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
POWER9 adds a register called ASDR (Access Segment Descriptor
Register), which is set by hypervisor data/instruction storage
interrupts to contain the segment descriptor for the address
being accessed, assuming the guest is using HPT translation.
(For radix guests, it contains the guest real address of the
access.)
Thus, for HPT guests on POWER9, we can use this register rather
than looking up the SLB with the slbfee. instruction.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds the implementation of the KVM_PPC_CONFIGURE_V3_MMU ioctl
for HPT guests on POWER9. With this, we can return 1 for the
KVM_CAP_PPC_MMU_HASH_V3 capability.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds two capabilities and two ioctls to allow userspace to
find out about and configure the POWER9 MMU in a guest. The two
capabilities tell userspace whether KVM can support a guest using
the radix MMU, or using the hashed page table (HPT) MMU with a
process table and segment tables. (Note that the MMUs in the
POWER9 processor cores do not use the process and segment tables
when in HPT mode, but the nest MMU does).
The KVM_PPC_CONFIGURE_V3_MMU ioctl allows userspace to specify
whether a guest will use the radix MMU or the HPT MMU, and to
specify the size and location (in guest space) of the process
table.
The KVM_PPC_GET_RMMU_INFO ioctl gives userspace information about
the radix MMU. It returns a list of supported radix tree geometries
(base page size and number of bits indexed at each level of the
radix tree) and the encoding used to specify the various page
sizes for the TLB invalidate entry instruction.
Initially, both capabilities return 0 and the ioctls return -EINVAL,
until the necessary infrastructure for them to operate correctly
is added.
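A userspace caller would eventually use the configure ioctl roughly
like this (illustrative; vm_fd and proc_table_value are placeholders,
and the process table value is encoded in partition-table-entry format
as described in Documentation/virtual/kvm/api.txt):

    struct kvm_ppc_mmuv3_cfg cfg = {
            .flags         = KVM_PPC_MMUV3_RADIX | KVM_PPC_MMUV3_GTSE,
            .process_table = proc_table_value,   /* PATB-format value */
    };

    ioctl(vm_fd, KVM_PPC_CONFIGURE_V3_MMU, &cfg);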
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
With host and guest both using radix translation, it is feasible
for the host to take interrupts that come from the guest with
relocation on, and that is in fact what the POWER9 hardware will
do when LPCR[AIL] = 3. All such interrupts use HSRR0/1 not SRR0/1
except for system call with LEV=1 (hcall).
Therefore this adds the KVM tests to the _HV variants of the
relocation-on interrupt handlers, and adds the KVM test to the
relocation-on system call entry point.
We also instantiate the relocation-on versions of the hypervisor
data storage and instruction interrupt handlers, since these can
occur with relocation on in radix guests.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When changing a partition table entry on POWER9, we do a particular
form of the tlbie instruction which flushes all TLBs and caches of
the partition table for a given logical partition ID (LPID).
This instruction has a field in the instruction word, labelled R
(radix), which should be 1 if the partition was previously a radix
partition and 0 if it was a HPT partition. This implements that
logic.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This exports the pgtable_cache array and the pgtable_cache_add
function so that HV KVM can use them for allocating radix page
tables for guests.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This adds definitions for bits in the DSISR register which are used
by POWER9 for various translation-related exception conditions, and
for some more bits in the partition table entry that will be needed
by KVM.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
To use radix as a guest, we first need to tell the hypervisor via
the ibm,client-architecture call that we support POWER9 and
architecture v3.00, and that we can do either radix or hash and
that we would like to choose later using an hcall (the
H_REGISTER_PROC_TBL hcall).
Then we need to check whether the hypervisor agreed to us using
radix. We need to do this very early on in the kernel boot process
before any of the MMU initialization is done. If the hypervisor
doesn't agree, we can't use radix and therefore clear the radix
MMU feature bit.
Later, when we have set up our process table, which points to the
radix tree for each process, we need to install that using the
H_REGISTER_PROC_TBL hcall.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This fixes the byte index values for some of the option bits in
the "ibm,architectur-vec-5" property. The "platform facilities options"
bits are in byte 17 not byte 14, so the upper 8 bits of their
definitions need to be 0x11 not 0x0E. The "sub processor support" option
is in byte 21 not byte 15.
Note none of these options are actually looked up in
"ibm,architecture-vec-5" at this time, so there is no bug.
When checking whether option bits are set, we should check that
the offset of the byte being checked is less than the vector
length that we got from the hypervisor.
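The bounds check amounts to something like the helper below (names are
illustrative only, not the actual prom_init code):

    /* vec5 points at the option bytes; veclen is the length the
     * hypervisor supplied for the vector */
    static bool vec5_opt_set(const u8 *vec5, u8 veclen, u8 byte, u8 mask)
    {
            return byte < veclen && (vec5[byte] & mask);
    }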
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Currently, if the kernel is running on a POWER9 processor under a
hypervisor, it will try to use the radix MMU even though it doesn't have
the necessary code to use radix under a hypervisor (it doesn't negotiate
use of radix, and it doesn't do the H_REGISTER_PROC_TBL hcall). The
result is that the guest kernel will crash when it tries to turn on the
MMU.
This fixes it by looking for the /chosen/ibm,architecture-vec-5
property, and if it exists, clears the radix MMU feature bit, before we
decide whether to initialize for radix or HPT. This property is created
by the hypervisor as a result of the guest calling the
ibm,client-architecture-support method to indicate its capabilities, so
it will indicate whether the hypervisor agreed to us using radix.
Systems without a hypervisor may have this property also (for example,
skiboot creates it), so we check the HV bit in the MSR to see whether we
are running as a guest or not. If we are in hypervisor mode, then we can
do whatever we like including using the radix MMU.
The reason for using this property is that in future, when we have
support for using radix under a hypervisor, we will need to check this
property to see whether the hypervisor agreed to us using radix.
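The logic boils down to roughly the following, where
chosen_has_arch_vec5() is a hypothetical helper standing in for the
flattened-device-tree lookup done in early boot:

    /* early boot, before any MMU initialization */
    if (!(mfmsr() & MSR_HV) && chosen_has_arch_vec5())
            /* running as a guest with no radix negotiation support yet */
            cur_cpu_spec->mmu_features &= ~MMU_FTR_TYPE_RADIX;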
Fixes: 2bfd65e45e ("powerpc/mm/radix: Add radix callbacks for early init routines")
Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
64-bit Book3S exception handlers must find the dynamic kernel base
to add to the target address when branching beyond __end_interrupts,
in order to support kernel running at non-0 physical address.
Support this in KVM by branching with CTR, similarly to regular
interrupt handlers. The guest CTR is saved in HSTATE_SCRATCH1 and
restored after the branch.
Without this, the host kernel hangs and crashes randomly when it is
running at a non-0 address and a KVM guest is started.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
CONFIG_DEBUG_NX_TEST has been broken since CONFIG_DEBUG_SET_MODULE_RONX=y
was added in v2.6.37 via:
84e1c6bb38 ("x86: Add RO/NX protection for loadable kernel modules")
since the exception table was then made read-only.
Additionally, the manually constructed extables were never fixed when
relative extables were introduced in v3.5 via:
706276543b ("x86, extable: Switch to relative exception table entries")
However, relative extables won't work for test_nx.c, since test instruction
memory areas may be more than INT_MAX away from an executable fixup
(e.g. stack and heap too far away from executable memory with the fixup).
Since clearly no one has been using this code for a while now, and similar
tests exist in LKDTM, this should just be removed entirely.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jinbum Park <jinb.park7@gmail.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170131003711.GA74048@beast
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1. Use proper drive strengths on Exynos7.
2. Fix significant current leak on Exynos5433-based TM2/TM2E due
to disabled regulator.
3. Add touchkey to TM2, set display clocks for Ultra HD modes.
4. Cleanups and minor fixes for Exynos5433, TM2 and TM2E.
-----BEGIN PGP SIGNATURE-----
iQIcBAABCAAGBQJYj31/AAoJEME3ZuaGi4PXXjEQAI3QSvZNYagIX1ph+mRvyw+W
GVBE7NGdkW8Cd3/2lIrLQ/+boqvM27u+4deJFG0Vx3+u2hHBbvoojjFTgQatO26I
Hrf8sAtQ8K7ta9mMQ+CnQl/CyWP7od9aG5XeTy02YQR6+nunWikPfN0xJ+X1+ufN
zvyGILuYlBZMS4aiQK5G4Ue5qqNU7DU0KGH0zDA/BiEJUESEaLvStOa+3YdXC8Qa
WVWCEJQugNswipMf4Y7fi8MiOBsnNgs/dbTFA+XLdHBn8+uxikeFi1CjKlgv4nzE
LegegLss7VZmRZY7IpPqme1ruqF1/YD7JDIuXnVMzQQxlNTkFACsd3ZlIQRQG4zp
QiuNkV1Ymu/FfrYtjjLYOYOrOwKuAFmhXr8m34tgxCXNTvac/hO5+/w65yC/3Uu4
qlYZvEmqZIsJwPFv1PU3LN4gcXiLXle8iWY8Tjad6y7MhIb46iT2MPrR4q/oSN5x
LP7N21VFCdobjxk+zFqvE2jb1BOQuUhrPnHx2zUpH7IrJU8jKVCa9gFx/U5w/tnU
40TYoO7aCk55xDsAWWQhKs6lxbdK/+OH+6vua5Q6dEZH5U9NsD2RgoQD8wPjNwmM
8Kpgmu9oCbyxfCfbBgXm787X5ZcEjU4IrKfp2iMcmwR+QQf+TwQSmEvEFKfKmbc7
rP8YPV0IhPRZRMP9+Nw+
=rfRv
-----END PGP SIGNATURE-----
Merge tag 'samsung-dt64-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux into next/dt64
Samsung DeviceTree ARM64 update for v4.11, second round:
1. Use proper drive strengths on Exynos7.
2. Fix significant current leak on Exynos5433-based TM2/TM2E due
to disabled regulator.
3. Add touchkey to TM2, set display clocks for Ultra HD modes.
4. Cleanups and minor fixes for Exynos5433, TM2 and TM2E.
* tag 'samsung-dt64-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux:
arm64: dts: exynos: Add clocks to Exynos5433 LPASS module
arm64: dts: exynos: set LDO7 regulator as always on
arm64: dts: exynos: configure TV path clocks for Ultra HD modes
arm64: dts: exynos: Fix drive strength of sd0_xxx pin definitions
arm64: dts: exynos: Disable pull down for audio pins in Exynos5433 SoCs
arm64: dts: exynos: Add TM2 touchkey node
arm64: dts: exynos: Remove unneeded unit names in Exynos5433 nodes
Signed-off-by: Olof Johansson <olof@lixom.net>
Use remove_pagetable() and friends for radix vmemmap removal.
We do not require the special-case handling of vmemmap done in the x86
versions of these functions. This is because vmemmap_free() has already
freed the mapped pages, and calls us with an aligned address range.
So, add a few failsafe WARNs, but otherwise the code to remove physical
mappings is already sufficient for vmemmap.
Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tear down and free the four-level page tables of physical mappings
during memory hotremove.
Borrow the basic structure of remove_pagetable() and friends from the
identically-named x86 functions. Reduce the frequency of tlb flushes and
page_table_lock spinlocks by only doing them in the outermost function.
There was some question as to whether the locking is needed at all.
Leave it for now, but we could consider dropping it.
Memory must be offline to be removed, thus not in use. So there
shouldn't be the sort of concurrent page walking activity here that
might prompt us to use RCU.
Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Wire up memory hotplug page mapping for radix. Share the mapping
function already used by radix_init_pgtable().
Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Move the page mapping code in radix_init_pgtable() into a separate
function that will also be used for memory hotplug.
The current goto loop progressively decreases its mapping size as it
covers the tail of a range whose end is unaligned. Change this to a for
loop which can do the same for both ends of the range.
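A simplified sketch of such a loop, assuming start and end are physical
addresses and ignoring error handling (the helper exists, but the exact
size-selection logic in the patch may differ):

    for (addr = start; addr < end; addr += mapping_size) {
            if (IS_ALIGNED(addr, PUD_SIZE) && end - addr >= PUD_SIZE)
                    mapping_size = PUD_SIZE;        /* 1G */
            else if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE)
                    mapping_size = PMD_SIZE;        /* 2M */
            else
                    mapping_size = PAGE_SIZE;

            radix__map_kernel_page((unsigned long)__va(addr), addr,
                                   PAGE_KERNEL_X, mapping_size);
    }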
Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Use the new non-PCI ISA bridge support to expose the POWER9
LPC bus as direct mapped via the ISA IO port range. This
enables direct access via drivers such as 8250
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The POWER9 chip supports an LPC bus that isn't hanging
off a PCI bus, so let's add support for that, mapping it
to the reserved space at ISA_IO_BASE
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
We'll be adding non-PCI isa bridge support so let's not
have all the definition in pci-bridge.h
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull sparc fixes from David Miller:
"Several small bug fixes and tidies, along with a fix for non-resumable
memory errors triggered by userspace"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
sparc64: Handle PIO & MEM non-resumable errors.
sparc64: Zero pages on allocation for mondo and error queues.
sparc: Fixed typo in sstate.c. Replaced panicing with panicking
sparc: use symbolic names for tsb indexing
User processes trying to access an invalid memory address via PIO will
receive a SIGBUS signal instead of causing a panic. Memory errors will
receive a SIGKILL since a SIGBUS may result in a coredump which may
attempt to repeat the faulting access.
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Error queues use a non-zero first word to detect if the queues are full.
Using pages that have not been zeroed may result in false positive
overflow events. These queues are set up once during boot so zeroing
all mondo and error queue pages is safe.
Note that the false positive overflow does not always occur because the
page allocation for these queues is so early in the boot cycle that
higher-numbered CPUs get fresh pages. It is only when traps are serviced
by lower-numbered CPUs, which were given already-used pages, that this
issue is exposed.
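A minimal sketch of the idea (helper names are illustrative; get_zeroed_page() is the generic kernel allocator):
#include <linux/gfp.h>
#include <linux/types.h>

/* Allocate a queue page that is already zeroed, so the "non-zero first
 * word" overflow check cannot trip on stale data left in a reused page. */
static void *alloc_queue_page(void)
{
    return (void *)get_zeroed_page(GFP_KERNEL);
}

static bool queue_overflowed(const unsigned long *queue)
{
    return queue[0] != 0;   /* non-zero first word is read as overflow */
}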
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The power9_idle_stop method currently takes only the requested stop
level as a parameter and picks up the rest of the PSSCR bits from a
hand-coded macro. This is not a very flexible design, especially when
the firmware has the capability to communicate the psscr value and the
mask associated with a particular stop state via device tree.
This patch modifies the power9_idle_stop API to take as parameters the
PSSCR value and the PSSCR mask corresponding to the stop state that
needs to be set. These PSSCR value and mask are respectively obtained
by parsing the "ibm,cpu-idle-state-psscr" and
"ibm,cpu-idle-state-psscr-mask" fields from the device tree.
In addition to this, the patch adds support for handling stop states
for which ESL and EC bits in the PSSCR are zero. As per the
architecture, a wakeup from these stop states resumes execution from
the subsequent instruction as opposed to waking up at the System
Vector.
The older firmware sets only the Requested Level (RL) field in the
psscr and psscr-mask exposed in the device tree. For older firmware
where psscr-mask=0xf, this patch will set sane default values for the
remaining PSSCR fields (i.e. PSLL, MTL, ESL, EC, and TR). For the new
firmware, the patch will validate that the invariants
required by the ISA for the psscr values are maintained by the
firmware.
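The value/mask combination can be pictured with a small sketch (the default constant and helper name are illustrative, not the kernel's):
#include <linux/types.h>

/* Assumed default for the fields older firmware leaves unspecified
 * (PSLL, MTL, ESL, EC, TR); the real kernel default differs. */
#define PSSCR_HV_DEFAULT_VAL    0x00000000003f03ffULL

static u64 psscr_for_stop_state(u64 dt_psscr, u64 dt_psscr_mask)
{
    /* Firmware-specified bits come from the device tree value,
     * every other field falls back to the defaults. */
    return (dt_psscr & dt_psscr_mask) |
           (PSSCR_HV_DEFAULT_VAL & ~dt_psscr_mask);
}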
This skiboot patch that exports fully populated PSSCR values and the
mask for all the stop states can be found here:
https://lists.ozlabs.org/pipermail/skiboot/2016-September/004869.html
[Optimize the number of instructions before entering STOP with
ESL=EC=0, and validate that the PSSCR values provided by the firmware
maintain the invariants required by the ISA, as suggested by Balbir
Singh]
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir pointed out that the name of the function pnv_arch300_idle_init
was inconsistent with the names of the variables and functions
pertaining to POWER9 features in book3s_idle.S.
This patch renames pnv_arch300_idle_init to pnv_power9_idle_init.
This patch does not change any behaviour.
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Currently all the low-power idle states are expected to wake up at the
reset vector 0x100, which is why the macro IDLE_STATE_ENTER_SEQ puts the
CPU into an idle state and never returns.
On ISA v3.0, when the ESL and EC bits in the PSSCR are zero, the CPU is
expected to wake up at the instruction following the idle instruction.
This patch adds a new macro named IDLE_STATE_ENTER_SEQ_NORET for the
no-return variant and reuses the name IDLE_STATE_ENTER_SEQ
for a variant that allows resuming operation at the instruction next
to the idle-instruction.
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This driver was written using lirc since rc-core did not support
transmitter-only hardware at that time. Now that it does, port
this driver.
Compile tested only.
Signed-off-by: Sean Young <sean@mess.org>
Cc: Timo Kokkonen <timo.t.kokkonen@iki.fi>
Cc: Ivaylo Dimitrov <ivo.g.dimitrov.75@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Although wbinvd() is faster than flushing many individual pages, it blocks
the memory bus for "long" periods of time (>100us), thus directly causing
unusually large latencies on all CPUs, regardless of any CPU isolation
features that may be active. This is an unprivileged operation as it is
exposed to user space via the graphics subsystem.
For 1024 pages, flushing those pages individually can take up to 2200us,
but the task remains fully preemptible during that time.
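A minimal sketch of the per-page variant on x86 (clflush_cache_range() and kmap_atomic() are existing kernel helpers; the wrapper itself is illustrative):
#include <linux/highmem.h>
#include <asm/cacheflush.h>

/* Flush a set of pages one at a time instead of issuing a single wbinvd().
 * Preemption is possible between pages, so no CPU sees a long stall. */
static void flush_pages_one_by_one(struct page **pages, int num_pages)
{
    int i;

    for (i = 0; i < num_pages; i++) {
        void *vaddr = kmap_atomic(pages[i]);

        clflush_cache_range(vaddr, PAGE_SIZE);
        kunmap_atomic(vaddr);
    }
}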
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
KVM_MEMSLOT_INCOHERENT is not used anymore, as we've killed its
only use in the arm/arm64 MMU code. Let's remove the last artifacts.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Now that we unconditionally flush newly mapped pages to the PoC,
there is no need to care about the "uncached" status of individual
pages - they must all be visible all the way down.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
When we fault in a page, we flush it to the PoC (Point of Coherency)
if the faulting vcpu has its own caches off, so that it can observe
the page we just brought in.
But if the vcpu has its caches on, we skip that step. Bad things
happen when *another* vcpu tries to access that page with its own
caches disabled. At that point, there is no guarantee that the
data has made it to the PoC, and we access stale data.
The obvious fix is to always flush to PoC when a page is faulted
in, no matter what the state of the vcpu is.
Cc: stable@vger.kernel.org
Fixes: 2d58b733c8 ("arm64: KVM: force cache clean on page fault when caches are off")
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Userspace needs to save and restore the line_level of level-triggered
interrupts using the KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO ioctl.
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
VGICv3 CPU interface registers are accessed using the
KVM_DEV_ARM_VGIC_CPU_SYSREGS ioctl. These registers are accessed as
64-bit values. The CPU MPIDR value is passed along with the register id
and is used to identify the CPU for register access.
A VM that supports SEIs expects the destination machine to handle guest
aborts, hence ICC_CTLR_EL1.SEIS compatibility is checked. Similarly, a
VM that supports Affinity Level 3, which is required for AArch64 mode,
requires it to be supported on the destination machine as well, hence
ICC_CTLR_EL1.A3V compatibility is checked.
The file arch/arm64/kvm/vgic-sys-reg-v3.c handles reads and writes of
VGIC CPU registers for AArch64.
For AArch32 mode, the file arch/arm/kvm/vgic-v3-coproc.c is created, but
the APIs are not implemented.
Updated arch/arm/include/uapi/asm/kvm.h with new definitions
required to compile for AArch32.
The version of VGIC v3 specification is defined here
Documentation/virtual/kvm/devices/arm-vgic-v3.txt
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
In order to implement vGICv3 CPU interface access, we will need to perform
table lookup of system registers. We would need both index_to_params() and
find_reg() exported for that purpose, but instead we export a single
function which combines them both.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
VGICv3 Distributor and Redistributor registers are accessed using
KVM_DEV_ARM_VGIC_GRP_DIST_REGS and KVM_DEV_ARM_VGIC_GRP_REDIST_REGS
with KVM_SET_DEVICE_ATTR and KVM_GET_DEVICE_ATTR ioctls.
These registers are accessed as 32-bit values, and the CPU MPIDR value
passed along with the register offset is used to identify the CPU for
redistributor register access.
The version of VGIC v3 specification is defined here
Documentation/virtual/kvm/devices/arm-vgic-v3.txt
Also update arch/arm/include/uapi/asm/kvm.h to compile for
AArch32 mode.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Compared to the armada-xp the 98DX3336 uses different registers to set
the boot address for the secondary CPU so a new enable-method is needed.
This will only work if the machine definition doesn't define an overall
smp_ops because there is not currently a way of overriding this from the
device tree if it is set in the machine definition.
Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Ensure that if userspace supplies insufficient data to
PTRACE_SETREGSET to fill all the registers, the thread's old
registers are preserved.
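The shape of the fix can be sketched as follows (the real gpr_set() also validates the result with valid_user_regs(), which is omitted here):
#include <linux/regset.h>
#include <linux/sched.h>

/* Seed the buffer with the thread's current registers so a short
 * PTRACE_SETREGSET write only overwrites the registers that userspace
 * actually supplied. */
static int gpr_set_sketch(struct task_struct *target,
                          const struct user_regset *regset,
                          unsigned int pos, unsigned int count,
                          const void *kbuf, const void __user *ubuf)
{
    struct pt_regs newregs = *task_pt_regs(target);  /* keep old values */
    int ret;

    ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &newregs, 0, -1);
    if (ret)
        return ret;

    *task_pt_regs(target) = newregs;
    return 0;
}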
Cc: <stable@vger.kernel.org> # 3.0.x-
Fixes: 5be6f62b00 ("ARM: 6883/1: ptrace: Migrate to regsets framework")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Asynchronous external abort is coded differently in DFSR with LPAE enabled.
Fixes: 9254970c "ARM: 8447/1: catch pending imprecise abort on unmask".
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the AMD pieces from the generic Makefile so that
$ make arch/x86/events/amd/<file>.s
can work too. Otherwise you get:
$ make arch/x86/events/amd/ibs.s
scripts/Makefile.build:44: arch/x86/events/amd/Makefile: No such file or directory
make[1]: *** No rule to make target 'arch/x86/events/amd/Makefile'. Stop.
Makefile:1636: recipe for target 'arch/x86/events/amd/ibs.s' failed
make: *** [arch/x86/events/amd/ibs.s] Error 2
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/20170126080819.417-1-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch updates the sysfs attributes for AMD Family17h processors. In
Family17h, the event bit position is changed for both the NorthBridge
and Last level cache counters.
The sysfs attributes are assigned based on the family and the type of
the counter.
Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/617570ed3634e804991f95db62c3cf3856a9d2a7.1484598705.git.Janakarajan.Natarajan@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch updates the AMD uncore driver to support AMD Family17h
processors. In Family17h, there are two extra last level cache counters.
The maximum number of available counters is increased, and the number
of counters for each uncore type is now based on the family.
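Roughly, the counter selection could look like this sketch (the counts below are illustrative, not taken from this patch):
#include <asm/processor.h>

/* Pick the number of last level cache counters based on the CPU family. */
static int amd_uncore_num_llc_counters(void)
{
    if (boot_cpu_data.x86 >= 0x17)
        return 6;   /* assumed count for Family 17h L3 counters */
    return 4;       /* assumed count for older families' L2 counters */
}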
Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/799f9c5be8963cc209d9169a08f4a2643b748dc7.1484598705.git.Janakarajan.Natarajan@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch renames L2 counters to LLC counters. In AMD Family17h
processors, an L3 cache counter is supported.
Since older families have at most L2 counters, last level cache (LLC)
indicates L2/L3 based on the family.
Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/5d8cd8736d8d578354597a548e64ff16210c319b.1484598705.git.Janakarajan.Natarajan@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The Banana Pi M64 board is a typical single board computer based on the
Allwinner A64 SoC. Aside from the usual peripherals it features eMMC
storage, which is connected to the 8-bit capable SDHC2 controller.
Also it has a soldered WiFi/Bluetooth chip, so we enable UART1 and SDHC1
as those two interfaces are connected to it.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Chen-Yu Tsai <wens@csie.org>
On many boards UART1 connects to a Bluetooth chip, so add the pinctrl
nodes for the only pins providing access to that UART. That includes
those pins for hardware flow control (RTS/CTS).
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Chen-Yu Tsai <wens@csie.org>
All Pine64 boards connect a micro-SD card slot to the first MMC
controller.
Enable the respective DT node and specify the (always-on) regulator
and card-detect pin.
As a micro-SD slot does not feature a write-protect switch, we disable
this feature.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Chen-Yu Tsai <wens@csie.org>
The eMMC controller seems to have a maximum frequency of 200MHz, while
the regular MMC controllers are capped at 150MHz.
Since older SoCs cannot go that high, we cannot change the default maximum
frequency, but fortunately for us we have a property for that in the DT.
This also has the side effect of allowing the use of the MMC HS200 and SD
SDR104 modes on the boards that support them (with either 1.2V or 1.8V IOs).
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Tested-by: Florian Vaussard <florian.vaussard@heig-vd.ch>
Acked-by: Chen-Yu Tsai <wens@csie.org>
The A64 only has a single set of pins for each MMC controller. Since we
already have boards that require all of them, let's add them to the DTSI.
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Tested-by: Florian Vaussard <florian.vaussard@heig-vd.ch>
Acked-by: Chen-Yu Tsai <wens@csie.org>
The A64 has 3 MMC controllers, one of them being especially targeted to
eMMC. Among other things, it has a data strobe signal and an 8-bit data
width.
The other two are more usual controllers that have at most a 4-bit data
width and no data strobe signal, which limits them to more usual SD or
MMC peripherals.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Tested-by: Florian Vaussard <florian.vaussard@heig-vd.ch>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Let's log something for changes in facilities, cpuid and ibc now that we
have a cpu model in QEMU. All of these calls are pretty rare, so we
will not flood the log, but they will help to understand potential
guest issues, for example if some instructions are fenced off.
As the s390 debug feature has a limited number of parameters and
strings must not go away, we limit the facility printing to 3 double
words, instead of building that list dynamically. This should be enough
for several years. If we ever exceed 3 double words then the logging
will be incomplete but no functional impact will happen.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
reset_guest_reference_bit needs to return the CC, so we can set it in
the guest PSW when emulating RRBE. Right now it only returns 0.
Let's fix that.
Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
When we get a PER i-fetch event on an EXECUTE or EXECUTE RELATIVE LONG
instruction, because the executed instruction generated a PER i-fetch
event, then the PER address points at the EXECUTE function, not the
fetched one.
Therefore, when filtering PER events, we have to take care of the
really fetched instruction, which we can only get by reading in guest
virtual memory.
For icpt code 4 and 56, we directly have additional information about an
EXECUTE instruction at hand. For icpt code 8, we always have to read
in guest virtual memory.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[small fixes]
We will have to read instructions not residing at the current PSW
address.
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
We already filter PER events reported via icpt code 8. For icpt code
4 and 56, this is still missing.
So let's properly detect if we have a debugging event and if we have to
inject a PER i-fetch event into the guest at all.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
We can directly forward the vector BCD instructions to the guest
if available and VX is requested by user space.
Please note that user space will have to take care of the final state
of the facility bit when migrating to older machines.
Signed-off-by: Guenther Hutzl <hutzl@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
We can directly forward the vector enhancement facility 1 to the guest
if available and VX is requested by user space.
Please note that user space will have to take care of the final state
of the facility bit when migrating to older machines.
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Maxim Samoylov <max7255@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
sparse with __CHECK_ENDIAN__ shows that ar_t was never properly
used across KVM on s390. We can now:
- fix all places
- do not make ar_t special
Since ar_t is just used as a register number (no endianness issues
for u8), and all other register numbers are also just plain int
variables, let's just use u8, which matches the __u8 in the userspace
ABI for the memop ioctl.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The plo inline assembly has a cc output operand that is always written
to and is also declared as such. Therefore the compiler is
free to omit the rather pointless and misleading initialization.
Get rid of this.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The new Instruction Execution Protection needs to be enabled before
the guest can use it. Therefore we pass the IEP facility bit to the
guest and enable IEP interpretation.
Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
When we access guest memory and run into a protection exception, we
need to pass the exception data to the guest. ESOP2 provides detailed
information about all protection exceptions which ESOP1 only partially
provided.
The gaccess changes make sure that the guest always gets all
available information.
Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Add detection of NPU2 PHBs. NPU2/NVLink2 has a different register
layout for the TCE kill register therefore TCE invalidation should be
done via the OPAL call rather than using the register directly as it
is for PHB3 and NVLink1. This changes TCE invalidation to use the OPAL
call in the case of a NPU2 PHB model.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
POWER9 contains an off core mmu called the nest mmu (NMMU). This is
used by other hardware units on the chip to translate virtual
addresses into real addresses. The unit attempting an address
translation provides the majority of the context required for the
translation request except for the base address of the partition table
(ie. the PTCR) which needs to be programmed into the NMMU.
This patch adds a call to OPAL to set the PTCR for the nest mmu in
opal_init().
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch updates dev_pm_opp_find_freq_*() routines to get a reference
to the OPPs returned by them.
Also updates the users of dev_pm_opp_find_freq_*() routines to call
dev_pm_opp_put() after they are done using the OPPs.
As it is guaranteed that the OPPs won't be freed while being used,
the RCU read-side locking present in the users isn't required anymore.
Drop it as well.
This patch also updates all users of devfreq_recommended_opp() which was
returning an OPP received from the OPP core.
Note that some of the OPP core routines have gained
rcu_read_{lock|unlock}() calls, as those still use RCU specific APIs
within them.
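For callers, the pattern after this change looks roughly like this sketch (error handling trimmed; set_rate_sketch() itself is illustrative):
#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm_opp.h>

/* The find helper takes a reference on the returned OPP, which is dropped
 * with dev_pm_opp_put(), so no rcu_read_lock()/unlock() pair is needed
 * around the lookup any more. */
static int set_rate_sketch(struct device *dev, unsigned long target_freq)
{
    unsigned long freq = target_freq;
    struct dev_pm_opp *opp;
    unsigned long volt;

    opp = dev_pm_opp_find_freq_ceil(dev, &freq);
    if (IS_ERR(opp))
        return PTR_ERR(opp);

    volt = dev_pm_opp_get_voltage(opp);
    dev_pm_opp_put(opp);

    /* ... program the clock to 'freq' and the regulator to 'volt' ... */
    return 0;
}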
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com> [Devfreq]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Now that we have a full clock driver for sun9i, switch to it.
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Relax the check preventing us from hotplugging into an offline node.
This limitation was added in commit 482ec7c403 ("[PATCH] powerpc numa:
Support sparse online node map") to prevent adding resources to an
uninitialized node.
These days, there is no harm in doing so. The addition will actually
cause the node to be initialized and onlined; add_memory_resource()
calls hotadd_new_pgdat() (if necessary) and node_set_online().
Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The flow of the main loop in parse_numa_properties() is overly
complicated. Simplify it to be less confusing and easier to read.
No functional change.
The end of the main loop in parse_numa_properties() looks like this:
for_each_node_by_type(...) {
    ...
    if (!condition) {
        if (--ranges)
            goto new_range;
        else
            continue;
    }
    statement();
    if (--ranges)
        goto new_range;
    /* else
     *     continue; <- implicit, this is the end of the loop
     */
}
The only effect of !condition is to skip execution of statement(). This
can be rewritten in a simpler way:
for_each_node_by_type(...) {
    ...
    if (condition)
        statement();
    if (--ranges)
        goto new_range;
}
Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
There are chances that multiple CPUs can call crash_fadump() simultaneously
and start duplicating the same info in the vmcoreinfo ELF note section. This
causes makedumpfile to fail during kdump capture. One example is,
triggering dumprestart from HMC which sends system reset to all the CPUs at
once.
makedumpfile --dump-dmesg /proc/vmcore
read_vmcoreinfo_basic_info: Invalid data in /tmp/vmcoreinfoyjgxlL: CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971
makedumpfile Failed.
Running makedumpfile --dump-dmesg /proc/vmcore failed (1).
makedumpfile -d 31 -l /proc/vmcore
read_vmcoreinfo_basic_info: Invalid data in /tmp/vmcoreinfo1mmVdO: CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971CRASHTIME=1475605971
makedumpfile Failed.
Running makedumpfile -d 31 -l /proc/vmcore failed (1).
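One way to picture the fix (the flag and helper names are illustrative; the actual patch may use a different primitive):
#include <linux/atomic.h>
#include <linux/types.h>

/* Let only the first CPU that reaches crash_fadump() populate the crash
 * data; CPUs arriving later simply return, so the vmcoreinfo note is
 * written exactly once. */
static atomic_t fadump_crash_in_progress = ATOMIC_INIT(0);

static bool fadump_claim_crash(void)
{
    return atomic_cmpxchg(&fadump_crash_in_progress, 0, 1) == 0;
}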
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The proto VSID is built using both the MMU context id and effective
segment ID (ESID). We should not have overlapping bits between those.
That could result in us having a VSID collision. With the current code
we missed masking the top bits of the ESID. This implies that for kernel
addresses we ended up using the top 4 bits of the ESID as part of the
proto VSID, which is wrong.
The current code use the top 4 context values (0x7fffc - 0x7ffff) for
the kernel. With those context IDs used for the kernel, we don't run
into VSID collisions because we get the same proto VSID irrespective of
whether we mask the ESID bits or not. eg:
ea      = 0xf000000000000000
context = 0x7ffff
w/out masking:
    proto_vsid = (0x7ffff << 6 | 0xf000000000000000 >> 40)
               = (0x1ffffc0 | 0xf00000)
               = 0x1ffffc0
with masking:
    proto_vsid = (0x7ffff << 6 | ((0xf000000000000000 >> 40) & 0x3f))
               = (0x1ffffc0 | (0xf00000 & 0x3f))
               = (0x1ffffc0 | 0)
               = 0x1ffffc0
So although there is no bug, the code is still overly subtle, so fix it
to save ourselves pain in future.
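In code form the idea is simply to mask before OR-ing; the constants in this sketch match the 1T-segment numbers used in the example above and are illustrative:
/* Never let ESID bits spill into the context field of the proto-VSID. */
#define VSID_ESID_BITS  6
#define VSID_ESID_MASK  ((1UL << VSID_ESID_BITS) - 1)

static inline unsigned long proto_vsid_1t(unsigned long context,
                                          unsigned long esid)
{
    return (context << VSID_ESID_BITS) | (esid & VSID_ESID_MASK);
}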
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
1. Remove mach code for Exynos4415 as a continuation of removal
of this SoC.
2. Remove obsolete property from the bindings documentation.
-----BEGIN PGP SIGNATURE-----
iQIcBAABCAAGBQJYjkegAAoJEME3ZuaGi4PXSoEP/1us/jBM7gXdyCj2E943Vy6B
KAaXaKOcdqBnJMJzu/7207aBo23g4qqgOATjHSeTa5V8PNZeINJ1gsoFf7IQuAAJ
Y+mr9zCj+439zWb/v2wRrpsy96cznxfm1s7Q+179BKpyC0k/tTuLJT/l7gUU8LO3
EmTTh0lfjnGIZJoPzE8kXCzoXgOiZpV0aHtosqx5k2De4Te/3V5ZXu6H/NCmkR88
Eu4/4/8A+0xRMNYlS8x/ZnJBJN444Ho9sIpl2AA4EUL/4S1zch7HHDdeX1YXbDy5
cCla/TlF6LvPyp9gmRN26VFxbnhHkAqQ6C/U3k5eoOf+SeHP7PTAoXKxUYMmKFCE
3RDBaq415Nz3hSu4yoIDdTplLmGZwzCVebL4XOQ0KPugtjn/+bxGpmPdXWdNNt7B
3BbdQkGvykpDZ2E7bVBaPCBO93gb0u8NgSlEIf8jELMdEXYV9sng/WlK3ATsPdNc
Pt5nX4cPNxjGptVdj5s4Y0U550+9gj8W1XXbqTfLa39zW8EPMD8fH3IPs0oACBgb
1agA+JP3osXlpwi/QsT1Cw883jpnkWnAPKte1ERv3qAn7SEFUvzHAYuaFlW+SJyi
QVJKERU+V73C1iJB/mQxlqDf7v+YWlaDA/JCvn5KUhialwfQZNvpJKzlVRiB7lcl
g5p4j+AghRP+hkMH/C2K
=kl+X
-----END PGP SIGNATURE-----
Merge tag 'samsung-soc-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux into next/soc
Samsung mach/soc update for v4.11, second round:
1. Remove mach code for Exynos4415 as a continuation of removal
of this SoC.
2. Remove obsolete property from the bindings documentation.
* tag 'samsung-soc-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux:
dt-bindings: video: exynos7-decon: Remove obsolete samsung,power-domain property
ARM: EXYNOS: Remove Exynos4415 arch code (SoC not supported anymore)
Signed-off-by: Olof Johansson <olof@lixom.net>
1. Use bigger reserved memory region for Multi Format Codec on all Exynos
chipsets so it could decode FullHD easily.
-----BEGIN PGP SIGNATURE-----
iQIcBAABCAAGBQJYjkdPAAoJEME3ZuaGi4PXl74QAIRRfPza+zFhCJivPcAu+zl3
oPYVa9QgpGr22dJP0hUQXszPpKVzqsMtqIw4zgzGjktwOqgklIosjXe0jsYo/puu
7j/q/RKVprvW6au5ZszDvjUJ/19X6x3fYgD6CO3Ah7pfhTK/KKvtg2D1NgnZCfKl
Bkj2q05CN0G5Yqw3N3ac7tY2atutSNLtseUi6QEu2FOqRkNi24FP5BAhwHx9Zblf
fIi4NZRIQ6E9yXmbtsmgtNNHxuM+vxVGs/8VsWdOMQqo2v+lo+1a/+MaEsXS6/qm
U2GFDyar9bJL8gXaVXFWKTIVigzRMcavSWMAhG3GFCpE/ZcPIUBWMe4+9lzlfH+V
88168+JVMbosZ5ro8sRhyDz4V2oqwVk+uaeIJtTmjY05OejcmErB9t7ivSATKZqX
ssby4H76saxrgzryIhxHAkkQfc/kfQabMlba0Hj7lEyGJbEaUpjCxVXB09hMSPZ0
SbPLPcRckrX39sBysD9xRquoKFa5viseg4RBKb0CLGT8TbaqhhbOB3AI2WkG8fCt
RLx1XEq7G4jE2sZ2l+YOFQObZDTa2hv4NOclUEwwjigirkIe5KUFyBn3bA+i/S5A
/qRi3IMH0fOdLPkQQm/DSnglz4paJMKxzVLk5Naf5zCz7jIhjm2LNZM3rOA4VRKr
4q7pe2qPYgAXZoc7LGVk
=E4FL
-----END PGP SIGNATURE-----
Merge tag 'samsung-dt-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux into next/dt
Samsung DeviceTree update for v4.11, second round:
1. Use bigger reserved memory region for Multi Format Codec on all Exynos
chipsets so it could decode FullHD easily.
* tag 'samsung-dt-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux:
ARM: dts: exynos: Fix indentation of EHCI and OHCI ports
ARM: dts: exynos: Increase MFC left reserved memory region size
Signed-off-by: Olof Johansson <olof@lixom.net>
1. Add support for Exynos5433 to Power Management Unit (PMU) and Power
Domains drivers.
2. Cleanups of duplicated and unused defines.
-----BEGIN PGP SIGNATURE-----
iQIcBAABCAAGBQJYjkgyAAoJEME3ZuaGi4PXtVoP/3TJmoi6z6Qmxc9Ux6ATmNKO
KhEM2u5si2psK6DD6YNrmC9st3C2dnhiej6g6KG3dK7WVrSQigjZCCQurFWccx0M
BE5Dr/A9hhJtwLOdLwCcS1Fywv0aFuCEh2F6Jgq6lCs1Z/EFuxZ9DflDLdimakvc
18hyC2kLN3cb7ME10Zr+YMaG5GaoimP5mkUX1czJcaSWtVoXFCW6H9j4QQg2geE1
0sPBjKx4bHEah8LfsckijdwyamE7wGVzDslq+YVAMqPstW781FSFAlNGe2cc+Xlj
GPJfgtgVZYdu4meRZwbli9T4RslsqULPYDUFRoq+ffp1Hrp7kXoBL5jFYK0W9gjD
Sgm/oFGpymIY/GdwyTzYSHPeNxK8RtmNkC3WDY4JY6gn8BcK3mvK/4M9NAQw1fWn
2A+eZjZr+AsyoNJtNk69mjYweSJmJ+pFBnni8ygrejq5FYfwVXD6OenvAxqQ7RDG
aC2OuXhjr1BMoq5hiYen3Eh3b1Okftlu3f6/SVi2h8Ct5W4oK8c08dvbmpOYB3hk
jeEMnh45PkoXHG7vMTU14euJVlTxhhTQ/hFSl21Ous6LdrxaZBQFsrszKma5wtvJ
a9mAzonRJPXH/4Gt6kYOe90VDWp0SWFkOaA1zeIOdVnC0ifn7rZls6df6R6coKDY
3g0X2xPhQiywSSa6Hq1T
=pXm9
-----END PGP SIGNATURE-----
Merge tag 'samsung-drivers-soc-pmu-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux into next/drivers
Continuation of improvements for Exynos PM drivers for v4.11:
1. Add support for Exynos5433 to Power Management Unit (PMU) and Power
Domains drivers.
2. Cleanups of duplicated and unused defines.
* tag 'samsung-drivers-soc-pmu-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux:
soc: samsung: pmu: Remove duplicated define for ARM_L2_OPTION register
soc: samsung: pmu: Remove unused and duplicated defines
soc: samsung: pm_domains: Add new Exynos5433 compatible
soc: samsung: pmu: Add dummy support for Exynos5433 SoC
Signed-off-by: Olof Johansson <olof@lixom.net>
Enable the NAND framework and the Denali NAND controller driver.
This NAND controller is used on UniPhier SoCs.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
Enable the block layer support for MTD devices.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
- Enable cpufreq support for zx296718 by using new operating-points-v2
bindings, so that it works with the generic cpufreq-dt driver.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJYjW1hAAoJEFBXWFqHsHzOkYUH/RNoyrOb3+KWfy0pczLBj98F
SmF966hsGNPPKNZ+NFhqWdd1/iT9UzdIax09858HQzvyWSL7LSZNwbkHJU/2McKy
zFo71IAnMNiNti6USiKVNX0HGx9ndnCChzJBUAcum46fJ7Kt0g8zjfBZfGjRq1cQ
b3b1sJXt5P9+ebjUhrGXAykkS4zwpTHCdRAwmtKQEhU8Q5ri2+pqMtwvrt8XDfxl
vFbckTpA0byd60TGvl/HecmafpYhXvOU7QGQaknzOnlH2o5dmwf0/i7Cw0sqyQn7
JeBopKi7N2S2hN7ZlUYcEQ87FRGdf2u5WbQtcTRiYrZwvUf508hNMxGclMKUHNg=
=EYrK
-----END PGP SIGNATURE-----
Merge tag 'zte-dt64-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into next/dt64
ZTE arm64 device tree update for 4.11:
- Enable cpufreq support for zx296718 by using new operating-points-v2
bindings, so that it works with the generic cpufreq-dt driver.
* tag 'zte-dt64-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
arm64: dts: zx: support cpu-freq for zx296718
Signed-off-by: Olof Johansson <olof@lixom.net>
- Select wireless extensions option for imx_v6_v7_defconfig, so that
wireless works out of box with userspace tools such as 'iwconfig'.
- Enable EXT4 filesystem support for vf610m4_defconfig.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJYjWf2AAoJEFBXWFqHsHzOSicH/Rg+X7Qat+pMxINB7+enJqlw
mIM5WTUwhO4txugFD84ybcBitC/+t/kS5ffEM7ckkDDn6p6bN0K9vcU137V/CkE2
cMHgnuawXs3qCu4D+h4X3MsdF/AWTOD20dv1W5QMhC9/kANP2emQ6ohB5Wee0Gmx
5Mi4UaD0QNYrFsukayWhQ45i0RbsNRdiyzAntIpY1Xz8JJGL/ufTk233wtfxihsV
8MQw/Ar1Q3Hz8+eIKj+9dqDKV5AUrXQ3GvrvTfwlXlLM+NNSN5UWfJsbzob9754P
h02zRhjtQfjFKTUVAJKQPTNPci8b+8InnKyLgqSNSAnb/2PXtJB0AlFXJfaZ/LM=
=VjCo
-----END PGP SIGNATURE-----
Merge tag 'imx-defconfig-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into next/defconfig
i.MX defconfig updates for 4.11:
- Select wireless extensions option for imx_v6_v7_defconfig, so that
wireless works out of box with userspace tools such as 'iwconfig'.
- Enable EXT4 filesystem support for vf610m4_defconfig.
* tag 'imx-defconfig-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
ARM: vf610m4: defconfig: enable EXT4 filesystem
ARM: imx_v6_v7_defconfig: Select wireless extensions option
Signed-off-by: Olof Johansson <olof@lixom.net>
- Add support for LS1012A SoC which is an ARMv8 SoC with single
Cortex-A53 core, and the corresponding board support: FRDM, QDS
and RDB.
- Enable TMU (Thermal Monitoring Unit) support for LS1046A SoC.
- Enable PCA9547 device for ls2080a-rdb board by removing 'disabled'
status setting.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJYjWZxAAoJEFBXWFqHsHzOvGQIAJpdglbg/r997PFL43gODNaU
RfF/userRtzsm3PZ9Uyf2EMg2Q66O92g9lqlr6+QNDSBuLl6iyyIomiCE3P9D/8A
IggNiTKYuEFqWQUCQL6VSoOK5Ha9sa4D23rrGoRGbfKeiiM8M/gn7gNjOAmbbZx6
VwVAAmkLObGmrkxq9IN++u7rIpTWLyOX+6wzQUYFfTD8OqGBrkpFpKOGTZXgzmeE
Q8EebPh/ti2lFjGw1lWECjI4kEXIz7TMnKFsS/fYf7o6s778JQ/uocUl2UtTon1E
aIIU72OMq6Mp/EoaMq3l9t1pgoEZMPVzyGiYPVypfEzkKTh26wb9POc6KiDO0JE=
=g8mg
-----END PGP SIGNATURE-----
Merge tag 'imx-dt64-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into next/dt64
Freescale arm64 device tree updates for 4.11:
- Add support for LS1012A SoC which is an ARMv8 SoC with single
Cortex-A53 core, and the corresponding board support: FRDM, QDS
and RDB.
- Enable TMU (Thermal Monitoring Unit) support for LS1046A SoC.
- Enable PCA9547 device for ls2080a-rdb board by removing 'disabled'
status setting.
* tag 'imx-dt64-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
dt-bindings: clockgen: Add compatible string for LS1012A
Documentation: DT: add LS1012A compatible for SCFG and DCFG
Documentation: DT: Add entry for FSL LS1012A RDB, FRDM, QDS boards
arm64: dts: ls1046a: Add TMU device tree support
arm64: dts: Add support for FSL's LS1012A SoC
arm64: dts: ls2080a-rdb: remove disable status of pca9547
Signed-off-by: Olof Johansson <olof@lixom.net>
- New board support: vf610-zii-dev-rev-c, imx6ul-opos6uldev,
imx6ul-isiot, imx6qdl-savageboard, imx6q-mccmon6.
- A patch from Alexandre to correct the mangled license text which
has been copied & pasted all over the i.MX device tree files.
- Update cpu nodes of some i.MX SoCs to make them consistent and match
ePAPR spec.
- Add OCOTP device for i.MX6UL SoC.
- Add security violation interrupt for i.MX25 DryIce.
- Enable USB OTG, WIFI and Bluetooth support for i.MX6SX Udoo Neo
board.
- Enable S/PDIF and 2nd display pipeline support for CompuLab board.
- Add SPI and LTC3676 PMIC support for Gateworks Ventana boards.
- A few random device addition for various boards: TVE DAC regulators
for imx53-qsb, EEPROM for vf610-zii-dev, eMMC and NAND support for
i.MX6UL Engicam boards.
- Cleanups on obsoleted or unused properties.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJYjWLQAAoJEFBXWFqHsHzOGt0H/2WCsuRi6WU7i7Wyk/S0Xq5t
HzA/9m2f7z1vb1tzFknbpcWYa9bdofI7Dz+6KW2vp4fxhMyJn8SQxWwQjVNya2ba
z8XLdEMlIuozBebPcH84ovNzqoBIOEFahGm5smm40s/o7XL8ua9tOEFaUc+jY0Pd
n/3TPmoyBTlbrgiCZfLXCVsczHtEVW5JoQmXT7aDsxT0BQpnwLbgXKVOxgDvsWXM
WtAKjpGvdiApD3znbFjYmRKak+2uilkV5B4jrL4mZf8dxCliykP4XA/LbiMLKAMN
voMpTtTK7UYp59GrwKbqfJ+pPU7b6ndKK59f7FWrTDNgtDGe+J4ExjWwsDnuLfI=
=HAbg
-----END PGP SIGNATURE-----
Merge tag 'imx-dt-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into next/dt
i.MX device tree updates for 4.11:
- New board support: vf610-zii-dev-rev-c, imx6ul-opos6uldev,
imx6ul-isiot, imx6qdl-savageboard, imx6q-mccmon6.
- A patch from Alexandre to correct the mangled license text which
has been copied & pasted all over the i.MX device tree files.
- Update cpu nodes of some i.MX SoCs to make them consistent and match
ePAPR spec.
- Add OCOTP device for i.MX6UL SoC.
- Add security violation interrupt for i.MX25 DryIce.
- Enable USB OTG, WIFI and Bluetooth support for i.MX6SX Udoo Neo
board.
- Enable S/PDIF and 2nd display pipeline support for CompuLab board.
- Add SPI and LTC3676 PMIC support for Gateworks Ventana boards.
- A few random device addition for various boards: TVE DAC regulators
for imx53-qsb, EEPROM for vf610-zii-dev, eMMC and NAND support for
i.MX6UL Engicam boards.
- Cleanups on obsoleted or unused properties.
* tag 'imx-dt-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux: (32 commits)
ARM: dts: udoo_neo: Add Bluetooth support
ARM: dts: udoo_neo: Add Wifi support
ARM: dts: udoo_neo: Add UDOO Neo USB OTG1 and OTG2 support
ARM: dts: imx6ul: Add Engicam Is.IoT MX6UL NAND initial support
ARM: dts: imx6ul: Add Engicam Is.IoT MX6UL eMMC initial support
ARM: dts: imx6qdl: Fix "ERROR: code indent should use tabs where possible"
ARM: dts: vf610-zii-dev: add EEPROM entry to Rev C
ARM: dts: add Armadeus Systems OPOS6UL and OPOS6ULDEV support
ARM: dts: imx6q-utilite-pro: enable 2nd display pipeline
ARM: dts: vf610-zii-dev: Add .dts file for rev. C
ARM: dts: vf610-zii-dev-rev-b: Remove leftover PWM pingroup
ARM: dts: imx6: Support Savageboard quad
ARM: dts: imx6: Support Savageboard dual
ARM: dts: imx6: Add Savageboard common file
ARM: dts: imx53-qsb: Provide the TVE DAC regulators
ARM: dts: imx6q: Add mccmon6 board support
Doc: devicetree: bindings: Add vendor prefix entry - lwn
ARM: dts: imx/vf: Correct license text
ARM: dts: imx25.dtsi: DryIce security violation interrupt
ARM: dts: imx: Add ocotp node for imx6ul
...
Signed-off-by: Olof Johansson <olof@lixom.net>
- Remove unused flexcan and esdhc device definitions for i.MX25.
- A series from Fabio to remove camera device initialization code from
i.MX platform support, since the corresponding media driver has been
deprecated and removed from kernel tree.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJYjVZ8AAoJEFBXWFqHsHzOiW8IAKucWgP/WbNyi4AOJf7fyXts
USevVUrv5PMpeekb/EoLilYXNZaaUSnmx/zsAGfG2tBd94FPrDEFcDJLmEmBVKwO
fYxFEBddVJkT+zsXgAkffaEI447tV9Hgaxb91vqc25LKqj85uxzeo3t4YZadHO22
xFIc/hE3GDTiu0L4P0cqAE9hJl7pZGiENta0KZO6g8nvYpFvPddtfQ/GpbKGOJW1
9h/iMP4CbnTgU9pQkm/IwGopejsvWHzlCVrOPJz5Fj7B7Mrf6xQigo5ZovPnPF+m
GYp5C4wZ9oQCsIeaBLEKubgzJA+xXwp1TpzPbGdDPp5rqfsGEKfJiPnzNJK/x68=
=t+pS
-----END PGP SIGNATURE-----
Merge tag 'imx-cleanup-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into next/soc
i.MX cleanup for 4.11:
- Remove unused flexcan and esdhc device definitions for i.MX25.
- A series from Fabio to remove camera device initialization code from
i.MX platform support, since the corresponding media driver has been
deprecated and removed from kernel tree.
* tag 'imx-cleanup-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
ARM: mach-mx27_3ds: Remove camera support
ARM: mach-pcm037: Remove camera support
ARM: mach-mx35_3ds: Remove camera support
ARM: mx31moboard-smartbot: Remove camera support
ARM: mx31moboard-marxbot: Remove camera support
ARM: mach-mx31_3ds: Remove camera support
ARM: imx: remove unused device definitions
Signed-off-by: Olof Johansson <olof@lixom.net>
- A couple of fixes on anatop regulator voltage and constraints
according to hardware datasheet.
- Correct FEC interrupt routing for i.MX6QP which has got the hardware
bug found on i.MX6Q fixed.
- Remove unit address from i.MX6 LDB device node to fix DTC warning.
- A fix on imx53-qsb board FEC pinmux config to remove the dependency
on firmware for setting up pins.
- A series from Sascha to fix LPSR pins for i.MX7 boards.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJYjVP+AAoJEFBXWFqHsHzOcvEIAKmxLYG5bmXkUNm+cJQ0oJde
MPh984lN1Xzx0PFlXJoC3mMC4W0RCibLjjSMS3s28ddzw0AKyAWHVkPDzUtuFQOT
s7iUjgXhfpwxeIt5DGZUXbzfZVXjRrVgWfYSMPnc2Q+PUAHQDhqTqylCTz5yF6qf
/XpN5uBaq/h++CJ+e06h9xSuIaCcEtGzLoUA+qXJBseBCLxlaR7zlt5bVL338oOt
P8LuKk8Xcmc6QstpC5HyH8XuHQ8h+Gj0tLKkFeEnLY5JFmRo6e1wOq9EPW+AHfBp
vgW5cI5cgdtn7/TuAbmCOoE052jd8oVK3LnKEfFXHxcBCDIDYz580CdbTFe06+g=
=tRE+
-----END PGP SIGNATURE-----
Merge tag 'imx-fixes-nc-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into next/dt
i.MX non-critical device tree fixes for 4.11:
- A couple of fixes on anatop regulator voltage and constraints
according to hardware datasheet.
- Correct FEC interrupt routing for i.MX6QP which has got the hardware
bug found on i.MX6Q fixed.
- Remove unit address from i.MX6 LDB device node to fix DTC warning.
- A fix on imx53-qsb board FEC pinmux config to remove the dependency
on firmware for setting up pins.
- A series from Sascha to fix LPSR pins for i.MX7 boards.
* tag 'imx-fixes-nc-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
ARM: dts: imx53-qsb-common: fix FEC pinmux config
ARM: imx6: remove unit address from LDB node
ARM: imx6qp: adapt DT to changed FEC interrupts
ARM: imx6: fix regulator constraints on anatop 1p1 and 2p5
ARM: imx6: fix min/max voltage of anatop 2p5 regulator
ARM: dts: imx7: Add "LPSR" to LPSR iomux pin names
ARM: dts: imx7d-cl-som: Fix OTG power pinctrl
ARM: dts: imx7d-sdb: Fix watchdog and pwm pinmux
ARM: dts: imx7s-warp: Fix watchdog pinmux
Signed-off-by: Olof Johansson <olof@lixom.net>
- Add a new Armada 8K based board: MACCHIATOBin
- Enable AHCI on the Armada 7K/8K SoCs
-----BEGIN PGP SIGNATURE-----
iIEEABECAEEWIQQYqXDMF3cvSLY+g9cLBhiOFHI71QUCWIt9MyMcZ3JlZ29yeS5j
bGVtZW50QGZyZWUtZWxlY3Ryb25zLmNvbQAKCRALBhiOFHI71Zw/AJ9NEi6v2zy2
F16VJxKCgnhcJd++zwCfXA4/Z8eBtxXwsIPNLPMCTc9FtC0=
=JDKj
-----END PGP SIGNATURE-----
Merge tag 'mvebu-dt64-4.11-2' of git://git.infradead.org/linux-mvebu into next/dt64
mvebu dt64 for 4.11 (part 2)
- Add a new Armada 8K based board: MACCHIATOBin
- Enable AHCI on the Armada 7K/8K SoCs
* tag 'mvebu-dt64-4.11-2' of git://git.infradead.org/linux-mvebu:
arm64: dts: marvell: add generic-ahci compatibles for CP110 ahci
arm64: dts: marvell: Add DT for MACCHIATOBin board
Signed-off-by: Olof Johansson <olof@lixom.net>