Commit Graph

98471 Commits

David Hildenbrand 433b9ee43c KVM: s390: remove _bh locking from start_stop_lock
The start_stop_lock is no longer acquired when in atomic context, therefore we
can convert it into an ordinary spin_lock.
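
For illustration, a minimal sketch of the kind of conversion described here (the lock name follows the message; the real call sites live in arch/s390/kvm):

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(start_stop_lock);

  /* Sketch only: with no acquirer left in atomic/bottom-half context,
   * the _bh variants are no longer needed. */
  static void start_stop_example(void)
  {
          spin_lock(&start_stop_lock);          /* was: spin_lock_bh() */
          /* ... start/stop bookkeeping ... */
          spin_unlock(&start_stop_lock);        /* was: spin_unlock_bh() */
  }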

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-07-21 13:22:34 +02:00
David Hildenbrand 4ae3c0815f KVM: s390: remove _bh locking from local_int.lock
local_int.lock is not used in a bottom-half handler anymore, therefore we can
turn it into an ordinary spin_lock at all occurrences.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-07-21 13:22:28 +02:00
David Hildenbrand 0759d0681c KVM: s390: cleanup handle_wait by reusing kvm_vcpu_block
This patch cleans up the code in handle_wait by reusing the common code
function kvm_vcpu_block.

signal_pending(), kvm_cpu_has_pending_timer() and kvm_arch_vcpu_runnable() are
sufficient for checking if we need to wake up that VCPU. kvm_vcpu_block
uses these functions, so no checks are lost.

The flag "timer_due" can be removed - kvm_cpu_has_pending_timer() tests whether
the timer is pending, thus the vcpu is correctly woken up.
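
For illustration, a hedged sketch of the resulting shape of handle_wait() (not the literal s390 code; kvm_vcpu_block() is the common helper named above):

  #include <linux/kvm_host.h>

  /* Sketch: the open-coded wait loop is replaced by the common helper,
   * which itself checks signal_pending(), kvm_cpu_has_pending_timer()
   * and kvm_arch_vcpu_runnable() around sleeping. */
  static int handle_wait_sketch(struct kvm_vcpu *vcpu)
  {
          kvm_vcpu_block(vcpu);
          return 0;
  }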

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-07-21 13:22:16 +02:00
Yi Li a28e3f4b90 arm64: dmi: Add SMBIOS/DMI support
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform: things like serial
numbers, physical layout of the ports, build configuration data, and the like.
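
As an aside, a small sketch of how kernel code can consume the DMI data this enables (generic DMI API; not part of this patch):

  #include <linux/dmi.h>
  #include <linux/printk.h>

  static void report_platform_sketch(void)
  {
          /* dmi_get_system_info() returns NULL if a field is absent */
          const char *vendor  = dmi_get_system_info(DMI_SYS_VENDOR);
          const char *product = dmi_get_system_info(DMI_PRODUCT_NAME);

          pr_info("platform: %s %s\n", vendor ?: "?", product ?: "?");
  }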

This has been tested by dmidecode and lshw tools.

Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-21 10:22:21 +01:00
Richard Weinberger bb6a1b2e18 um: segv: Save regs only in case of a kernel mode fault
...otherwise we lose the user mode regs and the resulting
stack trace is useless.

Signed-off-by: Richard Weinberger <richard@nod.at>
2014-07-20 13:39:27 +02:00
Richard Weinberger 468f65976a um: Fix hung task in fix_range_common()
If do_ops() fails we have to release current->mm->mmap_sem
otherwise the failing task will never terminate.

Reported-by: Toralf Förster <toralf.foerster@gmx.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2014-07-20 13:16:20 +02:00
Richard Weinberger 284e6d3951 um: Ensure that a stub page cannot get unmapped
Trinity discovered an execution path such that a task
can unmap its stub page.

Reported-by: Toralf Förster <toralf.foerster@gmx.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2014-07-20 13:09:15 +02:00
Richard Weinberger ae5db6d123 Revert "um: Fix wait_stub_done() error handling"
This reverts commit 0974a9cadc.
The real fix for that issue is to release current->mm->mmap_sem in
fix_range_common().

Signed-off-by: Richard Weinberger <richard@nod.at>
2014-07-20 12:56:34 +02:00
Linus Torvalds d057190925 Merge branch 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fixes from Thomas Gleixner:
 "The locking department delivers:

   - A rather large and intrusive bundle of fixes to address serious
     performance regressions introduced by the new rwsem / mcs
     technology.  Simpler solutions have been discussed, but they would
     have been ugly bandaids with more risk than doing the right thing.

   - Make the rwsem spin on owner technology opt-in for architectures
     and enable it only on the known to work ones.

   - A few fixes to the lockdep userspace library"

* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER
  locking/mutex: Disable optimistic spinning on some architectures
  locking/rwsem: Reduce the size of struct rw_semaphore
  locking/rwsem: Rename 'activity' to 'count'
  locking/spinlocks/mcs: Micro-optimize osq_unlock()
  locking/spinlocks/mcs: Introduce and use init macro and function for osq locks
  locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead
  locking/spinlocks/mcs: Rename optimistic_spin_queue() to optimistic_spin_node()
  locking/rwsem: Allow conservative optimistic spinning when readers have lock
  tools/liblockdep: Account for bitfield changes in lockdeps lock_acquire
  tools/liblockdep: Remove debug print left over from development
  tools/liblockdep: Fix comparison of a boolean value with a value of 2
2014-07-19 06:27:55 -10:00
Linus Torvalds d614cb0bc3 ARM: SoC fixes for 3.16-rc
A smaller set of fixes this week, and all regression fixes:
  - a handful of issues fixed on at91 with common clock conversion
  - a set of fixes for Marvell mvebu (SMP, coherency, PM)
  - a clock fix for i.MX6Q.
  - ... and a SMP/hotplug fix for Exynos
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.14 (GNU/Linux)
 
 iQIcBAABAgAGBQJTygXeAAoJEIwa5zzehBx3bt0P/2ofpoOuYRV88sHjI9w+0R+F
 6t8WIFtTSFypI3zD6cSFBR38wTHI4mJ/jBb0ZnIhGXZE3Bzl/n9Moz7UElxsDD9v
 AjMWzyx6XrSJSCATczN/CDMX38QN+0NZW+hdXODGz9g7DrVGT/Z2jqugkaPAkAwy
 gVBmCqa+nkksfQCcQF3LDVmCyDUMHKILfUvyQJ217QbIavxO3kU/2wLdgEQpUCrI
 YUWAnAj+S/xoxd6OYJr9nMd+M6P9nkRdy+dD56nJtSiZdFwFoI+EgfhUkT3iezPN
 q3aYg3GbgiM/Fp8IO58tE2CbbG/xWJH+kwkJ03yl3z1Gx2KqAYeBpy2QMLBR9rUf
 F0axul3EeW9Gf7OEEFKQbCW8ETaP2AMEbm11FZkjJxMlNjbG9zkYFnl0oedLXxTA
 AcOPB7ABIWU1PsXXTqD9ZxjZmAsKL4CCck0BnWdOyQT5c9gA4ePEGEDMjeT/OiZE
 QwlujHFl4M4E1XFJRL6RiBYppNLBKTsrgl+HaoDSW/MbD350WqbOFTzngw9Xy/rO
 n7YNxUR2QFfWCNY1Zk8J8oJI/ISxla2bthhIe0+l/kk/zVUM3OMEClp0Fdw/L55X
 Md/fc7FzQKV9GPhtSz1RGDN4bjdJuGmitjMrYf+YhbWHa6iKS3XkkHBNpkKhY8Kf
 h9MsTmjd0En4BJLUqf0h
 =LOtI
 -----END PGP SIGNATURE-----

Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:
 "A smaller set of fixes this week, and all regression fixes:
   - a handful of issues fixed on at91 with common clock conversion
   - a set of fixes for Marvell mvebu (SMP, coherency, PM)
   - a clock fix for i.MX6Q.
   - ... and a SMP/hotplug fix for Exynos"

* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: EXYNOS: Fix core ID used by platsmp and hotplug code
  ARM: at91/dt: add missing clocks property to pwm node in sam9x5.dtsi
  ARM: at91/dt: fix usb0 clocks definition in sam9n12 dtsi
  ARM: at91: at91sam9x5: correct typo error for ohci clock
  ARM: clk-imx6q: parent lvds_sel input from upstream clock gates
  ARM: mvebu: Fix coherency bus notifiers by using separate notifiers
  ARM: mvebu: Fix the operand list in the inline asm of armada_370_xp_pmsu_idle_enter
  ARM: mvebu: fix SMP boot for Armada 38x and Armada 375 Z1 in big endian
2014-07-18 20:49:47 -10:00
Linus Torvalds 1b9f0efd61 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Peter Anvin:
 "A couple of key fixes and a few less critical ones.  The main ones
  are:

   - add a .bss section to the PE/COFF headers when building with EFI
     stub

   - invoke the correct paravirt magic when building the espfix page
     tables

  Unfortunately both of these areas also have at least one additional
   fix each still in the pipeline, but which are not yet ready to push"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86: Remove unused variable "polling"
  x86/espfix/xen: Fix allocation of pages for paravirt page tables
  x86/efi: Include a .bss section within the PE/COFF headers
  efi: fdt: Do not report an error during boot if UEFI is not available
  efi/arm64: efistub: remove local copy of linux_banner
2014-07-18 20:46:55 -10:00
Tomasz Figa 9637f30e6b ARM: EXYNOS: Fix core ID used by platsmp and hotplug code
When CPU topology is specified in device tree, cpu_logical_map() does
not return core ID anymore, but rather full MPIDR value. This breaks
existing calculation of PMU register offsets on Exynos SoCs.

This patch fixes the problem by adjusting the code to use only core ID
bits of the value returned by cpu_logical_map() to allow CPU topology to
be specified in device tree on Exynos SoCs.
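
A minimal sketch of the adjustment described above, assuming the standard ARM MPIDR affinity accessor (the exact call sites are in the Exynos platsmp/hotplug code):

  #include <asm/cputype.h>
  #include <asm/smp_plat.h>

  static unsigned int exynos_core_id_sketch(unsigned int cpu)
  {
          /* cpu_logical_map() may now hold a full MPIDR rather than a
           * bare core ID, so extract affinity level 0 explicitly. */
          return MPIDR_AFFINITY_LEVEL(cpu_logical_map(cpu), 0);
  }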

Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-18 17:12:57 -07:00
Olof Johansson e5c6cac6e3 The i.MX fixes for 3.16, 2nd take:
It fixes a hard machine hang regression for boards where only pcie is
 active but no sata, as the latest imx6-pcie driver is no longer enabling
 the upstream clock directly but only lvds clk out.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJTyNTvAAoJEFBXWFqHsHzOm6kIAIQnvL429KlsyQAkZTpwHR/l
 omETpfgmjTIpGJ4hYE04Kdi8w/O7GrAVUFe0moBETPRshHBJhYGCDgVuM38fA/PB
 dd6vkCL1rS1bELaFFfTzFE07BlbZRSXy6PEs8/9wcE8vQOJ/BEKjscNY6PspKDMb
 txRnmDUf9R+YdKBAY7CWTXC465Vtfiz8vFf1v73t+URxi/YTAut7s50V1IaXZf1E
 g+W8G6SME8j1mOfPrq6hRdxijLsJ0QpKDVZay4Sb19+WMnLXXrc4M3skQsDUScp8
 3dfdJBy/fVtFwQlmcK2z78rr6netMTbIVTDJjbJiz2Eb0kIZXgsDW5Jkgr+6uqE=
 =S50Z
 -----END PGP SIGNATURE-----

Merge tag 'imx-fixes-3.16-2' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into fixes

Merge "ARM: imx: fixes for 3.16, 2nd take" from Shawn Guo:

The i.MX fixes for 3.16, 2nd take:

It fixes a hard machine hang regression for boards where only pcie is
active but no sata, as the latest imx6-pcie driver is no longer enabling
the upstream clock directly but only lvds clk out.

* tag 'imx-fixes-3.16-2' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
  ARM: clk-imx6q: parent lvds_sel input from upstream clock gates

Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-18 14:40:17 -07:00
Olof Johansson 054388947c Second AT91 fixes series for 3.16
- fix clock definitions after the move to CCF for:
   - at91sam9n12 (ohci)
   - at91sam9x5 (ohci, pwm)
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.11 (GNU/Linux)
 
 iQEcBAABAgAGBQJTySlWAAoJEAf03oE53VmQPgUH/A51n4r01YIkuBfRcUlejKYt
 ick9f3AyWY9nGZn0B5ytkVCyFSu8ibKPiU3sLPTRFkel+mBuviK4AdiRaOc1MpIH
 VXUS8hxXo8HWxORibZNuVmiKOV1xNtVz6sRhJvN2C7tNFC3ObqmU5DM6E0M6MCN+
 lmpS/nA0fPEwuyHaCSBQGcHtfO6PRYZ1xLoCp3O8p/F+KQAWuaXAb2FXLwpFwMzj
 HHBVblBa1H1/dKivm7o74rTmq0O40RtvEfnL65tk1kpI43iuv0fNpnA8iHfhgJvd
 9P5mYRUj4tP3SMiTDGHCgMPx4qFOqcdvLmm4SbGPFdA5ecGemE8oaQB416YIRB0=
 =XoCN
 -----END PGP SIGNATURE-----

Merge tag 'at91-fixes' of git://github.com/at91linux/linux-at91 into fixes

Merge "at91: fixes for 3.16 #2" from Nicolas Ferre:

Second AT91 fixes series for 3.16
- fix clock definitions after the move to CCF for:
  * at91sam9n12 (ohci)
  * at91sam9x5 (ohci, pwm)

* tag 'at91-fixes' of git://github.com/at91linux/linux-at91:
  ARM: at91/dt: add missing clocks property to pwm node in sam9x5.dtsi
  ARM: at91/dt: fix usb0 clocks definition in sam9n12 dtsi
  ARM: at91: at91sam9x5: correct typo error for ohci clock

Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-18 14:39:18 -07:00
Olof Johansson 81cca645b6 mvebu fixes for v3.16 (round 3)
- Fix SMP boot on 38x/375 in big endian
  - Fix operand list for pmsu on 370/XP
  - Fix coherency bus notifiers
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABAgAGBQJTyZAQAAoJEP45WPkGe8Zn+DIQAIWvUhQ/7rsThFThmlsa4t01
 8l4PqWlPznHHfSAfcM5JZtGBpKvqHhf+e6Hn8wPXek4u7v1x2K6dREk2JgsGZQMP
 rsA2Ajn8jseFQb+iBnzdr1eV0AkztlGy0pJ4N+S4pogp4pzn6WPPNGz7P9UUzW/U
 U2I8NycTqzsq8siODK/AbqLfFfok2M/++QgNOdEli1cQ44NdYyAzVLeqe/C9Ou6K
 fn6RdscbvK/jWmrWi9CS4lhnhNkG8HBxxpzF4Rm06dWDU6z+B/HECq8yjHJlX9rx
 EsxiJRV6nzUiws+/o19CUsl/lsJP0pfiTDXCoiUUhOovYUukm632ySdG5QfjnYaK
 zsRw9hBnHCfHW5QEt6NaY5fVknnQPmJMM7WsW9B7PtQX4Rl38CWhLdq3LAbPVv9V
 ze1AllUSmBLTYuQHFMuA602ZzngFcw1c+ZOmfrOpX+QYlyiv1CkqUOXiVGHNb2Nn
 NPiCZaDp8d+JvWloOme0aZX+XfgfUOeXxogtYCtFBTGe9C+P6oqzPni3hqcvL7PA
 PUo6BRe1KIOaQuUm0Eh/XqWC5Nyo0gcXm1oM8JgovVTT6RQndPIQLfO9isOa5A+b
 PaLrAYtzHge+cCU4TJShYzjcVGzz1K2hsINjJ9NlW8172LbC1g5wQWrUVPFHrLuz
 WoZYmkmNzNd8EGQwXdkj
 =tq91
 -----END PGP SIGNATURE-----

Merge tag 'mvebu-fixes-3.16-3' of git://git.infradead.org/linux-mvebu into fixes

Merge "mvebu fixes for v3.16 (round 3)" from Jason Cooper:

 - Fix SMP boot on 38x/375 in big endian
 - Fix operand list for pmsu on 370/XP
 - Fix coherency bus notifiers

* tag 'mvebu-fixes-3.16-3' of git://git.infradead.org/linux-mvebu:
  ARM: mvebu: Fix coherency bus notifiers by using separate notifiers
  ARM: mvebu: Fix the operand list in the inline asm of armada_370_xp_pmsu_idle_enter
  ARM: mvebu: fix SMP boot for Armada 38x and Armada 375 Z1 in big endian

Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-18 14:38:28 -07:00
Ard Biesheuvel 99a5603e2a efi/arm64: Handle missing virtual mapping for UEFI System Table
If we cannot resolve the virtual address of the UEFI System Table, its
physical offset must be missing from the virtual memory map, and there
is really no point in proceeding with installing the virtual memory map
and the runtime services dispatch table. So back out gracefully.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:24:04 +01:00
Daniel Kiper c7341d6a61 arch/x86/xen: Silence compiler warnings
The compiler complains in the following way when an x86 32-bit kernel
with Xen support is built:

  CC      arch/x86/xen/enlighten.o
arch/x86/xen/enlighten.c: In function ‘xen_start_kernel’:
arch/x86/xen/enlighten.c:1726:3: warning: right shift count >= width of type [enabled by default]

Such line contains following EFI initialization code:

boot_params.efi_info.efi_systab_hi = (__u32)(__pa(efi_systab_xen) >> 32);

There is no issue if an x86 64-bit kernel is built. However, the 32-bit case
generates a warning (even though that code will not be executed because Xen
does not work on 32-bit EFI platforms) due to __pa() returning an unsigned long
type which is 32 bits wide. So move the whole EFI initialization stuff
to a separate function and build it conditionally to avoid the above-mentioned
warning on the x86 32-bit architecture.
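
A sketch of that approach, under an assumed CONFIG_X86_64 guard (the actual condition and helper name may differ; efi_systab_xen is the variable from the warning above):

  #include <asm/page.h>
  #include <asm/setup.h>

  /* Sketch: keep the ">> 32" shift out of 32-bit builds entirely */
  #ifdef CONFIG_X86_64
  static void __init xen_fill_efi_info_sketch(void *efi_systab_xen)
  {
          boot_params.efi_info.efi_systab    = (__u32)__pa(efi_systab_xen);
          boot_params.efi_info.efi_systab_hi =
                  (__u32)(__pa(efi_systab_xen) >> 32);
  }
  #else
  static inline void xen_fill_efi_info_sketch(void *efi_systab_xen) { }
  #endif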

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <Konrad.wilk@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:24:03 +01:00
Michael Brown aeffc4928e x86/efi: Request desired alignment via the PE/COFF headers
The EFI boot stub goes to great pains to relocate the kernel image to
an appropriately aligned address, as indicated by the ->kernel_alignment
field in the bzImage header.  However, for the PE stub entry case, we
can request that the EFI PE/COFF loader do the work for us.

Fix by exposing the desired alignment via the SectionAlignment field
in the PE/COFF headers.  Despite its name, this field provides an
overall alignment requirement for the loaded file.  (Naturally, the
FileAlignment field describes the alignment for individual sections.)

There is no way in the PE/COFF headers to express the concept of
min_alignment; we therefore do not expose the minimum (as opposed to
preferred) alignment.

Signed-off-by: Michael Brown <mbrown@fensystems.co.uk>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:24:02 +01:00
Ulf Winkelvos fb86b2440d x86/efi: Add better error logging to EFI boot stub
Hopefully this will enable us to better debug:
https://bugzilla.kernel.org/show_bug.cgi?id=68761

Signed-off-by: Ulf Winkelvos <ulf@winkelvos.de>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:24:01 +01:00
Daniel Kiper f383d00a0d arch/x86: Remove efi_set_rtc_mmss()
efi_set_rtc_mmss() is never used to set the RTC due to bugs found
on many EFI platforms; the RTC is set directly by mach_set_rtc_mmss() instead.
Hence, remove the unused efi_set_rtc_mmss() function.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:59 +01:00
Daniel Kiper 9402973d49 arch/x86: Replace plain strings with constants
We've got constants, so let's use them instead of hard-coded values.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:59 +01:00
Daniel Kiper be81c8a1da xen: Put EFI machinery in place
This patch enables EFI usage under Xen dom0. The standard EFI Linux
kernel infrastructure cannot be used because it requires direct
access to EFI data and code. However, in the dom0 case this is not possible
because the above-mentioned EFI machinery is fully owned and controlled
by the Xen hypervisor. In this case all calls from dom0 to EFI must
be requested via a special hypercall which in turn executes the relevant
EFI code on behalf of dom0.

When the dom0 kernel boots it checks for EFI availability on the machine.
If it is detected then an artificial EFI system table is filled in.
Native EFI calls are replaced by functions which mimic them
by calling the relevant hypercall. Later a pointer to the EFI system table
is passed to the standard EFI machinery, which continues EFI subsystem
initialization taking into account that there is no direct access
to EFI boot services, runtime, tables, structures, etc. After that the
system runs as usual.

This patch is based on Jan Beulich and Tang Liang work.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Tang Liang <liang.tang@oracle.com>
Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:58 +01:00
Daniel Kiper 0fe376e18f arch/x86: Remove redundant set_bit(EFI_MEMMAP) call
Remove redundant set_bit(EFI_MEMMAP, &efi.flags) call.
It is executed earlier in efi_memmap_init().

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:57 +01:00
Daniel Kiper 9f2adfd8a9 arch/x86: Remove redundant set_bit(EFI_SYSTEM_TABLES) call
Remove redundant set_bit(EFI_SYSTEM_TABLES, &efi.flags) call.
It is executed earlier in efi_systab_init().

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:56 +01:00
Daniel Kiper 9f27bc543b efi: Introduce EFI_PARAVIRT flag
Introduce the EFI_PARAVIRT flag. If it is set then the kernel runs
on an EFI platform but does not have direct control over EFI facilities
like the EFI runtime, tables, structures, etc. If it is not set, the
Linux kernel has direct access to the EFI infrastructure
and everything runs as usual.

This functionality is used in Xen dom0 because the hypervisor
has full control over the EFI facilities, and all calls from dom0 to
EFI must be requested via a special hypercall which in turn
executes the relevant EFI code on behalf of dom0.
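
A sketch of the intended use, assuming the existing efi_enabled() feature test (EFI_PARAVIRT is the bit this patch introduces):

  #include <linux/efi.h>

  static void __init efi_setup_sketch(void)
  {
          if (efi_enabled(EFI_PARAVIRT)) {
                  /* no direct EFI access: skip native memmap/systab setup
                   * and rely on the hypervisor-provided interfaces */
                  return;
          }
          /* native path: map and parse the EFI tables as usual */
  }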

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:55 +01:00
Daniel Kiper 67a9b9c53c arch/x86: Do not access EFI memory map if it is not available
Do not access the EFI memory map if it is not available. At least
the Xen dom0 EFI implementation does not have access to it.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:54 +01:00
Daniel Kiper abc93f8eb6 efi: Use early_mem*() instead of early_io*()
Use early_mem*() instead of early_io*() because all mapped EFI regions
are memory (usually RAM but they could also be ROM, EPROM, EEPROM, flash,
etc.) not I/O regions. Additionally, I/O family calls do not work correctly
under Xen in our case. early_ioremap() skips the PFN to MFN conversion
when building the PTE. Using it for memory will attempt to map the wrong
machine frame. However, all artificial EFI structures created under Xen
live in dom0 memory and should be mapped/unmapped using early_mem*() family
calls which map domain memory.
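
A short sketch of the substitution (early_memremap()/early_memunmap() are the existing memory-mapping variants):

  #include <linux/io.h>

  static void __init read_efi_table_sketch(phys_addr_t phys, unsigned long size)
  {
          void *p = early_memremap(phys, size);   /* was: early_ioremap() */

          if (!p)
                  return;
          /* ... read the table contents ... */
          early_memunmap(p, size);                /* was: early_iounmap() */
  }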

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:54 +01:00
Daniel Kiper 4fa62481e2 arch/ia64: Define early_memunmap()
It is odd to use the early_iounmap() function to tear down a mapping
created by the early_memremap() function, even if it works right now,
because they belong to different sets of functions. The former is an
I/O related function and the latter is memory related. So, create an
early_memunmap() macro which in reality is early_iounmap(). This
will help to stop confusing code readers by mixing
functions from different classes.

The EFI patches following this one use that functionality.
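
The definition amounts to a thin alias; as a sketch:

  /* ia64: early_memunmap() is, in reality, early_iounmap() */
  #define early_memunmap(addr, size)      early_iounmap(addr, size)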

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:53 +01:00
Matt Fleming 44be28e9dd x86/reboot: Add EFI reboot quirk for ACPI Hardware Reduced flag
It appears that the BayTrail-T class of hardware requires EFI in order
to power down and reboot, and no other reliable method exists.

This quirk is generally applicable to all hardware that has the ACPI
Hardware Reduced bit set, since usually ACPI would be the preferred
method.

Cc: Len Brown <len.brown@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:52 +01:00
Matt Fleming 8562c99cdd efi/reboot: Add generic wrapper around EfiResetSystem()
Implement efi_reboot(), which is really just a wrapper around the
EfiResetSystem() EFI runtime service, but it does at least allow us to
funnel all callers through a single location.

It also simplifies the callsites since users no longer need to check to
see whether EFI_RUNTIME_SERVICES are enabled.
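
A hedged sketch of the wrapper's shape (mode handling and the exact signature live in the EFI core; EFI_RESET_COLD is shown purely as an example reset type):

  #include <linux/efi.h>

  void efi_reboot_sketch(const char *cmd)
  {
          /* callers no longer need to perform this check themselves */
          if (!efi_enabled(EFI_RUNTIME_SERVICES))
                  return;

          efi.reset_system(EFI_RESET_COLD, EFI_SUCCESS, 0, NULL);
  }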

Cc: Tony Luck <tony.luck@intel.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:23:51 +01:00
Ard Biesheuvel f4f75ad574 efi: efistub: Convert into static library
This patch changes both x86 and arm64 efistub implementations
from #including shared .c files under drivers/firmware/efi to
building shared code as a static library.

The x86 code uses a stub built into the boot executable which
uncompresses the kernel at boot time. In this case, the library is
linked into the decompressor.

In the arm64 case, the stub is part of the kernel proper so the library
is linked into the kernel proper as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:22:19 +01:00
Heiko Carstens 2a62a57a06 s390/ftrace: remove check of obsolete variable function_trace_stop
Remove check of obsolete variable function_trace_stop as requested by
Steven Rostedt.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:58:11 -04:00
Steven Rostedt (Red Hat) ac694fda32 arm64, ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

arm64 was broken anyway, as it had an ifdef testing
 CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST which is only set if
the arch supports the code (which it obviously did not), and
it was testing a non-existent ftrace_trace_stop instead of
function_trace_stop.

Link: http://lkml.kernel.org/r/20140627124421.GP26276@arm.com

Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:58:10 -04:00
Steven Rostedt (Red Hat) 6c56524101 Blackfin: ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/3144266.ziutPk5CNZ@vapier

Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:58:10 -04:00
Steven Rostedt (Red Hat) ad515ea288 metag: ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Acked-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:58:09 -04:00
Steven Rostedt (Red Hat) 8c39a514ee microblaze: ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/53C8D82B.4030204@monstr.eu

Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:08 -04:00
Steven Rostedt (Red Hat) 44304cfbcb MIPS: ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Cc: Ralf Baechle <ralf@linux-mips.org>
Tested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:05 -04:00
Steven Rostedt (Red Hat) 4082b8664c parisc: ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/53B08317.7010501@gmx.de

Cc: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:05 -04:00
Steven Rostedt (Red Hat) 41dc27e3b9 sh: ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

[ Please test this on your arch ]

Cc: Matt Fleming <matt@console-pimps.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:04 -04:00
Steven Rostedt (Red Hat) 2563b9d965 sparc64,ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/20140703.211820.1674895115102216877.davem@davemloft.net

Cc: David S. Miller <davem@davemloft.net>
OKed-to-go-through-tracing-tree-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:03 -04:00
Steven Rostedt (Red Hat) 6beecb9562 tile: ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Cc: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Zhigang Lu<zlu@tilera.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:03 -04:00
Steven Rostedt (Red Hat) fdc841b58c ftrace: x86: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/53C54D32.6000000@zytor.com

Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:57:02 -04:00
Steven Rostedt (Red Hat) 7fa322dba3 sh: ftrace: Add call to ftrace_graph_is_dead() in function graph code
ftrace_stop() is going away as it disables parts of function tracing
that affects users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.

Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.
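
A sketch of the check each arch's function graph entry path gains (the same pattern recurs in the sibling commits below; prepare_ftrace_return() is the usual arch hook name):

  #include <linux/ftrace.h>

  void prepare_ftrace_return_sketch(unsigned long *parent,
                                    unsigned long self_addr)
  {
          /* bail out of graph tracing only; function tracing stays alive */
          if (unlikely(ftrace_graph_is_dead()))
                  return;

          /* ... hook the return address as before ... */
  }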

Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:56:57 -04:00
Steven Rostedt (Red Hat) 96d4f43e3d powerpc/ftrace: Add call to ftrace_graph_is_dead() in function graph code
ftrace_stop() is going away as it disables parts of function tracing
that affects users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.

Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.

Cc: Anton Blanchard <anton@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:56:56 -04:00
Steven Rostedt (Red Hat) 3a46588e4b parisc: ftrace: Add call to ftrace_graph_is_dead() in function graph code
ftrace_stop() is going away as it disables parts of function tracing
that affects users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.

Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.

Link: http://lkml.kernel.org/r/53B08317.7010501@gmx.de

Cc: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:56:56 -04:00
Steven Rostedt (Red Hat) 6a8a505113 MIPS: ftrace: Add call to ftrace_graph_is_dead() in function graph code
ftrace_stop() is going away as it disables parts of function tracing
that affects users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.

Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.

Cc: Ralf Baechle <ralf@linux-mips.org>
Tested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:56:55 -04:00
Steven Rostedt (Red Hat) e1dc5007cf microblaze: ftrace: Add call to ftrace_graph_is_dead() in function graph code
ftrace_stop() is going away as it disables parts of function tracing
that affects users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.

Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.

Link: http://lkml.kernel.org/r/53C8D874.9090601@monstr.eu

Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:55:45 -04:00
Mark Rutland d7a49086f2 arm64: cpuinfo: print info for all CPUs
Currently reading /proc/cpuinfo will result in information being read
out of the MIDR_EL1 of the current CPU, and the information is not
associated with any particular logical CPU number.

This is problematic for systems with heterogeneous CPUs (i.e.
big.LITTLE) where MIDR fields will vary across CPUs, and the output will
differ depending on the executing CPU.

This patch reorganises the code responsible for /proc/cpuinfo to print
information per-cpu. In the process, we perform several cleanups:

* Property names are coerced to lower-case (to match "processor" as per
  glibc's expectations).
* Property names are simplified and made to match the MIDR field names.
* Revision is changed to hex as with every other field.
* The meaningless Architecture property is removed.
* The ripe-for-abuse Machine field is removed.

The features field (a human-readable representation of the hwcaps)
remains printed once, as this is expected to remain in use as the
globally support CPU features. To enable the possibility of the addition
of per-cpu HW feature information later, this is printed before any
CPU-specific information.

Comments are added to guide userspace developers in the right direction
(using the hwcaps provided in auxval). Hopefully where userspace
applications parse /proc/cpuinfo rather than using the readily available
hwcaps, they limit themselves to reading said first line.

If CPU features differ from each other, the previously installed sanity
checks will give us some advance notice with warnings and
TAINT_CPU_OUT_OF_SPEC. If we are lucky, we will never see such systems.
Rework will be required in many places to support such systems anyway.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marcus Shawcroft <marcus.shawcroft@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: remove machine_name as it is no longer reported]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 18:33:23 +01:00
Mark Rutland 127161aaf0 arm64: add runtime system sanity checks
Unexpected variation in certain system register values across CPUs is an
indicator of potential problems with a system. The kernel expects CPUs
to be mostly identical in terms of supported features, even in systems
with heterogeneous CPUs, with uniform instruction set support being
critical for the correct operation of userspace.

To help detect issues early where hardware violates the expectations of
the kernel, this patch adds simple runtime sanity checks on important ID
registers in the bring up path of each CPU.

Where CPUs are fundamentally mismatched, set TAINT_CPU_OUT_OF_SPEC.
Given that the kernel assumes CPUs are identical feature wise, let's not
pretend that we expect such configurations to work. Supporting such
configurations would require massive rework, and hopefully they will
never exist.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:11 +01:00
Mark Rutland 59ccc0d41b arm64: cachetype: report weakest cache policy
In big.LITTLE systems, the I-cache policy may differ across CPUs, and
thus we must always meet the most stringent maintenance requirements of
any I-cache in the system when performing maintenance to ensure
correctness. Unfortunately this requirement is not met as we always look
at the current CPU's cache type register to determine the maintenance
requirements.

This patch causes the I-cache policy of all CPUs to be taken into
account for icache_is_aliasing and icache_is_aivivt. If any I-cache in
the system is aliasing or AIVIVT, the respective function will return
true. At boot each CPU may set flags to identify that at least one
I-cache in the system is aliasing and/or AIVIVT.

The now unused and potentially misleading icache_policy function is
removed.
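
A sketch of the mechanism, with flag and bit names assumed along the lines the message describes (the real definitions live in the arm64 cache-type code):

  #include <linux/bitops.h>

  #define ICACHEF_ALIASING        0
  #define ICACHEF_AIVIVT          1

  /* set at boot by each CPU whose I-cache is aliasing and/or AIVIVT */
  static unsigned long __icache_flags;

  static inline int icache_is_aliasing(void)
  {
          return test_bit(ICACHEF_ALIASING, &__icache_flags);
  }

  static inline int icache_is_aivivt(void)
  {
          return test_bit(ICACHEF_AIVIVT, &__icache_flags);
  }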

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:10 +01:00
Mark Rutland df857416a1 arm64: cpuinfo: record cpu system register values
Several kernel subsystems need to know details about CPU system register
values, sometimes for CPUs other than that they are executing on. Rather
than hard-coding system register accesses and cross-calls for these
cases, this patch adds logic to record various system register values at
boot-time. This may be used for feature reporting, firmware bug
detection, etc.

Separate hooks are added for the boot and hotplug paths to enable
one-time initialisation and cold/warm boot value mismatch detection in
later patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:09 +01:00
Mark Rutland 89c4a306e7 arm64: add MIDR_EL1 field accessors
The MIDR_EL1 register is composed of a number of bitfields, and uses of
the fields have so far involved open-coding of the shifts and masks
required.

This patch adds shifts and masks for each of the MIDR_EL1 subfields, and
also provides accessors built atop of these. Existing uses within
cputype.h are updated to use these accessors.

The read_cpuid_part_number macro is modified to return the extracted
bitfield rather than returning the value in-place with all other fields
(including revision) masked out, to better match the other accessors.
As the value is only used in comparison with the *_CPU_PART_* macros
which are similarly updated, and these values are never exposed to
userspace, this change should not affect any functionality.
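
A sketch of the accessor pattern (the shift and mask follow the architectural MIDR_EL1 part-number field, bits [15:4]):

  #define MIDR_PARTNUM_SHIFT      4
  #define MIDR_PARTNUM_MASK       (0xfff << MIDR_PARTNUM_SHIFT)
  #define MIDR_PARTNUM(midr)      \
          (((midr) & MIDR_PARTNUM_MASK) >> MIDR_PARTNUM_SHIFT)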

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:08 +01:00
Lorenzo Pieralisi 18ab7db6b7 arm64: kernel: add missing __init section marker to cpu_suspend_init
Suspend init function must be marked as __init, since it is not needed
after the kernel has booted. This patch moves the cpu_suspend_init()
function to the __init section.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:23:59 +01:00
Lorenzo Pieralisi b9e97ef93c arm64: kernel: add __init marker to PSCI init functions
PSCI init functions must be marked as __init so that they are freed
by the kernel upon boot.

This patch marks the PSCI init functions as such since they need not
be persistent in the kernel address space after the kernel has booted.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:23:45 +01:00
Lorenzo Pieralisi 756854d9b9 arm64: kernel: enable PSCI cpu operations on UP systems
PSCI CPU operations have to be enabled on UP kernels so that calls
like e.g. cpu_suspend can be made functional on UP too.

This patch reworks the PSCI CPU operations so that they can be
enabled on UP systems.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:23:25 +01:00
Boris BREZILLON e0d69e119f ARM: at91/dt: add missing clocks property to pwm node in sam9x5.dtsi
The pwm driver requires a clocks property referencing the pwm peripheral
clk.

Signed-off-by: Boris BREZILLON <boris.brezillon@free-electrons.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
2014-07-18 15:56:35 +02:00
Boris BREZILLON 043dfc1b62 ARM: at91/dt: fix usb0 clocks definition in sam9n12 dtsi
udphs_clk (USB Device Controller clock) is referenced instead of
uhphs_clk (USB Host Controller clock).

Signed-off-by: Boris BREZILLON <boris.brezillon@free-electrons.com>
Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
2014-07-18 15:56:35 +02:00
Bo Shen dba1fd0bff ARM: at91: at91sam9x5: correct typo error for ohci clock
Correct the typo error for the second "uhphs_clk".

Signed-off-by: Bo Shen <voice.shen@atmel.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
2014-07-18 15:56:34 +02:00
Sebastian Hesselbarth 8cf2389bc4 ARM: 8100/1: Fix preemption disable in iwmmxt_task_enable()
commit 431a84b1a4
 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()")
introduced macros {inc,dec}_preempt_count to iwmmxt_task_enable
to make it run with preemption disabled.

Unfortunately, other functions in iwmmxt.S also use concan_{save,dump,load}
sections located in iwmmxt_task_enable() to deal with iWMMXt coprocessor.
This causes an unbalanced preempt_count due to excessive dec_preempt_count
and destroyed return addresses in callers of concan_ labels due to a register
collision:

Linux version 3.16.0-rc3-00062-gd92a333-dirty (jef@armhf) (gcc version 4.8.3 (Debian 4.8.3-4) ) #5 PREEMPT Thu Jul 3 19:46:39 CEST 2014
CPU: ARMv7 Processor [560f5815] revision 5 (ARMv7), cr=10c5387d
CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
Machine model: SolidRun CuBox
...
PJ4 iWMMXt v2 coprocessor enabled.
...
Unable to handle kernel paging request at virtual address fffffffe
pgd = bb25c000
[fffffffe] *pgd=3bfde821, *pte=00000000, *ppte=00000000
Internal error: Oops: 80000007 [#1] PREEMPT ARM
Modules linked in:
CPU: 0 PID: 62 Comm: startpar Not tainted 3.16.0-rc3-00062-gd92a333-dirty #5
task: bb230b80 ti: bb256000 task.ti: bb256000
PC is at 0xfffffffe
LR is at iwmmxt_task_copy+0x44/0x4c
pc : [<fffffffe>]    lr : [<800130ac>]    psr: 40000033
sp : bb257de8  ip : 00000013  fp : bb257ea4
r10: bb256000  r9 : fffffdfe  r8 : 76e898e6
r7 : bb257ec8  r6 : bb256000  r5 : 7ea12760  r4 : 000000a0
r3 : ffffffff  r2 : 00000003  r1 : bb257df8  r0 : 00000000
Flags: nZcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
Control: 10c5387d  Table: 3b25c019  DAC: 00000015
Process startpar (pid: 62, stack limit = 0xbb256248)

This patch fixes the issue by moving concan_{save,dump,load} into separate
code sections and making iwmmxt_task_enable() call them in the same way the
other functions use concan_ symbols. The test for valid ownership is moved
to concan_save and is safe for the other user of it, iwmmxt_task_disable().
The register collision is also resolved by moving concan_ symbols as
{inc,dec}_preempt_count are now local to iwmmxt_task_enable().

Fixes: 431a84b1a4 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()")

Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18 12:30:14 +01:00
Will Deacon 5959e25729 arm64: fpsimd: avoid restoring fpcr if the contents haven't changed
Writing to the FPCR is commonly implemented as a self-synchronising
operation in the CPU, so avoid writing to the register when the saved
value matches that in the hardware already.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 10:21:17 +01:00
Lucas Stach 03e97220b9 ARM: clk-imx6q: parent lvds_sel input from upstream clock gates
The i.MX6 reference manual doesn't make a clear distinction
between the fixed clock divider and the enable gate for the
pcie and sata reference clocks. This led to the lvds mux
inputs in the imx6q clk driver being parented from the
ref clock (which is the divider) instead of the actual gate,
which in turn prevents the upstream clock from actually being
enabled when lvds clk out is active.

This fixes a hard machine hang regression in kernel 3.16 for
boards where only pcie is active but no sata, as with this
kernel version the imx6-pcie driver is no longer enabling
the upstream clock directly but only lvds clk out.

Reported-by: Arne Ruhnau <arne.ruhnau@target-sg.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Arne Ruhnau <arne.ruhnau@target-sg.com>
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
2014-07-18 15:57:17 +08:00
Russell King 6b076991dc ARM: DMA: ensure that old section mappings are flushed from the TLB
When setting up the CMA region, we must ensure that the old section
mappings are flushed from the TLB before replacing them with page
tables, otherwise we can suffer from mismatched aliases if the CPU
speculatively prefetches from these mappings at an inopportune time.

A mismatched alias can occur when the TLB contains a section mapping,
but a subsequent prefetch causes it to load a page table mapping,
resulting in the possibility of the TLB containing two matching
mappings for the same virtual address region.
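
A minimal sketch of the ordering the fix enforces (the helper name is illustrative; the real change is in the ARM DMA/CMA remap path):

  #include <asm/pgtable.h>
  #include <asm/tlbflush.h>

  static void remap_section_sketch(pmd_t *pmd, unsigned long start,
                                   unsigned long end)
  {
          pmd_clear(pmd);                     /* drop the old section mapping */
          flush_tlb_kernel_range(start, end); /* purge any stale TLB entries  */
          /* ... now safe to install the replacement page-table mapping ... */
  }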

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-17 19:26:08 +01:00
Ian Campbell ad789ba5f7 arm64: Align the kbuild output for VDSOL and VDSOA
Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kbuild@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:19:29 +01:00
Will Deacon 601255ae3c arm64: vdso: move data page before code pages
Andy pointed out that binutils generates additional sections in the vdso
image (e.g. section string table) which, if our .text section gets big
enough, could cross a page boundary and end up screwing up the location
where the kernel expects to put the data page.

This patch solves the issue in the same manner as x86_32, by moving the
data page before the code pages.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:18:57 +01:00
Will Deacon 2fea7f6c98 arm64: vdso: move to _install_special_mapping and remove arch_vma_name
_install_special_mapping replaces install_special_mapping and removes
the need to detect special VMA in arch_vma_name.

This patch moves the vdso and compat vectors page code over to the new
API.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:18:46 +01:00
Will Deacon 8715493852 arm64: vdso: put vdso datapage in a separate vma
The VDSO datapage doesn't need to be executable (no code there) or
CoW-able (the kernel writes the page, so a private copy is totally
useless).

This patch moves the datapage into its own VMA, identified as "[vvar]"
in /proc/<pid>/maps.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:18:36 +01:00
Catalin Marinas b2f8c07bcb arm64: Remove duplicate (SWAPPER|IDMAP)_DIR_SIZE definitions
Just keep the asm/page.h definition as this is included in vmlinux.lds.S
as well.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
2014-07-17 16:02:42 +01:00
Jungseok Lee ac7b406c1a arm64: Use pr_* instead of printk
This patch fixes the following checkpatch complaint by using pr_*
instead of printk.

WARNING: printk() should include KERN_ facility level

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:02:32 +01:00
Steven Rostedt (Red Hat) 84b2bc7fa0 ftrace/x86: Add call to ftrace_graph_is_dead() in function graph code
ftrace_stop() is going away as it disables parts of function tracing
that affects users that should not be affected. But ftrace_graph_stop()
is built on ftrace_stop(). Here's another example of killing all of
function tracing because something went wrong with function graph
tracing.

Instead of disabling all users of function tracing on function graph
error, disable only function graph tracing. To do this, the arch code
must call ftrace_graph_is_dead() before it implements function graph.

Link: http://lkml.kernel.org/r/53C54D18.3020602@zytor.com

Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-17 09:45:08 -04:00
Steven Rostedt (Red Hat) b8f99b3e0e x86, power, suspend: Annotate restore_processor_state() with notrace
ftrace_stop() is used to stop function tracing during suspend and resume
which removes a lot of possible debugging opportunities with tracing.
The reason was that some function in the resume path was causing a triple
fault if it were to be traced. The issue I found was that doing something
as simple as calling smp_processor_id() would reboot the box!

When function tracing was first created I didn't have a good way to figure
out what function was having issues, or it looked to be multiple ones. To
fix it, we just created a big hammer approach to the problem which was to
add a flag in the mcount trampoline that could be checked and not call
the traced functions.

Lately I developed better ways to find problem functions and I can bisect
down to see what function is causing the issue. I removed the flag that
stopped tracing and proceeded to find the problem function and it ended
up being restore_processor_state(). This function makes sense as when the
CPU comes back online from a suspend it calls this function to set up
registers, amongst them the GS register, which stores things such as
what CPU the processor is (if you call smp_processor_id() without this
set up properly, it would fault).

By making restore_processor_state() notrace, the system can suspend and
resume without the need for the big hammer of stopping tracing.
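
The shape of the annotation, as a sketch (body elided; restore_processor_state() lives in the x86 power code):

  #include <linux/compiler.h>     /* for the notrace attribute */

  void notrace restore_processor_state(void)
  {
          /* restore segment and MSR state, including GS, before anything
           * traced can end up calling smp_processor_id() */
  }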

Link: http://lkml.kernel.org/r/3577662.BSnUZfboWb@vostro.rjw.lan

Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-17 09:45:05 -04:00
Steven Rostedt (Red Hat) 1026ff9b8e ftrace/x86: Have function graph tracer use its own trampoline
The function graph trampoline is called from the function trampoline
and both do a save and restore of registers. The save of registers
done by the function trampoline when only the function graph tracer
is running is a waste of CPU cycles.

As the function graph tracer trampoline in x86 is dependent from
the function trampoline, we can call it directly when a function
is only being traced by the function graph trampoline.

Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-17 09:44:37 -04:00
Wanpeng Li 963fee1656 KVM: nVMX: Fix virtual interrupt delivery injection
This patch fixes the bug reported in https://bugzilla.kernel.org/show_bug.cgi?id=73331.
After the patch http://www.spinics.net/lists/kvm/msg105230.html is applied, there is
some progress and the L2 can boot up, although slowly. The original idea of this
vid injection fix comes from "Zhang, Yang Z" <yang.z.zhang@intel.com>.

An interrupt delivered by vid should be injected into L1 by L0 if the CPU is currently in
L1, or should be injected into L2 by L0 through the old injection path if L1 doesn't
have the External-interrupt exiting bit set. The current logic doesn't consider these
cases. This patch fixes it by injecting the vid interrupt into L1 if the CPU is currently in
L1, or into L2 through the old injection path if L1 doesn't have the
External-interrupt exiting bit set.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-17 14:45:26 +02:00
Davidlohr Bueso 3a6bfbc91d arch, locking: Ciao arch_mutex_cpu_relax()
The arch_mutex_cpu_relax() function, introduced by 34b133f, is
hacky and ugly. It was added a few years ago to address the fact
that common cpu_relax() calls include yielding on s390, and thus
impact the optimistic spinning functionality of mutexes. Nowadays
we use this function well beyond mutexes: rwsem, qrwlock, mcs and
lockref. Since the macro that defines the call is in the mutex header,
any users must include mutex.h and the naming is misleading as well.

This patch (i) renames the call to cpu_relax_lowlatency  ("relax, but
only if you can do it with very low latency") and (ii) defines it in
each arch's asm/processor.h local header, just like for regular cpu_relax
functions. On all archs, except s390, cpu_relax_lowlatency is simply cpu_relax,
and thus we can take it out of mutex.h. While this can seem redundant,
I believe it is a good choice as it allows us to move out arch specific
logic from generic locking primitives and enables future(?) archs to
transparently define it, similarly to System Z.
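
A sketch of the per-arch definitions this describes (each lives in that arch's asm/processor.h):

  #ifdef CONFIG_S390
  /* s390's cpu_relax() may yield to the hypervisor, so use a cheaper hint */
  #define cpu_relax_lowlatency()  barrier()
  #else
  /* everywhere else it is simply cpu_relax() */
  #define cpu_relax_lowlatency()  cpu_relax()
  #endif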

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Bharat Bhushan <r65777@freescale.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Howells <dhowells@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Joe Perches <joe@perches.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Joseph Myers <joseph@codesourcery.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Qiaowei Ren <qiaowei.ren@intel.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Stratos Karafotis <stratosk@semaphore.gr>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vasily Kulikov <segoon@openwall.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wolfram Sang <wsa@the-dreams.de>
Cc: adi-buildroot-devel@lists.sourceforge.net
Cc: linux390@de.ibm.com
Cc: linux-alpha@vger.kernel.org
Cc: linux-am33-list@redhat.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-c6x-dev@linux-c6x.org
Cc: linux-cris-kernel@axis.com
Cc: linux-hexagon@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
Cc: linux@lists.openrisc.net
Cc: linux-m32r-ja@ml.linux-m32r.org
Cc: linux-m32r@ml.linux-m32r.org
Cc: linux-m68k@lists.linux-m68k.org
Cc: linux-metag@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: linux-xtensa@linux-xtensa.org
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/1404079773.2619.4.camel@buesod1.americas.hpqcorp.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-17 12:32:47 +02:00
Ingo Molnar b5e4111f02 Merge branch 'locking/urgent' into locking/core, before applying larger changes and to refresh the branch with fixes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-17 11:45:29 +02:00
Bjorn Helgaas 792688fde4 Merge branches 'pci/host-generic', 'pci/host-mvebu', 'pci/host-rcar', 'pci/host-tegra', 'pci/msi', 'pci/misc', 'pci/resource' and 'pci/virtualization' into next
* pci/host-generic:
  PCI: generic: Fix GPL v2 license string typo

* pci/host-mvebu:
  PCI: mvebu: Fix GPL v2 license string typo

* pci/host-rcar:
  PCI: rcar: Fix GPL v2 license string typo

* pci/host-tegra:
  PCI: tegra: Fix GPL v2 license string typo

* pci/msi:
  PCI/MSI: Use irq_get_msi_desc() to simplify code
  PCI/MSI: Remove unused list access in __pci_restore_msix_state()
  PCI/MSI: Retrieve first MSI IRQ from msi_desc rather than pci_dev
  PCI/MSI: Remove unused function msi_remove_pci_irq_vectors()
  PCI/MSI: Add msi_setup_entry() to clean up MSI initialization

* pci/misc:
  PCI: Configure ASPM when enabling device
  x86: don't exclude low BIOS area when allocating address space for non-PCI cards
  PCI: Add include guard to include/linux/pci_ids.h
  x86, ia64: Move EFI_FB vga_default_device() initialization to pci_vga_fixup()

* pci/resource:
  PCI: Tidy resource assignment messages
  PCI: Return conventional error values from pci_revert_fw_address()
  PCI: Cleanup control flow
  PCI: Support BAR sizes up to 128GB
  PCI: Keep original resource if we fail to expand it

* pci/virtualization:
  powerpc/pci: Remove duplicate logic
  PCI: Make resetting secondary bus logic common
2014-07-16 17:09:47 -06:00
Alexander Yarygin 3be8e2a0a5 perf kvm: Add stat support on s390
On s390, the vmexit event has a tree-like structure: between
exit_event_begin and exit_event_end several other events may happen and
with each of them refining the previous ones.

This patch adds a decoder for such events to the generic code and also
the files <asm/kvm_perf.h> and kvm-stat.c for s390.

Commands 'perf kvm stat record', 'report' and 'live' are supported.

Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1404397747-20939-5-git-send-email-yarygin@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2014-07-16 17:57:33 -03:00
Alexander Yarygin 44b3802122 perf kvm: Use defines of kvm events
Currently perf-kvm uses string literals for kvm event names, but it
works only for x86, because other architectures may have other names for
those events.

To reduce dependence on architecture, we add <asm/kvm_perf.h> file with
defines for:

- kvm_entry and kvm_exit events,
- exit reason field name in kvm_exit event,
- length of exit reasons strings,
- vcpu_id field name in kvm trace events,

and replace literals in perf-kvm.
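
For illustration only, such a per-arch header could look roughly like the following; the macro names and values here are assumptions, not taken verbatim from this series:

    /* <asm/kvm_perf.h> sketch -- names and values are illustrative */
    #ifndef _ASM_KVM_PERF_H
    #define _ASM_KVM_PERF_H

    #define DECODE_STR_LEN   20                /* length of exit reason strings */
    #define VCPU_ID          "vcpu_id"         /* vcpu_id field in kvm tracepoints */
    #define KVM_ENTRY_TRACE  "kvm:kvm_entry"
    #define KVM_EXIT_TRACE   "kvm:kvm_exit"
    #define KVM_EXIT_REASON  "exit_reason"     /* exit reason field in kvm_exit */

    #endif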

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1404397747-20939-2-git-send-email-yarygin@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2014-07-16 17:57:32 -03:00
Linus Torvalds bcf44bfe5e Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fixes from Ingo Molnar:
 "A cpufreq lockup fix and a compiler warning fix"

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Fix compiler warnings
  x86, tsc: Fix cpufreq lockup
2014-07-16 10:11:02 -10:00
Linus Torvalds d14aef3872 Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fixes from Ingo Molnar:
 "Tooling fixes and an Intel PMU driver fixlet"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf: Do not allow optimized switch for non-cloned events
  perf/x86/intel: ignore CondChgd bit to avoid false NMI handling
  perf symbols: Get kernel start address by symbol name
  perf tools: Fix segfault in cumulative.callchain report
2014-07-16 10:10:27 -10:00
Vivien Didelot 832fcc899a x86/platform/ts5500: Add support for TS-5400 boards
This patch extends the TS-5500 platform driver to support the
similar Technologic Systems TS-5400 Single Board Computer:

  http://wiki.embeddedarm.com/wiki/TS-5400

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Savoir-faire Linux Inc. <kernel@savoirfairelinux.com>
Link: http://lkml.kernel.org/r/1404860269-11837-4-git-send-email-vivien.didelot@savoirfairelinux.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 21:17:42 +02:00
Vivien Didelot 84e288d418 x86/platform/ts5500: Add a 'name' sysfs attribute
Add a new "name" attribute to the TS5500 sysfs group, to clarify
which supported board model it is.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Savoir-faire Linux Inc. <kernel@savoirfairelinux.com>
Link: http://lkml.kernel.org/r/1404860269-11837-3-git-send-email-vivien.didelot@savoirfairelinux.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 21:17:41 +02:00
Vivien Didelot 1d2408754d x86/platform/ts5500: Use the DEVICE_ATTR_RO() macro
Use the DEVICE_ATTR_RO() helper macro to simplify the declaration
of read-only sysfs attributes in the TS5500 code.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Savoir-faire Linux Inc. <kernel@savoirfairelinux.com>
Link: http://lkml.kernel.org/r/1404860269-11837-2-git-send-email-vivien.didelot@savoirfairelinux.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 21:17:40 +02:00
Christoph Schulz cbace46a97 x86: don't exclude low BIOS area when allocating address space for non-PCI cards
Commit 30919b0bf3 ("x86: avoid low BIOS area when allocating address
space") moved the test for resource allocations that fall within the first
1MB of address space from the PCI-specific path to a generic path, such
that all resource allocations will avoid this area.  However, this breaks
ISA cards which need to allocate a memory region within the first 1MB.  An
example is the i82365 PCMCIA controller and derivatives like the Ricoh
RF5C296/396 which map part of the PCMCIA socket memory address space into
the first 1MB of system memory address space.  They do not work anymore as
no usable memory region exists due to this change:

  Intel ISA PCIC probe: Ricoh RF5C296/396 ISA-to-PCMCIA at port 0x3e0 ofs 0x00, 2 sockets
  host opts [0]: none
  host opts [1]: none
  ISA irqs (scanned) = 3,4,5,9,10 status change on irq 10
  pcmcia_socket pcmcia_socket1: pccard: PCMCIA card inserted into slot 1
  pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
  pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean.
  pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0d0000-0x0dffff: clean.
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0e0000-0x0effff: clean.
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x60000000-0x60ffffff: clean.
  pcmcia_socket pcmcia_socket0: cs: memory probe 0xa0000000-0xa0ffffff: clean.
  pcmcia_socket pcmcia_socket1: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
  pcmcia_socket pcmcia_socket1: cs: IO port probe 0xa00-0xaff: clean.
  pcmcia_socket pcmcia_socket1: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0d0000-0x0dffff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0e0000-0x0effff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x60000000-0x60ffffff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0xa0000000-0xa0ffffff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0cc000-0x0effff: excluding 0xe0000-0xeffff
  pcmcia_socket pcmcia_socket1: cs: unable to map card memory!

If filtering out the first 1MB is reverted, everything works as expected.

Tested-by: Robert Resch <fli4l@robert.reschpara.de>
Signed-off-by: Christoph Schulz <develop@kristov.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org	# v2.6.37+
2014-07-16 12:29:36 -06:00
Jan Beulich 3bab13b015 x86/debug: Drop several unnecessary CFI annotations
With the conversion of the register saving code from macros to
functions, and with those functions not clobbering most of the
registers they spill, there's no need to annotate most of the
spill operations; the only exceptions being %rbx (always
modified) and %rcx (modified on the error_kernelspace: path).

Also remove a bogus commented out annotation - there's no
register %orig_rax after all.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/53AAE69A020000780001D3C7@mail.emea.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 15:23:06 +02:00
Peter Zijlstra 4badad352a locking/mutex: Disable optimistic spinning on some architectures
The optimistic spin code assumes regular stores and cmpxchg() play nice;
this is found to not be true for at least: parisc, sparc32, tile32,
metag-lock1, arc-!llsc and hexagon.

There is further wreckage, but this in particular seemed easy to
trigger, so blacklist this.

Opt in for known good archs.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: John David Anglin <dave.anglin@bell.net>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: stable@vger.kernel.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/20140606175316.GV13930@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 14:57:07 +02:00
Andy Lutomirski 0cdd192cf4 kprobes/x86: Don't try to resolve kprobe faults from userspace
This commit:

    commit 6f6343f53d
    Author: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Date:   Thu Apr 17 17:17:33 2014 +0900

        kprobes/x86: Call exception handlers directly from do_int3/do_debug

appears to have inadvertently dropped a check that the int3 came
from kernel mode.  Trying to dereference addr when addr is
user-controlled is completely bogus.
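
The guard being restored is essentially of this shape (a sketch; the exact placement in the int3 handler is assumed here, not quoted from the patch):

    /* Sketch: bail out early if the trap did not come from kernel mode. */
    if (user_mode(regs))
            return 0;       /* int3 from userspace is not a kprobe of ours */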

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/c4e339882c121aa76254f2adde3fcbdf502faec2.1405099506.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 14:16:32 +02:00
David Rientjes 4485154138 perf/x86/intel: Avoid spamming kernel log for BTS buffer failure
It's unnecessary to excessively spam the kernel log anytime the BTS buffer
cannot be allocated, so make this allocation __GFP_NOWARN.

The user probably will want to at least find some artifact that the
allocation has failed in the past, probably due to fragmentation because
of its large size, when it's not allocated at bootstrap.  Thus, add a
WARN_ONCE() so something is left behind for them to understand why perf
commands that require PEBS are not working properly.
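
A rough sketch of the described pattern (function and buffer names below are placeholders, not the real driver code):

    /* Sketch only -- not the actual perf_event_intel_ds.c code. */
    #include <linux/slab.h>

    static void *bts_buffer_alloc_sketch(size_t size)
    {
            /* Don't spam the log on every failed attempt ... */
            void *buf = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);

            /* ... but leave one breadcrumb explaining why BTS/PEBS is unusable. */
            WARN_ONCE(!buf, "BTS buffer allocation failure\n");
            return buf;
    }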

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1406301600460.26302@chino.kir.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 13:31:30 +02:00
Zhouyi Zhou 503d3291a9 perf/x86/amd: Try to fix some mem allocation failure handling
Following Peter's advice, move the failure handling into a goto chain.
Compile-tested on x86_64; please check whether anything was missed.
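
For reference, the general shape of a goto-style unwind chain looks like this (purely illustrative; the labels and allocations below are not the actual perf_event_amd code):

    static int setup_buffers_sketch(void)
    {
            void *a, *b;

            a = kzalloc(64, GFP_KERNEL);
            if (!a)
                    return -ENOMEM;

            b = kzalloc(64, GFP_KERNEL);
            if (!b)
                    goto free_a;            /* undo only what succeeded so far */

            /* ... hand a and b over to whoever owns them ... */
            return 0;

    free_a:
            kfree(a);
            return -ENOMEM;
    }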

Signed-off-by: Zhouyi Zhou <yizhouzhou@ict.ac.cn>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1402459743-20513-1-git-send-email-zhouzhouyi@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 13:31:06 +02:00
Kan Liang 338b522ca4 perf/x86/intel: Protect LBR and extra_regs against KVM lying
With -cpu host, KVM reports LBR and extra_regs support, if the host has
support.

When the guest perf driver tries to access an LBR or extra_regs MSR,
all such MSR accesses #GP, since KVM doesn't handle LBR and extra_regs support.
So check access to the related MSRs once at initialization time to avoid
erroneous accesses at runtime.
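
A minimal sketch of the probe-at-init idea (the surrounding structure and MSR numbers are not shown; only the rdmsrl_safe()/wrmsrl_safe() pattern is the point):

    /* Sketch: probe an MSR once at init; reject it if reads or writes fault. */
    static bool msr_usable_sketch(unsigned int msr)
    {
            u64 val;

            if (rdmsrl_safe(msr, &val))
                    return false;           /* read #GPed under this hypervisor */
            if (wrmsrl_safe(msr, val))
                    return false;           /* write-back #GPed as well */
            return true;
    }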

For reproducing the issue, please build the kernel with CONFIG_KVM_INTEL = y
(for host kernel).
And CONFIG_PARAVIRT = n and CONFIG_KVM_GUEST = n (for guest kernel).
Start the guest with -cpu host.
Run perf record with --branch-any or --branch-filter in guest to trigger LBR
Run perf stat offcore events (E.g. LLC-loads/LLC-load-misses ...) in guest to
trigger offcore_rsp #GP

Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maria Dimakopoulou <maria.n.dimakopoulou@gmail.com>
Cc: Mark Davies <junk@eslaf.co.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Link: http://lkml.kernel.org/r/1405365957-20202-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 13:18:43 +02:00
Stephane Eranian 7711fe4fc2 perf/x86/intel/uncore: Fix SNB-EP/IVT Cbox filter mappings
This patch fixes the SNB-EP and IVT Cbox filter mapping
table. The table controls which filters are supported by
which events. There were several mistakes in those tables
causing some filters to be ignored, such as NID on
TOR_INSERTS.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: zheng.z.yan@intel.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140630144624.GA2604@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 13:18:41 +02:00
Vince Weaver 1996388e9f perf/x86/intel: Use proper dTLB-load-misses event on IvyBridge
This was discussed back in February:

	https://lkml.org/lkml/2014/2/18/956

But I never saw a patch come out of it.

On IvyBridge we share the SandyBridge cache event tables, but the
dTLB-load-miss event is not compatible.  Patch it up after
the fact to the proper DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1407141528200.17214@vincent-weaver-1.umelst.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 13:18:40 +02:00
Paul Bolle d3f44fbabe x86: Remove unused variable "polling"
Compile tested. "polling" is unused since commit f80c5b39b8
("sched/idle, x86: Switch from TS_POLLING to TIF_POLLING_NRFLAG").

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Link: http://lkml.kernel.org/r/1404138749.2978.6.camel@x41
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 12:58:47 +02:00
Martin Schwidefsky 9f86745722 s390: fix restore of invalid floating-point-control
The fixup of the inline assembly to restore the floating-point-control
register needs to check for an instruction address *after* the lfpc
instruction, as specification and data exceptions are suppressing.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-07-16 10:48:12 +02:00
Martin Schwidefsky dab6cf55f8 s390/ptrace: fix PSW mask check
The PSW mask check of the PTRACE_POKEUSR_AREA command is incorrect.
The PSW_MASK_USER define contains the PSW_MASK_ASC bits, the ptrace
interface accepts all combinations for the address-space-control
bits. To protect the kernel space the PSW mask check in ptrace needs
to reject the address-space-control bit combination for home space.

Fixes CVE-2014-3534

Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-07-16 10:48:11 +02:00
Yijing Wang 8fb878c5f1 s390/MSI: Use standard mask and unmask functions
The MSI irqchip on s390 has its own functions for masking and
unmasking MSI irqs, zpci_enable_irq() and zpci_disable_irq().
They mask and unmask MSI irqs in the standard way, with nothing
arch-specific about them. The MSI driver already provides two
generic functions, mask_msi_irq() and unmask_msi_irq(), and the
local zpci_enable_irq() and zpci_disable_irq() are almost the
same as that standard pair. The difference is that the local
mask/unmask functions read the mask status from hardware every
time, change the value and write it back, whereas the standard
functions save the mask status after each mask and unmask and
use that cached status for the next change. Since we always
cache the mask status when masking or unmasking an MSI irq,
except where we know it is not needed (as in pci_msi_shutdown),
it is safe to replace the local functions with the standard ones.
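
In effect this amounts to wiring the generic helpers into the irq_chip instead of the local ones (the structure below is a sketch, not the exact s390 code):

    static struct irq_chip zpci_irq_chip_sketch = {
            .name       = "zPCI-MSI",
            .irq_mask   = mask_msi_irq,     /* generic MSI mask helper   */
            .irq_unmask = unmask_msi_irq,   /* generic MSI unmask helper */
    };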

Signed-off-by: Yijing Wang <wangyijing@huawei.com>
[sebott: fixed inverted function pointers]
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-07-16 10:48:10 +02:00
David Hildenbrand 4a36b44c77 s390: require mvcos facility, not tod clock steering facility
Inlined uaccess functions require the mvcos facility (bit 27), not the tod
clock steering facility (bit 28) for z10 and newer machines.
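
Expressed as a check, the distinction looks roughly like this (a sketch; the helper below and its placement are illustrative assumptions):

    /* Sketch: the facility that matters is mvcos (bit 27), not bit 28. */
    static int __init uaccess_check_sketch(void)
    {
            if (!test_facility(27))
                    panic("mvcos facility required for inlined uaccess functions");
            return 0;
    }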

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-07-16 10:48:09 +02:00
Chris Zankel 4ed2ad38b3 Xtensa fixes for 3.16:
- resolve FIXMEs in double exception handler for window overflow. This
   fix makes native building of Linux on an xtensa host possible;
 - fix sysmem region removal issue introduced in 3.15.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJTxQ0lAAoJEFH5zJH4P6BEIS4P/3HE24wfqFYryzKOh2P8FBVa
 YiIrsZ43X2FHp43WwifPy2BK5DDObCNdxHMxJuR4Yx5xrwYEFdH2ztLjuQWmefoL
 TCZu1dBwkZYZp86OR7D4hJ2eOHGzOsm7/c/o+InDq2ci7qVmDq5qsbc407gf2lFe
 hHnsG7OCxCKHBCxFrxfd61gQupAlSGYph4pUI46U2UGVvy1n2vy2urLPPrv5unhv
 eXwmlpB+XfApG3jtTNHQgPy0PR0nmHZqe7xhwR9wk+8HjWu6Zh4nvaPQ6tbNnynh
 gQTSwlN27Vw1JTo/d/+gVKAfKs88Bsujl9fXAhWCxdgR3LdErU2dEdpOpW5rlLZw
 hdD61t68/uyroeeyvDvJM7cU/s2CljHJ0wqFEhmbdeiP5wexSU8fg/qmq9FDMASJ
 YM7ABcyhmyLl5EQqum+h5ta6hTZWCZVa1qAw0IIRY48/+eufiCfGHQieHUqrDOfj
 fqWXIFYjyyN4kY63Qca84qjCJD+FOmw1aCXqieVmdvs/jw41eyuz87zjvWdMoJJH
 SiaGsPfsccTEvnJ2DbYsWewNtCeMsGQMiptJk1lC0FtmHmr4TCQFX4QmBVJSh2wT
 DLz+k5Zh3EkhSBdyy0GiMYc1wnqjOWvTtsvoUTLNoJ5Vi+2jLPclzk1RY3hx41SN
 rV1U8UbT0nmqJaOo1thd
 =R7uM
 -----END PGP SIGNATURE-----

Merge tag 'xtensa-for-next-20140715' of git://github.com/jcmvbkbc/linux-xtensa into for_next

Xtensa fixes for 3.16:
- resolve FIXMEs in double exception handler for window overflow. This
  fix makes native building of Linux on an xtensa host possible;
- fix sysmem region removal issue introduced in 3.15.
2014-07-15 22:18:30 -07:00
Linus Torvalds 5615f9f822 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Bluetooth pairing fixes from Johan Hedberg.

 2) ieee80211_send_auth() doesn't allocate enough tail room for the SKB,
    from Max Stepanov.

 3) New iwlwifi chip IDs, from Oren Givon.

 4) bnx2x driver reads wrong PCI config space MSI register, from Yijing
    Wang.

 5) IPV6 MLD Query validation isn't strong enough, from Hangbin Liu.

 6) Fix double SKB free in openvswitch, from Andy Zhou.

 7) Fix sk_dst_set() being racey with UDP sockets, leading to strange
    crashes, from Eric Dumazet.

 8) Interpret the NAPI budget correctly in the new systemport driver,
    from Florian Fainelli.

 9) VLAN code frees percpu stats in the wrong place, leading to crashes
    in the get stats handler.  From Eric Dumazet.

10) TCP sockets doing a repair can crash with a divide by zero, because
    we invoke tcp_push() with an MSS value of zero.  Just skip that part
    of the sendmsg paths in repair mode.  From Christoph Paasch.

11) IRQ affinity bug fixes in mlx4 driver from Amir Vadai.

12) Don't ignore path MTU icmp messages with a zero mtu, machines out
    there still spit them out, and all of our per-protocol handlers for
    PMTU can cope with it just fine.  From Edward Allcutt.

13) Some NETDEV_CHANGE notifier invocations were not passing in the
    correct kind of cookie as the argument, from Loic Prylli.

14) Fix crashes in long multicast/broadcast reassembly, from Jon Paul
    Maloy.

15) ip_tunnel_lookup() doesn't interpret wildcard keys correctly, fix
    from Dmitry Popov.

16) Fix skb->sk assigned without taking a reference to 'sk' in
    appletalk, from Andrey Utkin.

17) Fix some info leaks in ULP event signalling to userspace in SCTP,
    from Daniel Borkmann.

18) Fix deadlocks in HSO driver, from Olivier Sobrie.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (93 commits)
  hso: fix deadlock when receiving bursts of data
  hso: remove unused workqueue
  net: ppp: don't call sk_chk_filter twice
  mlx4: mark napi id for gro_skb
  bonding: fix ad_select module param check
  net: pppoe: use correct channel MTU when using Multilink PPP
  neigh: sysctl - simplify address calculation of gc_* variables
  net: sctp: fix information leaks in ulpevent layer
  MAINTAINERS: update r8169 maintainer
  net: bcmgenet: fix RGMII_MODE_EN bit
  tipc: clear 'next'-pointer of message fragments before reassembly
  r8152: fix r8152_csum_workaround function
  be2net: set EQ DB clear-intr bit in be_open()
  GRE: enable offloads for GRE
  farsync: fix invalid memory accesses in fst_add_one() and fst_init_card()
  igb: do a reset on SR-IOV re-init if device is down
  igb: Workaround for i210 Errata 25: Slow System Clock
  usbnet: smsc95xx: add reset_resume function with reset operation
  dp83640: Always decode received status frames
  r8169: disable L23
  ...
2014-07-15 08:42:52 -07:00
Boris Ostrovsky 8762e50928 x86/espfix/xen: Fix allocation of pages for paravirt page tables
init_espfix_ap() is currently off by one level when informing hypervisor
that allocated pages will be used for ministacks' page tables.

The most immediate effect of this on a PV guest is that if
'stack_page = __get_free_page()' returns a non-zeroed-out page the hypervisor
will refuse to use it for a page table (which it shouldn't be anyway). This will
result in warnings by both Xen and Linux.

More importantly, a subsequent write to that page (again, by a PV guest) is
likely to result in a fatal page fault.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: http://lkml.kernel.org/r/1404926298-5565-1-git-send-email-boris.ostrovsky@oracle.com
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org>
2014-07-14 13:47:32 -07:00
H. Peter Anvin e0463e42d7 * Remove a duplicate copy of linux_banner from the arm64 EFI stub
which, apart from reducing code duplication also stops the arm64 stub
    being rebuilt every time make is invoked - Ard Biesheuvel
 
  * Fix the EFI fdt code to not report a boot error if UEFI is
    unavailable since booting without UEFI parameters is a valid use case
    for non-UEFI platforms - Catalin Marinas
 
  * Include a .bss section in the EFI boot stub PE/COFF headers to fix a
    memory corruption bug - Michael Brown
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJTw9I4AAoJEC84WcCNIz1VJcwQAKBZXaIjsxWqtMsqvB7RmCtw
 5wi44hf2sdrwKCBUdxRBHBoiISps3f+5VJZN25eJ5QG+eqcqoQkGoQsAVlMbGrOJ
 iyYF79REUj/ujovtodKQPOci4ueYhiRemBc8o6abOjSwdAe3fj5vpQgKQuS9RwtG
 SIy4MBucE58Zle6vGHXLxHKdJVCB0/ALESwjg9fXWluopHdWkmmN0kKhYlysElHx
 7zn2C2RoVDNQQwknueUznktKUH9pxLBEW74qE98CBZ9Spt8QpjvT22UkxoOU8grU
 RnogYgJhwiy1+GNQ5524rZwcM2XP1qFhCcWoKxP3I0UhTOe2Za7Ogi8ggUAXbvEh
 +hCsCIM3DqzAROOQMsmUZHkMJ0Gi2HSL12tt1KHNZ2zh74hiMPqDAmzkjBuf2KE0
 5Lxdnc47UmHE9qVduj8fg2A+cMLV/K4NBlitOXq6lJp9+n8Wa93MLF0WENd9akE+
 c9u7sAoTwG/5DUDy3rB1H5P65/WKyXNe9Db8JdZp2+62dxmsNyF++zkApDJT3Op7
 MyFTpVkeZrWZbZVwZJXZuKNdh19Jv/adZrvC7OKvcDBfjow8AGzbDDnr4ez42QT8
 OkM3uGyBuVTgtUdmPYNREXP5zjr1NfZV/w1fbslaJl/UbFZrNUILTP5YyqTphv2M
 W9eO6CyrmX10gd5v47A7
 =Cvne
 -----END PGP SIGNATURE-----

Merge tag 'efi-urgent' into x86/urgent

 * Remove a duplicate copy of linux_banner from the arm64 EFI stub
   which, apart from reducing code duplication also stops the arm64 stub
   being rebuilt every time make is invoked - Ard Biesheuvel

 * Fix the EFI fdt code to not report a boot error if UEFI is
   unavailable since booting without UEFI parameters is a valid use case
   for non-UEFI platforms - Catalin Marinas

 * Include a .bss section in the EFI boot stub PE/COFF headers to fix a
   memory corruption bug - Michael Brown

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-14 13:45:44 -07:00
Borislav Petkov af0fa6f6b5 x86, cpu: Kill cpu_has_mp
It was used only for checking for some K7s which didn't have MP support,
see

http://www.hardwaresecrets.com/article/How-to-Transform-an-Athlon-XP-into-an-Athlon-MP/24

and it was unconditionally set on 64-bit for no reason. Kill it.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1403609105-8332-4-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-07-14 12:21:40 -07:00
Borislav Petkov 26bfa5f894 x86, amd: Cleanup init_amd
Distribute family-specific code to corresponding functions.

Also,

* move the direct mapping splitting around the TSEG SMM area to
bsp_init_amd().

* kill ancient comment about what we should do for K5.

* merge amd_k7_smp_check() into its only caller init_amd_k7 and drop
cpu_has_mp macro.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1403609105-8332-3-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-07-14 12:21:40 -07:00
Borislav Petkov 80a208bd39 x86/cpufeature: Add bug flags to /proc/cpuinfo
Dump the flags which denote we have detected and/or have applied bug
workarounds to the CPU we're executing on, in a similar manner to the
feature flags.

The advantage is that those are not accumulating over time like the CPU
features.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1403609105-8332-2-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-07-14 12:21:39 -07:00
Sekhar Nori ba394f0b6a ARM: OMAP2+: l2c: squelch warning dump on power control setting
On OMAP SOCs using PL310 controllers, power_ctrl register is not
accessible from non-secure software even on PL310 versions which
support it. The secure code takes care of setting it up correctly
and power transitions are proven on these devices.

For example, AM437x has L2C-310 version r3p3 and ROM code on that
device does not support writing to L2C-310 power control register.
The L2C driver, however, tries writing to this register for all
revisions >= r3p0.

This leads to a warning dump on boot which leads most users to believe
that L2 cache is non-functional.

Since the problem is understood, and cannot be addressed through
software, replace the warning with a pr_info() while maintaining the
WARN_ON() for other truly unexpected scenarios.

Reported-by: Nishanth Menon <nm@ti.com>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
2014-07-14 09:24:43 -07:00
Heiko Stübner 1fe69496cf ARM: rockchip: Select ARCH_HAS_RESET_CONTROLLER
All known Rockchip SoCs have a reset controller in their CRUs, so it's
helpful to have the reset controller framework selected by default
and only deselected by the user in special cases.

Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Acked-By: Max Schwarz <max.schwarz@online.de>
Tested-By: Max Schwarz <max.schwarz@online.de>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
2014-07-13 12:17:11 -07:00
Linus Torvalds 2f3870e9e8 ARM: SoC fixes for 3.16-rc
This week's arm-soc fixes:
 
 - Another set of OMAP fixes
   * Clock fixes
   * Restart handling
   * PHY regulators
   * SATA hwmod data for DRA7
   + Some trivial fixes and removal of a bit of dead code
 - Exynos fixes
   * A bunch of clock fixes
   * Some SMP fixes
   * Exynos multi-core timer: register as clocksource and fix ftrace.
   + a few other minor fixes
 
 There's also a couple more patches, and at91 fix for USB caused by common
 clock conversion, and more MAINTAINERS entries for shmobile.
 
 We're definitely switching to only regression fixes from here on out,
 we've been a little less strict than usual up until now.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.14 (GNU/Linux)
 
 iQIcBAABAgAGBQJTws/yAAoJEIwa5zzehBx35GQP/jUx/0+qiNkrupcmMoDr9ykP
 QSQPWRVBancZ5eO3gBvnde0P/4gaTQZb7s31nCkX7RQfpe4078eR9isrqs7du3rw
 BNsdJxtWIBzhnx19GDhjZM2BbYoVlDIETQTKUDPPhMg/I/k4ozLLcC+02uAMG1/g
 TiCjOHF968Cq1bCwyvqJgiTDjjAPoHLrnD2aDsAuwYi2QPKNWkv0uHZFQmV2ooja
 9Fk3Km32wirTvfjKir0r/BhV5oEdwv3y/HYRNG+bJOkvxDph8i5t2EvLeCl/yctq
 KLcHBJjLsF2MgvCpoCfjg8OFyYA7qmZDNVxfiywJnWVUR6w0kOU3MggPopsikoY/
 xU3MKJSu/36cNJER3Rl51taD9tq+4hVhKTjiLBkD0MsD9jN5ewvDqI5BKpjL5wlZ
 I36eZmQE4yPnd6is1RS7uuTcN/uXBtOAhNjPS42xkW5lo9W7BWOUOlB1dzlcZTky
 J6/h9WzODKcgaeTx55ks4wjhQmdzzO3nQk0ion3YEv27RKkSUJy8PNDAfMKAAF3p
 OWzqUIfB2mMGQBGXcShXAAyC3S2jGJptlKL3Wno0LkPXBGqlRXHMYBvGoFEdP/iV
 F24TDLxJIL/lPbpQEey3J4qZANrpuAm0HYmg+r5KiVHbMkcW2Z000w4kRoO+tIIc
 Bxzxrot9PVAFfE1SkVYt
 =NB6g
 -----END PGP SIGNATURE-----

Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:
 "This week's arm-soc fixes:

   - Another set of OMAP fixes
     * Clock fixes
     * Restart handling
     * PHY regulators
     * SATA hwmod data for DRA7
     + Some trivial fixes and removal of a bit of dead code
   - Exynos fixes
     * A bunch of clock fixes
     * Some SMP fixes
     * Exynos multi-core timer: register as clocksource and fix ftrace.
     + a few other minor fixes

  There's also a couple more patches, and at91 fix for USB caused by
  common clock conversion, and more MAINTAINERS entries for shmobile.

  We're definitely switching to only regression fixes from here on out,
  we've been a little less strict than usual up until now"

* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (26 commits)
  ARM: at91: at91sam9x5: add clocks for usb device
  ARM: EXYNOS: Register cpuidle device only on exynos4210 and 5250
  ARM: dts: Add clock property for mfc_pd in exynos5420
  clk: exynos5420: Add IDs for clocks used in PD mfc
  ARM: EXYNOS: Add support for clock handling in power domain
  ARM: OMAP2+: Remove non working OMAP HDMI audio initialization
  ARM: imx: fix shared gate clock
  ARM: dts: Update the parent for Audss clocks in Exynos5420
  ARM: EXYNOS: Update secondary boot addr for secure mode
  ARM: dts: Fix TI CPSW Phy mode selection on IGEP COM AQUILA.
  ARM: dts: am335x-evmsk: Enable the McASP FIFO for audio
  ARM: dts: am335x-evm: Enable the McASP FIFO for audio
  ARM: OMAP2+: Make GPMC skip disabled devices
  ARM: OMAP2+: create dsp device only on OMAP3 SoCs
  ARM: dts: dra7-evm: Make VDDA_1V8_PHY supply always on
  ARM: DRA7/AM43XX: fix header definition for omap44xx_restart
  ARM: OMAP2+: clock/dpll: fix _dpll_test_fint arithmetics overflow
  ARM: DRA7: hwmod: Add SYSCONFIG for usb_otg_ss
  ARM: DRA7: hwmod: Fixup SATA hwmod
  ARM: OMAP3: PRM/CM: Add back macros used by TI DSP/Bridge driver
  ...
2014-07-13 12:10:18 -07:00
Linus Torvalds 5fa77b54ef Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
 "Another round of fixes for ARM:
   - a set of kprobes fixes from Jon Medhurst
   - fix the revision checking for the L2 cache which wasn't noticed to
     have been broken"

* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
  ARM: l2c: fix revision checking
  ARM: kprobes: Fix test code compilation errors for ARMv4 targets
  ARM: kprobes: Disallow instructions with PC and register specified shift
  ARM: kprobes: Prevent known test failures stopping other tests running
2014-07-13 12:09:18 -07:00
Linus Torvalds 33fe3aee03 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k
Pull m68k fixes from Geert Uytterhoeven:
 "Summary:
  - Fix for a boot regression introduced in v3.16-rc1,
  - Fix for a build issue in -next"

Christoph Hellwig questioned why mach_random_get_entropy should be
exported to modules, and Geert explains that random_get_entropy() is
called by at least the crypto layer and ends up using it on m68k.  On
most other architectures it just uses get_cycles() (which is typically
inlined and doesn't need exporting).

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
  m68k: Export mach_random_get_entropy to modules
  m68k: Fix boot regression on machines with RAM at non-zero
2014-07-13 12:04:06 -07:00
Linus Torvalds 54f8c2aa14 Merge branch 'parisc-3.16-5' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Pull parisc fixes from Helge Deller:
 "The major patch in here is one which fixes the fanotify_mark() syscall
  in the compat layer of the 64bit parisc kernel.  It went unnoticed so
  long, because the calling syntax when using a 64bit parameter in a
  32bit syscall is quite complex and even worse, it may be even
  different if you call syscall() or the glibc wrapper.  This patch
  makes the kernel accept the calling convention when called by the
  glibc wrapper.

  The other two patches are trivial and remove unused headers, #includes
  and adds the serial ports of the fastest C8000 workstation to the
  parisc-kernel internal hardware database"

* 'parisc-3.16-5' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: drop unused defines and header includes
  parisc: fix fanotify_mark() syscall on 32bit compat kernel
  parisc: add serial ports of C8000/1GHz machine to hardware database
2014-07-13 12:02:05 -07:00
Helge Deller fe22ddcb9f parisc: drop unused defines and header includes
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # 3.13+
2014-07-13 15:56:12 +02:00
Helge Deller ab8a261ba5 parisc: fix fanotify_mark() syscall on 32bit compat kernel
On parisc we can not use the existing compat implementation for fanotify_mark()
because for the 64bit mask parameter the higher and lower 32bits are ordered
differently than what the compat function expects from big endian
architectures.

Specifically:
It finally turned out that on hppa we end up with different assignments
of parameters to kernel arguments depending on whether we call the glibc
wrapper function
 int fanotify_mark (int __fanotify_fd, unsigned int __flags,
                    uint64_t __mask, int __dfd, const char *__pathname);
or call the syscall directly
 syscall(__NR_fanotify_mark, ...)

The reason is that the syscall() function is implemented as a C function and,
because we now have the sysno as the first parameter in front of the other
parameters, the compiler will unexpectedly add an empty parameter in
front of the u64 value to ensure the correct calling alignment for 64bit
values.
This means, on hppa you can't simply use syscall() to call the kernel
fanotify_mark() function directly, but you have to use the glibc
function instead.

This patch fixes the hppa-specific kernel code to adjust the parameters
as if userspace had called the glibc wrapper function
fanotify_mark().
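
On the kernel side the idea is to recombine the split 64-bit mask in the order the glibc wrapper produces on hppa. The wrapper below is a sketch; its name, signature and argument ordering are assumptions for illustration, not the exact parisc code:

    asmlinkage long compat_fanotify_mark_sketch(int fanotify_fd, unsigned int flags,
                                                u32 mask_hi, u32 mask_lo,
                                                int dfd, const char __user *pathname)
    {
            /* rebuild the 64-bit mask from the two 32-bit halves */
            u64 mask = ((u64)mask_hi << 32) | mask_lo;

            return sys_fanotify_mark(fanotify_fd, flags, mask, dfd, pathname);
    }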

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # 3.13+
2014-07-13 15:55:04 +02:00
Helge Deller eadcc7208a parisc: add serial ports of C8000/1GHz machine to hardware database
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # 3.13+
2014-07-13 15:51:58 +02:00
Olof Johansson cacadb4ff9 Samsung fixes-3 for v3.16
 - update the parent for the Audss clock because the kernel will hang
   during late boot if the parent clock is disabled in the bootloader.
 - enable clock handling in the power domain because its clock sources
   are reset while the power domain is switched on/off, which causes
   problems, so they need to be handled.
 - add mux clocks to be used by the power domain for exynos5420-mfc
   during power domain on/off, and add the matching device tree property.
 - register cpuidle only for exynos4210 and exynos5250 because a
   system failure occurs on other exynos SoCs.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.11 (GNU/Linux)
 
 iQIcBAABAgAGBQJTwbTSAAoJEA0Cl+kVi2xqK9UP/3wm1ukMWNekDZ97rTsEN4Uo
 qDladKJf54DANuBkbXMWL3AT6O0Ja679Xm3rjvvGGMyuM+Ga/S7RO4Odvq9970HZ
 /Auv+MQnU/t7L3UW/4jvPSL5aRTxBJ9ylpcH7HNsvUg51bOuovHcQyafag8tKt1M
 /rbRcKK6076KvctT1h677NX4+TYcFMbp08qYlmWaLGSXvijgTErHFdhoB/Mbf1+Q
 SKduITGVTmRQ4cB1Dxn1fVoAb8UIJWiWWW2Ndi57gn/blaM4iE/K6oYSV8972HtZ
 WMaFcka06FBBuFpKDjQp092altyAJbSwTURJEadI6Nrw+uqs6uMEX9hKeFdYvaKJ
 avbhz7YlDK3NSCvriJkdp0faWHxLlr0ZLV3aIye3o7JKa68Bp/Un6Y6L+5dEAdnk
 K3BiFxomdtTw5S39qnpttshwStUBCK9FxNuiPaO0FNPCiIEtQsobTCpYZ5vAZZFk
 A9lqgdQT1u/gRxn02KPz0CKz5EYhlJvJTxiX83+vv/9DUI4ulBu9oJyDLbKszZ07
 XqqAsk9cpBlr2NxnBUeAO8R4lBBjyf16pWRJBxGvlrz97OONS+OOygVufUS8o5Jw
 p8Bgf8xeRA0udxj4X2KgyKzM3TNyGUxUD5tSMwvTmWIc7HGzjzL/Fv8NBwUGKMTs
 4RtpSqM59UZuVbqXxUeP
 =didu
 -----END PGP SIGNATURE-----

Merge tag 'samsung-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into fixes

Merge "Samsung fixes-3 for 3.16" from Kukjin Kim:

Samsung fixes-3 for v3.16
- update the parent for the Audss clock because the kernel will hang
  during late boot if the parent clock is disabled in the bootloader.
- enable clock handling in the power domain because its clock sources
  are reset while the power domain is switched on/off, which causes
  problems, so they need to be handled.
- add mux clocks to be used by the power domain for exynos5420-mfc
  during power domain on/off, and add the matching device tree property.
- register cpuidle only for exynos4210 and exynos5250 because a
  system failure occurs on other exynos SoCs.

* tag 'samsung-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
  ARM: EXYNOS: Register cpuidle device only on exynos4210 and 5250
  ARM: dts: Add clock property for mfc_pd in exynos5420
  clk: exynos5420: Add IDs for clocks used in PD mfc
  ARM: EXYNOS: Add support for clock handling in power domain
  ARM: dts: Update the parent for Audss clocks in Exynos5420

Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-12 21:19:21 -07:00
Bo Shen 363d4ddc17 ARM: at91: at91sam9x5: add clocks for usb device
Add clocks for the usb device; otherwise, after the switch to the
common clock framework (CCF), the gadget won't work.

Reported-by: Jiri Prchal <jiri.prchal@aksignal.cz>
Signed-off-by: Bo Shen <voice.shen@atmel.com>
Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Tested-by: Jiri Prchal <jiri.prchal@aksignal.cz>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-12 09:15:11 -07:00
Russell King cda390bb8f Merge branch 'kprobes-test-fixes' of git://git.linaro.org/people/tixy/kernel into fixes 2014-07-12 13:59:24 +01:00
Borislav Petkov b08ee5f7e4 x86: Simplify __HAVE_ARCH_CMPXCHG tests
Both the 32-bit and 64-bit cmpxchg.h header define __HAVE_ARCH_CMPXCHG
and there's ifdeffery which checks it. But since both bitnesses define it,
we can just as well move it up to the main cmpxchg header and simplify a
bit of code in doing that.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/20140711104338.GB17083@pd.tnic
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-11 17:28:51 -07:00
Linus Torvalds 47ea8dd871 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Peter Anvin:
 "A couple of further build fixes for the VDSO code.

  This is turning into a bit of a headache, and Andy has already come up
  with a more ultimate cleanup, but most likely that is 3.17 material"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86-32, vdso: Fix vDSO build error due to missing align_vdso_addr()
  x86-64, vdso: Fix vDSO build breakage due to empty .rela.dyn
2014-07-11 17:10:05 -07:00
Andy Lutomirski da861e18ec x86, vdso: Get rid of the fake section mechanism
Now that we can tolerate extra things dangling off the end of the
vdso image, we can strip the vdso the old fashioned way rather than
using an overcomplicated custom stripping algorithm.

This is a partial reversion of:
    6f121e5 x86, vdso: Reimplement vdso.so preparation in build-time C

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/50e01ed6dcc0575d20afd782f9fe98d5ee3e2d8a.1405040914.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-11 16:58:07 -07:00
Andy Lutomirski e6577a7ce9 x86, vdso: Move the vvar area before the vdso text
Putting the vvar area after the vdso text is rather complicated: it
only works if the total length of the vdso text mapping is known at
vdso link time, and the linker doesn't allow symbol addresses to
depend on the sizes of non-allocatable data after the PT_LOAD
segment.

Moving the vvar area before the vdso text will allow us to safely
map non-allocatable data after the vdso text, which is a nice
simplification.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/156c78c0d93144ff1055a66493783b9e56813983.1405040914.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-11 16:57:51 -07:00
Linus Torvalds ef24209fb2 ARM64 implementation of TASK_SIZE_OF and exporting two functions to
modules.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.9 (GNU/Linux)
 
 iQIcBAABAgAGBQJTwCfUAAoJEGvWsS0AyF7xW0oQAI/rO2ZLTORjNQC19/su+Ji4
 Xp/4ztUk3YBaAc8u8R/OtJq8FIsh7nrGWW1mg2E8xwW52IsGjbZ8AKkNRw8vXRZB
 X9Csvb+h4TLeqGof0uIqR9ob7ug2Dh5aWWwXkiiA9pi9OCjAljbK1O51DEs0tfmF
 rc0omlqh5TdBxMVz6sIdkpnd4qfvqthNKzfjmlSi6Yn2v9iS2rzbAv7lJ1ryeeCO
 fXjmqLyfACUHqWf+HWP8DPICGWVNVrbZ9bSyQ/ZrjvLprhQnxrkC2TUjyFM4vrST
 JSH4/FJuj50BshplK8lwQFO6l/d/en+Zpjj34OvIE4iu8KMOqrRo3sWORIuS3iEg
 m3EFQGxJIMp7kJt4jVKQm9+o4UoErER0xtbaBgzI9h36msTRJrsfss8Xe+6BPSlT
 F/6k+DkWjqvsckGMp/2nRub8bcjaoLIalS4FKZRG8PLNiKWA2bln+r3a34eMVTk0
 HN3SmPZl3T3sZblWRVtmtmwYSlMRnC1kE1oqWnR6Y1MIU89bGvIKASr7gRAocfFd
 Eq+hagDBG+YfzsBobaGpSpZqnF4tU3xeyIoOKT5QOTTxaLAW84Y/vDRyC/Dz6oZa
 ffAUCVBFubhBxtnjmpL7Svj/Q5QiE8v0P84Eyeo16tKVMyiDj+0F252U+5Sns5lL
 L1O1H16dEjFP/A0Th7ay
 =et7V
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Catalin Marinas:
 "ARM64 implementation of TASK_SIZE_OF and exporting two functions to
  modules"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: implement TASK_SIZE_OF
  arm64: export __cpu_{clear,copy}_user_page functions
2014-07-11 15:09:15 -07:00
Linus Torvalds c7c3ae2596 Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
Pull powerpc fixes from Ben Herrenschmidt:
 "Here are a few more powerpc fixes for 3.16

  There's a small series of 3 patches that fix saving/restoring MMUCR2
  when using KVM without which perf goes completely bonkers in the host
  system.  Another perf fix from Anton that's been rotting away in
  patchwork due to my poor eyesight, a couple of compile fixes, a little
  addition to the WSP removal by Michael (removing a bit more dead
  stuff) and a fix for an embarassing regression with our soft irq
  masking"

* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
  powerpc/perf: Never program book3s PMCs with values >= 0x80000000
  powerpc: Disable RELOCATABLE for COMPILE_TEST with PPC64
  powerpc/perf: Clear MMCR2 when enabling PMU
  powerpc/perf: Add PPMU_ARCH_207S define
  powerpc/kvm: Remove redundant save of SIER AND MMCR2
  powerpc/powernv: Check for IRQHAPPENED before sleeping
  powerpc: Clean up MMU_FTRS_A2 and MMU_FTR_TYPE_3E
  powerpc/cell: Fix compilation with CONFIG_COREDUMP=n
2014-07-11 09:32:39 -07:00
Geert Uytterhoeven 5bc8c7cdeb m68k: Export mach_random_get_entropy to modules
When a module calls random_get_entropy():

    ERROR: "mach_random_get_entropy" [crypto/drbg.ko] undefined!
    make[1]: *** [__modpost] Error 1

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
2014-07-11 10:37:53 +02:00
Nadav Amit 68efa764f3 KVM: x86: Emulator support for #UD on CPL>0
Certain instructions (e.g., mwait and monitor) cause a #UD exception when they
are executed in user mode. This is in contrast to the regular privileged
instructions which cause #GP. In order not to mess with SVM interception of
mwait and monitor which assumes privilege level assertions take place before
interception, a flag has been added.
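
Conceptually the emulator check looks something like this; the flag name below is an illustrative assumption, not necessarily the one used in the patch:

    /* Sketch: raise #UD instead of the usual #GP for flagged instructions. */
    if ((ctxt->d & PrivUD) && ctxt->ops->cpl(ctxt) > 0)
            return emulate_ud(ctxt);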

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:05 +02:00
Nadav Amit 10e38fc7ca KVM: x86: Emulator flag for instruction that only support 16-bit addresses in real mode
Certain instructions, such as monitor and xsave, do not support big real mode
and cause a #GP exception if any of the accessed bytes' effective addresses are
not within [0, 0xffff].  This patch introduces a flag to mark these
instructions, including the necessary checks.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:04 +02:00
Paolo Bonzini 44583cba91 KVM: x86: use kvm_read_guest_page for emulator accesses
Emulator accesses are always done a page at a time, either by the emulator
itself (for fetches) or because we need to query the MMU for address
translations.  Speed up these accesses by using kvm_read_guest_page
and, in the case of fetches, by inlining kvm_read_guest_virt_helper and
dropping the loop around kvm_read_guest_page.

This final tweak saves 30-100 more clock cycles (4-10%), bringing the
count (as measured by kvm-unit-tests) down to 720-1100 clock cycles on
a Sandy Bridge Xeon host, compared to 2300-3200 before the whole series
and 925-1700 after the first two low-hanging fruit changes.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:04 +02:00
Paolo Bonzini 719d5a9b24 KVM: x86: ensure emulator fetches do not span multiple pages
When the CS base is not page-aligned, the linear address of the code could
get close to the page boundary (e.g. 0x...ffe) even if the EIP value is
not.  So we need to first linearize the address, and only then compute
the number of valid bytes that can be fetched.

This happens relatively often when executing real mode code.
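
The ordering matters roughly like this (a sketch with assumed names, not the actual emulator code):

    /* Sketch: linearize first, then see how much fits before the page boundary. */
    static unsigned int fetchable_bytes_sketch(unsigned long linear, unsigned int want)
    {
            unsigned int left_in_page = PAGE_SIZE - offset_in_page(linear);

            return min(want, left_in_page);
    }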

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:04 +02:00
Paolo Bonzini 17052f16a5 KVM: emulate: put pointers in the fetch_cache
This simplifies the code a bit, especially the overflow checks.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:03 +02:00
Paolo Bonzini 9506d57de3 KVM: emulate: avoid per-byte copying in instruction fetches
We do not need a memory copying loop anymore in insn_fetch; we
can use a byte-aligned pointer to access instruction fields directly
from the fetch_cache.  This eliminates 50-150 cycles (corresponding to
a 5-10% improvement in performance) from each instruction.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:03 +02:00
Paolo Bonzini 5cfc7e0f5e KVM: emulate: avoid repeated calls to do_insn_fetch_bytes
do_insn_fetch_bytes will only be called once in a given insn_fetch and
insn_fetch_arr, because in fact it will only be called at most twice
for any instruction and the first call is explicit in x86_decode_insn.
This observation lets us hoist the call out of the memory copying loop.
It does not buy performance, because most fetches are one byte long
anyway, but it prepares for the next patch.

The overflow check is tricky, but correct.  Because do_insn_fetch_bytes
has already been called once, we know that fc->end is at least 15.  So
it is okay to subtract the number of bytes we want to read.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:02 +02:00
Paolo Bonzini 285ca9e948 KVM: emulate: speed up do_insn_fetch
Hoist the common case up from do_insn_fetch_byte to do_insn_fetch,
and prime the fetch_cache in x86_decode_insn.  This helps a bit the
compiler and the branch predictor, but above all it lays the
ground for further changes in the next few patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:02 +02:00
Bandan Das 41061cdb98 KVM: emulate: do not initialize memopp
rip_relative is only set if decode_modrm runs, and if you have ModRM
you will also have a memopp.  We can then access memopp unconditionally.
Note that rip_relative cannot be hoisted up to decode_modrm, or you
break "mov $0, xyz(%rip)".

Also, move typecast on "out of range value" of mem.ea to decode_modrm.

Together, all these optimizations save about 50 cycles on each emulated
instruction (4-6%).

Signed-off-by: Bandan Das <bsd@redhat.com>
[Fix immediate operands with rip-relative addressing. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:01 +02:00
Bandan Das 573e80fe04 KVM: emulate: rework seg_override
x86_decode_insn already sets a default for seg_override,
so remove it from the zeroed area. Also replace set/get functions
with direct access to the field.

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:01 +02:00
Bandan Das c44b4c6ab8 KVM: emulate: clean up initializations in init_decode_cache
A lot of initializations are unnecessary as they get set to
appropriate values before actually being used. Optimize
placement of fields in x86_emulate_ctxt

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:00 +02:00
Bandan Das 02357bdc8c KVM: emulate: cleanup decode_modrm
Remove the if conditional - that will help us avoid
an "else initialize to 0" Also, rearrange operators
for slightly better code.

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:00 +02:00
Bandan Das 685bbf4ac4 KVM: emulate: Remove ctxt->intercept and ctxt->check_perm checks
The same information can be gleaned from ctxt->d and avoids having
to zero/NULL initialize intercept and check_perm

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:14:00 +02:00
Bandan Das 1498507a47 KVM: emulate: move init_decode_cache to emulate.c
Core emulator functions all belong in emulator.c,
x86 should have no knowledge of emulator internals

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:59 +02:00
Paolo Bonzini f5f87dfbc7 KVM: emulate: simplify writeback
The "if/return" checks are useless, because we return X86EMUL_CONTINUE
anyway if we do not return.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:59 +02:00
Paolo Bonzini 54cfdb3e95 KVM: emulate: speed up emulated moves
We can just blindly move all 16 bytes of ctxt->src's value to ctxt->dst.
write_register_operand will take care of writing only the lower bytes.
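
In code this is essentially a fixed-size copy of the operand value (a sketch; the operand field names are taken here as an assumption):

    static int em_mov_sketch(struct x86_emulate_ctxt *ctxt)
    {
            /* Copy the whole 16-byte value; writeback trims it to the real width. */
            memcpy(ctxt->dst.valptr, ctxt->src.valptr, sizeof(ctxt->src.valptr));
            return X86EMUL_CONTINUE;
    }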

Avoiding a call to memcpy (the compiler optimizes it out) gains about
200 cycles on kvm-unit-tests for register-to-register moves, and makes
them about as fast as arithmetic instructions.

We could perhaps get a larger speedup by moving all instructions _except_
moves out of x86_emulate_insn, removing opcode_len, and replacing the
switch statement with an inlined em_mov.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:58 +02:00
Paolo Bonzini d40a6898e5 KVM: emulate: protect checks on ctxt->d by a common "if (unlikely())"
There are several checks for "peculiar" aspects of instructions in both
x86_decode_insn and x86_emulate_insn.  Group them together, and guard
them with a single "if" that lets the processor quickly skip them all.
Make this more effective by adding two more flag bits that say whether the
.intercept and .check_perm fields are valid.  We will reuse these
flags later to avoid initializing fields of the emulate_ctxt struct.

This shaves about 30 cycles off each emulated instruction, which is
approximately a 3% improvement.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:58 +02:00
Paolo Bonzini e24186e097 KVM: emulate: move around some checks
The only purpose of this patch is to make the next patch simpler
to review.  No semantic change.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:57 +02:00
Paolo Bonzini 6addfc4299 KVM: x86: avoid useless set of KVM_REQ_EVENT after emulation
Despite the provisions to emulate up to 130 consecutive instructions, in
practice KVM will emulate just one before exiting handle_invalid_guest_state,
because x86_emulate_instruction always sets KVM_REQ_EVENT.

However, we only need to do this if an interrupt could be injected,
which happens a) if an interrupt shadow bit (STI or MOV SS) has gone
away; b) if the interrupt flag has just been set (other instructions
than STI can set it without enabling an interrupt shadow).

This cuts another 700-900 cycles from the cost of emulating an
instruction (measured on a Sandy Bridge Xeon: 1650-2600 cycles
before the patch on kvm-unit-tests, 925-1700 afterwards).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:57 +02:00
Paolo Bonzini 37ccdcbe07 KVM: x86: return all bits from get_interrupt_shadow
For the next patch we will need to know the full state of the
interrupt shadow; we will then set KVM_REQ_EVENT when one bit
is cleared.

However, right now get_interrupt_shadow only returns the one
corresponding to the emulated instruction, or an unconditional
0 if the emulated instruction does not have an interrupt shadow.
This is confusing and does not allow us to check for cleared
bits as mentioned above.

Clean the callback up, and modify toggle_interruptibility to
match the comment above the call.  As a small result, the
call to set_interrupt_shadow will be skipped in the common
case where int_shadow == 0 && mask == 0.
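
The callback then reports both shadow bits unconditionally, roughly along these lines (a sketch using the uapi bit names; the VMCS field handling is simplified):

    static u32 interrupt_shadow_sketch(u32 interruptibility)
    {
            u32 ret = 0;

            if (interruptibility & GUEST_INTR_STATE_STI)
                    ret |= KVM_X86_SHADOW_INT_STI;
            if (interruptibility & GUEST_INTR_STATE_MOV_SS)
                    ret |= KVM_X86_SHADOW_INT_MOV_SS;
            return ret;
    }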

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:56 +02:00
Paolo Bonzini 98eb2f8b14 KVM: vmx: speed up emulation of invalid guest state
About 25% of the time spent in emulation of invalid guest state
is wasted in checking whether emulation is required for the next
instruction.  However, this almost never changes except when a
segment register (or TR or LDTR) changes, or when there is a mode
transition (i.e. CR0 changes).

In fact, vmx_set_segment and vmx_set_cr0 already modify
vmx->emulation_required (except that the former for some reason
uses |= instead of just an assignment).  So there is no need to
call guest_state_valid in the emulation loop.

Emulation performance test results indicate 1650-2600 cycles
for common instructions, versus 2300-3200 before this patch on
a Sandy Bridge Xeon.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:13:56 +02:00
Matthias Lange 22d48b2d2a KVM: svm: writes to MSR_K7_HWCR generate GPE in guest
Since commit 575203 the MCE subsystem in the Linux kernel for AMD sets bit 18
in MSR_K7_HWCR. Running such a kernel as a guest in KVM on an AMD host results
in a GPE injected into the guest because kvm_set_msr_common returns 1. This
patch fixes this by masking bit 18 from the MSR value desired by the guest.
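
A hedged sketch of the fix (not necessarily the exact upstream hunk; bit 18
is HWCR's McStatusWrEn bit):

    case MSR_K7_HWCR:
            /* Ignore bit 18, which newer AMD-aware guest kernels set,
             * instead of failing the WRMSR. */
            data &= ~(1ULL << 18);
            break;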

Signed-off-by: Matthias Lange <matthias.lange@kernkonzept.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:11:59 +02:00
Nadav Amit 5f7552d4a5 KVM: x86: Pending interrupt may be delivered after INIT
We encountered a scenario in which after an INIT is delivered, a pending
interrupt is delivered, although it was sent before the INIT.  As the SDM
states in section 10.4.7.1, the ISR and the IRR should be cleared after INIT as
KVM does.  This also means that pending interrupts should be cleared.  This
patch clears the pending interrupts upon reset (and INIT); on the same
occasion it clears the pending exceptions, since they may cause a similar issue.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:11:58 +02:00
Jim Mattson 80112c89ed KVM: Synthesize G bit for all segments.
We have noticed that qemu-kvm hangs early in the BIOS when running nested
under some versions of VMware ESXi.

We believe the problem is that KVM assumes the platform preserves
the 'G' bit for every segment register. The SVM specification itemizes the
segment attribute bits that are observed by the CPU, but the (G)ranularity bit
is not one of the bits itemized, for any segment. Though current AMD CPUs keep
track of the (G)ranularity bit for all segment registers other than CS, the
specification does not require it. VMware's virtual CPU may not track the
(G)ranularity bit for any segment register.

Since KVM already synthesizes the (G)ranularity bit for the CS segment, it
should do so for all segments. The patch below does that, and helps get rid of
the hangs. The patch applies on top of Linus' tree.

Signed-off-by: Jim Mattson <jmattson@vmware.com>
Signed-off-by: Alok N Kataria <akataria@vmware.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-11 09:11:56 +02:00
Anton Blanchard f56029410a powerpc/perf: Never program book3s PMCs with values >= 0x80000000
We are seeing a lot of PMU warnings on POWER8:

    Can't find PMC that caused IRQ

Looking closer, the active PMC is 0 at this point and we took a PMU
exception on the transition from negative to 0. Some versions of POWER8
have an issue where they edge-detect rather than level-detect PMC overflows.

A number of places program the PMC with (0x80000000 - period_left),
where period_left can be negative. We can either fix all of these or
just ensure that period_left is always >= 1.

This patch takes the second option.
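
A hedged sketch of that option (not the exact upstream hunk):

    /* Clamp period_left so the value programmed into the PMC can never
     * reach 0x80000000. */
    if (left < 1)
            left = 1;
    val = 0x80000000L - left;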

Cc: <stable@vger.kernel.org>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 13:50:47 +10:00
Guenter Roeck fb43e8477e powerpc: Disable RELOCATABLE for COMPILE_TEST with PPC64
powerpc:allmodconfig has been failing for some time with the following
error.

arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
arch/powerpc/kernel/exceptions-64s.S:1312: Error: attempt to move .org backwards
make[1]: *** [arch/powerpc/kernel/head_64.o] Error 1

A number of attempts to fix the problem by moving around code have been
unsuccessful and resulted in failed builds for some configurations and
the discovery of toolchain bugs.

Fix the problem by disabling RELOCATABLE for COMPILE_TEST builds instead.
While this is less than perfect, it avoids substantial code changes
which would otherwise be necessary just to make COMPILE_TEST builds
happy and might have undesired side effects.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 12:55:09 +10:00
Joel Stanley b50a6c584b powerpc/perf: Clear MMCR2 when enabling PMU
On POWER8 when switching to a KVM guest we set bits in MMCR2 to freeze
the PMU counters. Aside from on boot they are then never reset,
resulting in stuck perf counters for any user in the guest or host.

We now set MMCR2 to 0 whenever enabling the PMU, which provides a sane
state for perf to use the PMU counters under either the guest or the
host.

This was manifesting as a bug with ppc64_cpu --frequency:

    $ sudo ppc64_cpu --frequency
    WARNING: couldn't run on cpu 0
    WARNING: couldn't run on cpu 8
      ...
    WARNING: couldn't run on cpu 144
    WARNING: couldn't run on cpu 152
    min:    18446744073.710 GHz (cpu -1)
    max:    0.000 GHz (cpu -1)
    avg:    0.000 GHz

The command uses a perf counter to measure CPU cycles over a fixed
amount of time, in order to approximate the frequency of the machine.
The counters were returning zero once a guest was started, regardless of
whether it was still running or had been shut down.

By dumping the value of MMCR2, it was observed that once a guest is
running MMCR2 is set to 1s - which stops counters from running:

    $ sudo sh -c 'echo p > /proc/sysrq-trigger'
    CPU: 0 PMU registers, ppmu = POWER8 n_counters = 6
    PMC1:  5b635e38 PMC2: 00000000 PMC3: 00000000 PMC4: 00000000
    PMC5:  1bf5a646 PMC6: 5793d378 PMC7: deadbeef PMC8: deadbeef
    MMCR0: 0000000080000000 MMCR1: 000000001e000000 MMCRA: 0000040000000000
    MMCR2: fffffffffffffc00 EBBHR: 0000000000000000
    EBBRR: 0000000000000000 BESCR: 0000000000000000
    SIAR:  00000000000a51cc SDAR:  c00000000fc40000 SIER:  0000000001000000

This is done unconditionally in book3s_hv_interrupts.S upon entering the
guest, and the original value is only saved/restored if the host has
indicated it was using the PMU. This is okay; however, the user of the
PMU needs to ensure that it is in a defined state when it starts using
it.
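
A hedged sketch of the fix (close to, but not necessarily identical with,
the upstream hunk):

    /* When (re-)enabling the PMU on an architecture 2.07S processor,
     * write a sane MMCR2 so stale freeze bits cannot stick. */
    if (ppmu->flags & PPMU_ARCH_207S)
            mtspr(SPRN_MMCR2, 0);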

Fixes: e05b9b9e5c ("powerpc/perf: Power8 PMU support")
Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 12:55:08 +10:00
Joel Stanley 4d9690dd56 powerpc/perf: Add PPMU_ARCH_207S define
Instead of separate bits for every POWER8 PMU feature, have a single one
for v2.07 of the architecture.

This saves us adding a MMCR2 define for a future patch.

Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 12:55:07 +10:00
Joel Stanley f73128f4f6 powerpc/kvm: Remove redundant save of SIER AND MMCR2
These two registers are already saved in the block above. Aside from
being unnecessary, by the time we get down to the second save location
r8 no longer contains MMCR2, so we are clobbering the saved value with
PMC5.

MMCR2 primarily consists of counter freeze bits. So restoring the value
of PMC5 into MMCR2 will most likely have the effect of freezing
counters.

Fixes: 72cde5a88d ("KVM: PPC: Book3S HV: Save/restore host PMU registers that are new in POWER8")
Cc: stable@vger.kernel.org
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 12:55:07 +10:00
Preeti U Murthy c733cf83bb powerpc/powernv: Check for IRQHAPPENED before sleeping
Commit 8d6f7c5a: "powerpc/powernv: Make it possible to skip the IRQHAPPENED
check in power7_nap()" added code that prevents cpus from checking for
pending interrupts just before entering sleep state, which is wrong. These
interrupts are delivered during the soft irq disabled state of the cpu.

A cpu cannot enter any idle state with pending interrupts because they will
never be serviced until the next time the cpu is woken up by some other
interrupt. It is only then that the pending interrupts are replayed. This can result
in device timeouts or warnings about this cpu being stuck.

This patch fixes this issue by ensuring that cpus check for pending interrupts
just before entering any idle state as long as they are not in the path of split
core operations.

Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 12:55:06 +10:00
Michael Ellerman cd68098bce powerpc: Clean up MMU_FTRS_A2 and MMU_FTR_TYPE_3E
In fb5a515704 "powerpc: Remove platforms/wsp and associated pieces",
we removed the last user of MMU_FTRS_A2. So remove it.

MMU_FTRS_A2 was the last user of MMU_FTR_TYPE_3E, so remove it also.
This leaves some unreachable code in mmu_context_nohash.c, so remove
that also.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 12:55:05 +10:00
Michael Ellerman e623fbf1c4 powerpc/cell: Fix compilation with CONFIG_COREDUMP=n
Commit 046d662f48 "coredump: make core dump functionality optional"
made the coredump optional, but didn't update the spufs code that
depends on it. That leads to build errors such as:

  arch/powerpc/platforms/built-in.o: In function `.spufs_arch_write_note':
  coredump.c:(.text+0x22cd4): undefined reference to `.dump_emit'
  coredump.c:(.text+0x22cf4): undefined reference to `.dump_emit'
  coredump.c:(.text+0x22d0c): undefined reference to `.dump_align'
  coredump.c:(.text+0x22d48): undefined reference to `.dump_emit'
  coredump.c:(.text+0x22e7c): undefined reference to `.dump_skip'

Fix it by adding some ifdefs in the cell code.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-11 12:55:05 +10:00
Tomasz Figa bed7118988 ARM: EXYNOS: Register cpuidle device only on exynos4210 and 5250
Currently, the exynos cpuidle driver works correctly only on exynos4210
and 5250. Trying to use it with just one CPU online on any other exynos
SoCs will lead to system failure, due to unsupported AFTR mode on other
SoCs. This patch fixes the problem by registering the driver only on
supported SoCs and letting others simply use default WFI mode until
support for them is added.

Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-11 08:15:32 +09:00
Jan Beulich d093601be5 x86-32, vdso: Fix vDSO build error due to missing align_vdso_addr()
Relying on static functions used just once to get inlined (and
subsequently have dead code paths eliminated) is wrong: Compilers are
free to decide whether they do this, regardless of optimization level.
With this not happening for vdso_addr() (observed with gcc 4.1.x), an
unresolved reference to align_vdso_addr() causes the build to fail.

[ hpa: vdso_addr() is never actually used on x86-32, as calculate_addr
  in map_vdso() is always false.  It ought to be possible to clean
  this up further, but this fixes the immediate problem. ]

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/53B5863B02000078000204D5@mail.emea.novell.com
Acked-by: Andy Lutomirski <luto@amacapital.net>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-10 16:06:04 -07:00
Arun Kumar K cacaeb8293 ARM: dts: Add clock property for mfc_pd in exynos5420
Add the optional clock property to the mfc_pd for
handling the re-parenting during power domain on/off.

Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Shaik Ameer Basha <shaik.ameer@samsung.com>
Reviewed-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-11 08:04:03 +09:00
Prathyush K c760569d0e ARM: EXYNOS: Add support for clock handling in power domain
While powering on/off a local powerdomain in exynos5 chipsets, the
input clocks to each device gets modified. This behaviour is based
on the SYSCLK_SYS_PWR_REG registers.
E.g. SYSCLK_MFC_SYS_PWR_REG = 0x0, the parent of input clock to MFC
                             (aclk333) gets modified to oscclk
                            = 0x1, no change in clocks.
The recommended value of SYSCLK_SYS_PWR_REG before power gating any
domain is 0x0. So we must also restore the clocks while powering on
a domain every time.

This patch adds the framework for getting the required mux and parent
clocks through a power domain device node. With this patch, while
powering off a domain, the parent is set to oscclk, and while powering back
on, it is reset to the correct parent, as per the recommended
pd on/off sequence.
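
A hedged sketch of the sequence (field names are illustrative, not the
actual driver structures):

    if (power_on) {
            /* restore the real parents saved before power-off */
            for (i = 0; i < pd->nr_clks; i++)
                    clk_set_parent(pd->clk[i], pd->saved_parent[i]);
    } else {
            /* park every domain clock on oscclk before power-off */
            for (i = 0; i < pd->nr_clks; i++) {
                    pd->saved_parent[i] = clk_get_parent(pd->clk[i]);
                    clk_set_parent(pd->clk[i], pd->oscclk);
            }
    }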

Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Signed-off-by: Arun Kumar K <arun.kk@samsung.com>
Signed-off-by: Shaik Ameer Basha <shaik.ameer@samsung.com>
Reviewed-by: Tomasz Figa <t.figa@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-11 08:03:19 +09:00
Jan Beulich 9f88b906b4 x86-64, vdso: Fix vDSO build breakage due to empty .rela.dyn
Certain ld versions (observed with 2.20.0) put an empty .rela.dyn
section into shared object files, breaking the assumption on the number
of sections to be copied to the final output. Simply discard any empty
SHT_REL and SHT_RELA sections to address this.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/53B5861E02000078000204D1@mail.emea.novell.com
Acked-by: Andy Lutomirski <luto@amacapital.net>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-10 15:59:56 -07:00
Bruno Prémont 20cde69402 x86, ia64: Move EFI_FB vga_default_device() initialization to pci_vga_fixup()
Commit b4aa016305 ("efifb: Implement vga_default_device() (v2)") added
efifb vga_default_device() so EFI systems that do not load shadow VBIOS or
set up VGA get a proper value for the boot_vga PCI sysfs attribute on the
corresponding PCI device.

Xorg doesn't detect devices when boot_vga=0, e.g., on some EFI systems such
as MacBookAir2,1.  Xorg detects the GPU and finds the DRI device but then
bails out with "no devices detected".

Note: When vga_default_device() is set boot_vga PCI sysfs attribute
reflects its state.  When unset this attribute is 1 whenever
IORESOURCE_ROM_SHADOW flag is set.

With introduction of sysfb/simplefb/simpledrm efifb is getting obsolete
while having native drivers for the GPU also makes selecting sysfb/efifb
optional.

Remove the efifb implementation of vga_default_device() and initialize
vgaarb's vga_default_device() with the PCI GPU that matches boot
screen_info in pci_fixup_video().

[bhelgaas: remove unused "dev" in efifb_setup()]
Fixes: b4aa016305 ("efifb: Implement vga_default_device() (v2)")
Tested-by: Anibal Francisco Martinez Cortina <linuxkid.zeuz@gmail.com>
Signed-off-by: Bruno Prémont <bonbons@linux-vserver.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Matthew Garrett <matthew.garrett@nebula.com>
CC: stable@vger.kernel.org	# v3.5+
2014-07-10 16:48:48 -06:00
Olof Johansson c58a27a49a Fixes for omaps for the -rc series. It's mostly fixes for clock rates,
restart handling and phy regulators and SATA interconnect data.
 
 Also a few build fixes related to the DSP driver in staging, and trivial
 stuff like removal of broken and soon to be unused platform data init
 for HDMI audio that would be good to get into the -rc series if not
 too late.

Merge tag 'omap-for-v3.16/fixes-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes

Merge "omap fixes against v3.16-rc4" from Tony Lindgren:

Fixes for omaps for the -rc series. It's mostly fixes for clock rates,
restart handling and phy regulators and SATA interconnect data.

Also a few build fixes related to the DSP driver in staging, and trivial
stuff like removal of broken and soon to be unused platform data init
for HDMI audio that would be good to get into the -rc series if not
too late.

* tag 'omap-for-v3.16/fixes-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
  ARM: OMAP2+: Remove non working OMAP HDMI audio initialization
  ARM: dts: Fix TI CPSW Phy mode selection on IGEP COM AQUILA.
  ARM: dts: am335x-evmsk: Enable the McASP FIFO for audio
  ARM: dts: am335x-evm: Enable the McASP FIFO for audio
  ARM: OMAP2+: Make GPMC skip disabled devices
  ARM: OMAP2+: create dsp device only on OMAP3 SoCs
  ARM: dts: dra7-evm: Make VDDA_1V8_PHY supply always on
  ARM: DRA7/AM43XX: fix header definition for omap44xx_restart
  ARM: OMAP2+: clock/dpll: fix _dpll_test_fint arithmetics overflow
  ARM: DRA7: hwmod: Add SYSCONFIG for usb_otg_ss
  ARM: DRA7: hwmod: Fixup SATA hwmod
  ARM: OMAP3: PRM/CM: Add back macros used by TI DSP/Bridge driver
  ARM: dts: dra7xx-clocks: Fix the l3 and l4 clock rates

Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-10 13:47:51 -07:00
Linus Torvalds 9ab6e6e7db Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
 "This push fixes an error in sha512_ssse3 that leads to incorrect
  output as well as a memory leak in caam_jr when the module is
  unloaded"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: caam - fix memleak in caam_jr module
  crypto: sha512_ssse3 - fix byte count to bit count conversion
2014-07-10 11:30:57 -07:00
Michael Brown c7fb93ec51 x86/efi: Include a .bss section within the PE/COFF headers
The PE/COFF headers currently describe only the initialised-data
portions of the image, and result in no space being allocated for the
uninitialised-data portions.  Consequently, the EFI boot stub will end
up overwriting unexpected areas of memory, with unpredictable results.

Fix by including a .bss section in the PE/COFF headers (functionally
equivalent to the init_size field in the bzImage header).

Signed-off-by: Michael Brown <mbrown@fensystems.co.uk>
Cc: Thomas Bächler <thomas@archlinux.org>
Cc: Josh Boyer <jwboyer@fedoraproject.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-10 14:21:39 +01:00
David Hildenbrand 6352e4d2dd KVM: s390: implement KVM_(S|G)ET_MP_STATE for user space state control
This patch
- adds s390 specific MP states to linux headers and documents them
- implements the KVM_{SET,GET}_MP_STATE ioctls
- enables KVM_CAP_MP_STATE
- allows user space to control the VCPU state on s390.

If user space sets the VCPU state using the ioctl KVM_SET_MP_STATE, we can disable
manual changing of the VCPU state and trust user space to do the right thing.
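
A hedged user-space sketch of the new interface (error handling omitted;
vcpu_fd is assumed to be an open VCPU file descriptor):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int stop_vcpu(int vcpu_fd)
    {
            struct kvm_mp_state mp = { .mp_state = KVM_MP_STATE_STOPPED };

            if (ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp) < 0)
                    return -1;
            /* read the state back; mp.mp_state now reflects the VCPU */
            return ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp);
    }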

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-07-10 14:11:17 +02:00
David Hildenbrand 7a42fdc20f KVM: s390: remove __cpu_is_stopped and expose is_vcpu_stopped
The function "__cpu_is_stopped" is not used any more. Let's remove it and
expose the function "is_vcpu_stopped" instead, which is actually what we want.

This patch also converts an open coded check for CPUSTAT_STOPPED to
is_vcpu_stopped().

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-07-10 14:09:49 +02:00
David Hildenbrand 32f5ff63ff KVM: s390: move finalization of SIGP STOP orders to kvm_s390_vcpu_stop
Let's move the finalization of SIGP STOP and SIGP STOP AND STORE STATUS orders to
the point where the VCPU is actually stopped.

This change is needed to prepare for a user space driven VCPU state change. The
action_bits may only be cleared when setting the cpu state to STOPPED while
holding the local irq lock.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-07-10 14:09:44 +02:00
David Hildenbrand 7dfc63cf97 KVM: s390: allow only one SIGP STOP (AND STORE STATUS) at a time
A SIGP STOP (AND STORE STATUS) order is complete as soon as the VCPU has been
stopped. This patch makes sure that only one SIGP STOP (AND STORE STATUS) may
be pending at a time (as defined by the architecture). If the action_bits are
still set, a SIGP STOP has been issued but not completed yet. The VCPU is busy
for further SIGP STOP orders.

Also set the CPUSTAT_STOP_INT after the action_bits variable has been modified
(the same order that is used when injecting a KVM_S390_SIGP_STOP from
userspace).

Both changes are needed in preparation for a user space driven VCPU state change
(to avoid race conditions).

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-07-10 14:09:34 +02:00
Mark Rutland da57a369d3 arm64: Enable TEXT_OFFSET fuzzing
The arm64 Image header contains a text_offset field which bootloaders
are supposed to read to determine the offset (from a 2MB aligned "start
of memory" per booting.txt) at which to load the kernel. The offset is
not well respected by bootloaders at present, and due to the lack of
variation there is little incentive to support it. This is unfortunate
for the sake of future kernels where we may wish to vary the text offset
(even zeroing it).

This patch adds options to arm64 to enable fuzz-testing of text_offset.
CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
16-byte aligned value in the range [0..2MB) upon a build of the
kernel. It is recommended that distribution kernels enable randomization
to test bootloaders such that any compliance issues can be fixed early.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:36:58 +01:00
Mark Rutland a2c1d73b94 arm64: Update the Image header
Currently the kernel Image is stripped of everything past the initial
stack, and at runtime the memory is initialised and used by the kernel.
This makes the effective minimum memory footprint of the kernel larger
than the size of the loaded binary, though bootloaders have no mechanism
to identify how large this minimum memory footprint is. This makes it
difficult to choose safe locations to place both the kernel and other
binaries required at boot (DTB, initrd, etc), such that the kernel won't
clobber said binaries or other reserved memory during initialisation.

Additionally when big endian support was added the image load offset was
overlooked, and is currently of an arbitrary endianness, which makes it
difficult for bootloaders to make use of it. It seems that bootloaders
aren't respecting the image load offset at present anyway, and are
assuming that offset 0x80000 will always be correct.

This patch adds an effective image size to the kernel header which
describes the amount of memory from the start of the kernel Image binary
which the kernel expects to use before detecting memory and handling any
memory reservations. This can be used by bootloaders to choose suitable
locations to load the kernel and/or other binaries such that the kernel
will not clobber any memory unexpectedly. As before, memory reservations
are required to prevent the kernel from clobbering these locations
later.

Both the image load offset and the effective image size are forced to be
little-endian regardless of the native endianness of the kernel to
enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
which wish to make use of the load offset can inspect the effective
image size field for a non-zero value to determine if the offset is of a
known endianness. To enable software to determine the endianness of the
kernel as may be required for certain use-cases, a new flags field (also
little-endian) is added to the kernel header to export this information.

The documentation is updated to clarify these details. To discourage
future assumptions regarding the value of text_offset, the value at this
point in time is removed from the main flow of the documentation (though
kept as a compatibility note). Some minor formatting issues in the
documentation are also corrected.
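
A hedged sketch of the resulting 64-byte header layout (see the arm64
booting documentation for the authoritative description; image_size and
flags are stored little-endian):

    struct arm64_image_header {
            __le32  code0;          /* executable code */
            __le32  code1;          /* executable code */
            __le64  text_offset;    /* image load offset, little endian */
            __le64  image_size;     /* effective image size, little endian */
            __le64  flags;          /* kernel flags, little endian */
            __le64  res2, res3, res4;       /* reserved */
            __le32  magic;          /* "ARM\x64" */
            __le32  res5;           /* reserved */
    };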

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:36:40 +01:00
Mark Rutland bd00cd5f8c arm64: place initial page tables above the kernel
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.

To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbounded requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.

This patch moves the initial page table to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as to not
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:36:12 +01:00
Mark Rutland 909a4069da arm64: head.S: remove unnecessary function alignment
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.

Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.

This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:35:56 +01:00
Catalin Marinas ebe6152e72 arm64: Cast KSTK_(EIP|ESP) to unsigned long
This is for similarity with thread_saved_(pc|sp) and to avoid some
compiler warnings in the audit code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 11:37:40 +01:00
AKASHI Takahiro 875cbf3e46 arm64: Add audit support
On AArch64, audit is supported through generic lib/audit.c and
compat_audit.c, and so this patch adds arch specific definitions required.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 11:06:00 +01:00
AKASHI Takahiro 5701ede884 arm64: audit: Add audit hook in syscall_trace_enter/exit()
This patch adds auditing functions on entry to or exit from
every system call invocation.

Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 11:06:00 +01:00
Catalin Marinas f3e5c847ec arm64: Add __NR_* definitions for compat syscalls
This patch adds __NR_* definitions to asm/unistd32.h, moves the
__NR_compat_* definitions to asm/unistd.h and removes all the explicit
unistd32.h includes apart from the one building the compat syscall
table. The aim is to have the compat __NR_* definitions available but
without colliding with the native syscall definitions (required by
lib/compat_audit.c to avoid duplicating the audit header files between
native and compat).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 11:02:40 +01:00
Larry Bassel 6c81fe7925 arm64: enable context tracking
Make calls to ct_user_enter when the kernel is exited
and ct_user_exit when the kernel is entered (in el0_da,
el0_ia, el0_svc, el0_irq and all of the "error" paths).

These macros expand to function calls which will only work
properly if el0_sync and related code has been rearranged
(in a previous patch of this series).

The calls to ct_user_exit are made after hw debugging has been
enabled (enable_dbg_and_irq).

The call to ct_user_enter is made at the beginning of the
kernel_exit macro.

This patch is based on earlier work by Kevin Hilman.
Save/restore optimizations were also done by Kevin.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 10:10:21 +01:00
Larry Bassel 6ab6463aeb arm64: adjust el0_sync so that a function can be called
To implement the context tracker properly on arm64,
a function call needs to be made after debugging and
interrupts are turned on, but before the lr is changed
to point to ret_to_user(). If the function call
is made after the lr is changed the function will not
return to the correct place.

For similar reasons, defer the setting of x0 so that
it doesn't need to be saved around the function call
(save far_el1 in x26 temporarily instead).

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 10:10:20 +01:00
Geert Uytterhoeven f1a1b63529 m68k: Fix boot regression on machines with RAM at non-zero
My enhancement to store the initial mapping size for later reuse in commit
486df8bc46 ("m68k: Increase initial mapping
to 8 or 16 MiB if possible") broke booting on machines where RAM doesn't
start at address zero.

Use pc-relative addressing to fix this.

Reported-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Andreas Schwab <schwab@linux-m68k.org>
2014-07-10 09:58:26 +02:00
Nadav Amit 98eff52ab5 KVM: x86: Fix lapic.c debug prints
In two cases lapic.c does not use the apic_debug macro correctly. This patch
fixes them.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-09 18:09:57 +02:00
Tomasz Grabiec 0d3da0d26e KVM: x86: fix TSC matching
I've observed kvmclock being marked as unstable on a modern
single-socket system with a stable TSC and qemu-1.6.2 or qemu-2.0.0.

The culprit was failure in TSC matching because of overflow of
kvm_arch::nr_vcpus_matched_tsc in case there were multiple TSC writes
in a single synchronization cycle.

Turns out that qemu does multiple TSC writes during init, below is the
evidence of that (qemu-2.0.0):

The first one:

 0xffffffffa08ff2b4 : vmx_write_tsc_offset+0xa4/0xb0 [kvm_intel]
 0xffffffffa04c9c05 : kvm_write_tsc+0x1a5/0x360 [kvm]
 0xffffffffa04cfd6b : kvm_arch_vcpu_postcreate+0x4b/0x80 [kvm]
 0xffffffffa04b8188 : kvm_vm_ioctl+0x418/0x750 [kvm]

The second one:

 0xffffffffa08ff2b4 : vmx_write_tsc_offset+0xa4/0xb0 [kvm_intel]
 0xffffffffa04c9c05 : kvm_write_tsc+0x1a5/0x360 [kvm]
 0xffffffffa090610d : vmx_set_msr+0x29d/0x350 [kvm_intel]
 0xffffffffa04be83b : do_set_msr+0x3b/0x60 [kvm]
 0xffffffffa04c10a8 : msr_io+0xc8/0x160 [kvm]
 0xffffffffa04caeb6 : kvm_arch_vcpu_ioctl+0xc86/0x1060 [kvm]
 0xffffffffa04b6797 : kvm_vcpu_ioctl+0xc7/0x5a0 [kvm]

 #0  kvm_vcpu_ioctl at /build/buildd/qemu-2.0.0+dfsg/kvm-all.c:1780
 #1  kvm_put_msrs at /build/buildd/qemu-2.0.0+dfsg/target-i386/kvm.c:1270
 #2  kvm_arch_put_registers at /build/buildd/qemu-2.0.0+dfsg/target-i386/kvm.c:1909
 #3  kvm_cpu_synchronize_post_init at /build/buildd/qemu-2.0.0+dfsg/kvm-all.c:1641
 #4  cpu_synchronize_post_init at /build/buildd/qemu-2.0.0+dfsg/include/sysemu/kvm.h:330
 #5  cpu_synchronize_all_post_init () at /build/buildd/qemu-2.0.0+dfsg/cpus.c:521
 #6  main at /build/buildd/qemu-2.0.0+dfsg/vl.c:4390

The third one:

 0xffffffffa08ff2b4 : vmx_write_tsc_offset+0xa4/0xb0 [kvm_intel]
 0xffffffffa04c9c05 : kvm_write_tsc+0x1a5/0x360 [kvm]
 0xffffffffa090610d : vmx_set_msr+0x29d/0x350 [kvm_intel]
 0xffffffffa04be83b : do_set_msr+0x3b/0x60 [kvm]
 0xffffffffa04c10a8 : msr_io+0xc8/0x160 [kvm]
 0xffffffffa04caeb6 : kvm_arch_vcpu_ioctl+0xc86/0x1060 [kvm]
 0xffffffffa04b6797 : kvm_vcpu_ioctl+0xc7/0x5a0 [kvm]

 #0  kvm_vcpu_ioctl at /build/buildd/qemu-2.0.0+dfsg/kvm-all.c:1780
 #1  kvm_put_msrs  at /build/buildd/qemu-2.0.0+dfsg/target-i386/kvm.c:1270
 #2  kvm_arch_put_registers  at /build/buildd/qemu-2.0.0+dfsg/target-i386/kvm.c:1909
 #3  kvm_cpu_synchronize_post_reset  at /build/buildd/qemu-2.0.0+dfsg/kvm-all.c:1635
 #4  cpu_synchronize_post_reset  at /build/buildd/qemu-2.0.0+dfsg/include/sysemu/kvm.h:323
 #5  cpu_synchronize_all_post_reset () at /build/buildd/qemu-2.0.0+dfsg/cpus.c:512
 #6  main  at /build/buildd/qemu-2.0.0+dfsg/vl.c:4482

The fix is to count each vCPU only once when matched, so that
nr_vcpus_matched_tsc holds the size of the matched set. This is
achieved by reusing generation counters. Every vCPU with
this_tsc_generation == cur_tsc_generation is in the matched set. The
match set is cleared by setting cur_tsc_generation to a value which no
other vCPU is set to (by incrementing it).

I needed to bump up the counter size from u8 to u64 to ensure it never
overflows. Otherwise in cases TSC is not written the same number of
times on each vCPU the counter could overflow and incorrectly indicate
some vCPUs as being in the matched set. This scenario seems unlikely
but I'm not sure if it can be disregarded.
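
A hedged, simplified sketch of the counting scheme (standalone illustration,
not the kvm_write_tsc() code itself):

    #include <stdbool.h>
    #include <stdint.h>

    struct tsc_sync {
            uint64_t cur_generation;
            unsigned int nr_matched;
    };

    static void account_tsc_write(struct tsc_sync *s,
                                  uint64_t *vcpu_generation, bool matched)
    {
            if (!matched) {
                    s->cur_generation++;    /* start a new matched set */
                    s->nr_matched = 0;
            } else if (*vcpu_generation != s->cur_generation) {
                    s->nr_matched++;        /* first match for this vCPU */
            }
            *vcpu_generation = s->cur_generation;
    }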

Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-09 18:09:57 +02:00
Jan Kiszka 6cbc5f5a80 KVM: nSVM: Set correct port for IOIO interception evaluation
Obtaining the port number from DX is bogus as a) there are immediate
port accesses and b) user space may have changed the register content
while processing the PIO access. Forward the correct value from the
instruction emulator instead.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-09 18:09:56 +02:00
Jan Kiszka 6493f1574e KVM: nSVM: Fix IOIO size reported on emulation
The access size of an in/ins is reported in dst_bytes, and that of
out/outs in src_bytes.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-09 18:09:56 +02:00
Jan Kiszka 9bf418335e KVM: nSVM: Fix IOIO bitmap evaluation
First, kvm_read_guest returns 0 on success. And then we need to take the
access size into account when testing the bitmap: intercept if any of the
bits corresponding to the access are set.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-09 18:09:55 +02:00
Jan Kiszka 62baf44cad KVM: nSVM: Do not report CLTS via SVM_EXIT_WRITE_CR0 to L1
CLTS only changes TS which is not monitored by selected CR0
interception. So skip any attempt to translate WRITE_CR0 to
CR0_SEL_WRITE for this instruction.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-09 18:09:55 +02:00
Laura Abbott c0c264ae51 arm64: Add CONFIG_CC_STACKPROTECTOR
arm64 currently lacks support for -fstack-protector. Add
similar functionality to arm to detect stack corruption.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-09 12:23:48 +01:00
Zi Shen Lim 4e6f708409 arm64: topology: add MPIDR-based detection
Create cpu topology based on MPIDR. When hardware sets MPIDR to sane
values, this method will always work. Therefore it should also work well
as the fallback method. [1]

When we have multiple processing elements in the system, we create
the cpu topology by mapping each affinity level (from lowest to highest)
to threads (if they exist), cores, and clusters.

[1] http://www.spinics.net/lists/arm-kernel/msg317445.html

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Zi Shen Lim <zlim@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-09 12:22:40 +01:00
Marc Zyngier 021f653791 irqchip: gic-v3: Initial support for GICv3
The Generic Interrupt Controller (version 3) offers services that are
similar to GICv2, with a number of additional features:
- Affinity routing based on the CPU MPIDR (ARE)
- System register for the CPU interfaces (SRE)
- Support for more than 8 CPUs
- Locality-specific Peripheral Interrupts (LPIs)
- Interrupt Translation Services (ITS)

This patch adds preliminary support for GICv3 with ARE and SRE,
non-secure mode only. It relies on higher exception levels to grant ARE
and SRE access.

Support for LPI and ITS will be added at a later time.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Zi Shen Lim <zlim@broadcom.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Reviewed-by: Yun Wu <wuyun.wu@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Tirumalesh Chalamarla<tchalamarla@cavium.com>
Tested-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/1404140510-5382-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-07-08 22:11:47 +00:00
Colin Cross fa2ec3ea10 arm64: implement TASK_SIZE_OF
include/linux/sched.h implements TASK_SIZE_OF as TASK_SIZE if it
is not set by the architecture headers.  TASK_SIZE uses the
current task to determine the size of the virtual address space.
On a 64-bit kernel this will cause reading /proc/pid/pagemap of a
64-bit process from a 32-bit process to return EOF when it reads
past 0xffffffff.

Implement TASK_SIZE_OF exactly the same as TASK_SIZE with
test_tsk_thread_flag instead of test_thread_flag.
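
A hedged sketch of the resulting arm64 definition:

    #define TASK_SIZE_OF(tsk) \
            (test_tsk_thread_flag(tsk, TIF_32BIT) ? TASK_SIZE_32 : TASK_SIZE_64)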

Cc: stable@vger.kernel.org
Signed-off-by: Colin Cross <ccross@android.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-08 17:30:59 +01:00
Mark Salter bec7cedc8a arm64: export __cpu_{clear,copy}_user_page functions
The __cpu_clear_user_page() and __cpu_copy_user_page() functions
are not currently exported. This prevents modules from using
clear_user_page() and copy_user_page().

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-08 17:30:51 +01:00
Ezequiel Garcia a728b97742 ARM: mvebu: Fix coherency bus notifiers by using separate notifiers
Currently, the coherency fabric support registers two bus notifiers;
one for platform, one for pci bus types, with the same notifier block.
However, this is illegal and can cause serious issues: the notifier
block is also a link in the notifier list and cannot be inserted twice.

This commit fixes this by using different notifier blocks (with the same
notifier callback) to set the platform and pci bus types notifiers.
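
A hedged sketch of the fix (identifier names are illustrative):

    /* One notifier_block per bus type, both using the same callback, so
     * each block is only ever linked into a single notifier list. */
    static struct notifier_block mvebu_hwcc_nb = {
            .notifier_call = mvebu_hwcc_notifier,
    };
    static struct notifier_block mvebu_hwcc_pci_nb = {
            .notifier_call = mvebu_hwcc_notifier,
    };
    /* ...then register mvebu_hwcc_nb with platform_bus_type and
     * mvebu_hwcc_pci_nb with pci_bus_type via bus_register_notifier(). */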

Fixes: b0063aad5d ("ARM: mvebu: use hardware I/O coherency also for PCI devices")
Reported-by: Paolo Pisati <p.pisati@gmail.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Link: https://lkml.kernel.org/r/1404826657-6977-1-git-send-email-ezequiel.garcia@free-electrons.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-07-08 13:53:53 +00:00
Nitesh Narayan Lal 6f39da1cad crypto: dts - Addition of missing SEC compatible property in c29x device tree
The driver is compatible with SEC version 4.0, which was missing from the
device tree, with the result that the caam driver doesn't get probed. Since
SEC is backward compatible with older versions, this patch adds those
missing versions to the c29x device tree.

Signed-off-by: Nitesh Narayan Lal <b44382@freescale.com>
Signed-off-by: Vakul Garg <b16394@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-07-08 21:06:32 +08:00
Gregory CLEMENT 0d461e1b08 ARM: mvebu: Fix the operand list in the inline asm of armada_370_xp_pmsu_idle_enter
In the inline asm part of the function armada_370_xp_pmsu_idle_enter()
a register was passed as an input operand. The intent here was to let the
compiler choose this register so it could do the optimization it
needed.

However, an input operand is not supposed to be modified by the inline
asm code, which can lead to improperly generated instructions.

In some cases the compiler chose to reuse that same register to store
the return value. Since the assembly part modified the register, a wrong
value could be returned.

The fix is to use a clobber. Thanks to this, the compiler will know
that the value of this register will be modified.
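
A hedged illustration of the difference (not the mvebu code itself):

    static inline int pmsu_idle_enter_sketch(void)
    {
            int ret;

            asm volatile(
                    "mov    r3, #1\n\t"     /* the asm body modifies r3... */
                    "wfi\n\t"
                    "mov    %0, r3\n\t"
                    : "=r" (ret)
                    :
                    : "r3", "memory");      /* ...so r3 must be a clobber,
                                               not an input operand */

            return ret;
    }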

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Link: https://lkml.kernel.org/r/1404483736-16938-1-git-send-email-gregory.clement@free-electrons.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-07-08 12:25:39 +00:00
Paolo Bonzini bb18b526a9 Patch queue for 3.16 - 2014-07-08
A few bug fixes to make 3.16 work well with KVM on PowerPC:
 
   - Fix ppc32 module builds
   - Fix Little Endian hosts
   - Fix Book3S HV HPTE lookup with huge pages in guest
   - Fix BookE lock leak

Merge tag 'signed-for-3.16' of git://github.com/agraf/linux-2.6 into kvm-master

Patch queue for 3.16 - 2014-07-08

A few bug fixes to make 3.16 work well with KVM on PowerPC:

  - Fix ppc32 module builds
  - Fix Little Endian hosts
  - Fix Book3S HV HPTE lookup with huge pages in guest
  - Fix BookE lock leak
2014-07-08 12:08:58 +02:00
Jyri Sarha 1d29a0722f ARM: OMAP2+: Remove non working OMAP HDMI audio initialization
This code is not working currently and it can be removed. There is a
conflict in sharing resources with the actual HDMI driver and with
the ASoC HDMI audio DAI driver.

Signed-off-by: Jyri Sarha <jsarha@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
2014-07-08 01:08:44 -07:00
Bandan Das 9242b5b60d KVM: x86: Check for nested events if there is an injectable interrupt
With commit b6b8a1451f that introduced
vmx_check_nested_events, checks for injectable interrupts happen
at different points in time for L1 and L2 that could potentially
cause a race. The regression occurs because KVM_REQ_EVENT is always
set when nested_run_pending is set even if there's no pending interrupt.
Consequently, there could be a small window when check_nested_events
returns without exiting to L1, but an interrupt comes through soon
after and incorrectly gets injected to L2 by inject_pending_event.
Fix this by adding a call to check for nested events too when a check
for an injectable interrupt returns true.

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-07-08 10:06:42 +02:00
Tony Lindgren aa3b465b5e Some miscellaneous fixes for OMAP clock code, DRA7xx device data, and
PRCM code (when DSPBridge is used) for v3.16-rc.
 
 Basic build, boot, and PM test logs are available here:
 
 http://www.pwsan.com/omap/testlogs/prcm-a-v3.16-rc/20140706174258/

Merge tag 'for-v3.16-rc/omap-fixes-b' of git://git.kernel.org/pub/scm/linux/kernel/git/pjw/omap-pending into omap-for-v3.16/fixes

Some miscellaneous fixes for OMAP clock code, DRA7xx device data, and
PRCM code (when DSPBridge is used) for v3.16-rc.

Basic build, boot, and PM test logs are available here:

http://www.pwsan.com/omap/testlogs/prcm-a-v3.16-rc/20140706174258/
2014-07-08 01:03:54 -07:00
Shawn Guo 63288b721a ARM: imx: fix shared gate clock
Let's say clock A and B are two gate clocks that share the same register
bit in hardware.  Therefore they are registered as shared gate clocks
with imx_clk_gate2_shared().

In a scenario where only clock A is enabled by clk_enable(A) while B is
not used, the shared gate will be unexpectedly disabled in hardware.
It happens because clk_enable(A) increments the share_count from 0 to 1,
while clock B appears unused to the clock core, and therefore the core function
will just disable B by calling clk->ops->disable() directly.  The
consequence of that call is share_count is decremented to 0 and the gate
is disabled in hardware, even though clock A is still in use.

The patch fixes the issue by initializing the share_count from the hardware
state and returning the enable state based on share_count from the
.is_enabled() hook, in case it's a shared gate.

While at it, add a check in clk_gate2_disable() to ensure it's never
called with a zero share_count.
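
A hedged, simplified sketch of the scheme (hw_gate_on()/hw_gate_off() are
hypothetical helpers standing in for the register write):

    static unsigned int share_count;

    static void shared_gate_enable(void)
    {
            if (share_count++ == 0)
                    hw_gate_on();           /* first user: open the gate */
    }

    static void shared_gate_disable(void)
    {
            if (WARN_ON(share_count == 0))
                    return;                 /* never disable below zero */
            if (--share_count == 0)
                    hw_gate_off();          /* last user: close the gate */
    }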

Reported-by: Fabio Estevam <fabio.estevam@freescale.com>
Fixes: f9f28cdf21 ("ARM: imx: add shared gate clock support")
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-07 21:21:11 -07:00
Olof Johansson 069c70cb07 Samsung fixes-2 for v3.16
 - fix the check for SMP configuration by using CONFIG_SMP,
    not just SMP
 - fix the number of pwm-cells for exynos4 pwm
 - fix ftrace for exynos_mct
 - register exynos_mct for stable udelay
 - fix secondary boot addr for secure mode for exynos SoCs

Merge tag 'samsung-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into fixes

Merge "Samsung fixes-2 for v3.16" from Kukjin Kim:

- fix the check for SMP configuration by using CONFIG_SMP,
  not just SMP
- fix the number of pwm-cells for exynos4 pwm
- fix ftrace for exynos_mct
- register exynos_mct for stable udelay
- fix secondary boot addr for secure mode for exynos SoCs

* tag 'samsung-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
  ARM: EXYNOS: Update secondary boot addr for secure mode
  clocksource: exynos_mct: Register the timer for stable udelay
  clocksource: exynos_mct: Fix ftrace
  ARM: dts: fix pwm-cells in pwm node for exynos4
  ARM: EXYNOS: Fix the check for non-smp configuration

Signed-off-by: Olof Johansson <olof@lixom.net>
2014-07-07 21:10:17 -07:00
Tushar Behera be0b420ad6 ARM: dts: Update the parent for Audss clocks in Exynos5420
Currently CLK_FOUT_EPLL was set as one of the parents of AUDSS mux.
As per the user manual, it should be CLK_MAU_EPLL.

The problem surfaced when the bootloader in Peach-pit board set
the EPLL clock as the parent of AUDSS mux. While booting the kernel,
we used to get a system hang during late boot if CLK_MAU_EPLL was
disabled.

Signed-off-by: Tushar Behera <tushar.b@samsung.com>
Signed-off-by: Shaik Ameer Basha <shaik.ameer@samsung.com>
Reported-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Tested-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-08 08:31:41 +09:00
Sachin Kamat 35e75645f1 ARM: EXYNOS: Update secondary boot addr for secure mode
Almost all Exynos-series SoCs that run in secure mode don't need an
additional offset for every CPU, with Exynos4412 being the only
exception.

Tested on Origen-Quad (Exynos4412) and Arndale-Octa (Exynos5420).

While at it, fix the coding style (space around *).

Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Tested-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
2014-07-08 08:03:49 +09:00