The dma_contiguous_remap() function clears existing section maps using
the wrong size (PGDIR_SIZE instead of PMD_SIZE). This is a bug which
does not affect non-LPAE systems, where PGDIR_SIZE and PMD_SIZE are the same.
On LPAE systems, however, this bug causes the kernel to hang at this point.
This fix has been tested on both LPAE and non-LPAE kernel builds.
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
This patch adds support for CMA to dma-mapping subsystem for ARM
architecture. By default a global CMA area is used, but specific devices
are allowed to have their own private memory areas if required (these can
be created with the dma_declare_contiguous() function during board
initialisation).
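As an illustration only (not part of the patch), a board file could reserve
such a private area from its reserve hook roughly like this, assuming the
dma_declare_contiguous() prototype added by this series; the device and
sizes below are made up:

#include <linux/init.h>
#include <linux/printk.h>
#include <linux/platform_device.h>
#include <linux/dma-contiguous.h>
#include <asm/sizes.h>

extern struct platform_device example_camera_device;   /* hypothetical */

static void __init example_board_reserve(void)
{
        /* 16 MiB private area; base = 0, limit = 0 lets CMA place it */
        if (dma_declare_contiguous(&example_camera_device.dev,
                                   16 * SZ_1M, 0, 0))
                pr_warn("example: CMA area reservation failed\n");
}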
Contiguous memory areas reserved for DMA are remapped with 2-level page
tables on boot. Once a buffer is requested, a low memory kernel mapping
is updated to match the requested memory access type.
GFP_ATOMIC allocations are performed from a special pool which is created
early during boot. This way page attributes do not need to be remapped at
allocation time.
CMA has been enabled unconditionally for ARMv6+ systems.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
This patch adds a complete implementation of the DMA-mapping API for
devices which have IOMMU support.
This implementation tries to optimize dma address space usage by remapping
all possible physical memory chunks into a single dma address space chunk.
DMA address space is managed on top of the bitmap stored in the
dma_iommu_mapping structure stored in device->archdata. Platform setup
code has to initialize parameters of the dma address space (base address,
size, allocation precision order) with arm_iommu_create_mapping() function.
To reduce the size of the bitmap, all allocations are aligned to the
specified order of base 4 KiB pages.
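For illustration, a hedged sketch of that platform-side setup, assuming the
arm_iommu_create_mapping()/arm_iommu_attach_device() interfaces introduced
here; the base address, size and bus below are arbitrary:

#include <linux/err.h>
#include <linux/platform_device.h>
#include <asm/dma-iommu.h>
#include <asm/sizes.h>

static int example_setup_iommu_dma(struct device *dev)
{
        struct dma_iommu_mapping *mapping;

        /* 128 MiB of DMA address space at 0x80000000, 4 KiB granularity */
        mapping = arm_iommu_create_mapping(&platform_bus_type,
                                           0x80000000, SZ_128M, 0);
        if (IS_ERR(mapping))
                return PTR_ERR(mapping);

        return arm_iommu_attach_device(dev, mapping);
}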
The dma_alloc_* functions allocate physical memory in chunks, each obtained
with alloc_pages(), to avoid failing if physical memory is fragmented. In
the worst case the allocated buffer is composed of 4 KiB page chunks.
The dma_map_sg() function minimizes the total number of dma address space
chunks by merging physical memory chunks into one larger dma address space
chunk. If the requested chunk (scatter list entry) boundaries match
physical page boundaries, most dma_map_sg() calls will result in only one
chunk being created in the dma address space.
dma_map_page() simply creates a mapping for the given page(s) in the dma
address space.
All dma functions also perform the required cache operations, like their
counterparts for the arm linear physical memory mapping.
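A usage sketch (not taken from the patch) of a driver mapping a scatterlist
through these operations; with page-aligned entries the whole list typically
collapses into a single DMA address range:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_buffer(struct device *dev,
                              struct scatterlist *sgl, int nents)
{
        struct scatterlist *sg;
        int i, count;

        count = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
        if (!count)
                return -ENOMEM;

        /* usually a single merged chunk when entries are page aligned */
        for_each_sg(sgl, sg, count, i)
                dev_dbg(dev, "chunk %d: dma 0x%llx len %u\n", i,
                        (unsigned long long)sg_dma_address(sg),
                        sg_dma_len(sg));

        return count;
}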
This patch contains code and fixes kindly provided by:
- Krishna Reddy <vdumpa@nvidia.com>,
- Andrzej Pietrasiewicz <andrzej.p@samsung.com>,
- Hiroshi DOYU <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch converts dma_alloc/free/mmap_{coherent,writecombine}
functions to use generic alloc/free/mmap methods from dma_map_ops
structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been
introduced to implement the writecombine methods.
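For example, a writecombine allocation now goes through the generic
attrs-based entry point; this is a sketch against the struct dma_attrs
interface of that time, not code from the patch:

#include <linux/gfp.h>
#include <linux/dma-mapping.h>

static void *example_alloc_wc(struct device *dev, size_t size,
                              dma_addr_t *handle)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
        /* equivalent to the former dma_alloc_writecombine() */
        return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
}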
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch just performs a global cleanup in DMA mapping implementation
for ARM architecture. Some of the tiny helper functions have been moved
to the caller code, some have been merged together.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch removes dma bounce hooks from the common dma mapping
implementation on ARM architecture and creates a separate set of
dma_map_ops for dma bounce devices.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch converts all dma_sg methods to be generic (independent of the
current DMA mapping implementation for ARM architecture). All dma sg
operations are now implemented on top of respective
dma_map_page/dma_sync_single_for* operations from dma_map_ops structure.
Before this patch there were custom methods for all scatter/gather
related operations. They iterated over the whole scatter list and called
cache related operations directly (which in turn checked if we use dma
bounce code or not and called respective version). This patch changes
them not to use such shortcut. Instead it provides similar loop over
scatter list and calls methods from the device's dma_map_ops structure.
This enables us to use device dependent implementations of cache related
operations (direct linear or dma bounce) depending on the provided
dma_map_ops structure.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch modifies dma-mapping implementation on ARM architecture to
use common dma_map_ops structure and asm-generic/dma-mapping-common.h
helpers.
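Conceptually, the common helpers dispatch through the per-device operations
roughly as below; this is a simplified sketch, the real helpers also do
dma-debug bookkeeping:

#include <linux/dma-mapping.h>

static inline dma_addr_t example_dma_map_page(struct device *dev,
                                              struct page *page,
                                              unsigned long offset,
                                              size_t size,
                                              enum dma_data_direction dir)
{
        struct dma_map_ops *ops = get_dma_ops(dev);

        /* NULL attrs: a plain mapping without special attributes */
        return ops->map_page(dev, page, offset, size, dir, NULL);
}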
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch removes the need for the offset parameter in dma bounce
functions. This is required to let the dma-mapping framework on the ARM
architecture use the common, generic dma_map_ops based dma-mapping
helpers.
Background and more detailed explanation:
The dma_*_range_* functions have been available since the early days of
the dma mapping api. They are the correct way of doing partial syncs on a
buffer (usually used by network device drivers). This patch changes only
the internal implementation of the dma bounce functions to let them
tunnel through the dma_map_ops structure. The driver api stays unchanged,
so drivers are still obliged to call the dma_*_range_* functions to keep
the code clean and easy to understand.
The only drawback of this patch is reduced detection of dma api abuse.
Let us consider the following code:
dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE);
Without the patch such code fails, because dma bounce code is unable
to find the bounce buffer for the given dma_address. After the patch
the above sync call will be equivalent to:
dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);
which succeeds.
I don't consider this a real problem, because DMA API abuse should be
caught by the debug_dma_* function family. This patch lets us simplify
the internal low-level implementation without changing the driver-visible
API.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code
easier to read.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Replace all calls to printk with the pr_* function family.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
A zero value for prot_sect in the memory types table implies that
section mappings should never be created for the memory type in question.
This is checked for in alloc_init_section().
With LPAE, we set a bit to mask access flag faults for kernel mappings.
This breaks the aforementioned (!prot_sect) check in alloc_init_section().
This patch fixes this bug by first checking for a non-zero
prot_sect before setting the PMD_SECT_AF flag.
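The shape of the fix is roughly the following; this is an illustrative
fragment with a made-up helper name, the real change lives in the kernel's
mem_types table setup:

/*
 * Only OR in the access flag where section mappings are allowed, so the
 * prot_sect == 0 "never use sections" marker is preserved.
 */
static void example_apply_af(struct mem_type *types, int n)
{
        int i;

        for (i = 0; i < n; i++) {
                if (types[i].prot_sect)
                        types[i].prot_sect |= PMD_SECT_AF;
        }
}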
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
SoCs using the Tauros2 cache need to disable and resume the Tauros2
cache across SoC suspend/resume.
Signed-off-by: Chao Xie <chao.xie@marvell.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
When ARCH_SUSPEND_POSSIBLE is enabled, definitions of
cpu_mohawk_do_suspend and cpu_mohawk_do_resume are needed.
Signed-off-by: Chao Xie <chao.xie@marvell.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
This patch removes support for ARMv3 CPUs, which haven't worked properly
for quite some time (see the FIXME comment in arch/arm/mm/fault.c). The
only V3 parts left are the cache model for ARMv3, which is needed for some
odd reason by ARM740T CPUs, and being able to build with -march=armv3,
which is required for the RiscPC platform due to its bus structure.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The cacheflush syscall can fail for two reasons:
(1) The arguments are invalid (nonsensical address range or no VMA)
(2) The region generates a translation fault on a VIPT or PIPT cache
This patch allows do_cache_op to return an error code to userspace in
the case of the above. The various coherent_user_range implementations
are modified to return 0 in the case of VIVT caches or -EFAULT in the
case of an abort on v6/v7 cores.
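From userspace the change shows up as a meaningful return value; a sketch,
assuming the __ARM_NR_cacheflush number exposed by the ARM <asm/unistd.h>:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/unistd.h>

static int cacheflush_checked(void *start, void *end)
{
        if (syscall(__ARM_NR_cacheflush, start, end, 0) != 0) {
                perror("cacheflush");   /* e.g. EFAULT on a bad range */
                return -errno;
        }
        return 0;
}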
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
WARNING: vmlinux.o(.text+0x111b8): Section mismatch in reference
from the function arm_memory_present() to the function
.init.text:memory_present()
The function arm_memory_present() references
the function __init memory_present().
This is often because arm_memory_present lacks a __init
annotation or the annotation of memory_present is wrong.
WARNING: arch/arm/mm/built-in.o(.text+0x1edc): Section mismatch
in reference from the function alloc_init_pud() to the function
.init.text:alloc_init_section()
The function alloc_init_pud() references
the function __init alloc_init_section().
This is often because alloc_init_pud lacks a __init
annotation or the annotation of alloc_init_section is wrong.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PL310 errata #588369 and #727915 require writes to the debug registers
of the cache controller to work around known problems. Writing these
registers on L220 may cause deadlock, so ensure that we only perform
this operation when we identify a PL310 at probe time.
Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The workaround for PL310 erratum #753970 can lead to deadlock on systems
with an L220 cache controller.
This patch makes the workaround effective only when the cache controller
is identified as a PL310 at probe time.
Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Erratum #326103 ("FSR write bit incorrect on a SWP to read-only memory")
only affects the ARM 1136 core prior to r1p0. The workaround
disassembles the faulting instruction to determine whether it was a read
or write access on all v6 cores.
An issue has been reported on the ARM 11MPCore whereby loading the
faulting instruction may happen in parallel with that page being
unmapped, resulting in a deadlock due to the lack of TLB broadcasting
in hardware:
http://lists.infradead.org/pipermail/linux-arm-kernel/2012-March/091561.html
This patch limits the workaround so that it is only used on affected
cores, which are known to be UP only. Other v6 cores can rely on the
FSR to indicate the access type correctly.
Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The current_mm variable was used to store the new mm between the
switch_mm() and switch_to() calls where an IPI to reset the context
could have set the wrong mm. Since the interrupts are disabled during
context switch, there is no need for this variable; current->active_mm
already points to the current mm when interrupts are re-enabled.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since the ASIDs must be unique to an mm across all the CPUs in a system,
the __new_context() function needs to broadcast a context reset event to
all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
cannot be issued with interrupts disabled and ARM had to define
__ARCH_WANT_INTERRUPTS_ON_CTXSW.
This patch changes the check_context() function to
check_and_switch_context() called from switch_mm(). In case of
ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and the
interrupts are disabled, it defers the __new_context() and
cpu_switch_mm() calls to the post-lock switch hook where the interrupts
are enabled. Setting the reserved TTBR0 was also moved to
check_and_switch_context() from cpu_v7_switch_mm().
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.
This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.
This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: add LPAE support]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently when ThumbEE is not enabled (!CONFIG_ARM_THUMBEE) the ThumbEE
register states are not saved/restored at context switch. The default state
of the ThumbEE Ctrl register (TEECR) allows userspace accesses to the
ThumbEE Base Handler register (TEEHBR). This can cause unexpected behaviour
when people use ThumbEE on !CONFIG_ARM_THUMBEE kernels, as well as allowing
covert communication, e.g. between userspace tasks running inside chroot
jails.
This patch sets up TEECR in order to prevent user-space access to TEEHBR
when !CONFIG_ARM_THUMBEE. In this case, tasks are sent SIGILL if they try to
access TEEHBR.
Cc: stable@vger.kernel.org
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 94e5a85b ("ARM: earlier initialization of vectors page") made it
the responsibility of paging_init to initialise the vectors page.
This patch adds a call to early_trap_init for the !CONFIG_MMU case,
placing the vectors at CONFIG_VECTORS_BASE.
Cc: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The description for the CPU_HIGH_VECTOR Kconfig option for nommu builds
doesn't make any sense.
This patch fixes up the trivial grammatical error.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 8878a539ff, which I authored, made the page fault handler
retryable as well as interruptible. It introduced a mistake in the way
the tsk->[maj|min]_flt counters get incremented for VM_FAULT_ERROR:
if VM_FAULT_ERROR is returned in the fault flags by handle_mm_fault,
either maj_flt or min_flt still gets incremented. This is wrong, as in
the case of a VM_FAULT_ERROR we need to skip ahead to the error
handling code in do_page_fault.
Add a check after the call to __do_page_fault() for
(fault & VM_FAULT_ERROR).
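In simplified form the added guard looks like this; a sketch, not the
literal diff (the real block also drives the retry logic):

fault = __do_page_fault(mm, addr, fsr, flags, tsk);

/* Skip the fault accounting entirely if the fault failed outright. */
if (!(fault & VM_FAULT_ERROR) && (flags & FAULT_FLAG_ALLOW_RETRY)) {
        if (fault & VM_FAULT_MAJOR)
                tsk->maj_flt++;
        else
                tsk->min_flt++;
}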
Signed-off-by: Kautuk Consul <consul.kautuk@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rob Herring has done a sweeping change cleaning up all of the mach/io.h includes,
moving some of the oft-repeated macros to a common location and removing a bunch of
boilerplate. This is another step closer to a common zImage for multiple platforms.
Merge tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull "ARM: cleanups of io includes" from Olof Johansson:
"Rob Herring has done a sweeping change cleaning up all of the
mach/io.h includes, moving some of the oft-repeated macros to a common
location and removing a bunch of boilerplate. This is another step
closer to a common zImage for multiple platforms."
Fix up various fairly trivial conflicts (<mach/io.h> removal vs changes
around it, tegra localtimer.o is *still* gone, yadda-yadda).
* tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (29 commits)
ARM: tegra: Include assembler.h in sleep.S to fix build break
ARM: pxa: use common IOMEM definition
ARM: dma-mapping: convert ARCH_HAS_DMA_SET_COHERENT_MASK to kconfig symbol
ARM: __io abuse cleanup
ARM: create a common IOMEM definition
ARM: iop13xx: fix missing declaration of iop13xx_init_early
ARM: fix ioremap/iounmap for !CONFIG_MMU
ARM: kill off __mem_pci
ARM: remove bunch of now unused mach/io.h files
ARM: make mach/io.h include optional
ARM: clps711x: remove unneeded include of mach/io.h
ARM: dove: add explicit include of dove.h to addr-map.c
ARM: at91: add explicit include of hardware.h to uncompressor
ARM: ep93xx: clean-up mach/io.h
ARM: tegra: clean-up mach/io.h
ARM: orion5x: clean-up mach/io.h
ARM: davinci: remove unneeded mach/io.h include
[media] davinci: remove includes of mach/io.h
ARM: OMAP: Remove remaining includes for mach/io.h
ARM: msm: clean-up mach/io.h
...
Pull more ARM updates from Russell King.
This got a fair number of conflicts with the <asm/system.h> split, but
also with some other sparse-irq and header file include cleanups. They
all looked pretty trivial, though.
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (59 commits)
ARM: fix Kconfig warning for HAVE_BPF_JIT
ARM: 7361/1: provide XIP_VIRT_ADDR for no-MMU builds
ARM: 7349/1: integrator: convert to sparse irqs
ARM: 7259/3: net: JIT compiler for packet filters
ARM: 7334/1: add jump label support
ARM: 7333/2: jump label: detect %c support for ARM
ARM: 7338/1: add support for early console output via semihosting
ARM: use set_current_blocked() and block_sigmask()
ARM: exec: remove redundant set_fs(USER_DS)
ARM: 7332/1: extract out code patch function from kprobes
ARM: 7331/1: extract out insn generation code from ftrace
ARM: 7330/1: ftrace: use canonical Thumb-2 wide instruction format
ARM: 7351/1: ftrace: remove useless memory checks
ARM: 7316/1: kexec: EOI active and mask all interrupts in kexec crash path
ARM: Versatile Express: add NO_IOPORT
ARM: get rid of asm/irq.h in asm/prom.h
ARM: 7319/1: Print debug info for SIGBUS in user faults
ARM: 7318/1: gic: refactor irq_start assignment
ARM: 7317/1: irq: avoid NULL check in for_each_irq_desc loop
ARM: 7315/1: perf: add support for the Cortex-A7 PMU
...
Merge tag 'split-asm_system_h-for-linus-20120328' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-asm_system
Pull "Disintegrate and delete asm/system.h" from David Howells:
"Here are a bunch of patches to disintegrate asm/system.h into a set of
separate bits to relieve the problem of circular inclusion
dependencies.
I've built all the working defconfigs from all the arches that I can
and made sure that they don't break.
The reason for these patches is that I recently encountered a circular
dependency problem that came about when I produced some patches to
optimise get_order() by rewriting it to use ilog2().
This uses bitops - and on the SH arch asm/bitops.h drags in
asm-generic/get_order.h by a circuitous route involving asm/system.h.
The main difficulty seems to be asm/system.h. It holds a number of
low level bits with no/few dependencies that are commonly used (eg.
memory barriers) and a number of bits with more dependencies that
aren't used in many places (eg. switch_to()).
These patches break asm/system.h up into the following core pieces:
(1) asm/barrier.h
Move memory barriers here. This is already done for MIPS and Alpha.
(2) asm/switch_to.h
Move switch_to() and related stuff here.
(3) asm/exec.h
Move arch_align_stack() here. Other process execution related bits
could perhaps go here from asm/processor.h.
(4) asm/cmpxchg.h
Move xchg() and cmpxchg() here as they're full word atomic ops and
frequently used by atomic_xchg() and atomic_cmpxchg().
(5) asm/bug.h
Move die() and related bits.
(6) asm/auxvec.h
Move AT_VECTOR_SIZE_ARCH here.
Other arch headers are created as needed on a per-arch basis."
Fixed up some conflicts from other header file cleanups and moving code
around that has happened in the meantime, so David's testing is somewhat
weakened by that. We'll find out anything that got broken and fix it..
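As a rough illustration of the end result, a file that used to pull in
<asm/system.h> now includes only the pieces it needs:

#include <asm/barrier.h>        /* mb(), rmb(), wmb() */
#include <asm/cmpxchg.h>        /* xchg(), cmpxchg() */
#include <asm/switch_to.h>      /* switch_to() */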
* tag 'split-asm_system_h-for-linus-20120328' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-asm_system: (38 commits)
Delete all instances of asm/system.h
Remove all #inclusions of asm/system.h
Add #includes needed to permit the removal of asm/system.h
Move all declarations of free_initmem() to linux/mm.h
Disintegrate asm/system.h for OpenRISC
Split arch_align_stack() out from asm-generic/system.h
Split the switch_to() wrapper out of asm-generic/system.h
Move the asm-generic/system.h xchg() implementation to asm-generic/cmpxchg.h
Create asm-generic/barrier.h
Make asm-generic/cmpxchg.h #include asm-generic/cmpxchg-local.h
Disintegrate asm/system.h for Xtensa
Disintegrate asm/system.h for Unicore32 [based on ver #3, changed by gxt]
Disintegrate asm/system.h for Tile
Disintegrate asm/system.h for Sparc
Disintegrate asm/system.h for SH
Disintegrate asm/system.h for Score
Disintegrate asm/system.h for S390
Disintegrate asm/system.h for PowerPC
Disintegrate asm/system.h for PA-RISC
Disintegrate asm/system.h for MN10300
...
Disintegrate asm/system.h for ARM.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org
Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
Print debug information on user faults for SIGBUS if user_debug = 16
in the kernel command line.
Reference: <1327333344-26340-1-git-send-email-javi.merino@arm.com>
Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This line is irritating and wrong when modules are not supported, so
don't show it then.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull PCI changes (including maintainer change) from Jesse Barnes:
"This pull has some good cleanups from Bjorn and Yinghai, as well as
some more code from Yinghai to better handle resource re-allocation
when enabled.
There's also a new initcall_debug feature from Arjan which will print
out quirk timing information to help identify slow quirks for fixing
or refinement (Yinghai sent in a few patches to do just that once the
new debug code landed).
Beyond that, I'm handing off PCI maintainership to Bjorn Helgaas.
He's been a core PCI and Linux contributor for some time now, and has
kindly volunteered to take over. I just don't feel I have the time
for PCI review and work that it deserves lately (I've taken on some
other projects), and haven't been as responsive lately as I'd like, so
I approached Bjorn asking if he'd like to manage things. He's going
to give it a try, and I'm confident he'll do at least as well as I
have in keeping the tree managed, patches flowing, and keeping things
stable."
Fix up some fairly trivial conflicts due to other cleanups (mips device
resource fixup cleanups clashing with list handling cleanup, ppc iseries
removal clashing with pci_probe_only cleanup etc)
* 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci: (112 commits)
PCI: Bjorn gets PCI hotplug too
PCI: hand PCI maintenance over to Bjorn Helgaas
unicore32/PCI: move <asm-generic/pci-bridge.h> include to asm/pci.h
sparc/PCI: convert devtree and arch-probed bus addresses to resource
powerpc/PCI: allow reallocation on PA Semi
powerpc/PCI: convert devtree bus addresses to resource
powerpc/PCI: compute I/O space bus-to-resource offset consistently
arm/PCI: don't export pci_flags
PCI: fix bridge I/O window bus-to-resource conversion
x86/PCI: add spinlock held check to 'pcibios_fwaddrmap_lookup()'
PCI / PCIe: Introduce command line option to disable ARI
PCI: make acpihp use __pci_remove_bus_device instead
PCI: export __pci_remove_bus_device
PCI: Rename pci_remove_behind_bridge to pci_stop_and_remove_behind_bridge
PCI: Rename pci_remove_bus_device to pci_stop_and_remove_bus_device
PCI: print out PCI device info along with duration
PCI: Move "pci reassigndev resource alignment" out of quirks.c
PCI: Use class for quirk for usb host controller fixup
PCI: Use class for quirk for ti816x class fixup
PCI: Use class for quirk for intel e100 interrupt fixup
...
Pull kmap_atomic cleanup from Cong Wang.
It's been in -next for a long time, and it gets rid of the (no longer
used) second argument to k[un]map_atomic().
Fix up a few trivial conflicts in various drivers, and do an "evil
merge" to catch some new uses that have come in since Cong's tree.
* 'kmap_atomic' of git://github.com/congwang/linux: (59 commits)
feature-removal-schedule.txt: schedule the deprecated form of kmap_atomic() for removal
highmem: kill all __kmap_atomic() [swarren@nvidia.com: highmem: Fix ARM build break due to __kmap_atomic rename]
drbd: remove the second argument of k[un]map_atomic()
zcache: remove the second argument of k[un]map_atomic()
gma500: remove the second argument of k[un]map_atomic()
dm: remove the second argument of k[un]map_atomic()
tomoyo: remove the second argument of k[un]map_atomic()
sunrpc: remove the second argument of k[un]map_atomic()
rds: remove the second argument of k[un]map_atomic()
net: remove the second argument of k[un]map_atomic()
mm: remove the second argument of k[un]map_atomic()
lib: remove the second argument of k[un]map_atomic()
power: remove the second argument of k[un]map_atomic()
kdb: remove the second argument of k[un]map_atomic()
udf: remove the second argument of k[un]map_atomic()
ubifs: remove the second argument of k[un]map_atomic()
squashfs: remove the second argument of k[un]map_atomic()
reiserfs: remove the second argument of k[un]map_atomic()
ocfs2: remove the second argument of k[un]map_atomic()
ntfs: remove the second argument of k[un]map_atomic()
...
Pull trivial tree from Jiri Kosina:
"It's indeed trivial -- mostly documentation updates and a bunch of
typo fixes from Masanari.
There are also several linux/version.h include removals from Jesper."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (101 commits)
kcore: fix spelling in read_kcore() comment
constify struct pci_dev * in obvious cases
Revert "char: Fix typo in viotape.c"
init: fix wording error in mm_init comment
usb: gadget: Kconfig: fix typo for 'different'
Revert "power, max8998: Include linux/module.h just once in drivers/power/max8998_charger.c"
writeback: fix fn name in writeback_inodes_sb_nr_if_idle() comment header
writeback: fix typo in the writeback_control comment
Documentation: Fix multiple typo in Documentation
tpm_tis: fix tis_lock with respect to RCU
Revert "media: Fix typo in mixer_drv.c and hdmi_drv.c"
Doc: Update numastat.txt
qla4xxx: Add missing spaces to error messages
compiler.h: Fix typo
security: struct security_operations kerneldoc fix
Documentation: broken URL in libata.tmpl
Documentation: broken URL in filesystems.tmpl
mtd: simplify return logic in do_map_probe()
mm: fix comment typo of truncate_inode_pages_range
power: bq27x00: Fix typos in comment
...
[swarren@nvidia.com: highmem: Fix ARM build break due to __kmap_atomic rename]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Cong Wang <amwang@redhat.com>
With commit 4fe7ef3a08 (ARM: provide runtime hook for ioremap/iounmap),
builds with !CONFIG_MMU were broken. Rename the nommu __iounmap to
__arm_iounmap and add arch_ioremap_caller and arch_iounmap. It is not
expected that these need to be overridden for !CONFIG_MMU, so setting
the function pointers has no effect in this case.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
We have a compile-time override of ioremap and iounmap, but a run-time
override is needed for multi-platform builds. This adds an extra function
pointer check, but ioremap is not performance critical. The option for
compile-time selection remains.
The caller variant is used here to provide correct caller information as
ARM can only support level 0 for __builtin_return_address.
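A sketch of what such a hook can look like; the pointer names are those
mentioned in the related nommu fix above, while the exact signatures here
are an assumption and the wrapper name is made up:

#include <linux/io.h>

/* Run-time overridable hooks; platforms may repoint these early. */
void __iomem *(*arch_ioremap_caller)(unsigned long, size_t,
                                     unsigned int, void *);
void (*arch_iounmap)(volatile void __iomem *);

static void __iomem *example_ioremap(unsigned long phys, size_t size,
                                     unsigned int mtype)
{
        /* level 0 is the only __builtin_return_address level ARM supports */
        return arch_ioremap_caller(phys, size, mtype,
                                   __builtin_return_address(0));
}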
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Erratum #743622 affects all r2 variants of the Cortex-A9 processor, so
ensure that the workaround is applied regardless of the revision.
Cc: <stable@vger.kernel.org>
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The PCI core provides a pci_flags definition (currently __weak), so drop
the arm definition in favor of that.
We EXPORT_SYMBOL(pci_flags) as arm did previously. I'm dubious about
this: no other architecture exports it, and I didn't see any modules in
the tree that reference it.
CC: Rob Herring <rob.herring@calxeda.com>
CC: Russell King <linux@arm.linux.org.uk>
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Bootup with lockdep enabled has been broken on v7 since b46c0f7465
("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR").
This is because v7_setup (which is called very early during boot) calls
v7_flush_dcache_all, and the save_and_disable_irqs added by that patch
ends up attempting to call into lockdep C code (trace_hardirqs_off())
when we are in no position to execute it (no stack, MMU off).
Fix this by using a notrace variant of save_and_disable_irqs. The code
already uses the notrace variant of restore_irqs.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch expands the Kconfig dependencies for ARM_LPAE to not allow
enabling when architectures other than ARMv7 are built into the kernel.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no need to include the header twice, so get rid of the
duplicate.
Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
armv7's flush_cache_all() flushes caches via set/way. To
determine the cache attributes (line size, number of sets,
etc.) the assembly first writes the CSSELR register to select a
cache level and then reads the CCSIDR register. The CSSELR register
is banked per-cpu and is used to determine which cache level CCSIDR
reads. If the task is migrated between when the CSSELR is written and
the CCSIDR is read the CCSIDR value may be for an unexpected cache
level (for example L1 instead of L2) and incorrect cache flushing
could occur.
Disable interrupts across the write and read so that the correct
cache attributes are read and used for the cache flushing
routine. We disable interrupts instead of disabling preemption
because the critical section is only 3 instructions and we want
to call v7_flush_dcache_all from __v7_setup, which doesn't have a
full kernel stack with a struct thread_info.
This fixes a problem we see in scm_call() when flush_cache_all()
is called from preemptible context and sometimes the L2 cache is
not properly flushed out.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This reverts commit 3c424f3598.
Joachim Eastwood reports:
| "ARM: 7304/1: ioremap: fix boundary check when reusing static mapping"
| Commit: 3c424f3598 in Linus master
|
| Breaks booting on my custom AT91RM9200 board.
| There isn't any error messages or anything that indicates what goes
| wrong it just stops after; Uncompressing Linux... done, booting the
| kernel.
|
| Reverting it makes my board boot again.
and further debugging reveals:
ioremap: pfn=fffff phys=fffff000 offset=400 size=1000
ioremap: area c3ffdfc0: phys_addr=200000 pfn=200 size=4000
ioremap: found: addr fef74000 => fed73000 => fed73400
Clearly, an area for pfn 0x200, 16K can't ever satisfy a request for pfn
0xfffff. This happens because the changed if statement becomes:
if (0x00200 > 0xfffff ||
0xfffff000 + 0x400 + 0x1000-1 > 0x00200000 + 0x4000-1)
and therefore:
if (0x00200 > 0xfffff ||
0x000003ff > 0x00203fff)
The if condition fails, and so we _believe_ that the SRAM mapping fits
our request. Clearly that's totally bogus.
Moreover, the original premise of the 'fix' patch was wrong:
| The condition checking boundaries of the requested and existing
| mappings didn't take in-page offset into consideration though,
| which lead to obscure and hard to debug problems when requested
| mapping crossed end of the static one.
as the code immediately above this loop does:
size = PAGE_ALIGN(offset + size);
so 'size' already contains the requested offset into the page.
So, revert the broken 'fix'.
Acked-by: Nicolas Pitre <nico@linaro.org>
Since commit 576d2f2525 "ARM: add generic ioremap optimization by
reusing static mappings", ioremap() has tried to reuse an existing
static mapping when possible.
The condition checking the boundaries of the requested and existing
mappings didn't take the in-page offset into consideration though,
which led to obscure and hard-to-debug problems when the requested
mapping crossed the end of the static one.
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Get rid of the TOP_PTE() macro as we now have proper accessor functions
instead. No one should be directly referencing the top pte table
anymore.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide get_top_pte() to complement set_top_pte(), moving the only
users of TOP_PTE to arch/arm/mm/mm.h.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A number of places establish a PTE in our top page table and
immediately flush the TLB. Rather than having this at every callsite,
provide an inline function for this purpose.
This changes some global tlb flushes to be local; each time we setup
one of these mappings, we always do it with preemption disabled which
would prevent us migrating to another CPU.
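The helper is roughly of this shape (a sketch reconstructed from the
description above, not a verbatim copy of the patch):

static inline void set_top_pte(unsigned long va, pte_t pte)
{
        pte_t *ptep = pte_offset_kernel(top_pmd, va);

        set_pte_ext(ptep, pte, 0);
        /* callers run with preemption disabled, so a local flush suffices */
        local_flush_tlb_kernel_page(va);
}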
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
mk_pte is provided to do this translation for us, so use it rather
than open-coding it in the copypage code.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the TOP_PTE address definitions to one central place so that it's
easy to discover what they're being used for. This helps to ensure
that there are no overlaps.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Initialize the contents of the vectors page immediately after we
allocate the page, but before we map it. This avoids any possible
aliases with other mappings which may need to be flushed after the
page has been mapped irrespective of the cache type.
We follow this later with a flush_cache_all() after all static memory
mappings have been initialized, which ensures that this is safe from
any cache effects.
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a new seqfile for reporting coherent DMA allocations. This contains
the address range, size and the function which was used to allocate
each region, allowing these allocations to be viewed in much the same
way as /proc/vmallocinfo.
The DMA coherent region has limited space, so this allows allocation
failures to be viewed, as well as finding out how much space is being
used.
Make sure this file is only readable by root - same as vmallocinfo - to
prevent information leakage.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On v7, we use the same cache maintenance instructions for data lines
as for unified lines. This was not the case for v6, where HARVARD_CACHE
was defined to indicate the L1 cache topology.
This patch removes the erroneous compile-time check for HARVARD_CACHE in
proc-v7.S, ensuring that we perform I-side invalidation at boot.
Reported-and-Acked-by: Shawn Guo <shawn.guo@linaro.org>
Cc: stable <stable@vger.kernel.org>
Acked-by: Catalin Marinas <Catalin.Marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The merging of commits 1b6ba46b ("ARM: LPAE: MMU setup for the 3-level
page table format") and b4244738 ("ARM: 7202/1: Add Cortex-A7 proc info")
during the merge window ended up putting the Cortex-A7 proc_info into a
code block guarded by !CONFIG_ARM_LPAE. This makes Cortex-A7 platforms
unbootable when LPAE is enabled.
This patch moves the proc_info structure for Cortex-A7 outside of the
guarded block.
Cc: Pawel Moll <pawel.moll@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
To ensure correct alignment of cacheline-aligned data, the maximum
cacheline size needs to be known at compile time.
Since Cortex-A8 and Cortex-A15 have 64-byte cachelines (and it is likely
that there will be future ARMv7 implementations with the same line size)
then it makes sense to assume that CPU_V7 implies a 64-byte L1 cacheline
size. For CPUs with smaller caches, this will result in some harmless
padding but will help with single zImage work and avoid hitting subtle
bugs with misaligned data structures.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
__u32 exists to avoid namespace clashes with userspace programs. It
should not be used outside header files, so convert to use u32 instead.
Also, don't mix uint32_t and __u32 - use the same type throughout the
file for consistency.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 716a3dc200 (ARM: Add arm_memblock_steal() to allocate memory
away from the kernel) added a function which calls memblock_alloc().
This causes a section conflict:
WARNING: vmlinux.o(.text+0xc614): Section mismatch in reference from the function arm_memblock_steal() to the function .init.text:memblock_alloc()
The function arm_memblock_steal() references
the function __init memblock_alloc().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Several platforms are now using the memblock_alloc+memblock_free+
memblock_remove trick to obtain memory which won't be mapped in the
kernel's page tables. Most platforms do this (correctly) in the
->reserve callback. However, OMAP has started to call these functions
outside of this callback, and this is extremely unsafe - memory will
not be unmapped, and could well be given out after memblock is no
longer responsible for its management.
So, provide arm_memblock_steal() to perform this function, and ensure
that it panic()s if it is used inappropriately. Convert everyone
over, including OMAP.
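Usage from a machine's reserve hook then looks roughly like this (a sketch;
the declaration is assumed to live in asm/memblock.h and the names are made
up):

#include <linux/init.h>
#include <asm/memblock.h>
#include <asm/sizes.h>

static phys_addr_t example_fw_base;

static void __init example_machine_reserve(void)
{
        /* 1 MiB, 1 MiB aligned, permanently removed from the kernel's view;
         * panics if called outside the ->reserve callback */
        example_fw_base = arm_memblock_steal(SZ_1M, SZ_1M);
}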
As a result, OMAP with OMAP4_ERRATA_I688 enabled will panic on boot
with this change. Mark this option as BROKEN and make it depend on
BROKEN. OMAP needs to be fixed, or 137d105d50 (ARM: OMAP4: Fix
errata i688 with MPU interconnect barriers.) reverted until such
time it can be fixed correctly.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Many architectures don't want to pull in iomap.c,
so they ended up duplicating pci_iomap from that file.
That function isn't trivial, and we are going to modify it
https://lkml.org/lkml/2011/11/14/183
so the duplication hurts.
This reduces the scope of the problem significantly,
by moving pci_iomap to a separate file and
referencing that from all architectures.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
lib: use generic pci_iomap on all architectures
Many architectures don't want to pull in iomap.c,
so they ended up duplicating pci_iomap from that file.
That function isn't trivial, and we are going to modify it
https://lkml.org/lkml/2011/11/14/183
so the duplication hurts.
This reduces the scope of the problem significantly,
by moving pci_iomap to a separate file and
referencing that from all architectures.
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
alpha: drop pci_iomap/pci_iounmap from pci-noop.c
mn10300: switch to GENERIC_PCI_IOMAP
mn10300: add missing __iomap markers
frv: switch to GENERIC_PCI_IOMAP
tile: switch to GENERIC_PCI_IOMAP
tile: don't panic on iomap
sparc: switch to GENERIC_PCI_IOMAP
sh: switch to GENERIC_PCI_IOMAP
powerpc: switch to GENERIC_PCI_IOMAP
parisc: switch to GENERIC_PCI_IOMAP
mips: switch to GENERIC_PCI_IOMAP
microblaze: switch to GENERIC_PCI_IOMAP
arm: switch to GENERIC_PCI_IOMAP
alpha: switch to GENERIC_PCI_IOMAP
lib: add GENERIC_PCI_IOMAP
lib: move GENERIC_IOMAP to lib/Kconfig
Fix up trivial conflicts due to changes nearby in arch/{m68k,score}/Kconfig
* 'for-linus' of git://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (207 commits)
ARM: 7267/1: Remove BUILD_BUG_ON from asm/bug.h
ARM: 7269/1: mach-sa1100: fix sched_clock breakage
ARM: 7198/1: arm/imx6: add restart support for imx6q
ARM: restart: remove the now empty arch_reset()
ARM: restart: remove comments about adding code to arch_reset()
ARM: restart: lpc32xx & u300: remove unnecessary printk
ARM: restart: plat-samsung: remove plat/reset.h and s5p_reset_hook
ARM: restart: w90x900: use new restart hook
ARM: restart: Versatile Express: use new restart hook
ARM: restart: versatile: use new restart hook
ARM: restart: u300: use new restart hook
ARM: restart: tegra: use new restart hook
ARM: restart: spear: use new restart hook
ARM: restart: shark: use new restart hook
ARM: restart: sa1100: use new restart hook
ARM: 7252/1: restart: S5PV210: use new restart hook
ARM: 7251/1: restart: S5PC100: use new restart hook
ARM: 7250/1: restart: S5P64X0: use new restart hook
ARM: 7266/1: restart: S3C64XX: use new restart hook
ARM: 7265/1: restart: S3C24XX: use new restart hook
...
Fix up trivial conflict in arch/arm/mm/init.c due to removal of
memblock_init() clashing with the movement of the sorting of the meminfo
array.
* 'core-memblock-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (52 commits)
memblock: Reimplement memblock allocation using reverse free area iterator
memblock: Kill early_node_map[]
score: Use HAVE_MEMBLOCK_NODE_MAP
s390: Use HAVE_MEMBLOCK_NODE_MAP
mips: Use HAVE_MEMBLOCK_NODE_MAP
ia64: Use HAVE_MEMBLOCK_NODE_MAP
SuperH: Use HAVE_MEMBLOCK_NODE_MAP
sparc: Use HAVE_MEMBLOCK_NODE_MAP
powerpc: Use HAVE_MEMBLOCK_NODE_MAP
memblock: Implement memblock_add_node()
memblock: s/memblock_analyze()/memblock_allow_resize()/ and update users
memblock: Track total size of regions automatically
powerpc: Cleanup memblock usage
memblock: Reimplement memblock_enforce_memory_limit() using __memblock_remove()
memblock: Make memblock functions handle overflowing range @size
memblock: Reimplement __memblock_remove() using memblock_isolate_range()
memblock: Separate out memblock_isolate_range() from memblock_set_node()
memblock: Kill memblock_init()
memblock: Kill sentinel entries at the end of static region arrays
memblock: Add __memblock_dump_all()
...
Activation conditions for a workaround should not be encoded in the
workaround's direct dependencies if this makes otherwise reasonable
configuration choices impossible.
This patch uses the SMP/UP patching facilities instead to compile
out the workaround if the configuration means that it is definitely
not needed.
This means that configs for buggy silicon can simply select
ARM_ERRATA_751472, without preventing a UP kernel from being built
or duplicating knowledge about when to activate the workaround.
This seems the correct way to do things, because the erratum is a
property of the silicon, irrespective of what the kernel config
happens to be.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Making CACHE_L2X0 depend on (huge list of MACH_ and ARCH_ configs)
is bothersome to maintain and likely to lead to merge conflicts.
This patch moves the knowledge of which platforms have a L2x0 or
PL310 cache controller to the individual machines. To enable this,
a new MIGHT_HAVE_CACHE_L2X0 config option is introduced to allow
machines to indicate that they may have such a cache controller
independently of each other.
Boards/SoCs which cannot reliably operate without the L2 cache
controller support will need to select CACHE_L2X0 directly from
their own Kconfigs instead. This applies to some TrustZone-enabled
boards where Linux runs in the Normal World, for example.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Anton Vorontsov <cbouatmailru@gmail.com>
(for cns3xxx)
Acked-by: Tony Lindgren <tony@atomide.com>
(for omap)
Acked-by: Shawn Guo <shawn.guo@linaro.org>
(for imx)
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
(for exynos)
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
(for imx)
Acked-by: Olof Johansson <olof@lixom.net>
(for tegra)
This patch adds processor info for ARM Ltd. Cortex-A7.
A7 is architecturally identical to A15 so it shares the
same SMP initialization code and hwcaps.
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The only function of memblock_analyze() is now allowing resize of
memblock region arrays. Rename it to memblock_allow_resize() and
update its users.
* The following users remain the same other than renaming.
arm/mm/init.c::arm_memblock_init()
microblaze/kernel/prom.c::early_init_devtree()
powerpc/kernel/prom.c::early_init_devtree()
openrisc/kernel/prom.c::early_init_devtree()
sh/mm/init.c::paging_init()
sparc/mm/init_64.c::paging_init()
unicore32/mm/init.c::uc32_memblock_init()
* In the following users, analyze was used to update total size which
is no longer necessary.
powerpc/kernel/machine_kexec.c::reserve_crashkernel()
powerpc/kernel/prom.c::early_init_devtree()
powerpc/mm/init_32.c::MMU_init()
powerpc/mm/tlb_nohash.c::__early_init_mmu()
powerpc/platforms/ps3/mm.c::ps3_mm_add_memory()
powerpc/platforms/embedded6xx/wii.c::wii_memory_fixups()
sh/kernel/machine_kexec.c::reserve_crashkernel()
* x86/kernel/e820.c::memblock_x86_fill() was directly setting
memblock_can_resize before populating memblock and calling analyze
afterwards. Call memblock_allow_resize() before start populating.
memblock_can_resize is now static inside memblock.c.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: "H. Peter Anvin" <hpa@zytor.com>
memblock_init() initializes arrays for regions and memblock itself;
however, all these can be done with struct initializers and
memblock_init() can be removed. This patch kills memblock_init() and
initializes memblock with struct initializer.
The only difference is that the first dummy entries don't have .nid
set to MAX_NUMNODES initially. This doesn't cause any behavior
difference.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: "H. Peter Anvin" <hpa@zytor.com>
24aa07882b (memblock, x86: Replace memblock_x86_reserve/free_range()
with generic ones) removed arch/x86/include/asm/memblock.h and dropped
its inclusion from include/linux/memblock.h which breaks other
architectures which depended on the generic memblock.h pulling in the
arch specific one.
However, the proper fix isn't adding back the asm inclusion. memblock
doesn't have any arch dependent part and doesn't need arch specific
header file and asm/memblock.h files are either practically empty or
contain mostly unrelated arch specific stuff.
* In microblaze, sh, powerpc, sparc and openrisc, asm/memblock.h is
either empty or just contains unused MEMBLOCK_DBG() macro. Remove
them.
* In arm and unicore32, asm/memblock.h contains arch specific stuff.
Include it directly from its users. It might be a good idea to
rename the header file to avoid confusion.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "H. Peter Anvin" <hpa@zytor.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
This patch adds the ARM_LPAE and ARCH_PHYS_ADDR_T_64BIT Kconfig entries
allowing LPAE support to be compiled into the kernel.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Memory banks living outside of the 32-bit physical address
space do not have a 1:1 pa <-> va mapping and therefore the
__va macro may wrap.
This patch ensures that such banks are marked as highmem so
that the Kernel doesn't try to split them up when it sees that
the wrapped virtual address overlaps the vmalloc space.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
With LPAE, the pgd is a separate page table with entries pointing to the
pmd. The identity_mapping_add() function needs to ensure that the pgd is
populated before populating the pmd level. The do..while blocks now loop
over the pmd in order to have the same implementation for the two page
table formats. The pmd_addr_end() definition has been removed and the
generic one used instead. The pmd clean-up is done in the pgd_free()
function.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With LPAE, TTBRx registers are 64-bit. The ASID is stored in TTBR0
rather than a separate Context ID register. This patch makes the
necessary changes to handle context switching on LPAE.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The DFSR and IFSR register format is different when LPAE is enabled. In
addition, DFSR and IFSR have similar definitions for the fault type.
This modifies the fault code to correctly handle the new format.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds the MMU initialisation for the LPAE page table format.
The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new
proc-v7-3level.S file contains the TTB initialisation, context switch
and PTE setting code with the LPAE. The TTBRx split is based on the
PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit mappings
(supersections) and a few other memory types in mmu.c are conditionally
compiled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch modifies the pgd/pmd/pte manipulation functions to support
the 3-level page table format. Since there is no need for an 'ext'
argument to cpu_set_pte_ext(), this patch conditionally defines a
different prototype for this function when CONFIG_ARM_LPAE.
The patch also introduces the L_PGD_SWAPPER flag to mark pgd entries
pointing to pmd tables pre-allocated in the swapper_pg_dir and avoid
trying to free them at run-time. This flag is 0 with the classic page
table format.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch modifies the proc-v7.S file so that it only contains code
shared between classic MMU and LPAE. The non-common code is factored out
into a separate file.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The FSR structure is different with LPAE and this patch moves the
classic MMU specific definition to a separate fsr-2level.c file that is
included in fault.c. It also moves the fsr_fs and FSR bits to the
fault.h file.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With the arch/arm code conversion to pgtable-nopud.h, the section and
supersection (un|re)map code triggers compiler warnings on UP systems.
This is caused by pmd_offset() being given a pgd_t argument rather than
a pud_t one. This patch makes the necessary conversion with the
assumption that the pud is folded into the pgd. The page table setting
code only loops over the pmd which is enough with the classic page
tables. This code is not compiled when LPAE is enabled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The ARM SMP booting code allocates a temporary set of page tables
containing an identity mapping of the kernel image and provides this
to secondary CPUs for initial booting.
In reality, we only need to include the __turn_mmu_on function in the
identity mapping since the rest of the kernel is executing from virtual
addresses after this point.
This patch adds __turn_mmu_on to the .idmap.text section, allowing the
SMP booting code to use the idmap_pgd directly and not have to populate
its own set of page tables.
As a result of this patch, we can make the identity_mapping_add function
static (since it is only used within mm/idmap.c) and also remove the
identity_mapping_del function. The identity map population is moved to
an early initcall so that it is setup in time for secondary CPU bringup.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
For soft-rebooting a system, it is necessary to map the MMU-off code
with an identity mapping so that execution can continue safely once the
MMU has been switched off.
Currently, switch_mm_for_reboot takes out a 1:1 mapping from 0x0 to
TASK_SIZE during reboot in the hope that the reset code lives at a
physical address corresponding to a userspace virtual address.
This patch modifies the code so that we switch to the idmap_pgd tables,
which contain a 1:1 mapping of the cpu_reset code. This has the
advantage of only remapping the code that we need and also means we
don't need to worry about allocating a pgd from an atomic context in the
case that the physical address of the cpu_reset code aliases with the
virtual space used by the kernel.
Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The CPU reset functions disable the MMU and therefore must be executed
with an identity mapping in place.
This patch places the CPU reset functions into the .idmap.text section,
causing the idmap code to include them as part of the identity mapping.
Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When disabling and re-enabling the MMU, it is necessary to take out an
identity mapping for the code that manipulates the SCTLR in order to
avoid it disappearing from under our feet. This is useful when soft
rebooting and returning from CPU suspend.
This patch allocates a set of page tables during boot and populates them
with an identity mapping for the .idmap.text section. This means that
users of the identity map do not need to manage their own pgd and can
instead annotate their functions with __idmap or, in the case of assembly
code, place them in the correct section.
Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commits d065bd810b ("mm: retry page fault when blocking on disk
transfer") and 37b23e0525 ("x86,mm: make pagefault killable")
introduced changes into the x86 page fault handler to make it
retryable as well as killable.
These changes reduce the mmap_sem hold time, which is crucial
during OOM killer invocation.
Port these changes to ARM.
Without these changes, my ARM board encounters many hang and livelock
scenarios.
After applying this patch, OOM feature performance improves according to
my testing.
Signed-off-by: Kautuk Consul <consul.kautuk@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
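For reference, the retry/killable pattern being ported looks roughly like the
sketch below. This is a simplified illustration, not the exact ARM hunk; the
__do_page_fault() helper and the fsr argument are stand-ins for the
architecture-specific pieces.

	static int do_page_fault_sketch(struct mm_struct *mm, unsigned long addr,
					unsigned int fsr, struct task_struct *tsk)
	{
		unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
		int fault;

	retry:
		down_read(&mm->mmap_sem);
		fault = __do_page_fault(mm, addr, fsr, flags, tsk);

		/*
		 * handle_mm_fault() drops mmap_sem itself when it returns
		 * VM_FAULT_RETRY, so we only decide here whether to bail out
		 * (fatal signal pending) or retry without ALLOW_RETRY.
		 */
		if (fault & VM_FAULT_RETRY) {
			if (fatal_signal_pending(current))
				return 0;
			flags &= ~FAULT_FLAG_ALLOW_RETRY;	/* retry at most once */
			goto retry;
		}

		up_read(&mm->mmap_sem);
		return fault;
	}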
Similar to other architectures, this adds topdown mmap support in user
process address space allocation policy. This allows mmap sizes greater
than 2GB. This support is largely copied from MIPS and the generic
implementations.
The address space randomization is moved into arch_pick_mmap_layout.
Tested on V-Express with ubuntu and a mmap test from here:
https://bugs.launchpad.net/bugs/861296
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
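The layout selection follows the familiar generic/MIPS shape; a sketch is
below. mmap_is_legacy(), mmap_rnd() and mmap_base() are local helpers assumed
here for illustration, not necessarily the exact names used in the patch.

	void arch_pick_mmap_layout(struct mm_struct *mm)
	{
		unsigned long random = mmap_rnd();	/* randomization now lives here */

		if (mmap_is_legacy()) {
			/* classic bottom-up layout */
			mm->mmap_base = TASK_UNMAPPED_BASE + random;
			mm->get_unmapped_area = arch_get_unmapped_area;
		} else {
			/*
			 * Top-down: grow mmaps down from just below the stack,
			 * which allows single mappings larger than 2GB.
			 */
			mm->mmap_base = mmap_base(random);
			mm->get_unmapped_area = arch_get_unmapped_area_topdown;
		}
	}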
ARM copied pci_iomap from generic code, probably to avoid
pulling the rest of iomap.c in. Since that's in
a separate file now, we can reuse the common implementation.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Now that we have all the static mappings from iotable_init() located
in the vmalloc area, it is trivial to optimize ioremap by reusing those
static mappings when the requested physical area fits in one of them,
and so in a generic way for all platforms.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Firstly, there is no need to have a double pointer here as we're only
walking the vmlist and not modifying it.
Secondly, for the same reason, we don't need a write lock but only a
read lock here, since the lock only protects the coherency of the list,
nothing else.
Lastly, the reason for holding a lock is not what the comment says, so
let's remove that misleading piece of information.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
In order to remove the build time variation between different SOCs with
regards to VMALLOC_END, the iotable mappings are now allocated inside
the vmalloc region. This allows for VMALLOC_END to be identical across
all machines.
The value for VMALLOC_END is now set to 0xff000000 which is right where
the consistent DMA area starts.
To accommodate all static mappings on machines with possible highmem usage,
the default vmalloc area size is changed to 240 MB so that VMALLOC_START
is no higher than 0xf0000000 by default.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>
dma_alloc_coherent wants to split pages after allocation in order to
reduce the memory footprint. This does not work well with GFP_COMP
pages, so drop this flag before allocation.
This patch is ported from arch/avr32
(commit 3611553ef9).
[swarren: s/HUGETLB_PAGE/HUGETLBFS/ in comment, minor comment cleanup]
Signed-off-by: Sumit Bhattacharya <sumitb@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
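The core of the change is just masking the flag off before the pages are
allocated; a sketch of the idea, assumed to sit at the top of the ARM
coherent allocation path:

	/*
	 * Compound pages cannot be split after allocation, and
	 * dma_alloc_coherent() splits pages to trim the buffer,
	 * so do not hand __GFP_COMP down to the page allocator.
	 */
	gfp &= ~__GFP_COMP;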
There are already cache type decoding functions, so use those instead
of custom decode code which only works for ARMv6.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 99d1717d (ARM: Add init_consistent_dma_size()) introduces dynamic
allocation of the consistent_pte array. The number of PTEs should be
calculated based on the number of PMD entries rather than PGD, hence the
PMD_SHIFT.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The Kconfig options for the PL310 errata workarounds do not use a
consistent naming scheme for either the config option or the bool
description.
This patch tidies up the options by ensuring that the bool descriptions
are prefixed with "PL310 errata:" and the config options are prefixed
with PL310_ERRATA_, making it much clearer in menuconfig as to what the
workarounds are for.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some upcoming changes must know the VMALLOC_START value, which is based
on high_memory, before bootmem_init() is called.
The best location to set it is in sanity_check_meminfo() where the needed
computation is already done, and in the non-MMU case it is trivial to do
now that the meminfo array is already sorted at that point.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
The meminfo array has to be sorted before sanity_check_meminfo() in
arch/arm/mm/mmu.c is called for it to work properly. This also allows
for a simpler find_limits() in arch/arm/mm/init.c.
The sort is moved to arch/arm/kernel/setup.c because that's where the
meminfo array is populated. Eventually this should be improved upon
to make the memory bank parser a bit more robust against problems
such as overlapping memory ranges.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
setup_mm_for_reboot() doesn't make use of its argument, so remove it.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux: (230 commits)
Revert "tracing: Include module.h in define_trace.h"
irq: don't put module.h into irq.h for tracking irqgen modules.
bluetooth: macroize two small inlines to avoid module.h
ip_vs.h: fix implicit use of module_get/module_put from module.h
nf_conntrack.h: fix up fallout from implicit moduleparam.h presence
include: replace linux/module.h with "struct module" wherever possible
include: convert various register fcns to macros to avoid include chaining
crypto.h: remove unused crypto_tfm_alg_modname() inline
uwb.h: fix implicit use of asm/page.h for PAGE_SIZE
pm_runtime.h: explicitly requires notifier.h
linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h
miscdevice.h: fix up implicit use of lists and types
stop_machine.h: fix implicit use of smp.h for smp_processor_id
of: fix implicit use of errno.h in include/linux/of.h
of_platform.h: delete needless include <linux/module.h>
acpi: remove module.h include from platform/aclinux.h
miscdevice.h: delete unnecessary inclusion of module.h
device_cgroup.h: delete needless include <linux/module.h>
net: sch_generic remove redundant use of <linux/module.h>
net: inet_timewait_sock doesnt need <linux/module.h>
...
Fix up trivial conflicts (other header files, and removal of the ab3550 mfd driver) in
- drivers/media/dvb/frontends/dibx000_common.c
- drivers/media/video/{mt9m111.c,ov6650.c}
- drivers/mfd/ab3550-core.c
- include/linux/dmaengine.h
* 'next/soc' of git://git.linaro.org/people/arnd/arm-soc: (21 commits)
MAINTAINERS: add ARM/FREESCALE IMX6 entry
arm/imx: merge i.MX3 and i.MX6
arm/imx6q: add suspend/resume support
arm/imx6q: add device tree machine support
arm/imx6q: add smp and cpu hotplug support
arm/imx6q: add core drivers clock, gpc, mmdc and src
arm/imx: add gic_handle_irq function
arm/imx6q: add core definitions and low-level debug uart
arm/imx6q: add device tree source
ARM: highbank: add suspend support
ARM: highbank: Add cpu hotplug support
ARM: highbank: add SMP support
MAINTAINERS: add Calxeda Highbank ARM platform
ARM: add Highbank core platform support
ARM: highbank: add devicetree source
ARM: l2x0: add empty l2x0_of_init
picoxcell: add a definition of VMALLOC_END
picoxcell: remove custom ioremap implementation
picoxcell: add the DTS for the PC7302 board
picoxcell: add the DTS for pc3x2 and pc3x3 devices
...
Fix up trivial conflicts in arch/arm/Kconfig, and some more header file
conflicts in arch/arm/mach-omap2/board-generic.c (as per an earlier merge
by Arnd).
These files all make use of one of the EXPORT_SYMBOL variants
or the THIS_MODULE macro. So they will need <linux/export.h>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Building these files does not reveal a hidden need for
any of these. Since module.h brings in the whole kitchen
sink, it just needlessly adds 30k+ lines to the cpp burden.
There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build. They will have
to be cleaned up and tested via using their respective configs.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
The patch merges the build of imx3 and imx6. The Kconfig symbol
ARCH_IMX_V6_V7 is introduced to replace ARCH_MX3 and ARCH_MX6.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
This adds basic support for the Calxeda Highbank platform.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Shawn Guo <shawn.guo@linaro.org>
* 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
rtmutex: Add missing rcu_read_unlock() in debug_rt_mutex_print_deadlock()
lockdep: Comment all warnings
lib: atomic64: Change the type of local lock to raw_spinlock_t
locking, lib/atomic64: Annotate atomic64_lock::lock as raw
locking, x86, iommu: Annotate qi->q_lock as raw
locking, x86, iommu: Annotate irq_2_ir_lock as raw
locking, x86, iommu: Annotate iommu->register_lock as raw
locking, dma, ipu: Annotate bank_lock as raw
locking, ARM: Annotate low level hw locks as raw
locking, drivers/dca: Annotate dca_lock as raw
locking, powerpc: Annotate uic->lock as raw
locking, x86: mce: Annotate cmci_discover_lock as raw
locking, ACPI: Annotate c3_lock as raw
locking, oprofile: Annotate oprofilefs lock as raw
locking, video: Annotate vga console lock as raw
locking, latencytop: Annotate latency_lock as raw
locking, timer_stats: Annotate table_lock as raw
locking, rwsem: Annotate inner lock as raw
locking, semaphores: Annotate inner lock as raw
locking, sched: Annotate thread_group_cputimer as raw
...
Fix up conflicts in kernel/posix-cpu-timers.c manually: making
cputimer->cputime a raw lock conflicted with the ABBA fix in commit
bcd5cff721 ("cputimer: Cure lock inversion").
This allows mapping external memory such as SRAM for use.
This is needed for some small chunks of code, such as reprogramming
SDRAM memory source clocks that can't be executed in SDRAM. Other
use cases include some PM related code.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Andres Salomon <dilinger@queued.net>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We save the l2x0 registers at the first initialization, and platform code
can get them to restore the l2x0 state after wakeup.
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch fixes an error in Rob Herring's
"ARM: 7009/1: l2x0: Add OF based initialization"
http://www.spinics.net/lists/arm-kernel/msg131123.html
which has been in rmk/for-next as commit 41c86ff5b
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Rob Herring <robherring2@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Using cpu_relax() in busy loops is a well-known idiom in the kernel.
It's more for documentation purposes than technically needed here.
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
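For illustration, the idiom is simply the following; this is a generic sketch
(status_reg and BUSY_BIT are placeholders), not the specific loop touched here.

	/* poll a status register, hinting to the CPU that we are spinning */
	while (readl_relaxed(status_reg) & BUSY_BIT)
		cpu_relax();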
This adds probing for ARM L2x0 cache controllers via device tree. Support
includes the L210, L220, and PL310 controllers. The binding allows setting
up cache RAM latencies and filter addresses (PL310 only).
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Barry Song <21cnbao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The definition of __exception_irq_entry for
CONFIG_FUNCTION_GRAPH_TRACER=y needs linux/ftrace.h, but this creates a
circular dependency with its current home in asm/system.h. Create
asm/exception.h and update all current users.
v4: - rebase to rmk/for-next
v3: - remove redundant includes of linux/ftrace.h
v2: - document the usage restrictions of __exception*
Cc: Zoltan Devai <zdevai@gmail.com>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch defines the (pte|pmd)val_t as u32 and changes the page table
types to be based on these. The PMD bits are converted to the
corresponding type using the _AT macro.
The flush_pmd_entry/clean_pmd_entry argument was changed to (void *) to
allow them to be used with both PGD and PMD pointers and avoid code
duplication.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
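In other words, the page table types end up along the lines below. This is a
sketch of the classic-MMU definitions; the two PMD bit names shown are only
illustrative examples of the _AT() conversion.

	typedef u32 pteval_t;
	typedef u32 pmdval_t;

	typedef struct { pteval_t pte; } pte_t;
	typedef struct { pmdval_t pmd; } pmd_t;

	/*
	 * PMD bits are built with _AT() so the same definitions keep working
	 * when pmdval_t becomes a 64-bit type for LPAE.
	 */
	#define PMD_TYPE_TABLE	(_AT(pmdval_t, 1) << 0)
	#define PMD_TYPE_SECT	(_AT(pmdval_t, 2) << 0)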
Support for the cpu_suspend functions is only built-in
when CONFIG_PM_SLEEP is enabled, but omap3/4, exynos4
and pxa always call cpu_suspend when CONFIG_PM is enabled.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The two functions cpu_is_v6_unaligned and safe_usermode
are only defined when CONFIG_PROC_FS is enabled, but
are used outside of the #ifdef.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Dave Martin <dave.martin@linaro.org>
The VM subsystem assumes that there are valid memmap entries from
the bank start aligned to MAX_ORDER_NR_PAGES.
On the Ux500 we have a lot of mem=N arguments on the commandline
triggering this bug several times over and causing kernel
oops messages.
Cc: stable@kernel.org
Cc: Michael Bohan <mbohan@codeaurora.org>
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Johan Palsson <johan.palsson@stericsson.com>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If the attempt to map a page for DMA fails (eg, because we're out of
mapping space) then we must not hold on to the page we allocated for
DMA - doing so will result in a memory leak.
Cc: <stable@kernel.org>
Reported-by: Bryan Phillippe <bp@darkforest.org>
Tested-by: Bryan Phillippe <bp@darkforest.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On certain architectures, there might be a need to mark certain
addresses with strongly ordered memory attributes to avoid ordering
issues at the interconnect level.
On OMAP4, the asynchronous bridge buffers can only be drained
with strongly ordered accesses and hence the need to mark the
memory strongly ordered.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Tested-by: Vishwanath BS <vishwanath.bs@ti.com>
There is no need to save and restore the context ID register on ARMv6
and ARMv7 with a temporary page table as we write the context ID
register when we switch back to the real page tables for the thread.
Moreover, the temporary page tables do not contain any non-global
mappings, so the context ID value should not be used. To be safe,
initialize the register to a reserved context ID value.
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Only use the preallocated page table during the resume, not while
suspending. This avoids the overhead of having to switch unnecessarily
to the resume page table in the suspend path.
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Preallocate a page table and setup an identity mapping for the MMU
enable code. This means we don't have to "borrow" a page table to
do this, avoiding complexities with L2 cache coherency.
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch implements a workaround for erratum 764369 affecting
Cortex-A9 MPCore with two or more processors (all current revisions).
Under certain timing circumstances, a data cache line maintenance
operation by MVA targeting an Inner Shareable memory region may fail to
proceed up to either the Point of Coherency or to the Point of
Unification of the system. This workaround adds a DSB instruction before
the relevant cache maintenance functions and sets a specific bit in the
diagnostic control register of the SCU.
Cc: <stable@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Annotate the low level hardware locks which must not be preempted.
In mainline this change documents the low level nature of
the lock - otherwise there's no functional difference. Lockdep
and Sparse checking will work as usual.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Commit be020f8618, "ARM: entry: abort-macro: specify registers to be
used for macros", while replacing register numbers with macro parameter
names, mismatched the name used for r1. For me, this resulted in user
space built for EABI with -march=armv4t -mtune=arm920t -mthumb-interwork
-mthumb broken on my OMAP1510 based Amstrad Delta (old ABI and no thumb
still worked for me though).
Fix this by using the correct parameter name fsr instead of the mismatched
psr, which is used by callers for another purpose.
Signed-off-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Fighting unfixed U-Boots and other beasts that may leave the cache in
a locked-down state when starting the kernel, we make sure to
disable all cache lock-down when initializing the l2x0 so we
are in a known state.
Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Cc: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Adrian Bunk <adrian.bunk@movial.com>
Cc: Rob Herring <robherring2@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reported-by: Jan Rinze <janrinze@gmail.com>
Tested-by: Robert Marklund <robert.marklund@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When ARCH_HAS_HOLES_MEMORYMODEL is selected, pfn_valid calls
memblock_is_memory to test validity of a pfn:
> memblock_is_memory(pfn << PAGE_SHIFT);
On LPAE systems this cuts off the top bits, as the shift occurs before
the value is promoted to a phys_addr_t.
This patch replaces the shift with a call to __pfn_to_phys (which casts
pfn to phys_addr_t before shifting), preventing the loss of significant
bits.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
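The change itself is a one-liner; a before/after sketch:

	/* before: pfn is unsigned long, so the shift truncates on LPAE */
	return memblock_is_memory(pfn << PAGE_SHIFT);

	/* after: __pfn_to_phys() casts to phys_addr_t before shifting */
	return memblock_is_memory(__pfn_to_phys(pfn));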
For ARMv7 kernels running in the non-secure world, writing to the
auxiliary control register causes an abort, so we must avoid directly
writing the auxiliary control register. If the ACR has already been
reinitialized by SoC code, don't try to restore it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a dsb after the isb to ensure that the previous writes to the
CP15 registers take effect before we enable the MMU.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM920 and ARM926 save four registers, not three. Fix the size of
the suspend region required.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
r1 stores the v:p offset from the CPU invariant resume code, and is
expected to be preserved by the CPU specific code. Overwriting it is
not a good idea.
We've managed to get away with it on sa1100 platforms because most
happen to have PHYS_OFFSET == PAGE_OFFSET, but that may not be the
case depending on kernel configuration. So fix this latent bug.
This fixes xsc3 as well which was saving and restoring this register
independently.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
cpu_v7_reset disables the MMU and then branches to the provided address.
On Thumb-2 kernels, we should take care to clear the Thumb Exception
enable bit in the System Control Register, otherwise this may wreak
havoc in the code to which we are branching (for example, an ARM kernel
image via kexec).
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have
the same value (21). This patch converts the PGDIR_* uses in the kernel
to the PMD_* equivalent so that LPAE builds can reuse the same code.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This function can be called during boot to increase the size of the consistent
DMA region above its default value of 2MB. It must be called before the memory
allocator is initialised, i.e. before any core_initcall.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
This patch is a workaround for the 364296 ARM1136 r0p2 erratum (possible
cache data corruption with hit-under-miss enabled). It sets the
undocumented bit 31 in the auxiliary control register and the FI bit in
the control register, thus disabling hit-under-miss without putting the
processor into full low interrupt latency mode.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Siarhei Siamashka <siarhei.siamashka@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With the UM_SIGNAL alignment fault mode, no siginfo structure is
passed to userspace.
POSIX specifies how siginfo_t should be populated for alignment
faults, so this patch does just that:
* si_signo = SIGBUS
* si_code = BUS_ADRALN
* si_addr = misaligned data address at which access was attempted
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
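A minimal sketch of the resulting delivery, using the field values listed
above (the surrounding alignment-handler context and the addr variable are
assumed):

	siginfo_t si = {
		.si_signo = SIGBUS,
		.si_errno = 0,
		.si_code  = BUS_ADRALN,
		.si_addr  = (void __user *)addr,	/* misaligned access address */
	};

	force_sig_info(si.si_signo, &si, current);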
Currently, it's possible to set the kernel to ignore alignment
faults when changing the alignment fault handling mode at runtime
via /proc/sys/alignment, even though this is undesirable on ARMv6
and above, where it can result in infinite spins where an
un-fixed-up instruction repeatedly faults.
In addition, the kernel clobbers any alignment mode specified on
the command-line if running on ARMv6 or above.
This patch factors out the necessary safety check into a couple of
new helper functions, and checks and modifies the fault handling
mode as appropriate on boot and on writes to /proc/cpu/alignment.
Prior to ARMv6, the behaviour is unchanged.
For ARMv6 and above, the behaviour changes as follows:
* Attempting to ignore faults on ARMv6 results in the mode being
forced to UM_FIXUP instead. A warning is printed if this
happens as a result of a write to /proc/cpu/alignment. The
user's UM_WARN bit (if present) is still honoured.
* An alignment= argument from the kernel command-line is now
honoured, except that the kernel will modify the specified mode
as described above. This allows modes such as UM_SIGNAL and
UM_WARN to be active immediately from boot, which is useful for
debugging purposes.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
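The safety check boils down to something like the sketch below, built around
the cpu_is_v6_unaligned()/safe_usermode() helpers mentioned earlier; the
warning text is illustrative only.

	static int safe_usermode(int new_usermode, bool warn)
	{
		/*
		 * On ARMv6+ ignoring faults can spin forever, because the
		 * un-fixed-up instruction just faults again. Force fixup mode
		 * instead; the user's UM_WARN bit is preserved.
		 */
		if (cpu_is_v6_unaligned() &&
		    !(new_usermode & (UM_FIXUP | UM_SIGNAL))) {
			new_usermode |= UM_FIXUP;
			if (warn)
				printk(KERN_WARNING "alignment: ignoring faults is unsafe on this CPU, defaulting to fixup mode.\n");
		}
		return new_usermode;
	}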
poison_init_mem() used a loop of:
while ((count = count - 4))
which has two problems: an off-by-one error, so that we poison one less
word than we should, and if count == 0 then we loop forever
and poison too much. On a platform with HAVE_TCM=y but nothing in the
TCMs, this caused corruption and the platform failed to boot.
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
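The fixed loop is straightforward; a sketch (count is assumed to be a
multiple of 4 here, and the poison value is the one used by the init-memory
poisoning patch elsewhere in this series):

	static inline void poison_init_mem(void *s, size_t count)
	{
		u32 *p = (u32 *)s;

		/*
		 * count is in bytes; stop exactly at the end, and do nothing
		 * for count == 0 instead of wrapping around.
		 */
		for (; count != 0; count -= 4)
			*p++ = 0xe7fddef0;	/* undefined instruction pattern */
	}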
The file mm/proc-arm946.S contains a typo and is missing a structure
member in __arm946_proc_info. The former prevents compilation
and the latter causes problems during boot. It is likely this
file was manually copied from a similar file and not tested, then
later updates to the *_proc_info structures missed this file.
This patch will apply (with offset) with or without the
recent macro unification work that has been done in this directory.
This was verified against linux-next/stable last week.
See arm-linux-kernel thread:
http://lists.arm.linux.org.uk/lurker/message/20110718.103237.0106d468.en.html
Signed-off-by: Brian S. Julin <bri@abrij.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'next/cross-platform' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/linux-arm-soc:
ARM: Consolidate the clkdev header files
ARM: set vga memory base at run-time
ARM: convert PCI defines to variables
ARM: pci: make pcibios_assign_all_busses use pci_has_flag
ARM: remove unnecessary mach/hardware.h includes
pci: move microblaze and powerpc pci flag functions into asm-generic
powerpc: rename ppc_pci_*_flags to pci_*_flags
Fix up conflicts in arch/microblaze/include/asm/pci-bridge.h
* 'next/soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/linux-arm-soc:
MAINTAINERS: add maintainer of CSR SiRFprimaII machine
ARM: CSR: initializing L2 cache
ARM: CSR: mapping early DEBUG_LL uart
ARM: CSR: Adding CSR SiRFprimaII board support
OMAP4: clocks: Update the clock tree with 4460 clock nodes
OMAP4: PRCM: OMAP4460 specific PRM and CM register bitshifts
OMAP4: ID: add omap_has_feature for max freq supported
OMAP: ID: introduce chip detection for OMAP4460
ARM: Xilinx: merge board file into main platform code
ARM: Xilinx: Adding Xilinx board support
Fix up conflicts in arch/arm/mach-omap2/cm-regbits-44xx.h
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (237 commits)
ARM: 7004/1: fix traps.h compile warnings
ARM: 6998/2: kernel: use proper memory barriers for bitops
ARM: 6997/1: ep93xx: increase NR_BANKS to 16 for support of 128MB RAM
ARM: Fix build errors caused by adding generic macros
ARM: CPU hotplug: ensure we migrate all IRQs off a downed CPU
ARM: CPU hotplug: pass in proper affinity mask on IRQ migration
ARM: GIC: avoid routing interrupts to offline CPUs
ARM: CPU hotplug: fix abuse of irqdesc->node
ARM: 6981/2: mmci: adjust calculation of f_min
ARM: 7000/1: LPAE: Use long long printk format for displaying the pud
ARM: 6999/1: head, zImage: Always Enter the kernel in ARM state
ARM: btc: avoid invalidating the branch target cache on kernel TLB maintanence
ARM: ARM_DMA_ZONE_SIZE is no more
ARM: mach-shark: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-sa1100: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-realview: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-pxa: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-ixp4xx: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-h720x: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-davinci: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
...
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (123 commits)
perf: Remove the nmi parameter from the oprofile_perf backend
x86, perf: Make copy_from_user_nmi() a library function
perf: Remove perf_event_attr::type check
x86, perf: P4 PMU - Fix typos in comments and style cleanup
perf tools: Make test use the preset debugfs path
perf tools: Add automated tests for events parsing
perf tools: De-opt the parse_events function
perf script: Fix display of IP address for non-callchain path
perf tools: Fix endian conversion reading event attr from file header
perf tools: Add missing 'node' alias to the hw_cache[] array
perf probe: Support adding probes on offline kernel modules
perf probe: Add probed module in front of function
perf probe: Introduce debuginfo to encapsulate dwarf information
perf-probe: Move dwarf library routines to dwarf-aux.{c, h}
perf probe: Remove redundant dwarf functions
perf probe: Move strtailcmp to string.c
perf probe: Rename DIE_FIND_CB_FOUND to DIE_FIND_CB_END
tracing/kprobe: Update symbol reference when loading module
tracing/kprobes: Support module init function probing
kprobes: Return -ENOENT if probe point doesn't exist
...
Commit 66a625a (ARM: mm: proc-macros: Add generic proc/cache/tlb struct
definition macros) introduced build errors when PM_SLEEP is not enabled.
The per-CPU do_suspend/do_resume functions are defined via the
preprocessor to constant 0. However, the macros which use these were
converted to assembly, resulting in undefined references to these
functions. Fix that by moving the ! ifdef section into proc-macros.S
and deleting it from all affected proc-*.S files.
Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently using just long but this is not enough for the LPAE format
(64-bit entries).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Kernel space needs very little in the way of BTC maintenance as most
mappings which are created and destroyed are non-executable, and so
could never enter the instruction stream.
The case which does warrant BTC maintenance is when a module is loaded.
This creates a new executable mapping, but at that point the pages have
not been initialized with code and data, so they contain
unpredictable information. Invalidating the BTC at this stage serves
little useful purpose.
Before we execute module code, we call flush_icache_range(), which deals
with the BTC maintenance requirements. This ensures that we have a BTC
maintenance operation before we execute code via the newly created
mapping.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Having this value defined at compile time prevents multiple machines with
conflicting definitions from coexisting. Move it to a variable in preparation
for having a per machine value selected at run time. This is relevant
only when CONFIG_ZONE_DMA is selected.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Convert the incorrectly named PCIMEM_BASE to a variable called vga_base.
This removes the dependency on mach/hardware.h.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Convert PCIBIOS_MIN_IO and PCIBIOS_MIN_MEM to variables to allow
multi-platform builds. This also removes the requirement for a platform to
have a mach/hardware.h.
The default values for i/o and mem are 0x1000 and 0x01000000, respectively.
Per Arnd Bergmann, other values are likely to be incorrect, but this commit
does not try to address that issue.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
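A sketch of the shape of the change (variable names are illustrative;
defaults as stated above):

	/* in arch code, overridable by the platform at boot: */
	unsigned long pcibios_min_io  = 0x1000;
	unsigned long pcibios_min_mem = 0x01000000;

	/* the old compile-time defines become wrappers around the variables: */
	#define PCIBIOS_MIN_IO	pcibios_min_io
	#define PCIBIOS_MIN_MEM	pcibios_min_mem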
Convert pcibios_assign_all_busses from a define to inline so platforms can
control this setting.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Remove some includes of mach/hardware.h which are not needed. hardware.h
will be removed completely for tegra and cns3xxx in follow on patch.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
ISA_DMA_THRESHOLD has been unused by non-arch code, so lets now get
rid of it from ARM by replacing it with arm_dma_zone_mask. Move
dma_supported() and dma_set_mask() out of line, and have
dma_supported() check this new variable instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
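Conceptually, the out-of-line dma_supported() now does something like the
following (a sketch, not the exact function body):

	int dma_supported(struct device *dev, u64 mask)
	{
		/*
		 * A device must be able to reach at least the DMA zone
		 * configured for the machine; arm_dma_zone_mask replaces
		 * the old ISA_DMA_THRESHOLD compile-time constant.
		 */
		if (mask < (u64)arm_dma_zone_mask)
			return 0;
		return 1;
	}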
SiRFprimaII is the latest generation application processor from CSR’s
Multifunction SoC product family. Designed around an ARM cortex A9 core,
high-speed memory bus, advanced 3D accelerator and full-HD multi-format
video decoder, SiRFprimaII is able to meet the needs of complicated
applications for modern multifunction devices that require heavy concurrent
applications and fluid user experience. Integrated with GPS baseband,
analog and PMU, this new platform is designed to provide a cost effective
solution for Automotive and Consumer markets.
This patch adds the basic support for this SoC and its EVB board based on
device tree. It follows the Xilinx ZYNQ approach to some degree.
Signed-off-by: Binghua Duan <Binghua.Duan@csr.com>
Signed-off-by: Rongjun Ying <Rongjun.Ying@csr.com>
Signed-off-by: Zhiwu Song <Zhiwu.Song@csr.com>
Signed-off-by: Yuping Luo <Yuping.Luo@csr.com>
Signed-off-by: Bin Shi <Bin.Shi@csr.com>
Signed-off-by: Huayi Li <Huayi.Li@csr.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Originally this was introduced to maintain coherency between the icache and
dcache in v6 non-aliasing mode. This is now handled by __sync_icache_dcache
since c0177800, and is therefore unnecessary in this function.
Signed-off-by: Heechul Yun <heechul@illinois.edu>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Poisoning __init marked memory can be useful when tracking down
obscure memory corruption bugs. Therefore, poison init memory
with 0xe7fddef0 to catch bugs earlier. The poison value is an
undefined instruction in ARM mode and branch to an undefined
instruction in Thumb mode.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Place the init sections between the text and data sections. This
means all code is grouped together at the beginning of the kernel
image, and all data is at the end of the image. This avoids problems
with the 24-bit branch instruction relocations becoming invalid with
large initramfs images.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds simple definitions of cpu_reset for ARMv6 and ARMv7
cores, which disable the MMU via the SCTLR.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Multicore implementations of the Cortex-A15 require bit 6 of the
auxiliary control register to be set in order for cache and TLB
maintenance operations to be broadcast between CPUs.
This patch adds a new proc_info structure for Cortex-A15, which enables
the SMP bit during setup and includes the new HWCAP for integer
division.
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds processor info for the ARM Ltd. Cortex A5,
which has an SCU initialisation procedure identical to the A9.
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
As most of the proc info content is common across all v7
processors, this patch converts existing A9 and generic v7
descriptions into a macro (allowing extra flags in future).
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
CNS3xxx SoCs have an L310-compatible cache controller, so let's use it.
With this patch benchmarking with 'gzip' shows that performance is
doubled, and I'm still able to boot full-fledged userland over NFS
(using PCIe NIC), so the support should be pretty robust.
p.s. While CNS3xxx reports that it has PL310, it still needs to wait
on cache line operations, so we should not select 'CACHE_PL310',
which is a micro-optimization that removes these waits for v7 CPUs.
Someday we'd better rename the CACHE_PL310 Kconfig option to
NO_CACHE_WAIT or something less ambiguous.
Signed-off-by: Anton Vorontsov <avorontsov@mvista.com>
Without this patch, xscale_80200_A0_A1 is missing the
icache_flush_all entry, which would result in the wrong functions
being called at run-time.
This patch re-uses xscale_icache_flush_all for
xscale_80200_A0_A1_cache_fns.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
This patch also defines a suitable flush_icache_all implementation
which would otherwise be missing, resulting in a link failure.
Thanks to Nicolas Pitre for suggesting the code for this.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
This patch adds some generic macros to reduce boilerplate when
declaring certain common structures in arch/arm/mm/*.S
Thanks to Russell King for outlining what the
define_processor_functions macro could look like.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
The l2x0_disable function attempts to writel with the l2x0_lock held.
This results in deadlock when the writel contains an outer_sync call
for the platform since the l2x0_lock is already held by the disable
function. A further problem is that disabling the L2 without flushing it
first can lead to the spin_lock operation becoming visible after the
spin_unlock, causing any subsequent L2 maintenance to deadlock.
This patch replaces the writel with a call to writel_relaxed in the
disabling code and adds a flush before disabling in the control
register, preventing livelock from occurring.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
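Roughly, the disable path becomes the sketch below; __l2x0_flush_all(),
l2x0_base and L2X0_CTRL are the names assumed here for the flush helper,
controller base and control register offset.

	static void l2x0_disable(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&l2x0_lock, flags);
		__l2x0_flush_all();			/* flush before disabling */
		/* writel_relaxed(): no outer_sync() while l2x0_lock is held */
		writel_relaxed(0, l2x0_base + L2X0_CTRL);
		dsb();
		spin_unlock_irqrestore(&l2x0_lock, flags);
	}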
Ensure that the meminfo array is sanity checked before we pass the
memory to memblock. This helps to ensure that memblock and meminfo
agree on the dimensions of memory, especially when more memory is
passed than the kernel can deal with.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that we pass r2 into these helper functions as the pointer to
pt_regs, use r2 as the base of the registers on the stack rather
than using the stack pointer directly.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tail-call the main C data abort handler code from the per-CPU helper
code. Update the comments in the code wrt the new calling and return
register state.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows us to pass the pt_regs pointer in to these functions
ready for tail-calling the abort handler.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tail-call the main C prefetch abort handler code from the per-CPU
helper code. Also note that the helper function becomes ABI
compliant in terms of the registers preserved.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Avoid enabling interrupts if the parent context had interrupts enabled
in the abort handler assembly code, and move this into the breakpoint/
page/alignment fault handlers instead.
This gets rid of some special-casing for the breakpoint fault handlers
from the low level abort handler path.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
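For callers, the change amounts to dropping the argument; an illustrative
call site (not a specific hunk from the series):

	/* before */
	perf_event_overflow(event, nmi, &data, regs);

	/* after */
	perf_event_overflow(event, &data, regs);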
This avoids unnecessary instructions for CPUs which implement the IFAR
(instruction fault address register).
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We can test bits 27:25 and 20 of the instruction at the same time;
there's no need to separate out the check of bit 20.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>