Commit Graph

580 Commits

Author SHA1 Message Date
Paul Mundt bb29c677b3 sh: Split out MMUCR.URB based entry wiring in to shared helper.
Presently this is duplicated between tlb-sh4 and tlb-pteaex. Split the
helpers out into a generic tlb-urb that can be used by any parts
equipped with MMUCR.URB.

At the same time, move the SH-5 code out-of-line, as we require single
global state for DTLB entry wiring.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19 15:20:35 +09:00
Paul Mundt acf2c9685f sh: Kill off duplicate address alignment in ioremap_fixed().
This is already taken care of in the top-level ioremap, and now that
no one should be calling ioremap_fixed() directly, we can simply throw the
mapping displacement in as an additional argument.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19 13:49:19 +09:00
Paul Mundt d57d64080d sh: Prevent 64-bit pgprot clobbering across ioremap implementations.
Presently 'flags' gets passed around a lot between the various ioremap
helpers and implementations, and it is only 32 bits wide. In the X2TLB
case we use 64-bit pgprots, which presently results in the upper 32 bits
being chopped off (which handily include our read/write/exec
permissions).

As such, we convert everything internally to using pgprot_t directly and
simply convert over with pgprot_val() where needed. With this in place,
transparent fixmap utilization for early ioremap works as expected.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19 13:34:38 +09:00
Paul Mundt af1415314a sh: Flag __ioremap_caller() __init_refok.
The mem_init_done test makes sure that this path is only entered in
__init cases, so leaving ioremap_fixed() as __init and flagging the
caller __init_refok is sufficient.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 21:45:00 +09:00
Paul Mundt 12b6b01cb4 sh: Handle unmapping of fixed slots transparently in iounmap().
iounmap() should balance whatever is done by ioremap(). Presently
ioremap() can do any of fixed mappings, PMB mappings, or page table
mappings. Currently only the latter two are handled through the standard
unmap path, so tie in the fixed unmapping, too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 21:33:08 +09:00
Paul Mundt 4f744affc3 sh: Make iounmap_fixed() return success/failure for iounmap() path.
This converts iounmap_fixed() to return success/error depending on whether
it handled the unmap request or not. At the same time, drop the __init
label, as this can be called later in the boot process.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 21:30:29 +09:00
Paul Mundt 0b59e38ffa sh: Merge _32/_64 ioremap implementations.
There is nothing of interest in the _64 version anymore, so the _32 one
can be renamed and used unconditionally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 21:21:32 +09:00
Paul Mundt d9b9487af7 sh: Handle early ioremaps through fixed mappings.
This adds in a mem_init_done test to work out when a standard ioremap() is
possible, falling back to the fixmap-based ioremap otherwise.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 21:08:32 +09:00
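
A minimal sketch of the fallback described in this commit, assuming a
mem_init_done flag that is set once the page allocator is up; the function
names below are invented for illustration and are not the kernel's:

  static int mem_init_done;  /* assumed flag: set once mem_init() completes */

  /* stand-ins for the real mapping strategies */
  static void *ioremap_fixed_slot(unsigned long phys)  { return (void *)phys; }
  static void *ioremap_page_tables(unsigned long phys) { return (void *)phys; }

  static void *ioremap_sketch(unsigned long phys)
  {
      /* early in boot the page table based path cannot allocate,
       * so fall back to a fixed (fixmap) mapping instead */
      return mem_init_done ? ioremap_page_tables(phys)
                           : ioremap_fixed_slot(phys);
  }
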
Paul Mundt 8faba61215 Merge branch 'sh/ioremap-fixed' 2010-01-18 20:42:39 +09:00
Matt Fleming 3d467676ab sh: Setup early PMB mappings.
More and more boards that boot with the MMU in 32BIT mode by default are
going to start shipping. Previously we relied on the bootloader to set up
PMB mappings for use by the kernel, but we also need to cater for boards
whose bootloaders don't set them up.

If CONFIG_PMB_LEGACY is not enabled we have full control over our PMB
mappings and can compress our address space. Usually the distance between
the cached and uncached mappings of RAM is 512MB; however, we can compress
the distance to be the amount of RAM on the board.

pmb_init() now becomes much simpler. It no longer has to calculate any
mappings, it just has to synchronise the software PMB table with the
hardware.

Tested on SDK7786 and SH7785LCR.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 19:33:10 +09:00
Paul Mundt 78bf04fc96 sh: Tidy up non-translatable checks in iounmap path.
This tidies up the iounmap path with consolidated checks for
nontranslatable mappings. This is in preparation for unifying
the implementations.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-17 01:45:26 +09:00
Matt Fleming 597fe76ec3 sh: Use ioremap_fixed() to implement SH-5 ioremap()
Use the fixmap-based memory mapping implementation for SH-5's ioremap()
functions and delete the old static allocator that was borrowed from
sparc.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-16 14:31:51 +00:00
Matt Fleming 4d35b93a66 sh: Add fixed ioremap support
Some devices need to be ioremap'd and accessed very early in the boot
process. It is not possible to use the standard ioremap() function in
this case, because that requires kmalloc()'ing some virtual address space,
and kmalloc() may not be available so early in boot.

This patch provides fixmap mappings that allow physical address ranges
to be remapped into the kernel address space during the early boot
stages.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-16 14:31:36 +00:00
Matt Fleming 07cad4dc1b sh: Generalise the pte handling code for the fixmap path
Generalise the code for setting and clearing pte's and allow TLB entries
to be pinned and unpinned if the _PAGE_WIRED flag is present.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-16 14:29:23 +00:00
Matt Fleming 24ef7fc4dc sh: Acquire some more page flags for SH-5.
We need some more page flags to hook up _PAGE_WIRED (and eventually
other things), so use the unused PTE bits above the PPN field, as no
implementations currently use these for anything.

Now that we have _PAGE_WIRED let's provide the SH-5 functions for wiring
up TLB entries.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-16 14:29:06 +00:00
Matt Fleming 8eda551420 sh: New extended page flag to wire/unwire TLB entries
Provide a new extended page flag, _PAGE_WIRED and an SH4 implementation
for wiring TLB entries and use it in the fixmap code path so that we can
wire the fixmap TLB entry.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-16 14:28:57 +00:00
Paul Mundt a6198a238b sh: Guard against early IPIs in flush_cache_all().
flush_cache_all() gets called when we do some early ioremapping.
Unfortunately on SDK7786 the interrupt controller itself requires
ioremapping, leading to a bit of a chicken and egg scenario. For now,
don't bother with IPI crosscalls if there aren't any other CPUs online.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-15 14:21:37 +09:00
Paul Mundt 782bb5a532 sh: default to extended TLB support.
All SH-X2 and SH-X3 parts support an extended TLB mode, which has been
left as experimental since support was originally merged. Now that it's
had some time to stabilize and get some exposure to various platforms,
we can drop it as an option and default enable it across the board.

This is also good future proofing for newer parts that will drop support
for the legacy TLB mode completely.

This will also force 3-level page tables for all newer parts, which is
necessary both for the varying page sizes and larger memories.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-13 19:11:14 +09:00
Paul Mundt a0ab36689a sh: fixed PMB mode refactoring.
This introduces some much overdue chainsawing of the fixed PMB support.
Fixed PMB was introduced initially to work around the fact that dynamic
PMB mode was relatively broken, though the two were never intended to
converge. The main areas of difference are whether the system is booted
in 29-bit mode or 32-bit mode, and whether legacy mappings are to be
preserved. Any system booting in true 32-bit mode will not care about
legacy mappings, so these are roughly decoupled.

Regardless of the entry point, PMB and 32BIT are directly related as far
as the kernel is concerned, so we also switch back to having one select
the other.

With legacy mappings iterated through and applied in the initialization
path, it's now possible to finally merge the two implementations and
permit dynamic remapping on top of the remaining entries, regardless of
whether boot mappings are crafted by hand or inherited from the boot
loader.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-13 18:31:48 +09:00
Paul Mundt cbf6b1ba7a sh: Always provide thread_info allocators.
Presently the thread_info allocators are special cased, depending on
THREAD_SHIFT < PAGE_SHIFT. This provides a sensible definition for them
regardless of configuration, in preparation for extended CPU state.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-12 19:01:11 +09:00
Paul Mundt a99eae5417 sh: Split out the unaligned counters and user bits.
This splits out the unaligned access counters and userspace bits into
their own generic interface, which will allow them to be wired up on sh64
too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-12 16:12:25 +09:00
Paul Mundt 56d45b62ce sh: Fix up nommu build for out-of-line pgtable changes.
pgtable_cache_init() has been moved out-of-line, so we also need a dummy
definition for it on nommu to fix up the build.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-06 14:45:14 +09:00
Paul Mundt a7595fe7e8 Merge branch 'sh/pgtable' of git://github.com/mfleming/linux-2.6 2010-01-05 12:27:46 +09:00
Paul Mundt 921a220857 Merge branch 'sh/stable-updates' 2010-01-04 16:45:56 +09:00
Paul Mundt 5e9daa0f26 sh: Don't default enable PMB support.
This has the adverse effect of converting many 29bit configs to 32bit
mode, but that is a change that needs to be made manually for each
platform. Turn it off by default in order to cut down on spurious bug
reports.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-04 11:16:33 +09:00
Paul Mundt b4e2a2a2f3 sh: Disable PMB for SH4AL-DSP CPUs.
While the PMB is available on SH-4A parts, SH4AL-DSP parts exclude it
altogether. As such, explicitly disable PMB support for these parts. If
this changes in the future for newer subtypes, this will have to be made
more fine-grained.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-04 11:13:54 +09:00
Matt Fleming 2a5eacca85 sh: Move page table allocation out of line
We also switched away from quicklists and instead moved to slab
caches. After benchmarking both implementations the difference is
negligible. The slab caches suit us better though because the size of a
pgd table is just 4 entries when we're using a 3-level page table layout
and quicklists always deal with pages.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-02 01:02:25 +00:00
Matt Fleming b4c8927623 sh: Optimise flush_dcache_page() on SH4
If the page is not mapped into any process's address space then aliases
cannot exist in the cache. So reduce the amount of flushing we perform.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-02 00:51:52 +00:00
Matt Fleming 3f5ab76816 sh: Correct the PTRS_PER_PMD and PMD_SHIFT values
The previous expressions were wrong, which made free_pmd_range() explode
when using anything other than 4KB pages (which is why 8KB and 64KB
pages were disabled with the 3-level page table layout).

The problem was that pmd_offset() was returning an index of non-zero
when it should have been returning 0. This non-zero offset was used to
calculate the address of the pmd table to free in free_pmd_range(),
which ended up trying to free an object that was not aligned on a page
boundary.

Now 3-level page tables should work with 4KB, 8KB and 64KB pages.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-02 00:51:45 +00:00
Paul Mundt 17eb9d6282 Merge branches 'sh/g3-prep' and 'sh/stable-updates' 2009-12-24 15:16:41 +09:00
Markus Pietrek 76382b5bdb sh: Ensure all PG_dcache_dirty pages are written back.
With some of the cache rework an address aliasing optimization was added,
but this managed to fail on certain mappings, resulting in pages with
PG_dcache_dirty set never writing back their dcache lines. This patch
reverts to the earlier behaviour of simply always writing back when the
dirty bit is set.

Signed-off-by: Markus Pietrek <Markus.Pietrek@emtrion.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-24 15:12:02 +09:00
Matt Fleming 5d9b4b19f1 sh: Definitions for 3-level page table layout
If using 64-bit PTEs and 4K pages then each page table has 512 entries
(as opposed to 1024 entries with 32-bit PTEs). Unlike MIPS, SH follows
the convention that all structures in the page table (pgd_t, pmd_t,
pgprot_t, etc.) must be the same size. Therefore, 64-bit PTEs require
64-bit PGD entries, etc. Using 2 levels of page tables and 64-bit PTEs,
it is only possible to map 1GB of virtual address space.

In order to map all 4GB of virtual address space we need to adopt a
3-level page table layout. This actually works out better for
CONFIG_SUPERH32 because we only waste 2 PGD entries on the P1 and P2
areas (which are untranslated) instead of 256.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-17 14:31:20 +09:00
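
As a back-of-the-envelope check of the 1GB figure above (plain arithmetic
only, not the kernel's actual macros):

  #include <stdio.h>

  int main(void)
  {
      unsigned long page = 4096, pte = 8;     /* 4K pages, 64-bit PTEs */
      unsigned long per_table = page / pte;   /* 512 entries per table */

      /* two levels: 512 PGD entries x 512 PTEs x 4KB pages = 1GB, hence
       * the third level to reach the full 4GB of virtual address space */
      unsigned long long span =
          (unsigned long long)per_table * per_table * page;
      printf("%lu entries/table, 2-level span = %lluGB\n",
             per_table, span >> 30);
      return 0;
  }
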
Paul Mundt e0aa51f54f Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 2009-12-15 12:10:10 +09:00
Paul Mundt bf3cdeda90 sh: wire up vmallocinfo support in ioremap() implementations.
This wires up the caller information for the ioremap VMA, which allows
for more helpful caller tracking via /proc/vmallocinfo. Follows the x86
and powerpc changes of the same nature.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-14 14:23:41 +09:00
Al Viro e77414e0aa fix broken aliasing checks for MAP_FIXED on sparc32, mips, arm and sh
We want addr - (pgoff << PAGE_SHIFT) consistently coloured...

Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2009-12-11 06:44:59 -05:00
Magnus Damm b25b975846 sh: NUMA lmb fixes
This patch updates the NUMA version of setup_memory()
with UMA code changes and also modifies the last argument
to lmb_alloc_base() to use an address instead of pfn.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-09 12:40:44 +09:00
Magnus Damm f3a4c00ad3 sh: fix size calculation for NUMA node 0
Fix the NUMA size calculation for node 0. Do the same
as the UMA version of setup_memory() and use address
instead of pfn when calculating the size.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-09 12:40:42 +09:00
Matt Fleming e717cc6c07 sh: Can't compare physical and virtual addresses for aliases
It does not make sense to compare virtual and physical addresses for
aliasing; only virtual addresses can be compared for aliases.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-09 12:34:46 +09:00
Matt Fleming a781d1e5ff sh: Drop associative writes for SH-4 cache flushes.
When flushing/invalidating the icache/dcache via the memory-mapped IC/OC
address arrays, the associative bit should only be used in conjunction with
virtual addresses. However, we currently flush cache lines based on physical
address, so stop using the associative bit.

It is a better strategy to use non-associative writes (and physical tags) for
flushing the caches anyway, because flushing by virtual address (as with the
A-bit set) requires a valid TLB entry for that virtual address. If one does not
exist in the TLB, no exception is generated and the flush is silently ignored.

This is also future-proofing for SH-4A parts which are gradually phasing out
associative writes to the cache array, due to the aforementioned case of certain
flushes silently turning into nops.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-04 16:18:11 +09:00
Paul Mundt 7e01c94998 sh: Partial revert of copy/clear_user_highpage() optimizations.
These still require more testing, so revert them for now. We keep the
off-by-1 in the fixmap colouring and drop the rest.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-04 15:14:52 +09:00
Stuart Menefy 39ac11c160 sh: Improve performance of SH4 versions of copy/clear_user_highpage
The previous implementation of clear_user_highpage and copy_user_highpage
checked to see if there was a D-cache aliasing issue between the user
and kernel mappings of a page, but if there was, they always did a
flush with writeback on the dirtied kernel alias.

However as we now have the ability to map a page into kernel space
with the same cache colour as the user mapping, there is no need to
write back this data.

Currently we also invalidate the kernel alias as a precaution, however
I'm not sure if this is actually required.

Also correct the definition of FIX_CMAP_END so that the mappings created
by kmap_coherent() are actually at the correct colour.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-24 17:13:35 +09:00
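
The colour matching referred to above can be sketched as follows; the 16KB
alias mask is an assumed value for illustration and the helper is not the
kernel's:

  #include <stdio.h>

  #define ALIAS_MASK 0x3fffUL   /* assumption: cache way size of 16KB - 1 */

  /* in a VIPT dcache, two virtual mappings of one physical page only hit
   * the same cache lines (and so stay coherent) when they match within
   * the alias mask, i.e. have the same cache colour */
  static int same_colour(unsigned long a, unsigned long b)
  {
      return ((a ^ b) & ALIAS_MASK) == 0;
  }

  int main(void)
  {
      printf("%d\n", same_colour(0x10001000UL, 0x10003000UL)); /* 0: alias */
      printf("%d\n", same_colour(0x10003000UL, 0xc0003000UL)); /* 1: safe */
      return 0;
  }
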
Paul Mundt 3af539e59c sh64: Fix up reworked cache op build.
This gets the build fixed up for the sh64 cache enabled case.
Disabling still needs further abstraction for independent I/D-cache
disabling.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-12 17:03:28 +09:00
Paul Mundt a4d9d0b8a8 sh: Enable PMB support for all SH-4A CPUs.
Presently the PMB options were limited to a number of CPUs they were
tested with, but the PMB is generally available on all SH-4A CPUs, so just
drop the subtype conditionals.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-11 10:56:13 +09:00
Paul Mundt 76d2318020 Merge branch 'sh/stable-updates' 2009-11-09 10:55:36 +09:00
Matt Fleming a9d244a2ff sh: Account for cache aliases in flush_icache_range()
The icache may also contain aliases so we must account for them just
like we do when manipulating the dcache. We usually get away with
aliases in the icache because the instructions that are read from memory
are read-only, i.e. they never change. However, the place where this
bites us is when the code has been modified.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-09 10:45:30 +09:00
Roel Kluin 9016332014 sh: Make sure indexes are positive
The indexes are signed; make sure they are not negative
when we read array elements.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-04 11:48:07 +09:00
Matt Fleming eb3118f652 sh: Do not apply virt_to_phys() to a physical address
The variable 'phys' already contains the physical address to flush. It
is not a virtual address and should not be passed to virt_to_phys().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-30 11:53:22 +09:00
Paul Mundt 9b3b21f788 Merge branch 'sh/stable-updates' 2009-10-27 17:10:24 +09:00
Paul Mundt 94c285108e sh: Bump up dma_ops initialization far earlier in the boot process.
Presently this was tacked on to the dma debug init bits from
fs_initcall(), which is far too late for devices setting up their own
per-device coherent areas.

Throw this in at the beginning of mem_init(), as per the x86 iommu
allocation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 17:07:45 +09:00
Paul Mundt 0a993b0a29 sh64: cache flush symbol exports.
These were previously hidden in sh_ksyms_32, despite also being needed
for sh64 now that the cache.c code is shared.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 10:51:35 +09:00
Paul Mundt ffb4a73d89 sh: Fix hugetlbfs dependencies for SH-3 && MMU configurations.
The hugetlb dependencies presently depend on SUPERH && MMU while the
hugetlb page size definitions depend on CPU_SH4 or CPU_SH5. This
unfortunately allows SH-3 + MMU configurations to enable hugetlbfs
without a corresponding HPAGE_SHIFT definition, resulting in the build
blowing up.

As SH-3 doesn't support variable page sizes, we tighten up the
dependencies a bit to prevent hugetlbfs from being enabled. These days
we also have a shiny new SYS_SUPPORTS_HUGETLBFS, so switch to using
that rather than adding to the list of corner cases in fs/Kconfig.

Reported-by: Kristoffer Ericson <kristoffer.ericson@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 07:22:37 +09:00
Paul Mundt f32154c9b5 sh: Add dma-mapping support for dma_alloc/free_coherent() overrides.
This moves the current dma_alloc/free_coherent() calls to a generic
variant and plugs them in for the nommu default. Other variants can
override the defaults in the dma mapping ops directly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-26 09:50:51 +09:00
Paul Mundt 73c926bee0 sh: Convert to asm-generic/dma-mapping-common.h
This converts the old DMA mapping support to the new generic
dma-mapping-common.h abstraction.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-20 12:55:56 +09:00
Paul Mundt 896f0c0e8e sh: Support SCHED_MC for SH-X3 multi-cores.
This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
just a simple wrapper around the possible map, but this allows for
tying in support for some of the more exotic NUMA clusters where we can
actually do something with the topology.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 18:00:02 +09:00
Paul Mundt abeaf33a41 Merge branch 'sh/stable-updates'
Conflicts:
	arch/sh/mm/cache-sh4.c
2009-10-16 15:14:50 +09:00
Magnus Damm 5fb80ae8bd sh: disabled cache handling fix.
Add code to handle the cache disabled case. This fixes breakage introduced
by 37443ef3f0 ("sh: Migrate SH-4 cacheflush ops to function pointers").
Without this patch, configuring caches off with CONFIG_CACHE_OFF=y makes
kfr2r09 and migo-r lock up in fbdev deferred io or early user space.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:38:48 +09:00
Valentin Sitdikov a7a7c0e1d1 sh: Fix up single page flushing to use PAGE_SIZE.
Presently the SH-4 cache flushing code uses flush_cache_4096() for most
of the real flushing work, which breaks down to a fixed 4096-byte unroll and
increment. Not only is this sub-optimal for larger page sizes, it's also
uncovered a bug in sh4_flush_dcache_page() when large page sizes are used
and we have no cache aliases -- resulting in only a part of the page's
D-cache lines being written back.

Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:15:38 +09:00
Paul Mundt 95019b48ad Merge branch 'sh/stable-updates' 2009-10-13 11:27:08 +09:00
Paul Mundt 964f7e5a56 sh: force dcache flush if dcache_dirty bit set.
This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.

This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.

Reported-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 11:18:34 +09:00
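
A small user-space model of the resulting logic; the structure and names
are illustrative, not the kernel's:

  #include <stdbool.h>
  #include <stdio.h>

  struct page_model { bool dcache_dirty; bool mapped; };

  static void update_mmu_cache_model(struct page_model *p)
  {
      /* was: if (p->mapped && p->dcache_dirty) ... -- the mapping test
       * let a page whose mapping had vanished keep stale dcache lines */
      if (p->dcache_dirty) {
          /* the real code performs the dcache writeback here */
          p->dcache_dirty = false;
      }
  }

  int main(void)
  {
      /* swap cache freed early: mapping is gone, dirty bit still set */
      struct page_model p = { .dcache_dirty = true, .mapped = false };
      update_mmu_cache_model(&p);
      printf("dirty after fault: %d\n", p.dcache_dirty);  /* now 0 */
      return 0;
  }
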
Matt Fleming 20b5014b3e sh: Fold fixed-PMB support into dynamic PMB support
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:34 +09:00
Matt Fleming ef269b3276 sh: Fix the offset from P1SEG/P2SEG where we map RAM
We need to map the gap between 0x00000000 and __MEMORY_START in the PMB,
as well as RAM.

With this change my 7785LCR board can switch to 32bit MMU mode at
runtime.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:26 +09:00
Matt Fleming 3105121949 sh: Remap physical memory into P1 and P2 in pmb_init()
Eventually we'll have complete control over what physical memory gets
mapped where and we can probably do other interesting things. For now
though, when the MMU is in 32-bit mode, we map physical memory into the
P1 and P2 virtual address ranges with the same semantics as they have in
29-bit mode.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:03 +09:00
Matt Fleming edd7de803c sh: Get rid of the kmem cache code
Unfortunately, at the point in boot when we want to be setting up
the PMB entries, the kmem subsystem hasn't been initialised.

We now match pmb_map slots with pmb_entry_list slots. When we find an
empty slot in pmb_map, we set the bit, thereby acquiring the
corresponding pmb_entry_list entry. There is a benefit in using this
static array of struct pmb_entry's; we don't need to acquire any locks
in order to traverse the list of struct pmb_entry's.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:47 +09:00
Matt Fleming 8386aebb9e sh: Make most PMB functions static
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.

Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:37 +09:00
Matt Fleming b336f124b1 sh: CONFIG_PMB doesn't mean the MMU is in 32bit mode
CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
and 32-bit mode dynamically at runtime.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:23 +09:00
Matt Fleming 1f69b6af91 sh: Prepare for dynamic PMB support
To allow the MMU to be switched between 29bit and 32bit mode at runtime,
some constants need to be swapped for functions that return a runtime
value.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:12 +09:00
Matt Fleming 8bd642b17b sh: Obliterate the P1 area macros
Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.

Likewise, replace a P1SEGADDR() use with virt_to_phys().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:02 +09:00
Matt Fleming 067784f623 sh: Allocate PMB entry slot earlier
Simplify set_pmb_entry() by removing the possibility of not finding a
free slot in the PMB. Instead we now allocate a slot in pmb_alloc() so
that if there are no free slots we fail at allocation time, rather than
in set_pmb_entry().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:49:57 +09:00
Paul Mundt 5e3679c594 Merge branch 'sh/cachetlb' 2009-10-10 21:36:53 +09:00
Matt Fleming a2767cfb1d sh: Don't allocate smaller sized mappings on every iteration
Currently, we've got the less than ideal situation where if we need to
allocate a 256MB mapping we'll allocate four entries like so,

	 entry 1: 128MB
	 entry 2:  64MB
	 entry 3:  16MB
	 entry 4:  16MB

This is because as we execute the loop in pmb_remap() we will
progressively try mapping the remaining address space with smaller and
smaller sizes. This isn't good because the size we use on one iteration
may be the perfect size to use on the next iteration, for instance when
the initial size is divisible by one of the PMB mapping sizes.

With this patch, we now only need two entries in the PMB to map 256MB of
address space,

	  entry 1: 128MB
	  entry 2: 128MB

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:26:35 +09:00
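
The loop behaviour change can be sketched roughly like this; the sizes are
the PMB entry sizes, everything else is illustrative:

  #include <stdio.h>

  static const unsigned long pmb_sizes[] = {
      512UL << 20, 128UL << 20, 64UL << 20, 16UL << 20,
  };

  int main(void)
  {
      unsigned long size = 256UL << 20;   /* the 256MB example above */
      int i = 0, entries = 0;

      while (size && i < 4) {
          if (pmb_sizes[i] <= size) {
              printf("entry %d: %luMB\n", ++entries, pmb_sizes[i] >> 20);
              size -= pmb_sizes[i];       /* retry the same size first */
          } else {
              i++;                        /* only then drop to a smaller size */
          }
      }
      return 0;                           /* prints two 128MB entries */
  }
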
Matt Fleming 2bea7ea7d5 sh: Try PMB mapping based on physical address, not mapping size
We should favour PMB mappings when the physical address cannot be
reached with 29 bits.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:25:10 +09:00
Matt Fleming fc2bdefdde sh: Plug PMB alloc memory leak
If we fail to allocate a PMB entry in pmb_remap() we must remember to
clear and free any PMB entries that we may have previously allocated,
e.g. if we were allocating a multiple entry mapping.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:24:09 +09:00
Matt Fleming a6325247f5 sh: Sprinkle __uses_jump_to_uncached
Fix some callers of jump_to_uncached() and back_to_cached() that were
not annotated with __uses_jump_to_uncached.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:23:57 +09:00
KAMEZAWA Hiroyuki 3089aa1b0c kcore: use registered physmem information
For /proc/kcore, each arch registers its memory range by kclist_add().
Usually,

	- range of physical memory
	- range of vmalloc area
	- text, etc...

are registered, but "range of physical memory" has some troubles. It
isn't updated at memory hotplug and it tends to include unnecessary
memory holes. Now, /proc/iomem (kernel/resource.c) includes the required
physical memory range information and is properly updated at memory
hotplug. It is therefore good to avoid duplicating that information in
kcore's own code and to rebuild the kclist for physical memory based on
/proc/iomem.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
KAMEZAWA Hiroyuki a0614da88b kcore: register vmalloc area in generic way
For /proc/kcore, vmalloc areas are registered per arch, but all of them
register the same range, [VMALLOC_START...VMALLOC_END). This patch unifies
them. With this, archs which have no kclist_add() hooks can see the vmalloc
area correctly.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
KAMEZAWA Hiroyuki c30bb2a25f kcore: add kclist types
Presently, kclist_add() only takes a start address and size as its arguments.
To make the kclist dynamically reconfigurable, it's necessary to
know which kclists are for System RAM and which are not.

This patch adds kclist types:
  KCORE_RAM
  KCORE_VMALLOC
  KCORE_TEXT
  KCORE_OTHER

This "type" is used in a following patch for detecting KCORE_RAM.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
Geert Uytterhoeven cc013a8890 arches: drop superfluous casts in nr_free_pages() callers
Commit 9617729941 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'.  This made the casts to 'unsigned long' in most callers superfluous,
so remove them.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-22 07:17:34 -07:00
Ingo Molnar cdd6c482c9 perf: Do the big rename: Performance Counters -> Performance Events
Bye-bye Performance Counters, welcome Performance Events!

In the past few months the perfcounters subsystem has grown beyond its
initial role of counting hardware events, and has become (and is
becoming) a much broader generic event enumeration, reporting, logging,
monitoring, and analysis facility.

Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.

All in all, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names (in an ABI-compatible fashion).

The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.

Thanks goes to Stephane Eranian who first observed this misnomer and
suggested a rename.

User-space tooling and ABI compatibility is not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)

This patch has been generated via the following script:

  FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

  sed -i \
    -e 's/PERF_EVENT_/PERF_RECORD_/g' \
    -e 's/PERF_COUNTER/PERF_EVENT/g' \
    -e 's/perf_counter/perf_event/g' \
    -e 's/nb_counters/nb_events/g' \
    -e 's/swcounter/swevent/g' \
    -e 's/tpcounter_event/tp_event/g' \
    $FILES

  for N in $(find . -name perf_counter.[ch]); do
    M=$(echo $N | sed 's/perf_counter/perf_event/g')
    mv $N $M
  done

  FILES=$(find . -name perf_event.*)

  sed -i \
    -e 's/COUNTER_MASK/REG_MASK/g' \
    -e 's/COUNTER/EVENT/g' \
    -e 's/\<event\>/event_id/g' \
    -e 's/counter/event/g' \
    -e 's/Counter/Event/g' \
    $FILES

... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change to the point in time where the amount of pending patches
is the smallest: the end of the merge window.

Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.

( NOTE: 'counters' are still the proper terminology when we deal
  with hardware registers - and these sed scripts are a bit
  over-eager in renaming them. I've undone some of that, but
  in case there's something left where 'counter' would be
  better than 'event' we can undo that on an individual basis
  instead of touching an otherwise nicely automated patch. )

Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-21 14:28:04 +02:00
Paul Mundt c8c2df9055 sh: Fix up sh7705 flush_dcache_page() build.
A type mismatch caused the page deref to blow up; fix it up as per the sh4
change.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-15 09:47:35 +09:00
Paul Mundt f9e2bdfdbb sh: Factor in cpu id for selection of cache colour fixmap.
In the SMP VIPT case the page copy/clear ops still perform colouring,
care needs to be taken that CPUs don't end up stepping on each other,
so we give them a bit of room to work with.

At the same time, we reduce the worst-case colouring given that these
pages are always consumed.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 17:14:19 +09:00
Paul Mundt c4845a4b22 sh: Fix up redundant cache flushing for PAGE_SIZE > 4k.
If PAGE_SIZE is presently over 4k we do a lot of extra flushing given
that we purge the cache 4k at a time. Make it explicitly 4k per
iteration, rather than iterating for PAGE_SIZE before looping over again.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 17:13:07 +09:00
Paul Mundt deaef20e97 sh: Rework sh4_flush_cache_page() for coherent kmap mapping.
This builds on top of the MIPS r4k code that does roughly the same thing.
This permits the use of kmap_coherent() for mapped pages with dirty
dcache lines and falls back on kmap_atomic() otherwise.

This also fixes up a problem with the alias check and defers to
shm_align_mask directly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 16:06:39 +09:00
Paul Mundt bd6df57481 sh: Kill off segment-based d-cache flushing on SH-4.
This kills off the unrolled segment based flushers on SH-4 and switches
over to a generic unrolled approach derived from the writethrough segment
flusher.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:22:15 +09:00
Paul Mundt 31c9efde78 sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page().
PHYSADDR() runs into issues in 32-bit mode when we do not have the
legacy P1/P2 areas mapped; as such, we need to use page_to_phys()
directly, which also happens to do the right thing in legacy 29-bit mode.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:10:28 +09:00
Paul Mundt 654d364e26 sh: sh4_flush_cache_mm() optimizations.
The i-cache flush in the case of VM_EXEC was added way back when as a
sanity measure, and in practice we only care about evicting aliases from
the d-cache. As a result, it's possible to drop the i-cache flush
completely here.

After careful profiling it's also become apparent that all of the work associated
with hunting down aliases and doing ranged flushing ends up generating
more overhead than simply blasting away the entire dcache, particularly
if there are many mm's that need to be iterated over. As a result of
that, just move back to flush_dcache_all() in these cases, which restores
the old behaviour, and vastly simplifies the path.

Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but this is true for all of the platforms, so move the check up
to a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:04:06 +09:00
Paul Mundt 682f88ab74 sh: Cleanup whitespace damage in sh4_flush_icache_range().
There was quite a lot of tab->space damage done here from a former patch;
clean it up once and for all.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 13:19:46 +09:00
Paul Mundt 6e4154d4c2 sh: Use more aggressive dcache purging in kmap teardown.
This fixes up a number of outstanding issues observed with old mappings
on the same colour hanging around. This requires some more optimal
handling, but is a safe fallback until all of the corner cases have been
handled.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-08 16:21:00 +09:00
Paul Mundt 0906a3ad33 sh: Fix up and optimize the kmap_coherent() interface.
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.

One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-03 17:21:10 +09:00
Paul Mundt 6f3795788b sh: Fix up UP deadlock with SMP-aware cache ops.
This builds on top of the previous reversion and implements a special
on_each_cpu() variant that simply disables preemption across the call
while leaving the interrupt state to the function itself. There were some
unintended consequences with IRQ disabling in some of these paths on UP
that ran into a deadlock scenario with IRQs being missed.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 21:21:36 +09:00
Paul Mundt 983f4c514c Revert "sh: Kill off now redundant local irq disabling."
This reverts commit 64a6d72213.

Unfortunately we can't use on_each_cpu() for all of the cache ops, as
some of them only require preempt disabling. This seems to be the same
issue that impacts the mips r4k caches, where this code was based on.
This fixes up a deadlock that showed up in some IRQ context cases.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 21:12:55 +09:00
Paul Mundt ac6a0cf671 Merge branch 'master' into sh/smp
Conflicts:
	arch/sh/mm/cache-sh4.c
2009-09-01 13:54:14 +09:00
Matt Fleming ce3f7cb96e sh: Fix dcache flushing for N-way write-through caches.
This adopts the special-cased 2-way write-through dcache flusher for
N-ways and moves it in to the generic path. Assignment is done at runtime
via the check for the CCR_CACHE_WT bit in the same path as the per-way
writeback flushers.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 13:32:48 +09:00
Paul Mundt e76a0136a3 sh: Fix up sh4_flush_dcache_page() build on UP.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-27 11:31:16 +09:00
Stuart Menefy ffad9d7a54 sh: Fix problems with cache flushing when cache is in write-through mode
Change the method used to flush the cache in write-through mode to
avoid corrupted data being written back to memory.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 18:39:39 +09:00
Stuart Menefy a1fce73235 sh: Fix overzealous checking in __ioremap()
Allow peripherals before the start of RAM to be remapped.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 18:29:25 +09:00
Stuart Menefy a5cf9e2444 sh: Improve comments in SH4 cache flushing code
This is pure documentation, trying to explain why the cache flushing code
for the SH4 is implemented the way it is.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 17:36:24 +09:00
Paul Mundt 64a6d72213 sh: Kill off now redundant local irq disabling.
on_each_cpu() takes care of IRQ and preempt handling, the localized
handling in each of the called functions can be killed off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 18:21:07 +09:00
Yoshihiro Shimoda c01f0f1a4a sh: Add initial support for SH7757 CPU subtype
Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 17:25:47 +09:00
Paul Mundt f26b2a562b sh: Make cache flushers SMP-aware.
This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 17:23:14 +09:00
Paul Mundt c139a59587 sh: Fix up cache-sh4 build on SMP.
mapping is unused on the SMP build, triggering a build error. Move it
under the ifdef.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-20 15:24:41 +09:00
Michael Trimarchi 6503fe4a65 sh: Better description of SH-4 PTEA register update.
Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-20 13:27:44 +09:00
Paul Mundt e055d41ff5 sh: Build fix for disabled caches.
This fixes up the build when caches are disabled, by linking in all of
the cache routines directly. This paves the way for splitting out
separate I and D cache disabling, similar to what sh64 had, and which
we want for SH-X3 anyways.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-19 17:57:01 +09:00
Paul Mundt 1b3edd9745 sh: Merge the _32/_64 variants of arch/sh/mm/Makefile.
Now that there is sufficient shared infrastructure, merge the Makefiles.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 03:49:21 +09:00
Paul Mundt 2b4315185a sh: Wire up sh5_cache_init().
Now that the SH-5 code is more or less behaving with the new cacheflush
interface, wire up the initialization code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 02:16:44 +09:00
Paul Mundt 8c41cdcaff sh64: Kill off dead i/d-cache disabled bits.
These will be handled through the shared cache interface instead, and
they are presently undefined anyways.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 02:15:50 +09:00
Paul Mundt 94ecd224c9 sh: Fix up the SH-5 build with caches enabled.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 01:50:17 +09:00
Paul Mundt 65305ae816 sh: Convert cache disabled SH-5 over to new cache interface.
The caches enabled case needs more work, but is presently broken
regardless, so this can be done incrementally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-16 00:53:56 +09:00
Paul Mundt 0d051d90bb sh: Convert SH7705 extended mode to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:53:39 +09:00
Paul Mundt 79f1c9da5e sh: Convert SH-3 to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:42:55 +09:00
Paul Mundt a58e1a2ab4 sh: Convert SH-2A to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:38:29 +09:00
Paul Mundt 109b44a82a sh: Convert SH-2 to new cacheflush interface.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:35:15 +09:00
Paul Mundt 37443ef3f0 sh: Migrate SH-4 cacheflush ops to function pointers.
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:29:49 +09:00
Paul Mundt 916e97834e sh: Kill off unused flush_icache_user_range().
We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:38:05 +09:00
Paul Mundt 0b445dcaf3 sh: Don't export flush_dcache_all().
flush_dcache_all() is used internally by the SH-4 cache code, it is not
part of the exported cache API, so make it static and don't export it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:22:50 +09:00
Paul Mundt 27d59ec170 sh: Move alias computation to shared cache init.
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().

This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:11:16 +09:00
Paul Mundt ecba106058 sh: Centralize the CPU cache initialization routines.
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:05:42 +09:00
Paul Mundt aae4d1428c sh: consolidate nommu stubs in arch/sh/mm/nommu.c.
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than are strictly TLB or page op related,
so just consolidate them in a single nommu.c.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:57:57 +09:00
Paul Mundt dde5e3ffb7 sh: rework nommu for generic cache.c use.
This does a bit of reorganizing for allowing nommu to use the new
and generic cache.c, no functional changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:49:32 +09:00
Paul Mundt cbbe2f68f6 sh: rename pg-mmu.c -> cache.c, enable generically.
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family takes care of doing the right
thing while enabling the code to be commonly shared.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:30:39 +09:00
Paul Mundt 2739742c24 sh: Provide the kmap_coherent() interface generically.
This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:19:19 +09:00
Paul Mundt 8edcfcbbd1 sh: Bail from kmap_coherent_init() if we have no dcache aliases.
This kills off the ifdef from kmap_coherent_init() and just bails if
there are no cache aliases. This permits the kmap coherent code to be
used on other CPUs.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 09:03:59 +09:00
Paul Mundt d2dcd9101b Merge branch 'master' into sh/cachetlb 2009-08-15 05:58:45 +09:00
Paul Mundt 8010fbe7a6 sh: TLB fast path optimizations for load/store exceptions.
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 03:06:41 +09:00
Paul Mundt 112e58471d sh: TLB protection violation exception optimizations.
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly into do_page_fault(), as these require
slow path handling.

Based on an earlier patch by SUGIOKA Toshinobu.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 02:49:40 +09:00
Paul Mundt e7b8b7f16e sh: NO_CONTEXT ASID optimizations for SH-4 cache flush.
This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 02:21:16 +09:00
Paul Mundt 795687265d sh64: Wire up the shared __flush_xxx_region() flushers.
Now with all of the prep work out of the way, kill off the SH-5 variants
and use the SH-4 version directly. This also takes advantage of the
unrolling that was previously done for the new version.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 02:00:54 +09:00
Paul Mundt 43bc61d86f sh: Add register alignment helpers for shared flushers.
This plugs in some register alignment helpers for the shared flushers,
allowing them to also be used on SH-5. The main rationale here is that
in the SH-5 case we have a variable ABI, where the pointer size may not
equal the register width. This register extension is taken care of by
the SH-5 code already today, and is otherwise unused on the SH-4 code.
This combines the two and allows us to kill off the SH-5 implementation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 01:57:36 +09:00
Marcin Slusarz 922b0dc59b sh: use printk_once
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-13 11:48:08 +09:00
Paul Mundt 0837f52463 sh: Partially unroll the SH-4 __flush_xxx_region() flushers.
This does a bit of unrolling for the SH-4 region flushers.

Based on an earlier patch by SUGIOKA Toshinobu.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 18:09:54 +09:00
Paul Mundt 8174252752 sh: Split out SH-4 __flush_xxx_region() ops.
This splits out the SH-4 __flush_xxx_region() functions and defines them
as weak symbols. This allows us to provide optimized versions without
having to ifdef cache-sh4.c to death.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 18:06:01 +09:00
Paul Mundt c7914834ef sh: Tidy up NEFF-based sign extension for SH-5.
This consolidates all of the NEFF-based sign extension for SH-5.
In the future the other SH code will need to make use of this as well,
so make it generic in preparation for more 32/64 consolidation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 17:14:39 +09:00
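
The consolidated sign extension has roughly this shape; NEFF = 32 and the
helper name are assumptions for illustration:

  #include <stdio.h>

  #define NEFF       32                    /* assumed effective address bits */
  #define NEFF_SIGN  (1ULL << (NEFF - 1))
  #define NEFF_MASK  (~0ULL << NEFF)

  /* addresses are only meaningful in the low NEFF bits, so bit NEFF-1
   * has to be propagated through the upper bits of the register */
  static unsigned long long neff_sign_extend(unsigned long long va)
  {
      return (va & NEFF_SIGN) ? (va | NEFF_MASK) : (va & ~NEFF_MASK);
  }

  int main(void)
  {
      printf("%llx\n", neff_sign_extend(0x80001000ULL)); /* ffffffff80001000 */
      return 0;
  }
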
Paul Mundt c0fe478dbb sh: Provide __flush_anon_page().
This provides a __flush_anon_page() that handles both the aliasing and
non-aliasing cases. This fixes up some crashes with heavy
get_user_pages() users.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 16:02:43 +09:00
Paul Mundt b5eb10ae90 sh: Drop unused arguments for kunmap_coherent().
kunmap_coherent() doesn't do anything with its arguments, so just kill
them off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 16:00:36 +09:00
Paul Mundt 222db3e5f2 sh: Bring kmap_coherent() out-of-line.
kmap_coherent() has gotten too big to leave as an inline, so we
bring it out-of-line.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 15:59:15 +09:00
Paul Mundt 700487c158 sh: Add a PG_dcache_dirty sanity check in kmap_coherent().
This plugs in a BUG_ON() in kmap_coherent() for PG_dcache_dirty pages
to catch when things go horribly wrong. Copied from the MIPS
implementation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 15:57:44 +09:00
Paul Mundt 3ed6e12939 sh: Handle a NULL vma in __update_tlb() for the fast-path.
The TLB miss fast-path presently calls into update_mmu_cache() to
set up the entry, and does so with a NULL vma. Check for vma validity
in the __update_tlb() ptrace checks.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-29 22:06:58 +09:00
Paul Mundt 9cef749269 sh: update_mmu_cache() consolidation.
This splits out a separate __update_cache()/__update_tlb() for
update_mmu_cache() to wrap in to. This lets us share the common
__update_cache() bits while keeping special __update_tlb() handling
broken out.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-29 00:12:17 +09:00
Paul Mundt 0dfae7d5a2 sh: Use the now generic SH-4 clear/copy page ops for all MMU platforms.
Now that the SH-4 page clear/copy ops are generic, they can be used for
all platforms with CONFIG_MMU=y. SH-5 remains the odd one out, but it too
will gradually be converted over to using this interface.

SH-3 platforms which do not contain aliases will see no impact from this
change, while aliasing SH-3 platforms will get the same interface as
SH-4.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-27 21:30:17 +09:00
Paul Mundt dfff0fa65a sh: wire up clear_user_highpage() for sh4, convert sh7705.
This wires up clear_user_highpage() on SH-4 and subsequently converts the
SH7705 32kB cache mode over to using it. Now that the SH-4 implementation
handles all of the dcache purging directly in the aliasing case, there is
no need to do this in the default clear_page() implementation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-27 20:53:22 +09:00
Paul Mundt 2277ab4a1d sh: Migrate from PG_mapped to PG_dcache_dirty.
This inverts the delayed dcache flush a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.

SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-22 19:20:49 +09:00
Paul Mundt c358fc46ef Merge branches 'sh/hwblk' and 'sh/platform-updates' 2009-07-20 04:28:11 +09:00
Matt Fleming 05dd2cd3bb sh: Restore previous behaviour on kernel fault
The last commit changed the behaviour on kernel faults when we were
doing something other than syncing the page tables. vmalloc_sync_one()
needs to return NULL if the page tables are up to date, because the
reason for the fault was not a missing/inconsistent page table entry. By
returning NULL if the page tables are sync'd we signal to the calling
function that further work must be done to resolve this fault.

Also, remove the superfluous __va() around the first argument to
vmalloc_sync_one(). The value of pgd_k is already a virtual address and
using it with __va() causes a NULL dereference.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-13 17:43:22 -04:00
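The calling convention this restores, sketched (the pud/pmd walk is
elided):

static pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
{
	pgd_t *pgd_k;
	pmd_t *pmd_k;

	/* pgd_k is already a virtual address; wrapping it in __va()
	 * would fault: */
	pgd_k = init_mm.pgd + pgd_index(address);
	if (!pgd_present(*pgd_k))
		return NULL;	/* tables already in sync: tells the
				 * caller the fault was not a missing
				 * entry and needs further handling */
	/* ...walk and copy the missing pud/pmd entries, setting
	 * pmd_k along the way... */
	return pmd_k;
}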
Alexey Dobriyan 405f55712d headers: smp_lock.h redux
* Remove smp_lock.h from files which don't need it (including some headers!)
* Add smp_lock.h to files which do need it
* Make smp_lock.h include conditional in hardirq.h
  It's needed only for one kernel_locked() usage which is under CONFIG_PREEMPT

  This will make hardirq.h inclusion cheaper for every PREEMPT=n config
  (which includes allmodconfig/allyesconfig, BTW)

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-07-12 12:22:34 -07:00
Paul Mundt 0f60bb25b4 sh: Tidy up vmalloc fault handling.
This rewrites the vmalloc fault handling as per x86, which subsequently
allows for easy future tie-in for vmalloc_sync_all().

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-05 03:18:47 +09:00
Paul Mundt c63c3105e4 sh: use kprobes_built_in() for notify_page_fault().
Kill off the KPROBES ifdef, as per x86.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-05 02:50:10 +09:00
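Following the x86 shape, the ifdef collapses into a plain conditional
(sketch):

static inline int notify_page_fault(struct pt_regs *regs, int trap)
{
	int ret = 0;

	if (kprobes_built_in() && !user_mode(regs)) {
		preempt_disable();
		if (kprobe_running() && kprobe_fault_handler(regs, trap))
			ret = 1;
		preempt_enable();
	}

	return ret;
}

kprobes_built_in() is a compile-time constant, so the compiler discards
the body entirely on !CONFIG_KPROBES kernels without any preprocessor
guards.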
Matt Fleming 5084f61a4d sh: Use bootmem on top of lmb for NUMA
Like the UP case, use lmb as the foundation of memory resource
management on NUMA.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-05 00:32:11 +09:00
Paul Mundt 163b2f0ba9 sh64: Hook up page fault events for software perf counters.
sh64 can use these as well, so wire them up there too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-25 02:49:03 +09:00
Paul Mundt 7433ab7703 sh: Hook up page fault events for software perf counters.
This adds page fault instrumentation for the software performance
counters. Follows the x86 and powerpc changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-25 02:30:10 +09:00
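The instrumentation boils down to counting each fault and classifying it
as major or minor, along these lines (using the perf counter API of that
era; a sketch, not the exact diff):

	perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS, 1, 0, regs, address);
	...
	if (fault & VM_FAULT_MAJOR) {
		tsk->maj_flt++;
		perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
				     1, 0, regs, address);
	} else {
		tsk->min_flt++;
		perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
				     1, 0, regs, address);
	}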
Paul Mundt b29fa1fbc2 sh: Wire up the uncached fixmap on sh64 as well.
Now that sh64 also can use the uncached section, wire up the fixmap for
it as well.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-23 17:30:17 +09:00
Paul Mundt 997d003093 sh: Use local TLB flush in set_pte_phys().
set_pte_phys() presently uses the global flush_tlb_one(), which locks on
SMP trying to do the IPI. As we have not even initialized the other CPUs
at this point, switch to the local_ variant so the flush happens on the
boot CPU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-23 17:30:17 +09:00
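The change is a one-line substitution in set_pte_phys(), roughly:

	/* before: flush_tlb_one() tries to IPI secondary CPUs that
	 * are not even up yet, and locks up */
	flush_tlb_one(get_asid(), addr);

	/* after: flush only on the boot CPU doing the update */
	local_flush_tlb_one(get_asid(), addr);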
Linus Torvalds d06063cc22 Move FAULT_FLAG_xyz into handle_mm_fault() callers
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault().  All callers have been (mechanically)
converted to the new calling convention, there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-06-21 13:08:22 -07:00
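For a typical architecture fault handler the mechanical conversion looks
like this (sketch):

	/* before: a boolean write argument */
	fault = handle_mm_fault(mm, vma, address, write);

	/* after: an explicit flags word, extensible later with
	 * FAULT_FLAG_RETRY and friends */
	fault = handle_mm_fault(mm, vma, address,
				write ? FAULT_FLAG_WRITE : 0);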
Christoph Hellwig 4505ffda54 sh: remove stray markers.
arch/sh has a couple of stray markers without any users introduced
in commit 3d58695edb.  Remove them in
preparation of removing the markers in favour of the TRACE_EVENT
macro (and also because we don't keep dead code around).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-18 13:38:26 +09:00
Paul Mundt 8fc40238b4 sh: Prefer slab_is_available() over after_bootmem.
This kills off after_bootmem and switches to using slab_is_available()
instead. Presently the only place this is used is by the sh64 ioremap,
and there's not much point in keeping the reference around otherwise.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-22 14:21:03 +09:00
Paul Mundt ad3256e361 sh: Provide FORCE_MAX_ZONEORDER.
Several platforms want to be able to do large physically contiguous
allocations (primarily nommu and video codecs on SH-Mobile), so provide
a MAX_ORDER override for those cases.

Tested-by: Conrad Parker <conrad@metadecks.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-14 17:40:08 +09:00
Paul Mundt b412a49af9 sh: Consolidate the boot link and entry offset definitions.
Consolidate these in a single place in the Kconfig menus. At the same
time, disable their interactivity and set them according to the board
config defaults.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-10 01:23:25 +09:00
Paul Mundt 2fedaacdc0 sh: Cleanup irqflags size mismatch on SH-5 build.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-09 14:38:49 +09:00
Paul Mundt 0fb849b9d7 sh: Integrate the SH-5 onchip_remap() more coherently.
Presently this is special-cased for early initialization. While there are
situations where these static early initializations are still necessary,
with minor changes it is possible to use this for the regular ioremap
implementation as well. This allows us to kill off the special-casing for
the remap completely and to start tidying up all of the SH-5
special-casing in drivers.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-07 18:10:27 +09:00
Paul Mundt ee1acbfabd sh: Handle shm_align_mask also for HAVE_ARCH_UNMAPPED_AREA_TOPDOWN.
Presently shm_align_mask is only looked at for the bottom up case, but we
still want this for proper colouring constraints in the topdown case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-07 16:38:16 +09:00
Paul Mundt 99f95f1178 sh: pci: Rework fixed region checks in ioremap().
Not all PCI channels have non-translatable memory windows, this is a
special property of the on-chip PCIC with its 0xfd00... mapping, handle
this explicitly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-04-20 18:24:57 +09:00
Magnus Damm ef339f241b sh: pci memory range checking code
This patch changes the code to use __is_pci_memory() instead of
is_pci_memaddr(). __is_pci_memory() loops through all the pci
channels on the system to match memory windows.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-04-16 16:00:14 +09:00
Paul Mundt 7b41f5688c sh: Pre-allocate a reasonable number of DMA debug entries.
This prevents the DMA API debugging from running out of entries right
away on boot. Defines 4096 entries by default, which, while a bit on
the heavy side, ought to leave enough breathing room for some time.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-04-14 15:22:15 +09:00
Paul Mundt f802d969b6 sh: Add support for DMA API debugging.
This wires up support for the generic DMA API debugging.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-04-09 10:36:54 -07:00
Paul Mundt e8208828dc sh: Kill off broken direct-mapped cache mode.
Forcing direct-mapped worked on certain older 2-way set associative
parts, but was always error prone on 4-way parts. As these are the
norm these days, there is not much point in continuing to support this
mode. Most of the folks that used direct-mapped mode generally just
wanted writethrough caching in the first place.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-04-02 17:40:16 +09:00
Paul Mundt 3a3b311ca3 sh: Update debugfs ASID dumping for 16-bit ASID support.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-17 17:59:31 +09:00
Paul Mundt c54a43e90b sh: tlb-pteaex: Kill off legacy PTEA updates.
While harmless, PTEA has different semantics on these parts, and is only
used in extended TLB mode. Kill off the legacy support.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-17 17:58:33 +09:00
Paul Mundt 8263a67e16 sh: Support for extended ASIDs on PTEAEX-capable SH-X3 cores.
This adds support for extended ASIDs (up to 16-bits) on newer SH-X3 cores
that implement the PTEAEX register and respective functionality.
Presently this is only the 65nm SH7786 (the 90nm part supports only
legacy 8-bit ASIDs).

The main change is in how the PTE is written out when loading the entry
in to the TLB, as well as in how the TLB entry is selectively flushed.

While SH-X2 extended mode splits out the memory-mapped U and I-TLB data
arrays for extra bits, extended ASID mode splits out the address arrays.
While we don't use the memory-mapped data array access, the address
array accesses are necessary for selective TLB flushes, so these are
implemented newly and replace the generic SH-4 implementation.

With this, TLB flushes in switch_mm() are almost non-existent on newer
parts.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-17 17:49:49 +09:00
Francesco VIRLINZI a83c0b739f sh: PMB hibernation support
This implements preliminary suspend/resume support for the PMB.

Signed-off-by: Francesco Virlinzi <francesco.virlinzi@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-16 19:46:17 +09:00
Yoshihiro Shimoda 2f47f44790 sh: Support fixed 32-bit PMB mappings from bootloader.
This provides a method for supporting fixed PMB mappings inherited from
the bootloader, as an alternative to the dynamic PMB mapping currently
used by the kernel. In the future these methods will be combined.

The P1/P2 area is handled like a regular 29-bit physical address, and
local bus devices are assigned P3 area addresses.

Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-10 15:49:54 +09:00
Magnus Damm 3e91faec47 sh: fix P4 iounmap() pass-through
Fix iounmap() of pass-through P4 addresses. Without this patch
iounmap() on the sh7780 rtc area results in a warning message.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-02-27 16:50:00 +09:00
Julia Lawall 7bfa122c19 arch/sh/mm: Move a dereference below a NULL test
If the NULL test is necessary, then the dereference should be moved below
the NULL test.

The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/).

// <smpl>
@disable is_null@
identifier f;
expression E;
identifier fld;
statement S;
@@

+ if (E == NULL) S
  f(...,E->fld,...);
- if (E == NULL) S

@@
identifier f;
expression E;
identifier fld;
statement S;
@@

+ if (!E) S
  f(...,E->fld,...);
- if (!E) S
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-01-21 17:41:14 +09:00
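In C terms the transformation is simply the following (the variable and
field names here are hypothetical):

	/* before: the dereference precedes the check */
	size = res->end - res->start;
	if (!res)
		return -EINVAL;

	/* after: check first, then dereference */
	if (!res)
		return -EINVAL;
	size = res->end - res->start;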
Gary Hade c04fc586c1 mm: show node to memory section relationship with symlinks in sysfs
Show node to memory section relationship with symlinks in sysfs

Add /sys/devices/system/node/nodeX/memoryY symlinks for all
the memory sections located on nodeX.  For example:
/sys/devices/system/node/node1/memory135 -> ../../memory/memory135
indicates that memory section 135 resides on node1.

Also revises documentation to cover this change as well as updating
Documentation/ABI/testing/sysfs-devices-memory to include descriptions
of memory hotremove files 'phys_device', 'phys_index', and 'state'
that were previously not described there.

In addition to it always being a good policy to provide users with
the maximum possible amount of physical location information for
resources that can be hot-added and/or hot-removed, the following
are some (but likely not all) of the user benefits provided by
this change.
Immediate:
  - Provides information needed to determine the specific node
    on which a defective DIMM is located.  This will reduce system
    downtime when the node or defective DIMM is swapped out.
  - Prevents unintended onlining of a memory section that was
    previously offlined due to a defective DIMM.  This could happen
    during node hot-add when the user or node hot-add assist script
    onlines _all_ offlined sections due to user or script inability
    to identify the specific memory sections located on the hot-added
    node.  The consequences of reintroducing the defective memory
    could be ugly.
  - Provides information needed to vary the amount and distribution
    of memory on specific nodes for testing or debugging purposes.
Future:
  - Will provide information needed to identify the memory
    sections that need to be offlined prior to physical removal
    of a specific node.

Symlink creation during boot was tested on 2-node x86_64, 2-node
ppc64, and 2-node ia64 systems.  Symlink creation during physical
memory hot-add tested on a 2-node x86_64 system.

Signed-off-by: Gary Hade <garyhade@us.ibm.com>
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-06 15:59:00 -08:00
Magnus Damm da9fdc8b44 sh: split coherent pages
Split pages returned by dma_alloc_coherent() and make sure
we free them one by one.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:44:48 +09:00
Paul Mundt ab6e570ba3 sh: Generic kgdb stub support.
This migrates from the old bitrotted kgdb stub implementation and moves
to the generic stub. In the process support for SH-2/SH-2A is also added,
which the old stub never provided.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:44:04 +09:00
Paul Mundt 35724a0aed sh: Fix up the cpu_asid() return value on nommu.
This ought to be unsigned long, rather than defaulting to int.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:44:03 +09:00
Paul Mundt a99d6fde69 sh: Convert sh64 /proc/asids to debugfs and generic sh.
This converts the sh64 /proc/asids entry to debugfs and enables it for
all SH parts that have debugfs enabled.

On MMU systems this can be used to determine which processes are using
which ASIDs which in turn can be used for finer grained cache tag
analysis.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:44:03 +09:00
Magnus Damm 716777db72 sh: P4 ioremap pass-through
This patch adds a pass-through case when ioremapping P4 addresses.

Addresses passed to ioremap() should be physical addresses, so the
best option is usually to convert the virtual address to a physical
address before calling ioremap. This will give you a virtual address
in P2 which matches the physical address and this works well for
most internal hardware blocks on the SuperH architecture.

However, some hardware blocks must be accessed through P4. Converting
the P4 address to a physical and then back to a P2 does not work. One
example of this is the sh7722 TMU block, it must be accessed through P4.

Without this patch P4 addresses will be mapped using PTEs, which
requires the page allocator to be up and running.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:43:48 +09:00
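The pass-through amounts to an early return before any page-table work,
roughly:

void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
			unsigned long flags)
{
	/* P4 addresses are always accessed directly, so hand the
	 * address straight back instead of building PTEs (which
	 * would require the page allocator to be up): */
	if (unlikely(phys_addr >= P4SEG))
		return (void __iomem *)phys_addr;

	/* ...normal PTE-based remap for everything else... */
}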
Paul Mundt 4a4a9be3eb sh: Move arch_get_unmapped_area() in to arch/sh/mm/mmap.c.
Now that arch/sh/mm/mmap.c exists, move arch_get_unmapped_area() there.
Follows the ARM change.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:42:49 +09:00
Paul Mundt 10840f034e sh: Don't factor in PAGE_OFFSET for valid_phys_addr_range() check.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-11-13 15:38:02 +09:00
Paul Mundt 185aed7557 sh: Provide a sane valid_phys_addr_range() to prevent TLB reset with PMB.
With the PMB enabled, only P1SEG and up are covered by the PMB mappings,
meaning that situations where out-of-bounds physical addresses are read
from will lead to TLB reset after the PMB miss, allowing for use cases
like dd if=/dev/mem to reset the TLB.

Fix this up to make sure the reference is between __MEMORY_START (phys)
and __pa(high_memory). This is coherent across all variants of sh/sh64
with and without MMU, though the PMB bug itself is only applicable to
SH-4A parts.

Reported-by: Hideo Saito <saito@densan.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-11-12 12:53:48 +09:00
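The described check reduces to bounding the reference between the start
of physical memory and the top of lowmem, roughly:

int valid_phys_addr_range(unsigned long addr, size_t count)
{
	if (addr < __MEMORY_START)
		return 0;
	if (addr + count > __pa(high_memory))
		return 0;

	return 1;
}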
Paul Mundt acca4f4d9b sh: Handle fixmap TLB eviction more coherently.
There was a race in the kmap_coherent() implementation. While we
guarded against preemption, there was nothing preventing eviction of
the pre-faulted fixmap entry from the UTLB. Under certain workloads
this would result in the fixmap entries used for cache colouring being
evicted from the UTLB in the midst of a copy_page().

In addition to pre-faulting, we also make sure to preserve the PTEs
in the kernel page table and introduce a cached PTE for kmap_coherent()
usage. This follows a similar change on MIPS ("[MIPS] Fix aliasing bug
in copy_to_user_page / copy_from_user_page").

Reported-by: Hideo Saito <saito@densan.co.jp>
Reported-by: CHIKAMA Masaki <masaki.chikama@gmail.com>
Tested-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-11-10 20:00:45 +09:00
Yoshihiro Shimoda 216813a8bb sh: fix sh2a cache entry_mask
Fix the sh2a cache entry_mask in __flush_{purge,invalidate}_region.

Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-10-31 16:29:20 +09:00
Andrew Morton 5e451d9c9d sh: Kill off duplicate remove_memory() definition.
Use the generic remove_memory() provided by mm/memory_hotplug.c instead.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-10-21 12:51:51 +09:00
Magnus Damm 7713718751 sh: remove consistent alloc cruft
Remove left overs from the generic declared coherent rework.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-10-20 11:38:44 +09:00
Zhaolei 25627c7fd7 Fix debugfs_create_file's error checking method for arch/sh/mm/
debugfs_create_file() returns NULL if an error occurs, and returns
-ENODEV when debugfs is not enabled in the kernel.

Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-10-20 10:40:21 +09:00
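The corrected idiom distinguishes the two failure modes (a sketch; the
file name and fops are illustrative):

	dentry = debugfs_create_file("cache", S_IRUSR, sh_debugfs_root,
				     NULL, &cache_debugfs_fops);
	if (!dentry)
		return -ENOMEM;		/* creation failed */
	if (IS_ERR(dentry))
		return PTR_ERR(dentry);	/* -ENODEV: debugfs disabled */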
Paul Mundt 3d58695edb sh: Trivial trace_mark() instrumentation for core events.
This implements a few trace points across events that are deemed
interesting. This implements a number of trace points:

	- The page fault handler / TLB miss
	- IPC calls
	- Kernel thread creation

The original LTTng patch had the slow-path instrumented, which
fails to account for the vast majority of events. In general
placing this in the fast-path is not a huge performance hit, as
we don't take page faults for kernel addresses.

The other bits of interest are some of the other trap handlers, as
well as the syscall entry/exit (which is better off being handled
through the tracehook API). Most of the other trap handlers are corner
cases where alternate means of notification exist, so there is little
value in placing extra trace points in these locations.

Based on top of the points provided both by the LTTng instrumentation
patch as well as the patch shipping in the ST-Linux tree, albeit in a
stripped down form.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-21 13:56:39 +09:00
Paul Mundt 8f2baee280 sh: Kill off duplicate page fault notifiers in slow path.
We already have hooks in place in the __do_page_fault() fast-path,
so kill them off in the slow path.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-21 12:11:25 +09:00
Paul Mundt 887f1ae3bc sh: Look up the trap vector for the page fault notifier.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-21 12:06:43 +09:00
Paul Mundt c15c5f8c2b sh: Support kernel stacks smaller than a page.
This follows the powerpc commit f6a616800e
'[POWERPC] Fix kernel stack allocation alignment'.

SH has traditionally forced the thread order to be relative to the page
size, so there were never any situations where the same bug was
triggered by slub. Regardless, the usage of > 8kB stacks for the larger
page sizes is overkill, so we switch to using slab allocations there,
as per the powerpc change.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-20 20:21:33 +09:00
Paul Mundt b85641bdde sh: Make memory hot-add and hot-remove depend on MMU.
Cleans up numerous link and build issues with page migration and so on
when enabled on nommu builds.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-17 23:13:27 +09:00
Paul Mundt 037c10a612 sh: kprobes: Hook up kprobe_fault_handler() in the page fault path.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 12:22:47 +09:00
Paul Mundt 205a3b4328 sh: uninline flush_icache_all().
This uses jump_to_uncached() which is now given the noinline attribute
due to the special section mapping. Kill off the inline attribute to
fix up compilation failure.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:06 +09:00
Marek Skuczynski b6c20e4290 sh: remove unnecessary memset after alloc_bootmem_low_pages
Because the alloc_bootmem functions always return zeroed memory, an
additional memset on the allocated memory is unnecessary.

Signed-off-by: Marek Skuczynski <M.Skuczynski@adbglobal.com>
Signed-off-by: Carl Shaw <carl.shaw@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:05 +09:00
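The cleanup is a straight deletion, e.g.:

	/* before */
	buf = alloc_bootmem_low_pages(size);
	memset(buf, 0, size);	/* redundant */

	/* after: bootmem memory is already zeroed */
	buf = alloc_bootmem_low_pages(size);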
Stuart Menefy 96e14e54a6 sh: vmalloc pgtable sync fix.
This fixes a problem in the code which copies the vmalloc portion of the
kernel's page table into the current user space page table. The addition
of the four level page table code breaks on folded page tables, because
the pud level is always present (although folded). This updates the code
to use the same style of updates for the pud as is used for the pgd
level.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:04 +09:00
Stuart Menefy c6feb6142c sh: early cached_to_uncached initialization.
Statically initialise the cached_to_uncached offset so that we can use
it immediately.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:04 +09:00
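The static initialisation is a one-liner, roughly:

	/* before: computed at runtime in mem_init(), so unusable by
	 * anything that runs earlier */
	unsigned long cached_to_uncached;

	/* after: the P1-to-P2 offset is a compile-time constant */
	unsigned long cached_to_uncached = P2SEG - P1SEG;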
Paul Mundt 3159e7d62a sh: Add support for memory hot-remove.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:04 +09:00
Paul Mundt fa43972fab sh: fixup many sparse errors.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:04 +09:00
Magnus Damm c773d8af8e sh: fix platform_resource_setup_memory() section mismatch
This patch kills a section mismatch for platform_resource_setup_memory().

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-08-28 14:53:08 +09:00
Magnus Damm 0c13bf1e7c sh: select memchunk size using kernel cmdline
Allow user to pass parameters on kernel command line to override
default size for physically contiguous memory buffers. The default
VPU buffer size is too small for VGA hardware encoding, but instead
of just bumping up the number we allow the user to override the
default size using the command line. Supports SuperH Mobile hardware
blocks such as VEU, VPU and CEU.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-08-11 20:17:55 +09:00
Paul Mundt 68b7c24cf9 sh: Disable 64kB hugetlbpage size when using 64kB PAGE_SIZE.
Presently we oops in mm/hugetlb.c:1325, which is the order == 0 test in
hugetlb_add_hstate() called at initialization time. So, disable 64kB
huge pages when we're using a 64kB PAGE_SIZE. On most parts this will
force the default to be 1MB huge pages.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-08-11 20:17:49 +09:00
Yoshinori Sato cce2d453e4 SH2(A) cache update
Includes:
- SH2 (7619) Writeback support.
- SH2A cache handling fix.

Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-08-04 16:33:47 +09:00