This zeroes out the number of cache aliases in the cache info descriptors
when hardware alias avoidance is enabled. This cuts down on the amount of
flushing taken care of by common code, and also permits coherency control
to be disabled for the single CPU and 4k page size case.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables support for the hardware synonym avoidance handling on SH-X3
CPUs for the case where dcache aliases are possible. icache handling is
retained, but we flip on broadcasting of the block invalidations due to
the lack of coherency otherwise on SMP.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
While the MMUCR.URB and ITLB/UTLB differentiation works fine for all SH-4
and later TLBs, these features are absent on SH-3. This splits out
local_flush_tlb_all() into SH-4 and PTEAEX copies while restoring the
old SH-3 one, subsequently fixing up the build.
This will probably want some further reordering and tidying in the
future, but that's out of scope at present.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the
following script is used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
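As an illustration of the kind of edit this produces (file contents invented for the example), a .c file that only uses the slab API gains a direct include rather than relying on the indirect percpu.h -> slab.h pickup:
  /* before: slab.h only came in indirectly via sched.h/percpu.h */
  #include <linux/module.h>

  /* after: include what is actually used; slab users get slab.h,
   * files that only use gfp flags get gfp.h instead */
  #include <linux/module.h>
  #include <linux/slab.h>         /* kmalloc()/kfree(), pulls in gfp.h */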
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, and for others adding it to an
implementation .h or embedding .c file was more appropriate. This
step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Export the status of the utlb and itlb entries through debugfs.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the TLB wiring code depends on MMUCR.URB for working out where
to place the wired entry, but fails to take the replacement counter into
consideration. This fixes up the wiring logic and ensures that wired
entries remain so.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
set_pmb_entry() is now only used by a function that is wrapped in #ifdef
CONFIG_PM, so wrap set_pmb_entry() in CONFIG_PM too.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Setting the TI in MMUCR causes all the TLB bits in MMUCR to be
cleared. Unfortunately, the TLB wired bits are also cleared when setting
the TI bit, causing any wired TLB entries to become unwired.
Use local_flush_tlb_all() which implements TLB flushing in a safer
manner by using the memory-mapped TLB registers. As each CPU has its own
PMB the modifications in pmb_init() only affect the local CPU, so only
flush the local CPU's TLB.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
flush_tlb_page() can be used to flush TLB entries that map executable
pages. Therefore, we need to ensure that the ITLB is also flushed in
local_flush_tlb_page().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The uncached_start was being set up properly for 32-bit but managed to
break 29-bit in the process, fix it up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
... so the "sh_debugfs_root" is already available. Previously it
wasn't and as a result its path was "/sys/kernel/debug/pmb" instead of
"/sys/kernel/debug/sh/pmb".
Signed-off-by: Pawel Moll <pawel.moll@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently we run into issues with the MMU resetting the CPU when
variable sized mappings are employed. This takes a slightly more
aggressive approach to keeping the TLB and cache state sane before
establishing the mappings in order to cut down on races observed on
SMP configurations.
At the same time, we bump the VMA range up to the 0xb000...0xc000 range,
as there still seems to be some undocumented behaviour in setting up
variable mappings in the 0xa000...0xb000 range, resulting in reset by the
TLB.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In the case of NUMA emulation when in-range PPNs are being used for
secondary nodes, we need to make sure that the PMB has a mapping for it
before setting up the pgdat. This prevents the MMU from resetting.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When entries are being bolted unconditionally it's possible that the boot
loader has established mappings that are within range that we don't want
to clobber. Perform some basic validation to ensure that the new mapping
is out of range before allowing the entry setup to take place.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves the pmb_remap_caller() mapping logic out into
pmb_bolt_mapping(), which enables us to establish fixed mappings in
places such as the NUMA code.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in an early_param for permitting transparent PMB-backed
ioremapping to be enabled/disabled. For the time being, we use a
default-disabled policy.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements a fairly significant overhaul of the dynamic PMB mapping
code. The primary change here is that the PMB gets its own VMA that
follows the uncached mapping and we attempt to be a bit more intelligent
with dynamic sizing, multi-entry mapping, and so forth.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Since <linux/spinlock.h> already includes <linux/rwlock.h>, and the
latter file will warn about not having included the former file
anyway, there is no value in including rwlock.h explicitly.
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This hooks up the SET/GET_UNALIGN_CTL knobs, cribbing the bulk of it from
the PPC and ia64 implementations. The thread flags happen to be the
logical inverse of what the global fault mode is set to, so this works
out pretty cleanly. By default the global fault mode is used, with tasks
now being able to override their own settings via prctl().
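For reference, a minimal userspace sketch of the per-task override described above, using the generic prctl() knobs (nothing here is SH-specific):
  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          unsigned int ctl = 0;

          /* have unaligned accesses raise SIGBUS for this task only,
           * overriding the global fault mode */
          if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS) < 0)
                  perror("PR_SET_UNALIGN");

          if (prctl(PR_GET_UNALIGN, &ctl) == 0)
                  printf("unaligned ctl: %u\n", ctl);

          return 0;
  }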
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies. We do this via make_coherent() by making the pages
uncacheable.
This used to work fine, until we allowed highmem with highpte - we
now have a page table which is mapped as required, and is not available
for modification via update_mmu_cache().
Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():
On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
to construct a pointer to the pte again. Passing a pte_t * is much
more elegant. Maybe we might even replace the pte argument with the
pte_t?
Ben Herrenschmidt would also like the pte pointer for PowerPC:
Passing the ptep in there is exactly what I want. I want that
-instead- of the PTE value, because I have issue on some ppc cases,
for I$/D$ coherency, where set_pte_at() may decide to mask out the
_PAGE_EXEC.
So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.
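In prototype terms the change boils down to the following sketch (per-arch declarations differ in detail):
  /* before: implementations only saw a snapshot of the PTE value */
  void update_mmu_cache(struct vm_area_struct *vma,
                        unsigned long address, pte_t pte);

  /* after: implementations get the mapped page table pointer and can
   * (re)read the entry themselves via *ptep */
  void update_mmu_cache(struct vm_area_struct *vma,
                        unsigned long address, pte_t *ptep);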
Includes a fix from Stephen Rothwell:
sparc: fix fallout from update_mmu_cache API change
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This implements a bit of rework for the PMB code, which permits us to
kill off the legacy PMB mode completely. Rather than trusting the boot
loader to do the right thing, we do a quick verification of the PMB
contents to determine whether to have the kernel set up the initial
mappings or whether it needs to mangle them later on instead.
If we're booting from legacy mappings, the kernel will now take control
of them and make them match the kernel's initial mapping configuration.
This is accomplished by breaking the initialization phase out into
multiple steps: synchronization, merging, and resizing. With the recent
rework, the synchronization code establishes page links for compound
mappings already, so we build on top of this for promoting mappings and
reclaiming unused slots.
At the same time, the changes introduced for the uncached helpers also
permit us to dynamically resize the uncached mapping without any
particular headaches. The smallest page size is more than sufficient for
mapping all of kernel text, and as we're careful not to jump to any far
off locations in the setup code, the mapping can safely be resized
regardless of whether we are executing from it or not.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The PMB code is an example of something that spends an absurd amount of
time running uncached when only a couple of operations really need to be.
This switches over to the shiny new uncached helpers, permitting us to
spend far more time running cached.
Additionally, MMUCR twiddling is perfectly safe from cached space given
that it's paired with a control register barrier, so fix that up, too.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements some locking for the PMB code. A high level rwlock is
added for dealing with rw accesses on the entry map while a per-entry
data structure spinlock is added to deal with the PMB entry changing out
from underneath us.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Write-through PMB mappings still require the cache bit to be set, even if
they're to be flagged with a different cache policy and bufferability
bit. To reduce some of the confusion surrounding the flag encoding we
centralize the cache mask based on the system cache policy while we're at
it.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in entry sizing support for existing mappings and then builds
on top of that for linking together entries that are mapping contiguous
areas. This will ultimately permit us to coalesce mappings and promote
head pages while reclaiming PMB slots for dynamic remapping.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds some helper routines for uncached mapping support. This
simplifies some of the cases where we need to check the uncached mapping
boundaries in addition to giving us a centralized location for building
more complex manipulation on top of.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Some overdue cleanup of the PMB code, killing off unused functionality
and duplication sprinkled about the tree.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Both the store queue API and the PMB remapping take unsigned long for
their pgprot flags, which cuts off the extended protection bits. In the
case of the PMB this isn't really a problem since the cache attribute
bits that we care about are all in the lower 32-bits, but we do it just
to be safe. The store queue remapping on the other hand depends on the
extended prot bits for enabling userspace access to the mappings.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This merges the code for iterating over the legacy PMB mappings and the
code for synchronizing software state with the hardware mappings. There's
really no reason to do the same iteration twice, and this also buys us
the legacy entry logging facility for the dynamic PMB case.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The PMB initialization code walks the entries and synchronizes the
software PMB state with the hardware mappings, preserving the slot index.
Unfortunately pmb_alloc() only tested the bit position in the entry map
and failed to set it, so subsequent remaps could be dynamically
assigned a slot that trampled an existing boot mapping, with general
badness ensuing.
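A hedged sketch of the shape of the fix (assuming the single unsigned long bitmap referred to elsewhere in this log as pmb_map; not the literal diff):
  #include <linux/bitops.h>
  #include <linux/errno.h>

  static unsigned long pmb_map;

  static int pmb_alloc_slot(int pos)
  {
          /* test_bit() alone left the slot marked free, so a later
           * allocation could hand out the same slot again; test and
           * claim it in a single atomic step instead */
          if (test_and_set_bit(pos, &pmb_map))
                  return -EBUSY;

          return pos;
  }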
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the uncached mapping support under its own config option,
presently only used by 29-bit mode and 32-bit + PMB. This will make it
possible to optionally add an uncached mapping on sh64 as well as booting
without an uncached mapping for 32-bit.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off the deprecated fixed memory range accessors for
the cases of non-translatable ioremapping.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The old ctrl in/out routines are non-portable and unsuitable for
cross-platform use. While drivers/sh has already been sanitized, there
is still quite a lot of code that is not. This converts the arch/sh/ bits
over, which permits us to flag the routines as deprecated whilst still
building with -Werror for the architecture code, and to ensure that
future users are not added.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that cached_to_uncached works as advertised in 32-bit mode and we're
never going to be able to map < 16MB anyways, there's no need for the
special uncached section. Kill it off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a variable for tracking the uncached mapping size, and uses
it for pretty printing the uncached lowmem range. Beyond this, we'll also
be building on top of this for figuring out from where the remainder of
P2 becomes usable when constructing unrelated mappings.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This cribs the pretty printing from arch/x86/mm/init_32.c to dump the
virtual memory layout on boot. This is primarily intended as a debugging
aid, given that the newer CPUs have full control over their address space
and as such have little to nothing in common with the legacy layout.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
iounmap_fixed() had a couple of bugs in it that caused it to effectively
fail at life. The total number of pages to unmap was calculated by
factoring in the mapping offset and aligning up to the next page
boundary, which doesn't match the ioremap_fixed() behaviour.
When ioremap_fixed() pegs a slot, the address in the mapping data already
contains the offset displacement, and the size is recorded verbatim given
that we're only interested in total number of pages required. As such, we
need to calculate the total number from the original size in the unmap
path as well.
At the same time, there was also an off-by-1 problem in the fixmap index
calculation which has also been corrected.
Previously subsequent remaps of an identical fixmap index would trigger
the pte_ERROR() in set_pte_phys():
arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
With this patch in place, the iounmap-driven fixmap teardown actually
does what it's supposed to do.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently __in_29bit_mode() is only defined for the PMB case, but
it's also easily derived from the CONFIG_29BIT and CONFIG_32BIT &&
CONFIG_PMB=n cases.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the build bails with the following:
CC arch/sh/mm/alignment.o
cc1: warnings being treated as errors
arch/sh/mm/alignment.c: In function 'unaligned_fixups_notify':
arch/sh/mm/alignment.c:69: warning: cast to pointer from integer of different size
arch/sh/mm/alignment.c:74: warning: cast to pointer from integer of different size
make[2]: *** [arch/sh/mm/alignment.o] Error 1
This is due to the fact that regs->pc is always 64-bit, while the pointer size
depends on the ABI. Wrapping through instruction_pointer() takes care of the
appropriate casting for both configurations.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The plans for _PAGE_WIRED were detailed in a comment with the fixmap
code, but as it's now all taken care of, we no longer have any reason for
keeping it around, particularly since it's no longer accurate. Kill it
off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently this is duplicated between tlb-sh4 and tlb-pteaex. Split the
helpers out into a generic tlb-urb that can be used by any parts
equipped with MMUCR.URB.
At the same time, move the SH-5 code out-of-line, as we require single
global state for DTLB entry wiring.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is already taken care of in the top-level ioremap, and now that
no one should be calling ioremap_fixed() directly we can simply throw the
mapping displacement in as an additional argument.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently 'flags' gets passed around a lot between the various ioremap
helpers and implementations, which is only 32 bits. In the X2TLB case
we use 64-bit pgprots which presently results in the upper 32 bits being
chopped off (which handily include our read/write/exec permissions).
As such, we convert everything internally to using pgprot_t directly and
simply convert over with pgprot_val() where needed. With this in place,
transparent fixmap utilization for early ioremap works as expected.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The mem_init_done test makes sure that this path is only entered in
__init cases, so leaving ioremap_fixed() as __init and flagging the
caller __init_refok is sufficient.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
iounmap() should balance whatever is done by ioremap(). Presently
ioremap() can do any of fixed mappings, PMB mappings, or page table
mappings, but only the latter two are handled through the standard
unmap path, so tie in the fixed unmapping, too.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This converts iounmap_fixed() to return success/error if it handled the
unmap request or not. At the same time, drop the __init label, as it
can be called later on.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There is nothing of interest in the _64 version anymore, so the _32 one
can be renamed and used unconditionally.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds in a mem_init_done check to work out when a standard ioremap() is
possible, falling back to the fixmap based ioremap otherwise.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
More and more boards that boot with the MMU in 32BIT mode by default
are going to start shipping. Previously we relied on the bootloader to
set up PMB mappings for use by the kernel, but we also need to cater
for boards whose bootloaders don't set them up.
If CONFIG_PMB_LEGACY is not enabled we have full control over our PMB
mappings and can compress our address space. The distance between the
cached and uncached mappings of RAM is normally 512MB; however, we can
compress that distance to the amount of RAM on the board.
pmb_init() now becomes much simpler. It no longer has to calculate any
mappings; it just has to synchronise the software PMB table with the
hardware.
Tested on SDK7786 and SH7785LCR.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This tidies up the iounmap path with consolidated checks for
nontranslatable mappings. This is in preparation for unifying
the implementations.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Use the fixmap-based memory mapping implementation for SH-5's ioremap()
functions and delete the old static allocator that was borrowed from
sparc.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Some devices need to be ioremap'd and accessed very early in the boot
process. It is not possible to use the standard ioremap() function in
this case because that requires kmalloc()'ing some virtual address space
and kmalloc() may not be available so early in boot.
This patch provides fixmap mappings that allow physical address ranges
to be remapped into the kernel address space during the early boot
stages.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Generalise the code for setting and clearing pte's and allow TLB entries
to be pinned and unpinned if the _PAGE_WIRED flag is present.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
We need some more page flags to hook up _PAGE_WIRED (and eventually
other things). So use the unused PTE bits above the PPN field as no
implementations use these for anything currently.
Now that we have _PAGE_WIRED let's provide the SH-5 functions for wiring
up TLB entries.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Provide a new extended page flag, _PAGE_WIRED and an SH4 implementation
for wiring TLB entries and use it in the fixmap code path so that we can
wire the fixmap TLB entry.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
flush_cache_all() gets called when we do some early ioremapping.
Unfortunately on SDK7786 the interrupt controller itself requires
ioremapping, leading to a bit of a chicken and egg scenario. For now,
don't bother with IPI crosscalls if there aren't any other CPUs online.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All SH-X2 and SH-X3 parts support an extended TLB mode, which has been
left as experimental since support was originally merged. Now that it's
had some time to stabilize and get some exposure to various platforms,
we can drop it as an option and default enable it across the board.
This is also good future proofing for newer parts that will drop support
for the legacy TLB mode completely.
This will also force 3-level page tables for all newer parts, which is
necessary both for the varying page sizes and larger memories.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This introduces some much overdue chainsawing of the fixed PMB support.
Fixed PMB was introduced initially to work around the fact that dynamic
PMB mode was relatively broken, though they were never intended to
converge. The main areas where there are differences are whether the
system is booted in 29-bit mode or 32-bit mode, and whether legacy
mappings are to be preserved. Any system booting in true 32-bit mode will
not care about legacy mappings, so these are roughly decoupled.
Regardless of the entry point, PMB and 32BIT are directly related as far
as the kernel is concerned, so we also switch back to having one select
the other.
With legacy mappings iterated through and applied in the initialization
path it's now possible to finally merge the two implementations and
permit dynamic remapping on top of the remaining entries regardless of
whether boot mappings are crafted by hand or inherited from the boot
loader.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the thread_info allocators are special cased, depending on
THREAD_SHIFT < PAGE_SHIFT. This provides a sensible definition for them
regardless of configuration, in preparation for extended CPU state.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the unaligned access counters and userspace bits into
their own generic interface, which will allow them to be wired up on sh64
too.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
pgtable_cache_init() has been moved out-of-line, so we also need a dummy
definition for it on nommu to fix up the build.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This has the adverse effect of converting many 29bit configs to 32bit
mode, whereas this is a change that needs to be made manually for each
platform. Turn it off by default in order to cut down on spurious bug
reports.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
While the PMB is available on SH-4A parts, SH4AL-DSP parts exclude it
altogether. As such, explicitly disable PMB support for these parts. If
this changes in the future for newer subtypes, this will have to be made
more fine-grained.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We also switched away from quicklists and instead moved to slab
caches. After benchmarking both implementations the difference is
negligible. The slab caches suit us better though because the size of a
pgd table is just 4 entries when we're using a 3-level page table layout
and quicklists always deal with pages.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
If the page is not mapped into any process's address space then aliases
cannot exist in the cache. So reduce the amount of flushing we perform.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
The previous expressions were wrong, which made free_pmd_range() explode
when using anything other than 4KB pages (which is why 8KB and 64KB
pages were disabled with the 3-level page table layout).
The problem was that pmd_offset() was returning an index of non-zero
when it should have been returning 0. This non-zero offset was used to
calculate the address of the pmd table to free in free_pmd_range(),
which ended up trying to free an object that was not aligned on a page
boundary.
Now 3-level page tables should work with 4KB, 8KB and 64KB pages.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
With some of the cache rework an address aliasing optimization was added,
but this managed to fail on certain mappings resulting in pages with
PG_dcache_dirty set never writing back their dcache lines. This patch
reverts to the earlier behaviour of simply always writing back when the
dirty bit is set.
Signed-off-by: Markus Pietrek <Markus.Pietrek@emtrion.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
If using 64-bit PTEs and 4K pages then each page table has 512 entries
(as opposed to 1024 entries with 32-bit PTEs). Unlike MIPS, SH follows
the convention that all structures in the page table (pgd_t, pmd_t,
pgprot_t, etc) must be the same size. Therefore, 64-bit PTEs require
64-bit PGD entries, etc. Using 2-levels of page tables and 64-bit PTEs
it is only possible to map 1GB of virtual address space.
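The 1GB figure is straightforward arithmetic; a small standalone calculation (purely illustrative) for the 4KB page / 64-bit PTE case:
  #include <stdio.h>

  int main(void)
  {
          unsigned long long page = 4096, pte = 8;        /* 4KB pages, 64-bit PTEs */
          unsigned long long ptrs = page / pte;           /* 512 entries per table */

          /* 2 levels: a single 512-entry PGD page, each entry covering a
           * 512-entry PTE table -> 512 * 512 * 4KB = 1GB of virtual space */
          printf("2-level coverage: %lluGB\n", (ptrs * ptrs * page) >> 30);

          /* 3 levels comfortably cover the full 4GB virtual address space */
          printf("3-level coverage: %lluGB\n", (ptrs * ptrs * ptrs * page) >> 30);

          return 0;
  }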
In order to map all 4GB of virtual address space we need to adopt a
3-level page table layout. This actually works out better for
CONFIG_SUPERH32 because we only waste 2 PGD entries on the P1 and P2
areas (which are untranslated) instead of 256.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This wires up the caller information for the ioremap VMA, which allows
for more helpful caller tracking via /proc/vmallocinfo. Follows the x86
and powerpc changes of the same nature.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch updates the NUMA version of setup_memory()
with UMA code changes and also modifies the last argument
to lmb_alloc_base() to use an address instead of pfn.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Fix the NUMA size calculation for node 0. Do the same
as the UMA version of setup_memory() and use address
instead of pfn when calculating the size.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
It does not make sense to compare virtual and physical addresses for
aliasing, only virtual addresses can be compared for aliases.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When flushing/invalidating the icache/dcache via the memory-mapped IC/OC
address arrays, the associative bit should only be used in conjunction with
virtual addresses. However, we currently flush cache lines based on physical
address, so stop using the associative bit.
It is a better strategy to use non-associative writes (and physical tags) for
flushing the caches anyway, because flushing by virtual address (as with the
A-bit set) requires a valid TLB entry for that virtual address. If one does not
exist in the TLB no exception is generated and the flush is silently ignored.
This is also future-proofing for SH-4A parts which are gradually phasing out
associative writes to the cache array due to the aforementioned case of certain
flushes silently turning into nops.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These still require more testing, so revert them for now. We keep the
off-by-1 in the fixmap colouring and drop the rest.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The previous implementation of clear_user_highpage and copy_user_highpage
checked to see if there was a D-cache aliasing issue between the user
and kernel mappings of a page, but if there was they always did a
flush with writeback on the dirtied kernel alias.
However as we now have the ability to map a page into kernel space
with the same cache colour as the user mapping, there is no need to
write back this data.
Currently we also invalidate the kernel alias as a precaution, however
I'm not sure if this is actually required.
Also correct the definition of FIX_CMAP_END so that the mappings created
by kmap_coherent() are actually at the correct colour.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This gets the build fixed up for the sh64 cache enabled case.
Disabling still needs further abstraction for independent I/D-cache
disabling.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the PMB options are limited to the CPUs they were tested
with, but the PMB is generally available on all SH-4A CPUs, so just
drop the subtype conditionals.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The icache may also contain aliases so we must account for them just
like we do when manipulating the dcache. We usually get away with
aliases in the icache because the instructions that are read from memory
are read-only, i.e. they never change. However, the place where this
bites us is when the code has been modified.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The indexes are signed; make sure they are not negative
when we read array elements.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The variable 'phys' already contains the physical address to flush. It
is not a virtual address and should not be passed to virt_to_phys().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently this was tacked on to the dma debug init bits from
fs_initcall(), which is far too late for devices setting up their own
per-device coherent areas.
Throw this in at the beginning of mem_init(), as per the x86 iommu
allocation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were previously hidden in sh_ksyms_32, despite also being needed
for sh64 now that the cache.c code is shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The hugetlb dependencies presently depend on SUPERH && MMU while the
hugetlb page size definitions depend on CPU_SH4 or CPU_SH5. This
unfortunately allows SH-3 + MMU configurations to enable hugetlbfs
without a corresponding HPAGE_SHIFT definition, resulting in the build
blowing up.
As SH-3 doesn't support variable page sizes, we tighten up the
dependencies a bit to prevent hugetlbfs from being enabled. These days
we also have a shiny new SYS_SUPPORTS_HUGETLBFS, so switch to using
that rather than adding to the list of corner cases in fs/Kconfig.
Reported-by: Kristoffer Ericson <kristoffer.ericson@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves the current dma_alloc/free_coherent() calls to a generic
variant and plugs them in for the nommu default. Other variants can
override the defaults in the dma mapping ops directly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
just a simple wrapper around the possible map, but this allows for
tying in support for some of the more exotic NUMA clusters where we can
actually do something with the topology.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add code to handle the cache disabled case. Fixes breakage introduced by
37443ef3f0 ("sh: Migrate SH-4 cacheflush
ops to function pointers."). Without this patch configuring caches off
with CONFIG_CACHE_OFF=y makes kfr2r09 and migo-r lock up in fbdev
deferred io or early user space.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the SH-4 cache flushing code uses flush_cache_4096() for most
of the real flushing work, which breaks down to a fixed 4096 unroll and
increment. Not only is this sub-optimal for larger page sizes, it's also
uncovered a bug in sh4_flush_dcache_page() when large page sizes are used
and we have no cache aliases -- resulting in only a part of the page's
D-cache lines being written back.
Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.
This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.
Reported-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We need to map the gap between 0x00000000 and __MEMORY_START in the PMB,
as well as RAM.
With this change my 7785LCR board can switch to 32bit MMU mode at
runtime.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Eventually we'll have complete control over what physical memory gets
mapped where and we can probably do other interesting things. For now
though, when the MMU is in 32-bit mode, we map physical memory into the
P1 and P2 virtual address ranges with the same semantics as they have in
29-bit mode.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Unfortunately, at the point during boot when we want to be setting up
the PMB entries, the kmem subsystem hasn't been initialised.
We now match pmb_map slots with pmb_entry_list slots. When we find an
empty slot in pmb_map, we set the bit, thereby acquiring the
corresponding pmb_entry_list entry. There is a benefit in using this
static array of struct pmb_entry's; we don't need to acquire any locks
in order to traverse the list of struct pmb_entry's.
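A minimal sketch of the scheme (entry count and helper name are illustrative; the real structures live in arch/sh/mm/pmb.c):
  #include <linux/bitops.h>

  #define NR_PMB_ENTRIES  16

  static unsigned long pmb_map;                           /* one bit per slot */
  static struct pmb_entry pmb_entry_list[NR_PMB_ENTRIES]; /* same indexing */

  static struct pmb_entry *pmb_alloc_entry(void)
  {
          int pos = find_first_zero_bit(&pmb_map, NR_PMB_ENTRIES);

          if (pos >= NR_PMB_ENTRIES)
                  return NULL;                    /* no free slots */

          set_bit(pos, &pmb_map);                 /* claim slot 'pos' ... */
          return &pmb_entry_list[pos];            /* ... and its static entry */
  }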
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.
Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
and 32-bit mode dynamically at runtime.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
To allow the MMU to be switched between 29bit and 32bit mode at runtime
some constants need to be swapped for functions that return a runtime
value.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.
Likewise, replace a P1SEGADDR() use with virt_to_phys().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Simplify set_pmb_entry() by removing the possibility of not finding a
free slot in the PMB. Instead we now allocate a slot in pmb_alloc() so
that if there are no free slots we fail at allocation time, rather than
in set_pmb_entry().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Currently, we've got the less than ideal situation where if we need to
allocate a 256MB mapping we'll allocate four entries like so,
entry 1: 128MB
entry 2: 64MB
entry 3: 16MB
entry 4: 16MB
This is because as we execute the loop in pmb_remap() we will
progressively try mapping the remaining address space with smaller and
smaller sizes. This isn't good because the size we use on one iteration
may be the perfect size to use on the next iteration, for instance when
the initial size is divisible by one of the PMB mapping sizes.
With this patch, we now only need two entries in the PMB to map 256MB of
address space,
entry 1: 128MB
entry 2: 128MB
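A small standalone demonstration of the selection change (PMB section sizes hard-coded, the request assumed to be a multiple of the smallest size; this is not the pmb_remap() code itself):
  #include <stdio.h>

  static const unsigned long sizes_mb[] = { 512, 128, 64, 16 };

  int main(void)
  {
          unsigned long left = 256;       /* MB still to map */

          while (left) {
                  unsigned int i = 0;

                  /* restart from the largest size on every pass, so 256MB
                   * becomes 128MB + 128MB rather than 128 + 64 + 16 + 16 */
                  while (sizes_mb[i] > left)
                          i++;

                  printf("entry: %luMB\n", sizes_mb[i]);
                  left -= sizes_mb[i];
          }
          return 0;
  }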
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We should favour PMB mappings when the physical address cannot be
reached with 29-bits.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
If we fail to allocate a PMB entry in pmb_remap() we must remember to
clear and free any PMB entries that we may have previously allocated,
e.g. if we were allocating a multiple entry mapping.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Fix some callers of jump_to_uncached() and back_to_cached() that were
not annotated with __uses_jump_to_uncached.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
For /proc/kcore, each arch registers its memory range by kclist_add().
Usually,
- the range of physical memory
- the range of the vmalloc area
- text, etc...
are registered, but the "range of physical memory" has some problems. It
isn't updated at memory hotplug and it tends to include unnecessary
memory holes. Now, /proc/iomem (kernel/resource.c) includes the required
physical memory range information and is properly updated at memory
hotplug, so it's better to avoid duplicating that information in our own
code and to rebuild the kclist for physical memory based on
/proc/iomem.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
For /proc/kcore, vmalloc areas are registered per arch, but all of them
register the same range of [VMALLOC_START...VMALLOC_END). This patch
unifies them. With this, archs which have no kclist_add() hooks can see
the vmalloc area correctly.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Presently, kclist_add() only takes a start address and size as its
arguments. To make kclists dynamically reconfigurable, it's necessary to
know which kclists are for System RAM and which are not.
This patch adds the following kclist types:
KCORE_RAM
KCORE_VMALLOC
KCORE_TEXT
KCORE_OTHER
This "type" is used in a following patch for detecting KCORE_RAM.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 9617729941 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'. This made the casts to 'unsigned long' in most callers superfluous,
so remove them.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bye-bye Performance Counters, welcome Performance Events!
In the past few months the perfcounters subsystem has grown out of its
initial role of counting hardware events, and has become (and is
becoming) a much broader generic event enumeration, reporting, logging,
monitoring, analysis facility.
Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.
All in all, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names. (in an ABI compatible fashion)
The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.
Thanks goes to Stephane Eranian who first observed this misnomer and
suggested a rename.
User-space tooling and ABI compatibility is not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)
This patch has been generated via the following script:
FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
sed -i \
  -e 's/PERF_EVENT_/PERF_RECORD_/g' \
  -e 's/PERF_COUNTER/PERF_EVENT/g' \
  -e 's/perf_counter/perf_event/g' \
  -e 's/nb_counters/nb_events/g' \
  -e 's/swcounter/swevent/g' \
  -e 's/tpcounter_event/tp_event/g' \
  $FILES
for N in $(find . -name perf_counter.[ch]); do
  M=$(echo $N | sed 's/perf_counter/perf_event/g')
  mv $N $M
done
FILES=$(find . -name perf_event.*)
sed -i \
  -e 's/COUNTER_MASK/REG_MASK/g' \
  -e 's/COUNTER/EVENT/g' \
  -e 's/\<event\>/event_id/g' \
  -e 's/counter/event/g' \
  -e 's/Counter/Event/g' \
  $FILES
... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change to the point in time where the amount of pending patches
is the smallest: the end of the merge window.
Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.
( NOTE: 'counters' are still the proper terminology when we deal
with hardware registers - and these sed scripts are a bit
over-eager in renaming them. I've undone some of that, but
in case there's something left where 'counter' would be
better than 'event' we can undo that on an individual basis
instead of touching an otherwise nicely automated patch. )
Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
In the SMP VIPT case the page copy/clear ops still perform colouring;
care needs to be taken that CPUs don't end up stepping on each other,
so we give them a bit of room to work with.
At the same time, we reduce the worst-case colouring given that these
pages are always consumed.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
If PAGE_SIZE is presently over 4k we do a lot of extra flushing given
that we purge the cache 4k at a time. Make it explicitly 4k per
iteration, rather than iterating for PAGE_SIZE before looping over again.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds on top of the MIPS r4k code that does roughly the same thing.
This permits the use of kmap_coherent() for mapped pages with dirty
dcache lines and falls back on kmap_atomic() otherwise.
This also fixes up a problem with the alias check and defers to
shm_align_mask directly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off the unrolled segment based flushers on SH-4 and switches
over to a generic unrolled approach derived from the writethrough segment
flusher.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
PHYSADDR() runs into issues in 32-bit mode when we do not have the
legacy P1/P2 areas mapped, as such, we need to use page_to_phys()
directly, which also happens to do the right thing in legacy 29-bit mode.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The i-cache flush in the case of VM_EXEC was added way back when as a
sanity measure, and in practice we only care about evicting aliases from
the d-cache. As a result, it's possible to drop the i-cache flush
completely here.
After careful profiling it has also become apparent that all of the work associated
with hunting down aliases and doing ranged flushing ends up generating
more overhead than simply blasting away the entire dcache, particularly
if there are many mm's that need to be iterated over. As a result of
that, just move back to flush_dcache_all() in these cases, which restores
the old behaviour, and vastly simplifies the path.
Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but this is true for all of the platforms, so move the check up
to a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There was quite a lot of tab->space damage done here from a former patch,
clean it up once and for all.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up a number of outstanding issues observed with old mappings
on the same colour hanging around. This requires some more optimal
handling, but is a safe fallback until all of the corner cases have been
handled.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.
One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds on top of the previous reversion and implements a special
on_each_cpu() variant that simply disables preemption across the call
while leaving the interrupt state to the function itself. There were some
unintended consequences with IRQ disabling in some of these paths on UP
that ran into a deadlock scenario with IRQs being missed.
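Roughly, the variant looks like this (a sketch; the helper name follows the description above):
  static void cacheop_on_each_cpu(void (*func)(void *info), void *info, int wait)
  {
          preempt_disable();

          /* cross-call the other CPUs, but leave the local IRQ state
           * alone so each flusher can decide what it needs to mask */
          smp_call_function(func, info, wait);

          func(info);
          preempt_enable();
  }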
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This reverts commit 64a6d72213.
Unfortunately we can't use on_each_cpu() for all of the cache ops, as
some of them only require preempt disabling. This seems to be the same
issue that impacts the MIPS r4k caches, on which this code was based.
This fixes up a deadlock that showed up in some IRQ context cases.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adopts the special-cased 2-way write-through dcache flusher for
N-ways and moves it in to the generic path. Assignment is done at runtime
via the check for the CCR_CACHE_WT bit in the same path as the per-way
writeback flushers.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Change the method used to flush the cache in write-through mode to
avoid corrupted data being written back to memory.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Allow peripherals before the start of RAM to be remapped.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is pure documentation, trying to explain why the cache flushing code
for the SH4 is implemented the way it is.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
on_each_cpu() takes care of IRQ and preempt handling, the localized
handling in each of the called functions can be killed off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.
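The split amounts to something like the following sketch; note that the plain on_each_cpu() call here is what later gets traded for the preempt-only crosscall helper discussed above:
  static void cache_noop(void *args) { }          /* default no-op */

  /* each CPU family points the local_ hooks at its own implementation */
  void (*local_flush_cache_all)(void *args) = cache_noop;

  /* the exported interface stays common and wraps the local variant */
  void flush_cache_all(void)
  {
          on_each_cpu(local_flush_cache_all, NULL, 1);
  }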
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the build when caches are disabled, by linking in all of
the cache routines directly. This paves the way for splitting out
separate I and D cache disabling, similar to what sh64 had, and which
we want for SH-X3 anyways.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that the SH-5 code is more or less behaving with the new cacheflush
interface, wire up the initialization code.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These will be handled through the shared cache interface instead, and
they are presently undefined anyways.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The caches enabled case needs more work, but is presently broken
regardless, so this can be done incrementally.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
flush_dcache_all() is used internally by the SH-4 cache code, it is not
part of the exported cache API, so make it static and don't export it.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().
This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than are strictly TLB or page op related,
so just consolidate them in a single nommu.c.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of reorganizing for allowing nommu to use the new
and generic cache.c, no functional changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off the ifdef from kmap_coherent_init() and just bails if
there are no cache aliases. This permits the kmap coherent code to be
used on other CPUs.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly into do_page_fault(), as these require
slow path handling.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This optimizes for the case where a CPU does not yet have a valid ASID
context associated with it, as there is then no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.
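The optimization is just a cheap early-out before any cache work is
attempted; roughly the following, with cpu_context() standing in for
however the per-CPU ASID is looked up (treat the exact spelling as an
assumption):

  #include <linux/mm_types.h>
  #include <linux/smp.h>
  #include <asm/mmu_context.h>

  void flush_cache_mm(struct mm_struct *mm)
  {
      /* No ASID on this CPU means none of this mm's user mappings can
       * be live in the caches here, so there is nothing to flush. */
      if (cpu_context(smp_processor_id(), mm) == 0)
          return;

      /* ... fall through to the real per-CPU flush ... */
  }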
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now with all of the prep work out of the way, kill off the SH-5 variants
and use the SH-4 version directly. This also takes advantage of the
unrolling that was previously done for the new version.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in some register alignment helpers for the shared flushers,
allowing them to also be used on SH-5. The main rationale here is that
in the SH-5 case we have a variable ABI, where the pointer size may not
equal the register width. This register extension is already taken care of
by the SH-5 code, and is otherwise unused in the SH-4 case.
This combines the two and allows us to kill off the SH-5 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of unrolling for the SH-4 region flushers.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the SH-4 __flush_xxx_region() functions and defines them
as weak symbols. This allows us to provide optimized versions without
having to ifdef cache-sh4.c to death.
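The mechanism is ordinary weak linkage: the generic copy is emitted as
__weak, and any CPU-specific file that provides a strong definition of the
same symbol wins at link time without touching cache-sh4.c. Sketch, with a
placeholder body:

  #include <linux/compiler.h>

  /*
   * Generic fallback; an optimized per-CPU translation unit can supply
   * its own strong __flush_wback_region() and the linker will pick it.
   */
  void __weak __flush_wback_region(void *start, int size)
  {
      /* a straightforward line-by-line writeback would live here */
  }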
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This consolidates all of the NEFF-based sign extension for SH-5.
In the future the other SH code will need to make use of this as well,
so make it generic in preparation for more 32/64 consolidation.
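Illustratively, the helper reduces to sign-extending an NEFF-bit effective
address out to the full register width (shown for NEFF = 32; treat the
exact constant and helper names as an approximation of what lands in the
tree):

  #define NEFF         32
  #define NEFF_SIGN    (1LL << (NEFF - 1))
  #define NEFF_MASK    (-1LL << NEFF)

  /* SH-5 wants effective addresses sign-extended from bit NEFF-1; on
   * 32-bit parts the extra upper bits are simply truncated away again. */
  static inline unsigned long long neff_sign_extend(unsigned long val)
  {
      unsigned long long extended = val;

      return (extended & NEFF_SIGN) ? (extended | NEFF_MASK) : extended;
  }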
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a __flush_anon_page() that handles both the aliasing and
non-aliasing cases. This fixes up some crashes with heavy
get_user_pages() users.
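A hedged sketch of the idea (helper spellings such as pages_do_alias(),
kmap_coherent() and __flush_purge_region() are assumed from the
neighbouring commits; the in-tree body differs in detail): if the user and
kernel mappings land on the same cache colour there is nothing to do,
otherwise the page is pushed out through a congruent kernel mapping before
the kernel touches it on behalf of get_user_pages().

  static void __flush_anon_page(struct page *page, unsigned long vmaddr)
  {
      unsigned long kaddr = (unsigned long)page_address(page);
      void *coloured;

      /* Same colour: kernel and user views share the same cache lines. */
      if (!pages_do_alias(kaddr, vmaddr))
          return;

      /* Different colour: flush through a congruent kernel mapping. */
      coloured = kmap_coherent(page, vmaddr);
      __flush_purge_region(coloured, PAGE_SIZE);
      kunmap_coherent();
  }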
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in a BUG_ON() in kmap_coherent() for PG_dcache_dirty pages
to catch when things go horribly wrong. Copied from the MIPS
implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The TLB miss fast-path presently calls in to update_mmu_cache() to
set up the entry, and does so with a NULL vma. Check for vma validity
in the __update_tlb() ptrace checks.
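The guard itself is tiny; the point is simply that the mm comparison is
only meaningful when a vma was passed in at all. A sketch, wrapped in a
hypothetical helper purely for illustration:

  #include <linux/mm_types.h>
  #include <linux/sched.h>

  /* The TLB-miss fastpath passes vma == NULL, while ptrace-driven updates
   * pass a vma that may belong to a foreign mm; only compare when a vma
   * actually exists. (Helper name is illustrative, the real check sits
   * inline in __update_tlb().) */
  static inline int skip_foreign_mm_update(struct vm_area_struct *vma)
  {
      return vma && current->active_mm != vma->vm_mm;
  }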
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out a separate __update_cache()/__update_tlb() for
update_mmu_cache() to wrap into. This lets us share the common
__update_cache() bits while keeping special __update_tlb() handling
broken out.
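The resulting wrapper is essentially just the following, using the
three-argument update_mmu_cache() prototype of this era (in the tree this
sits in the sh pgtable headers):

  #include <linux/mm_types.h>
  #include <asm/page.h>

  extern void __update_cache(struct vm_area_struct *vma,
                             unsigned long address, pte_t pte);
  extern void __update_tlb(struct vm_area_struct *vma,
                           unsigned long address, pte_t pte);

  /* Shared cache maintenance first, then whatever TLB preloading the
   * particular MMU flavour wants to do. */
  static inline void update_mmu_cache(struct vm_area_struct *vma,
                                      unsigned long address, pte_t pte)
  {
      __update_cache(vma, address, pte);
      __update_tlb(vma, address, pte);
  }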
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that the SH-4 page clear/copy ops are generic, they can be used for
all platforms with CONFIG_MMU=y. SH-5 remains the odd one out, but it too
will gradually be converted over to using this interface.
SH-3 platforms which do not have cache aliases will see no impact from this
change, while aliasing SH-3 platforms will get the same interface as
SH-4.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This wires up clear_user_highpage() on SH-4 and subsequently converts the
SH7705 32kB cache mode over to using it. Now that the SH-4 implementation
handles all of the dcache purging directly in the aliasing case, there is
no need to do this in the default clear_page() implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This inverts the delayed dcache flush logic a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.
SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off too.
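A hedged sketch of what the shared __update_cache() ends up doing once the
deferred-flush bit is in place (flag and helper names follow the
neighbouring commits; details approximate): flush_dcache_page() defers by
setting the bit, and the writeback happens here, just before the page
becomes user-visible at a potentially aliasing address.

  #include <linux/mm.h>

  /* boot_cpu_data, PG_dcache_dirty and __flush_wback_region() come from
   * the sh-private headers; shown here only to sketch the flow. */
  void __update_cache(struct vm_area_struct *vma,
                      unsigned long address, pte_t pte)
  {
      unsigned long pfn = pte_pfn(pte);
      struct page *page;

      if (!boot_cpu_data.dcache.n_aliases || !pfn_valid(pfn))
          return;

      page = pfn_to_page(pfn);
      if (page_mapping(page) &&
          test_and_clear_bit(PG_dcache_dirty, &page->flags))
          __flush_wback_region(page_address(page), PAGE_SIZE);
  }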
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The last commit changed the behaviour on kernel faults when we were
doing something other than syncing the page tables. vmalloc_sync_one()
needs to return NULL if the page tables are up to date, because the
reason for the fault was not a missing/inconsistent page table entry. By
returning NULL if the page tables are sync'd we signal to the calling
function that further work must be done to resolve this fault.
Also, remove the superfluous __va() around the first argument to
vmalloc_sync_one(). The value of pgd_k is already a virtual address and
using it with __va() causes a NULL dereference.
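Putting both points together, the corrected helper ends up looking roughly
like the x86 original it was modelled on (body approximate; the pud level
is folded on sh):

  static pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
  {
      unsigned int index = pgd_index(address);
      pgd_t *pgd_k;
      pud_t *pud, *pud_k;
      pmd_t *pmd, *pmd_k;

      /* 'pgd' is already a kernel virtual pointer, so no __va() here. */
      pgd += index;
      pgd_k = init_mm.pgd + index;

      if (!pgd_present(*pgd_k))
          return NULL;

      pud   = pud_offset(pgd, address);
      pud_k = pud_offset(pgd_k, address);
      if (!pud_present(*pud_k))
          return NULL;

      pmd   = pmd_offset(pud, address);
      pmd_k = pmd_offset(pud_k, address);
      if (!pmd_present(*pmd_k))
          return NULL;

      if (!pmd_present(*pmd)) {
          set_pmd(pmd, *pmd_k);
          return pmd_k;
      }

      /* Already in sync: the fault was not caused by a stale kernel page
       * table, so return NULL and let the caller keep working on it. */
      return NULL;
  }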
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* Remove smp_lock.h from files which don't need it (including some headers!)
* Add smp_lock.h to files which do need it
* Make smp_lock.h include conditional in hardirq.h
  It's needed only for one kernel_locked() usage which is under CONFIG_PREEMPT
  This will make hardirq.h inclusion cheaper for every PREEMPT=n config
  (which includes allmodconfig/allyesconfig, BTW)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This rewrites the vmalloc fault handling as per x86, which subsequently
allows for easy future tie-in for vmalloc_sync_all().
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Like the UP case, use lmb as the foundation of memory resource
management on NUMA.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds page fault instrumentation for the software performance
counters. Follows the x86 and powerpc changes.
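Concretely this amounts to bumping the software counters from
do_page_fault(); a sketch using the current perf_sw_event() spelling (the
helper has been renamed and its signature trimmed since this patch went in,
so treat the exact call as era-dependent):

  #include <linux/mm.h>
  #include <linux/perf_event.h>
  #include <linux/ptrace.h>

  /* Illustrative fragment, not the literal sh fault handler: count every
   * fault, then classify the handled ones as major or minor. */
  static void count_page_fault(struct pt_regs *regs, unsigned long address,
                               unsigned int fault)
  {
      perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

      if (fault & VM_FAULT_MAJOR)
          perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
      else
          perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
  }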
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
set_pte_phys() presently uses the global flush_tlb_one(), which locks up on
SMP trying to do the IPI. As we have not even initialized the other CPUs
at this point, switch to the local_ variant so the flush happens on the
boot CPU.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault(). All callers have been (mechanically)
converted to the new calling convention; there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.
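For an architecture like sh the mechanical part is a one-liner at the
handle_mm_fault() callsite; roughly the following, using the
mm/vma/address/flags signature this change introduces (the wrapper is
purely for illustration):

  #include <linux/mm.h>

  static int do_fault_example(struct mm_struct *mm,
                              struct vm_area_struct *vma,
                              unsigned long address, int writeaccess)
  {
      /* was: handle_mm_fault(mm, vma, address, writeaccess); */
      return handle_mm_fault(mm, vma, address,
                             writeaccess ? FAULT_FLAG_WRITE : 0);
  }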
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/sh has a couple of stray markers, introduced in commit 3d58695edb,
that no longer have any users. Remove them in preparation for removing
the markers in favour of the TRACE_EVENT macro (and also because we
don't keep dead code around).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off after_bootmem and switches to using slab_is_available()
instead. Presently the only user is the sh64 ioremap code,
and there's not much point in keeping the reference around otherwise.
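The replacement check is the usual boot-progress idiom; a minimal sketch of
the pattern (not the literal sh64 ioremap code, and using the bootmem
allocator of this era):

  #include <linux/bootmem.h>
  #include <linux/slab.h>

  static void *early_or_normal_alloc(unsigned long size)
  {
      /* Until the slab allocator is up, fall back to bootmem. */
      if (slab_is_available())
          return kmalloc(size, GFP_KERNEL);

      return alloc_bootmem(size);
  }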
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Several platforms want to be able to do large physically contiguous
allocations (primarily nommu and video codecs on SH-Mobile), provide a
MAX_ORDER override for those cases.
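For context, MAX_ORDER caps what the buddy allocator will hand out in one
request: with the historical default of 11 the largest allocation is
2^(MAX_ORDER - 1) pages, i.e. 4MB with 4k pages, and the override simply
raises that ceiling. A small hedged example of such a request:

  #include <linux/gfp.h>
  #include <linux/mm.h>

  static struct page *alloc_big_contig_buffer(void)
  {
      /* Largest order the buddy allocator will satisfy. */
      return alloc_pages(GFP_KERNEL, MAX_ORDER - 1);
  }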
Tested-by: Conrad Parker <conrad@metadecks.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>