This adds support for 16k and 64k page sizes on PowerPC 44x processors.
The PGDIR table is much smaller than a page when using 16k or 64k
pages (512 and 32 bytes respectively) so we allocate the PGDIR with
kzalloc() instead of __get_free_pages().
One PTE table covers a rather large memory area when using 16k or 64k
pages (32MB or 512MB respectively), so we can easily put FIXMAP and
PKMAP in the area covered by a single PTE table.
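A minimal sketch of the resulting allocation choice (PGD_TABLE_SIZE and
PGDIR_ORDER are illustrative names here, not necessarily the exact
symbols the patch uses):

#include <linux/mm.h>
#include <linux/slab.h>

pgd_t *pgd_alloc(struct mm_struct *mm)
{
#if PGD_TABLE_SIZE < PAGE_SIZE
        /* 16k/64k pages: the PGDIR is only 512 or 32 bytes */
        return kzalloc(PGD_TABLE_SIZE, GFP_KERNEL);
#else
        /* 4k pages: the PGDIR spans one or more whole pages */
        return (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                         PGDIR_ORDER);
#endif
}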
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Vladimir Panfilov <pvr@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Ensure that total memory size is page-aligned, because otherwise
mark_bootmem() gets upset.
This error case was triggered by using 64 KiB pages in the kernel
while arch/powerpc/boot/4xx.c arbitrarily reduced the amount of memory
by 4096 bytes (to work around a chip bug that affects the last 256
bytes of physical memory).
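A sketch of the fix; the variable name is hypothetical and _ALIGN_DOWN
is assumed to be the usual powerpc rounding helper:

        /* 4xx.c may have shaved a few bytes off the end, so the total
         * is no longer a multiple of PAGE_SIZE; round it back down. */
        total_memory = _ALIGN_DOWN(total_memory, PAGE_SIZE);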
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Wire up the trampoline code for ppc32 to relay exceptions from the
vectors at address 0 to vectors at address 32MB, and modify Kconfig
to enable Kdump support for all classic powerpcs.
Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add the ability for a classic ppc kernel to be loaded at an address
of 32MB. This is done by fixing a few places that assume we are loaded
at address 0, and by changing several uses of KERNELBASE to use
PAGE_OFFSET instead.
Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
While for debugging it is good to catch bogus users of ioremap, for
kdump support it is more convenient to use __ioremap for
copy_oldmem_page() (exactly as we do for PPC64 currently).
Note that copy_oldmem_page() calls __ioremap with flags set to '0',
so it should be safe with regard to the caches.
The other option is to use kmap_atomic_pfn()[1], but it will not work
for kernels compiled without HIGHMEM.
That is, in the case of a board with 256MB RAM and crashkernel=64M@32M,
the !HIGHMEM capturing kernel maps the 0-96M range, which does not
include all the memory needed to capture the dump. And, obviously,
accessing anything above 96M will cause faults.
[1] http://ozlabs.org/pipermail/linuxppc-dev/2007-November/046747.html
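For reference, the PPC64 copy_oldmem_page() that we mirror looks
roughly like this (a sketch, not a verbatim copy):

ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
                         unsigned long offset, int userbuf)
{
        void *vaddr;

        if (!csize)
                return 0;

        /* flags == 0: no _PAGE_NO_CACHE/_PAGE_GUARDED, so RAM from the
         * crashed kernel is mapped cacheably */
        vaddr = __ioremap(pfn << PAGE_SHIFT, PAGE_SIZE, 0);

        if (userbuf) {
                if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
                        iounmap(vaddr);
                        return -EFAULT;
                }
        } else
                memcpy(buf, vaddr + offset, csize);

        iounmap(vaddr);
        return csize;
}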
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Refactor the setting of kdump OF properties, moving the common code
from machine_kexec_64.c to machine_kexec.c where it can be used on
both ppc64 and ppc32. This will be needed for kdump to work on ppc32
platforms.
Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This replaces the dummy crash_setup_regs function with a full-fledged
implementation. On PPC32 we simply use the new ppc_save_regs function
to dump the registers.
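The implementation boils down to something like this (sketch):

static inline void crash_setup_regs(struct pt_regs *newregs,
                                    struct pt_regs *oldregs)
{
        if (oldregs)
                memcpy(newregs, oldregs, sizeof(*newregs));
        else
                ppc_save_regs(newregs);  /* capture the current registers */
}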
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Today the arch/powerpc/xmon/setjmp.S file contains only the
xmon_save_regs function. We want to use it for kdump purposes, so
let's move the file into arch/powerpc/kernel/ and give the function a
more generic name (ppc_save_regs).
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This removes the need for each platform to specify default kexec and
crash kernel ops, and thus effectively adds working kexec support for
most 6xx/7xx/7xxx-based boards.
Platforms that can't cope with the default ops will explode in some
weird way (a hang or reboot is most likely), which means that the
board's kexec support should be fixed or blacklisted via a dummy
_prepare callback returning -ENOSYS.
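A hypothetical example of such a blacklist (the board name is made up;
the machdep field name is assumed to be machine_kexec_prepare):

#include <linux/kexec.h>
#include <asm/machdep.h>

static int myboard_kexec_prepare(struct kimage *image)
{
        /* kexec is known to be broken here: refuse the load cleanly */
        return -ENOSYS;
}

define_machine(myboard) {
        .name                   = "My Broken Board",
        /* ... */
        .machine_kexec_prepare  = myboard_kexec_prepare,
};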
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Refactor the setting of kexec OF properties, moving the common code
from machine_kexec_64.c to machine_kexec.c where it can be used on
both ppc64 and ppc32. This is needed for kexec to work on ppc32
platforms.
Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently, pseries_cpu_die() calls msleep() while polling RTAS for
the status of the dying cpu.
However, if the cpu that is going down also happens to be the one
doing the tick then we're hosed as the tick_do_timer_cpu 'baton' is
only passed later on in tick_shutdown() when _cpu_down() does the
CPU_DEAD notification. Therefore jiffies won't be updated anymore.
This replaces that msleep() with a cpu_relax() to make sure we're not
going to schedule at that point.
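The resulting polling loop looks roughly like this (the loop bound and
the query_cpu_stopped() helper are illustrative):

        for (tries = 0; tries < 25; tries++) {
                cpu_status = query_cpu_stopped(pcpu);   /* poll RTAS */
                if (cpu_status == 0 || cpu_status == -1)
                        break;          /* stopped, or hardware error */
                cpu_relax();            /* was msleep(200), which may schedule */
        }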
With this patch my test box survives a 100k iterations hotplug stress
test on _all_ cpus, whereas without it, it quickly dies after ~50
iterations.
Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Commit 2a4aca1144 ("powerpc/mm: Split
low level tlb invalidate for nohash processors") changed a call to
_tlbia to _tlbil_all but didn't include the header that defines
_tlbil_all, leading to a build failure on 440 if KVM is enabled.
This fixes it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Since the QPACE (Chromodynamics Parallel Computing on the
Cell Broadband Engine) platform doesn't use an iommu and has
neither PCI devices nor an MPIC, much less setup and
configuration is needed. So far all devices are detected
as OF devices. A notifier function is used to set the dma_ops
for the of_platform bus. Furthermore, this patch splits
PPC_CELL_NATIVE into PPC_CELL_COMMON, which holds the parts
shared with the QPACE platform, and the rest.
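The notifier is along these lines (a sketch assuming dma_direct_ops and
the generic bus-notifier mechanism; names may differ from the actual
patch):

static int qpace_of_bus_notify(struct notifier_block *nb,
                               unsigned long action, void *data)
{
        struct device *dev = data;

        if (action != BUS_NOTIFY_ADD_DEVICE)
                return 0;

        /* no iommu on QPACE: every of_platform device gets direct DMA */
        dev->archdata.dma_ops = &dma_direct_ops;
        return NOTIFY_OK;
}

static struct notifier_block qpace_of_bus_nb = {
        .notifier_call = qpace_of_bus_notify,
};

/* registered from the platform setup code:
 *      bus_register_notifier(&of_platform_bus_type, &qpace_of_bus_nb);
 */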
Signed-off-by: Benjamin Krill <ben@codiert.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Neither CBE_THERM nor OPROFILE_CELL can be built with SPU_FS
disabled, so make the dependency explicit.
Reported-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
- error cases for mapbase and irq were unbundled
- the mapped irq now gets disposed of on error
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Add RTS/CTS-support for the PSC of the MPC5200B. Tested with a Phytec
MPC5200B-IO.
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
This patch adds the capability to the mpc52xx-uart to report framing
errors, parity errors, breaks and overruns to userspace. These values
may be requested in userspace by using the ioctl TIOCGICOUNT.
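A minimal userspace example (the tty device name is an assumption, and
TIOCGICOUNT is taken to come in via <sys/ioctl.h>):

#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

int main(void)
{
        struct serial_icounter_struct icount;
        int fd = open("/dev/ttyPSC0", O_RDONLY | O_NOCTTY);

        if (fd < 0 || ioctl(fd, TIOCGICOUNT, &icount) < 0)
                return 1;

        printf("frame=%d parity=%d brk=%d overrun=%d\n",
               icount.frame, icount.parity, icount.brk, icount.overrun);
        return 0;
}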
Signed-off-by: René Bürgel <r.buergel@unicontrol.de>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
As this driver polls for a complete MDIO transaction, there is no need
to enable interrupts for it. Furthermore, make both checks for
freeing MDIO-bus irqs consistent.
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
This patch adds MDMA/UDMA support using BestComm for DMA on the MPC5200
platform. Based heavily on previous work by Freescale (Bernard Kuhn,
John Rigby) and Domen Puncer.
With this patch, a SanDisk Extreme IV CF card gets read speeds of
approximately 26.70 MB/sec.
Signed-off-by: Tim Yamin <plasm@roo.me.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
When ATA DMA is enabled, bestcomm prefetching does not work. This
patch adds a function to disable bestcomm prefetch when the ATA
Bestcomm task is initialized.
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
1) ata.h has dst_pa in the wrong place (needs to match what the BestComm
task microcode in bcom_ata_task.c expects); fix it.
2) The BestComm ATA task priority was changed to maximum in bestcomm_priv.h;
this fixes a deadlock issue experienced with heavy DMA occurring on
both the ATA and Ethernet BestComm tasks, e.g. when downloading a large
file over a LAN to disk.
Signed-off-by: Tim Yamin <plasm@roo.me.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
The buffer descriptors for the ATA BestComm task are larger than the
current definition for bcom_bd. This causes problems because the
various bcom_... functions dereference the buffer descriptor pointer
by using the array operator which doesn't work when the buffer
descriptors are a different size.
This patch adds the bcom_get_bd() function which uses the value in
bcom_task.bd_size to calculate the offset into the BD table. This
patch also changes the definition of bcom_bd to specify a data size
of 0 instead of 1 so that it will never work if anyone attempts to
dereference the bd list as an array (as opposed to something that
might work even though it is wrong).
Finally, this patch moves the definition of bcom_bd up in the file
to eliminate a forward declaration.
Based on patch originally written by Tim Yamin.
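The new accessor is essentially byte-based pointer arithmetic on the
per-task descriptor size, along these lines:

static inline struct bcom_bd *
bcom_get_bd(struct bcom_task *tsk, unsigned int index)
{
        /* cast to void * so the offset is counted in bytes rather than
         * in sizeof(struct bcom_bd) units */
        return (struct bcom_bd *)((void *)tsk->bd + index * tsk->bd_size);
}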
Signed-off-by: Tim Yamin <plasm@roo.me.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
The MPC5200 internal interrupt controller setup function needs to set
the default interrupt controller when it is called. Without this,
irq_create_of_mapping() cannot be called with controller = NULL;
callers would first have to determine the pointer to the irq
controller themselves.
Reported-by: Steven Cavanagh <scavanagh@secretlab.ca>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
This patch adds documentation to the mpc5200 interrupt controller
driver and cleans up some minor coding conventions. It also moves the
contents of mpc52xx_pic.h into the driver proper (except for a small
common bit that is moved to the common mpc52xx.h) because the
information encoded there is not required by any other part of kernel
code. Finally, for the sake of code readability, the L2_OFFSET shift
value is removed because the code using it resolves to a noop.
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
A rework of the MMU code dropped a much-missed 'blr' instruction.
Brown-Paper-Bag-Worn-By: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
The correct #address-cells was still used for the actual translation,
so the impact is only a possibility of choosing the wrong range entry
or failing to find any match. Most common cases were not affected.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add const qualifier to device_node argument for
dcr_resource_{start,len} as of_get_property also const-qualifies this
argument.
Signed-off-by: Grant Erickson <gerickson@nuovations.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
After discussing with chip designers, it appears that it's not
necessary to set G everywhere on 440 cores. The various core
errata related to prefetch should be sorted out by firmware by
disabling icache prefetching in CCR0. We add the workaround to
the kernel anyway, just in case old firmwares don't do it.
This is valid for -all- 4xx core variants. Later ones hard-wire
the absence of prefetch, but it doesn't hurt to clear the bits
in CCR0 (they should already be cleared anyway).
We still leave G=1 on the linear mapping for now; we need to
stop over-mapping RAM to be able to remove it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently, we never set _PAGE_COHERENT in the PTEs; we just OR it in,
in the hash code, based on some CPU feature bit. We also manipulate
_PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places.
This changes the logic so that instead, the PTE now contains
_PAGE_COHERENT for all normal RAM pages that have I = 0, on platforms
that need it. The hash code clears it if the feature bit is not set.
It also adds some clean accessors to set up the various valid
combinations of access flags, and changes various bits of code to use
them instead. This should help ensure that the PTE actually contains
the bit combinations that we really want.
I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead
set it explicitly from the TLB miss. I will ultimately remove it
completely, as it appears that it might not be needed after all,
but in the meantime having it in the TLB miss makes things a
lot easier.
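As an illustration of what such an accessor looks like (illustrative
only, not necessarily one the patch adds verbatim), the point is that
a single definition owns the valid flag combination instead of every
call site:

/* every cache-control bit we ever set, so it can be masked out first */
#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_NO_CACHE | \
                         _PAGE_WRITETHRU | _PAGE_GUARDED)

#define pgprot_noncached(prot)                                  \
        (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |       \
                  _PAGE_NO_CACHE | _PAGE_GUARDED))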
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This makes the MMU context code used for CPUs with no hash table
(except 603) dynamically allocate the various maps used to track
the state of contexts.
Only the main free map and CPU 0 stale map are allocated at boot
time. Other CPU maps are allocated when those CPUs are brought up
and freed if they are unplugged.
This also moves the initialization of the MMU context management
slightly later during the boot process, which should be fine as
it's really only needed when userland is first started anyway.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The handlers for Critical, Machine Check or Debug interrupts
will save and restore MMUCR nowadays, thus we only need to
disable normal interrupts when invalidating TLB entries.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently, the various forms of low level TLB invalidations are all
implemented in misc_32.S for 32-bit processors, in a fairly scary
mess of #ifdef's and with interesting duplication such as a whole
bunch of code for FSL _tlbie and _tlbia which are no longer used.
This moves things around such that _tlbie is now defined in
hash_low_32.S and is only used by the 32-bit hash code, and all
nohash CPUs use the various _tlbil_* forms that are now moved to
a new file, tlb_nohash_low.S.
I moved all the definitions for that stuff out of
include/asm/tlbflush.h and into mm/mmu_decl.h, as they are really
internal mm stuff.
The code should have no functional changes. I kept some variants
inline for trivial forms on things like 40x and 8xx.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This commit moves the whole no-hash TLB handling out of line into a
new tlb_nohash.c file, and implements some basic SMP support using
IPIs and/or broadcast tlbivax instructions.
Note that I'm using local invalidations for D->I cache coherency.
At worst, if another processor is trying to execute the same code and
has the old entry in its TLB, it will just take a fault and re-do
the TLB flush locally (it won't re-do the cache flush in any case).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We're soon running out of CPU features and I need to add some new
ones for various MMU related bits, so this patch separates the MMU
features from the CPU features. I moved over the 32-bit MMU related
ones, added base features for MMU type families, but didn't move
over any 64-bit only feature yet.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This reworks the context management code used by 4xx, 8xx and
freescale BookE. It adds support for SMP by implementing the
concept of a stale context map, used to lazily flush the TLB on
processors where a context may have been invalidated. This
also contains the groundwork for generalizing such lazy TLB
flushing by just picking up a new PID and marking the old one
stale. This will be implemented later.
This is a first implementation that uses a global spinlock.
Ideally, we should try to get at least the fast path (context ID
already assigned) lockless or limited to a per context lock,
but for now this will do.
I tried to keep the UP case reasonably simple to avoid adding
too much overhead to 8xx which does a lot of context stealing
since it effectively has only 16 PIDs available.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This splits the mmu_context handling between 32-bit hash based
processors, 64-bit hash based processors and everybody else. This is
preliminary work for adding SMP support for BookE processors.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds support for "extended" DCR addressing via the indirect
mfdcrx/mtdcrx instructions supported by some 4xx cores (440H6 and
later).
For now, I enabled the feature only on AMCC 460 chips.
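The indirect forms take the DCR number in a register rather than
encoding it in the instruction, so the accessors look roughly like
this (sketch; the real dcr-native.h helpers may differ in detail):

static inline unsigned int mfdcrx(unsigned int dcrn)
{
        unsigned int ret;

        asm volatile("mfdcrx %0,%1" : "=r" (ret) : "r" (dcrn));
        return ret;
}

static inline void mtdcrx(unsigned int dcrn, unsigned int val)
{
        asm volatile("mtdcrx %0,%1" : : "r" (dcrn), "r" (val));
}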
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When running Active Memory Sharing, pages can get marked as
"loaned" with the hypervisor by the CMM driver. This state gets
cleared by the system firmware when rebooting the partition.
When using kexec to boot a new kernel, this state never gets
cleared and the hypervisor and CMM driver can get out of sync
with respect to the number of pages currently marked "loaned".
Fix this by adding a reboot notifier to the CMM driver to deflate
the balloon and mark all pages as active.
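The hook is a standard reboot notifier, roughly as below
(cmm_free_pages() and the loaned_pages counter are assumed to be the
driver's existing helpers):

static int cmm_reboot_notifier(struct notifier_block *nb,
                               unsigned long action, void *unused)
{
        if (action == SYS_RESTART)
                cmm_free_pages(loaned_pages);   /* deflate completely */
        return NOTIFY_DONE;
}

static struct notifier_block cmm_reboot_nb = {
        .notifier_call = cmm_reboot_notifier,
};

/* from cmm_init():  register_reboot_notifier(&cmm_reboot_nb);  */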
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When running Active Memory Sharing, the Collaborative Memory Manager
(CMM) may mark some pages as "loaned" with the hypervisor.
Periodically, the CMM will query the hypervisor for a loan request,
which is a single signed value. When kexec'ing into a kdump kernel,
the CMM driver in the kdump kernel is not aware of the pages the
previous kernel had marked as "loaned", so the hypervisor and the CMM
driver are out of sync. This results in the CMM driver getting a
negative loan request, which can then get treated as a large unsigned
value and cause kdump to hang because the CMM driver inflates the
balloon too far. Since there really is no clean way for the CMM driver
in the kdump kernel to clean this up, simply disable CMM in the kdump
kernel.
This fixes hangs we were seeing doing kdump with AMS.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Otherwise you get a lot of errors like these:
drivers/block/viodasd.c:72: error: dereferencing pointer to incomplete type
drivers/block/viodasd.c: In function 'viodasd_open':
drivers/block/viodasd.c:135: error: dereferencing pointer to incomplete type
drivers/block/viodasd.c: In function 'viodasd_release':
drivers/block/viodasd.c:184: error: dereferencing pointer to incomplete type
drivers/block/viodasd.c: In function 'viodasd_getgeo':
drivers/block/viodasd.c:209: error: dereferencing pointer to incomplete type
drivers/block/viodasd.c:214: error: implicit declaration of function 'get_capacity'
drivers/block/viodasd.c: At top level:
drivers/block/viodasd.c:222: error: variable 'viodasd_fops' has initializer but incomplete type
drivers/block/viodasd.c:223: error: unknown field 'owner' specified in initializer
Discovered by a randconfig build.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Ctrl-O (^O) is a common control key used by several applications,
such as vim, but hvc_console uses ^O as the magic-sysrq key. This
commit allows users to send ^O to applications by pressing ^O twice
in succession.
To implement this, this commit introduces a check if ^O is pressed
again if the sysrq_pressed variable is already set. In this case,
clear the sysrq_pressed state and flip the ^O character to the tty.
(The old behavior always set "sysrq_pressed" when ^O was entered, and
never flipped the ^O character to the tty.)
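Sketched as a small helper (the name is hypothetical; the real check
is inlined in the hvc input path), the logic is:

/* returns true when the character was consumed by the sysrq logic */
static bool hvc_handle_sysrq_char(struct tty_struct *tty, char ch)
{
        static int sysrq_pressed;

        if (ch == '\x0f') {                     /* ^O */
                if (!sysrq_pressed) {
                        sysrq_pressed = 1;      /* arm sysrq, swallow it */
                        return true;
                }
                sysrq_pressed = 0;              /* second ^O in a row: */
                return false;                   /* flip it to the tty */
        }
        if (sysrq_pressed) {
                sysrq_pressed = 0;
                handle_sysrq(ch, tty);          /* perform the sysrq action */
                return true;
        }
        return false;
}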
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
ibm_configure_kernel_dump, which is passed as the token to
rtas_call(), is never initialised. This sets it to something sane.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Acked-by: Nathan Lynch <ntl@pobox.com>
Acked-by: Manish Ahuja <mahujam@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
print_dump_header() will be called at least once with a NULL pointer in
a normal boot sequence. If DEBUG is defined then we will dereference
the pointer and crash. Add a quick fix to exit early in the NULL pointer
case.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Acked-by: Manish Ahuja <mahujam@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Rename PowerPC's struct vm_region so that I can introduce my own
global version for NOMMU. It's feasible that the PowerPC version may
wish to use my global one instead.
The NOMMU vm_region struct defines areas of the physical memory map
that are under mmap. This may include chunks of RAM or regions of
memory mapped devices, such as flash. It is also used to retain
copies of file content so that shareable private memory mappings of
files can be made. As such, it may be compatible with what is
described in the banner comment for PowerPC's vm_region struct.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Using the common code means that more complete cache information will
be provided in sysfs on platforms that don't use the l2-cache property
convention.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The smp code uses cache information to populate cpu_core_map; change
it to use common code for cache lookup.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We have more than one piece of code that looks up cache nodes manually
using the "l2-cache" property. Add a common helper routine which does
this and handles ePAPR's "next-level-cache" property as well as the
powermac convention.
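The helper boils down to something like the following (the name
of_find_next_cache_node and the exact powermac fallback are
assumptions based on the description above):

struct device_node *of_find_next_cache_node(struct device_node *np)
{
        struct device_node *child;
        const phandle *handle;

        handle = of_get_property(np, "l2-cache", NULL);
        if (!handle)
                handle = of_get_property(np, "next-level-cache", NULL);
        if (handle)
                return of_find_node_by_phandle(*handle);

        /* powermac: the next cache level is a child "cache" node */
        for_each_child_of_node(np, child)
                if (!strcmp(child->type, "cache"))
                        return child;

        return NULL;
}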
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This function is used to count how many GPIOs are specified for
a device node.
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Given this list (which contains three gpio specifiers, one of which is a hole):
gpios = <&phandle1 1 2 3
0 /* a hole */
&phandle2 4 5 6>;
of_parse_phandles_with_args() would report -ENOENT for the `hole'
specifier item, but the same error value is also used to report the
end of the list.
Sometimes we want to differentiate holes from real errors -- for
example when we want to count all the [syntactically correct]
specifiers.
With this patch of_parse_phandles_with_args() will report -EEXIST when
somebody asks it to parse a hole.
Also, make the out_{node,args} arguments optional; when counting we
don't really need the out values.
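A counting loop enabled by this change could then look like this (a
sketch against the existing of_parse_phandles_with_args() prototype,
passing NULL for the now-optional out arguments):

static int count_gpio_specifiers(struct device_node *np)
{
        int i, ret, count = 0;

        for (i = 0; ; i++) {
                ret = of_parse_phandles_with_args(np, "gpios", "#gpio-cells",
                                                  i, NULL, NULL);
                if (ret == -ENOENT)
                        break;                  /* real end of the list */
                if (ret && ret != -EEXIST)
                        return ret;             /* malformed specifier */
                count++;                        /* valid entry or a hole */
        }
        return count;
}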
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>