OpenCloudOS-Kernel/arch/parisc/kernel/traps.c

// SPDX-License-Identifier: GPL-2.0
/*
* linux/arch/parisc/traps.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
* Copyright (C) 1999, 2000 Philipp Rumpf <prumpf@tux.org>
*/
/*
* 'Traps.c' handles hardware traps and faults after we have saved some
* state in 'asm.s'.
*/
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/timer.h>
#include <linux/delay.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/console.h>
#include <linux/bug.h>
#include <linux/ratelimit.h>
#include <linux/uaccess.h>
#include <linux/kdebug.h>
#include <linux/kfence.h>
#include <asm/assembly.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/traps.h>
#include <asm/unaligned.h>
#include <linux/atomic.h>
#include <asm/smp.h>
#include <asm/pdc.h>
#include <asm/pdc_chassis.h>
#include <asm/unwind.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
#include <linux/kgdb.h>
#include <linux/kprobes.h>
#include "../math-emu/math-emu.h" /* for handle_fpe() */
static void parisc_show_stack(struct task_struct *task,
struct pt_regs *regs, const char *loglvl);
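/*
 * Render the low 'nbits' bits of 'x' into 'buf' as an ASCII '0'/'1'
 * string, most significant bit first, and NUL-terminate it.  Used to
 * pretty-print the PSW and FPSR flag bits below.
 */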
static int printbinary(char *buf, unsigned long x, int nbits)
{
unsigned long mask = 1UL << (nbits - 1);
while (mask != 0) {
*buf++ = (mask & x ? '1' : '0');
mask >>= 1;
}
*buf = '\0';
return nbits;
}
#ifdef CONFIG_64BIT
#define RFMT "%016lx"
#else
#define RFMT "%08lx"
#endif
#define FFMT "%016llx" /* fpregs are 64-bit always */
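/*
 * Print four consecutive registers per line, e.g.
 * PRINTREGS(level, regs->gr, "r", RFMT, 0) prints a line like
 * "r00-03 <val> <val> <val> <val>".
 */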
#define PRINTREGS(lvl,r,f,fmt,x) \
printk("%s%s%02d-%02d " fmt " " fmt " " fmt " " fmt "\n", \
lvl, f, (x), (x+3), (r)[(x)+0], (r)[(x)+1], \
(r)[(x)+2], (r)[(x)+3])
static void print_gr(const char *level, struct pt_regs *regs)
{
int i;
char buf[64];
printk("%s\n", level);
printk("%s YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI\n", level);
printbinary(buf, regs->gr[0], 32);
printk("%sPSW: %s %s\n", level, buf, print_tainted());
for (i = 0; i < 32; i += 4)
PRINTREGS(level, regs->gr, "r", RFMT, i);
}
static void print_fr(const char *level, struct pt_regs *regs)
{
int i;
char buf[64];
struct { u32 sw[2]; } s;
/* FR are 64bit everywhere. Need to use asm to get the content
* of fpsr/fper1, and we assume that we won't have a FP Identify
* in our way, otherwise we're screwed.
* The fldd is used to restore the T-bit if there was one, as the
* store clears it anyway.
* PA2.0 book says "thou shall not use fstw on FPSR/FPERs" - T-Bone */
asm volatile ("fstd %%fr0,0(%1) \n\t"
"fldd 0(%1),%%fr0 \n\t"
: "=m" (s) : "r" (&s) : "r0");
printk("%s\n", level);
printk("%s VZOUICununcqcqcqcqcqcrmunTDVZOUI\n", level);
printbinary(buf, s.sw[0], 32);
printk("%sFPSR: %s\n", level, buf);
printk("%sFPER1: %08x\n", level, s.sw[1]);
/* here we'll print fr0 again, though it'll be meaningless */
for (i = 0; i < 32; i += 4)
PRINTREGS(level, regs->fr, "fr", FFMT, i);
}
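/*
 * Dump the complete register state.  User-mode dumps go out at KERN_DEBUG
 * and include the FP registers; kernel-mode dumps go out at KERN_CRIT and
 * are followed by a backtrace.
 */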
void show_regs(struct pt_regs *regs)
{
int i, user;
const char *level;
unsigned long cr30, cr31;
user = user_mode(regs);
level = user ? KERN_DEBUG : KERN_CRIT;
show_regs_print_info(level);
print_gr(level, regs);
for (i = 0; i < 8; i += 4)
PRINTREGS(level, regs->sr, "sr", RFMT, i);
if (user)
print_fr(level, regs);
cr30 = mfctl(30);
cr31 = mfctl(31);
printk("%s\n", level);
printk("%sIASQ: " RFMT " " RFMT " IAOQ: " RFMT " " RFMT "\n",
level, regs->iasq[0], regs->iasq[1], regs->iaoq[0], regs->iaoq[1]);
printk("%s IIR: %08lx ISR: " RFMT " IOR: " RFMT "\n",
level, regs->iir, regs->isr, regs->ior);
printk("%s CPU: %8d CR30: " RFMT " CR31: " RFMT "\n",
level, task_cpu(current), cr30, cr31);
printk("%s ORIG_R28: " RFMT "\n", level, regs->orig_r28);
if (user) {
printk("%s IAOQ[0]: " RFMT "\n", level, regs->iaoq[0]);
printk("%s IAOQ[1]: " RFMT "\n", level, regs->iaoq[1]);
printk("%s RP(r2): " RFMT "\n", level, regs->gr[2]);
} else {
printk("%s IAOQ[0]: %pS\n", level, (void *) regs->iaoq[0]);
printk("%s IAOQ[1]: %pS\n", level, (void *) regs->iaoq[1]);
printk("%s RP(r2): %pS\n", level, (void *) regs->gr[2]);
parisc_show_stack(current, regs, KERN_DEFAULT);
}
}
static DEFINE_RATELIMIT_STATE(_hppa_rs,
DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST);
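/*
 * Ratelimited trap logging: print the message plus a register dump.
 * 'critical' forces output even when show_unhandled_signals is disabled.
 */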
#define parisc_printk_ratelimited(critical, regs, fmt, ...) { \
if ((critical || show_unhandled_signals) && __ratelimit(&_hppa_rs)) { \
printk(fmt, ##__VA_ARGS__); \
show_regs(regs); \
} \
}
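/* Walk the unwinder and print up to MAX_UNWIND_ENTRIES kernel text addresses. */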
static void do_show_stack(struct unwind_frame_info *info, const char *loglvl)
{
int i = 1;
printk("%sBacktrace:\n", loglvl);
while (i <= MAX_UNWIND_ENTRIES) {
if (unwind_once(info) < 0 || info->ip == 0)
break;
if (__kernel_text_address(info->ip)) {
printk("%s [<" RFMT ">] %pS\n",
loglvl, info->ip, (void *) info->ip);
i++;
}
}
printk("%s\n", loglvl);
}
static void parisc_show_stack(struct task_struct *task,
struct pt_regs *regs, const char *loglvl)
{
struct unwind_frame_info info;
unwind_frame_init_task(&info, task, regs);
do_show_stack(&info, loglvl);
}
void show_stack(struct task_struct *t, unsigned long *sp, const char *loglvl)
{
parisc_show_stack(t, NULL, loglvl);
}
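/*
 * Hook for the generic BUG() machinery: accept any trapping address,
 * since the break instruction and the bug table do the real
 * identification in handle_break() below.
 */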
int is_valid_bugaddr(unsigned long iaoq)
{
return 1;
}
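/*
 * Report a fatal condition.  For user-mode 'regs' this only logs the event
 * (ratelimited) and returns; for kernel-mode 'regs' it oopses and
 * terminates the current task.
 */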
void die_if_kernel(char *str, struct pt_regs *regs, long err)
{
if (user_mode(regs)) {
if (err == 0)
return; /* STFU */
parisc_printk_ratelimited(1, regs,
KERN_CRIT "%s (pid %d): %s (code %ld) at " RFMT "\n",
current->comm, task_pid_nr(current), str, err, regs->iaoq[0]);
return;
}
bust_spinlocks(1);
oops_enter();
/* Amuse the user in a SPARC fashion */
if (err) printk(KERN_CRIT
" _______________________________ \n"
" < Your System ate a SPARC! Gah! >\n"
" ------------------------------- \n"
" \\ ^__^\n"
" (__)\\ )\\/\\\n"
" U ||----w |\n"
" || ||\n");
/* unlock the pdc lock if necessary */
pdc_emergency_unlock();
if (err)
printk(KERN_CRIT "%s (pid %d): %s (code %ld)\n",
current->comm, task_pid_nr(current), str, err);
/* Wot's wrong wif bein' racy? */
if (current->thread.flags & PARISC_KERNEL_DEATH) {
printk(KERN_CRIT "%s() recursion detected.\n", __func__);
local_irq_enable();
while (1);
}
current->thread.flags |= PARISC_KERNEL_DEATH;
show_regs(regs);
dump_stack();
add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
if (in_interrupt())
panic("Fatal exception in interrupt");
if (panic_on_oops)
panic("Fatal exception");
oops_exit();
make_task_dead(SIGSEGV);
}
/* gdb uses break 4,8 */
#define GDB_BREAK_INSN 0x10004
static void handle_gdb_break(struct pt_regs *regs, int wot)
{
force_sig_fault(SIGTRAP, wot,
(void __user *) (regs->iaoq[0] & ~3));
}
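/*
 * Dispatch a break instruction trap: kernel BUG()/WARN(), kprobes and
 * kgdb breakpoints are handled in place; everything else, including the
 * gdb "break 4,8", is delivered to the task as SIGTRAP.
 */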
static void handle_break(struct pt_regs *regs)
{
unsigned iir = regs->iir;
if (unlikely(iir == PARISC_BUG_BREAK_INSN && !user_mode(regs))) {
/* check if a BUG() or WARN() trapped here. */
enum bug_trap_type tt;
tt = report_bug(regs->iaoq[0] & ~3, regs);
if (tt == BUG_TRAP_TYPE_WARN) {
regs->iaoq[0] += 4;
regs->iaoq[1] += 4;
return; /* return to next instruction when WARN_ON(). */
}
die_if_kernel("Unknown kernel breakpoint", regs,
(tt == BUG_TRAP_TYPE_NONE) ? 9 : 0);
}
#ifdef CONFIG_KPROBES
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN)) {
parisc_kprobe_break_handler(regs);
return;
}
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2)) {
parisc_kprobe_ss_handler(regs);
return;
}
#endif
#ifdef CONFIG_KGDB
if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
iir == PARISC_KGDB_BREAK_INSN)) {
kgdb_handle_exception(9, SIGTRAP, 0, regs);
return;
}
#endif
if (unlikely(iir != GDB_BREAK_INSN))
parisc_printk_ratelimited(0, regs,
KERN_DEBUG "break %d,%d: pid=%d command='%s'\n",
iir & 31, (iir>>13) & ((1<<13)-1),
task_pid_nr(current), current->comm);
/* send standard GDB signal */
handle_gdb_break(regs, TRAP_BRKPT);
}
static void default_trap(int code, struct pt_regs *regs)
{
printk(KERN_ERR "Trap %d on CPU %d\n", code, smp_processor_id());
show_regs(regs);
}
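/* Low-priority machine check handler; the default just dumps the registers. */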
void (*cpu_lpmc) (int code, struct pt_regs *regs) __read_mostly = default_trap;
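/*
 * Rebuild a trap frame from the PIM (Processor Internal Memory) data that
 * firmware saved at HPMC time, using the wide (PA 2.0) or narrow (PA 1.1)
 * layout depending on the CPU type.
 */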
void transfer_pim_to_trap_frame(struct pt_regs *regs)
{
register int i;
extern unsigned int hpmc_pim_data[];
struct pdc_hpmc_pim_11 *pim_narrow;
struct pdc_hpmc_pim_20 *pim_wide;
if (boot_cpu_data.cpu_type >= pcxu) {
pim_wide = (struct pdc_hpmc_pim_20 *)hpmc_pim_data;
/*
* Note: The following code will probably generate a
* bunch of truncation error warnings from the compiler.
* Could be handled with an ifdef, but perhaps there
* is a better way.
*/
regs->gr[0] = pim_wide->cr[22];
for (i = 1; i < 32; i++)
regs->gr[i] = pim_wide->gr[i];
for (i = 0; i < 32; i++)
regs->fr[i] = pim_wide->fr[i];
for (i = 0; i < 8; i++)
regs->sr[i] = pim_wide->sr[i];
regs->iasq[0] = pim_wide->cr[17];
regs->iasq[1] = pim_wide->iasq_back;
regs->iaoq[0] = pim_wide->cr[18];
regs->iaoq[1] = pim_wide->iaoq_back;
regs->sar = pim_wide->cr[11];
regs->iir = pim_wide->cr[19];
regs->isr = pim_wide->cr[20];
regs->ior = pim_wide->cr[21];
}
else {
pim_narrow = (struct pdc_hpmc_pim_11 *)hpmc_pim_data;
regs->gr[0] = pim_narrow->cr[22];
for (i = 1; i < 32; i++)
regs->gr[i] = pim_narrow->gr[i];
for (i = 0; i < 32; i++)
regs->fr[i] = pim_narrow->fr[i];
for (i = 0; i < 8; i++)
regs->sr[i] = pim_narrow->sr[i];
regs->iasq[0] = pim_narrow->cr[17];
regs->iasq[1] = pim_narrow->iasq_back;
regs->iaoq[0] = pim_narrow->cr[18];
regs->iaoq[1] = pim_narrow->iaoq_back;
regs->sar = pim_narrow->cr[11];
regs->iir = pim_narrow->cr[19];
regs->isr = pim_narrow->cr[20];
regs->ior = pim_narrow->cr[21];
}
/*
* The following fields only have meaning if we came through
* another path. So just zero them here.
*/
regs->ksp = 0;
regs->kpc = 0;
regs->orig_r28 = 0;
}
/*
* This routine is called as a last resort when everything else
* has gone clearly wrong. We get called for faults in kernel space,
* and HPMCs.
*/
void parisc_terminate(char *msg, struct pt_regs *regs, int code, unsigned long offset)
{
static DEFINE_SPINLOCK(terminate_lock);
(void)notify_die(DIE_OOPS, msg, regs, 0, code, SIGTRAP);
bust_spinlocks(1);
set_eiem(0);
local_irq_disable();
spin_lock(&terminate_lock);
/* unlock the pdc lock if necessary */
pdc_emergency_unlock();
/* Not all paths will gutter the processor... */
switch(code){
case 1:
transfer_pim_to_trap_frame(regs);
break;
default:
break;
}
{
/* show_stack(NULL, (unsigned long *)regs->gr[30]); */
struct unwind_frame_info info;
unwind_frame_init(&info, current, regs);
do_show_stack(&info, KERN_CRIT);
}
printk("\n");
pr_crit("%s: Code=%d (%s) at addr " RFMT "\n",
msg, code, trap_name(code), offset);
show_regs(regs);
spin_unlock(&terminate_lock);
/* put soft power button back under hardware control;
* if the user had pressed it once at any time, the
* system will shut down immediately right here. */
pdc_soft_power_button(0);
/* Call kernel panic() so reboot timeouts work properly
* FIXME: This function should be on the list of
* panic notifiers, and we should call panic
* directly from the location that we wish.
* e.g. We should not call panic from
* parisc_terminate, but rather the other way around.
* This hack works, prints the panic message twice,
* and it enables reboot timers!
*/
panic(msg);
}
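/*
 * Main trap dispatcher, entered from the interruption vectors with the
 * interruption code and the saved register frame.  TLB misses and access
 * faults that are not handled here fall through to do_page_fault().
 */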
void notrace handle_interruption(int code, struct pt_regs *regs)
{
unsigned long fault_address = 0;
unsigned long fault_space = 0;
int si_code;
if (!irqs_disabled_flags(regs->gr[0]))
local_irq_enable();
/* Security check:
* If the priority level is still user, and the
* faulting space is not equal to the active space
* then the user is attempting something in a space
* that does not belong to them. Kill the process.
*
* This is normally the situation when the user
* attempts to jump into the kernel space at the
* wrong offset, be it at the gateway page or a
* random location.
*
* We cannot normally signal the process because it
* could *be* on the gateway page, and processes
* executing on the gateway page can't have signals
* delivered.
*
* We merely readjust the address into the user's
* space, at a destination address of zero, and
* allow processing to continue.
*/
if (((unsigned long)regs->iaoq[0] & 3) &&
((unsigned long)regs->iasq[0] != (unsigned long)regs->sr[7])) {
/* Kill the user process later */
regs->iaoq[0] = 0 | 3;
regs->iaoq[1] = regs->iaoq[0] + 4;
regs->iasq[0] = regs->iasq[1] = regs->sr[7];
regs->gr[0] &= ~PSW_B;
return;
}
#if 0
printk(KERN_CRIT "Interruption # %d\n", code);
#endif
switch(code) {
case 1:
/* High-priority machine check (HPMC) */
/* set up a new led state on systems shipped with a LED State panel */
pdc_chassis_send_status(PDC_CHASSIS_DIRECT_HPMC);
parisc_terminate("High Priority Machine Check (HPMC)",
regs, code, 0);
/* NOT REACHED */
case 2:
/* Power failure interrupt */
printk(KERN_CRIT "Power failure interrupt !\n");
return;
case 3:
/* Recovery counter trap */
regs->gr[0] &= ~PSW_R;
#ifdef CONFIG_KGDB
if (kgdb_single_step) {
kgdb_handle_exception(0, SIGTRAP, 0, regs);
return;
}
#endif
if (user_space(regs))
handle_gdb_break(regs, TRAP_TRACE);
/* else this must be the start of a syscall - just let it run */
return;
case 5:
/* Low-priority machine check */
pdc_chassis_send_status(PDC_CHASSIS_DIRECT_LPMC);
flush_cache_all();
flush_tlb_all();
cpu_lpmc(5, regs);
return;
case PARISC_ITLB_TRAP:
/* Instruction TLB miss fault/Instruction page fault */
fault_address = regs->iaoq[0];
fault_space = regs->iasq[0];
break;
case 8:
/* Illegal instruction trap */
die_if_kernel("Illegal instruction", regs, code);
si_code = ILL_ILLOPC;
goto give_sigill;
case 9:
/* Break instruction trap */
handle_break(regs);
return;
case 10:
/* Privileged operation trap */
die_if_kernel("Privileged operation", regs, code);
si_code = ILL_PRVOPC;
goto give_sigill;
case 11:
/* Privileged register trap */
if ((regs->iir & 0xffdfffe0) == 0x034008a0) {
/* This is a MFCTL cr26/cr27 to gr instruction.
* PCXS traps on this, so we need to emulate it.
*/
if (regs->iir & 0x00200000)
regs->gr[regs->iir & 0x1f] = mfctl(27);
else
regs->gr[regs->iir & 0x1f] = mfctl(26);
regs->iaoq[0] = regs->iaoq[1];
regs->iaoq[1] += 4;
regs->iasq[0] = regs->iasq[1];
return;
}
die_if_kernel("Privileged register usage", regs, code);
si_code = ILL_PRVREG;
give_sigill:
force_sig_fault(SIGILL, si_code,
(void __user *) regs->iaoq[0]);
return;
case 12:
/* Overflow Trap, let the userland signal handler do the cleanup */
force_sig_fault(SIGFPE, FPE_INTOVF,
(void __user *) regs->iaoq[0]);
return;
case 13:
/* Conditional Trap
The condition succeeds in an instruction which traps
on condition */
if(user_mode(regs)){
/* Let userspace app figure it out from the insn pointed
* to by si_addr.
*/
force_sig_fault(SIGFPE, FPE_CONDTRAP,
(void __user *) regs->iaoq[0]);
return;
}
/* The kernel doesn't want to handle condition codes */
break;
case 14:
/* Assist Exception Trap, i.e. floating point exception. */
die_if_kernel("Floating point exception", regs, 0); /* quiet */
__inc_irq_stat(irq_fpassist_count);
handle_fpe(regs);
return;
case 15:
/* Data TLB miss fault/Data page fault */
fallthrough;
case 16:
/* Non-access instruction TLB miss fault */
/* The instruction TLB entry needed for the target address of the FIC
is absent, and hardware can't find it, so we get to clean up */
fallthrough;
case 17:
/* Non-access data TLB miss fault/Non-access data page fault */
/* FIXME:
Still need to add slow path emulation code here!
If the insn used a non-shadow register, then the tlb
handlers could not have their side-effect (e.g. probe
writing to a target register) emulated since rfir would
erase the changes to said register. Instead we have to
setup everything, call this function we are in, and emulate
by hand. Technically we need to emulate:
fdc,fdce,pdc,"fic,4f",prober,probeir,probew, probeiw
*/
if (code == 17 && handle_nadtlb_fault(regs))
return;
fault_address = regs->ior;
fault_space = regs->isr;
break;
case 18:
/* PCXS only -- later CPUs split this into types 26, 27 & 28 */
/* Check for unaligned access */
if (check_unaligned(regs)) {
handle_unaligned(regs);
return;
}
fallthrough;
case 26:
/* PCXL: Data memory access rights trap */
fault_address = regs->ior;
fault_space = regs->isr;
break;
case 19:
/* Data memory break trap */
regs->gr[0] |= PSW_X; /* So we can single-step over the trap */
fallthrough;
case 21:
/* Page reference trap */
handle_gdb_break(regs, TRAP_HWBKPT);
return;
case 25:
/* Taken branch trap */
regs->gr[0] &= ~PSW_T;
if (user_space(regs))
handle_gdb_break(regs, TRAP_BRANCH);
/* else this must be the start of a syscall - just let it
* run.
*/
return;
case 7:
/* Instruction access rights */
/* PCXL: Instruction memory protection trap */
/*
* This could be caused by either: 1) a process attempting
* to execute within a vma that does not have execute
* permission, or 2) an access rights violation caused by a
* flush only translation set up by ptep_get_and_clear().
* So we check the vma permissions to differentiate the two.
* If the vma indicates we have execute permission, then
* the cause is the latter one. In this case, we need to
* call do_page_fault() to fix the problem.
*/
if (user_mode(regs)) {
struct vm_area_struct *vma;
mmap_read_lock(current->mm);
vma = find_vma(current->mm,regs->iaoq[0]);
if (vma && (regs->iaoq[0] >= vma->vm_start)
&& (vma->vm_flags & VM_EXEC)) {
fault_address = regs->iaoq[0];
fault_space = regs->iasq[0];
mmap_read_unlock(current->mm);
break; /* call do_page_fault() */
}
mmap_read_unlock(current->mm);
}
/* CPU could not fetch instruction, so clear stale IIR value. */
regs->iir = 0xbaadf00d;
fallthrough;
case 27:
/* Data memory protection ID trap */
if (code == 27 && !user_mode(regs) &&
fixup_exception(regs))
return;
die_if_kernel("Protection id trap", regs, code);
force_sig_fault(SIGSEGV, SEGV_MAPERR,
(code == 7)?
((void __user *) regs->iaoq[0]) :
((void __user *) regs->ior));
return;
case 28:
/* Unaligned data reference trap */
handle_unaligned(regs);
return;
default:
if (user_mode(regs)) {
parisc_printk_ratelimited(0, regs, KERN_DEBUG
"handle_interruption() pid=%d command='%s'\n",
task_pid_nr(current), current->comm);
/* SIGBUS, for lack of a better one. */
force_sig_fault(SIGBUS, BUS_OBJERR,
(void __user *)regs->ior);
return;
}
pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC);
parisc_terminate("Unexpected interruption", regs, code, 0);
/* NOT REACHED */
}
if (user_mode(regs)) {
if ((fault_space >> SPACEID_SHIFT) != (regs->sr[7] >> SPACEID_SHIFT)) {
parisc_printk_ratelimited(0, regs, KERN_DEBUG
"User fault %d on space 0x%08lx, pid=%d command='%s'\n",
code, fault_space,
task_pid_nr(current), current->comm);
force_sig_fault(SIGSEGV, SEGV_MAPERR,
(void __user *)regs->ior);
return;
}
}
else {
/*
* The kernel should never fault on its own address space,
* unless pagefault_disable() was called before.
*/
if (faulthandler_disabled() || fault_space == 0)
{
/* Clean up and return if in exception table. */
if (fixup_exception(regs))
return;
/* Clean up and return if handled by kfence. */
if (kfence_handle_page_fault(fault_address,
parisc_acctyp(code, regs->iir) == VM_WRITE, regs))
return;
pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC);
parisc_terminate("Kernel Fault", regs, code, fault_address);
}
}
do_page_fault(regs, code, fault_address);
}
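/*
 * Patch the in-memory interruption vector table: verify the IVT marker,
 * plant the firmware instruction that invokes PDCE_CHECK, the physical
 * address of os_hpmc, and the checksum required by the HPMC vector rules
 * documented below.
 */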
void __init initialize_ivt(const void *iva)
{
extern const u32 os_hpmc[];
int i;
u32 check = 0;
u32 *ivap;
u32 *hpmcp;
u32 instr;
if (strcmp((const char *)iva, "cows can fly"))
panic("IVT invalid");
ivap = (u32 *)iva;
for (i = 0; i < 8; i++)
*ivap++ = 0;
/*
* Use PDC_INSTR firmware function to get instruction that invokes
* PDCE_CHECK in HPMC handler. See programming note at page 1-31 of
* the PA 1.1 Firmware Architecture document.
*/
if (pdc_instr(&instr) == PDC_OK)
ivap[0] = instr;
/*
* Rules for the checksum of the HPMC handler:
* 1. The IVA does not point to PDC/PDH space (ie: the OS has installed
* its own IVA).
* 2. The word at IVA + 32 is nonzero.
* 3. If Length (IVA + 60) is not zero, then Length (IVA + 60) and
* Address (IVA + 56) are word-aligned.
* 4. The checksum of the 8 words starting at IVA + 32 plus the sum of
* the Length/4 words starting at Address is zero.
*/
/* Setup IVA and compute checksum for HPMC handler */
ivap[6] = (u32)__pa(os_hpmc);
hpmcp = (u32 *)os_hpmc;
for (i=0; i<8; i++)
check += ivap[i];
ivap[5] = -check;
pr_debug("initialize_ivt: IVA[6] = 0x%08x\n", ivap[6]);
}
/* early_trap_init() is called before we set up kernel mappings and
* write-protect the kernel */
void __init early_trap_init(void)
{
extern const void fault_vector_20;
#ifndef CONFIG_64BIT
extern const void fault_vector_11;
initialize_ivt(&fault_vector_11);
#endif
initialize_ivt(&fault_vector_20);
}