/*
 * Derived from arch/i386/kernel/irq.c
 *   Copyright (C) 1992 Linus Torvalds
 * Adapted from arch/i386 by Gary Thomas
 *   Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 * Updated and modified by Cort Dougan <cort@fsmlabs.com>
 *   Copyright (C) 1996-2001 Cort Dougan
 * Adapted for Power Macintosh by Paul Mackerras
 *   Copyright (C) 1996 Paul Mackerras (paulus@cs.anu.edu.au)
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 * This file contains the code used by various IRQ handling routines:
 * asking for different IRQs should be done through these routines
 * instead of just grabbing them. Thus setups with different IRQ numbers
 * shouldn't result in any weird surprises, and installing new handlers
 * should be easier.
 *
 * The MPC8xx has an interrupt mask in the SIU.  If a bit is set, the
 * interrupt is _enabled_.  As expected, IRQ0 is bit 0 in the 32-bit
 * mask register (of which only 16 are defined), hence the weird shifting
 * and complement of the cached_irq_mask.  I want to be able to stuff
 * this right into the SIU SMASK register.
 * Many of the prep/chrp functions are conditionally compiled on CONFIG_8xx
 * to reduce code space and undefined function references.
 */

#undef DEBUG

#include <linux/module.h>
#include <linux/threads.h>
#include <linux/kernel_stat.h>
#include <linux/signal.h>
#include <linux/sched.h>
#include <linux/ptrace.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/timex.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/irq.h>
#include <linux/seq_file.h>
#include <linux/cpumask.h>
#include <linux/profile.h>
#include <linux/bitops.h>
#include <linux/list.h>
#include <linux/radix-tree.h>
#include <linux/mutex.h>
#include <linux/bootmem.h>
#include <linux/pci.h>
#include <linux/debugfs.h>
#include <linux/of.h>
#include <linux/of_irq.h>

#include <asm/uaccess.h>
#include <asm/system.h>
#include <asm/io.h>
#include <asm/pgtable.h>
#include <asm/irq.h>
#include <asm/cache.h>
#include <asm/prom.h>
#include <asm/ptrace.h>
#include <asm/machdep.h>
#include <asm/udbg.h>
#include <asm/smp.h>
/*
 * [POWERPC] Lazy interrupt disabling for 64-bit machines
 *
 * This implements a lazy strategy for disabling interrupts.  This means
 * that local_irq_disable() et al. just clear the 'interrupts are
 * enabled' flag in the paca.  If an interrupt comes along, the interrupt
 * entry code notices that interrupts are supposed to be disabled, and
 * clears the EE bit in SRR1, clears the 'interrupts are hard-enabled'
 * flag in the paca, and returns.  This means that interrupts only
 * actually get disabled in the processor when an interrupt comes along.
 *
 * When interrupts are enabled by local_irq_enable() et al., the code
 * sets the interrupts-enabled flag in the paca, and then checks whether
 * interrupts got hard-disabled.  If so, it also sets the EE bit in the
 * MSR to hard-enable the interrupts.
 *
 * This has the potential to improve performance, and also makes it
 * easier to make a kernel that can boot on iSeries and on other 64-bit
 * machines, since this lazy-disable strategy is very similar to the
 * soft-disable strategy that iSeries already uses.
 *
 * This version renames paca->proc_enabled to paca->soft_enabled, and
 * changes a couple of soft-disables in the kexec code to hard-disables,
 * which should fix the crash that Michael Ellerman saw.  This doesn't
 * yet use a reserved CR field for the soft_enabled and hard_enabled
 * flags.  This applies on top of Stephen Rothwell's patches to make it
 * possible to build a combined iSeries/other kernel.
 *
 * Signed-off-by: Paul Mackerras <paulus@samba.org>
 */
#ifdef CONFIG_PPC64
#include <asm/paca.h>
#include <asm/firmware.h>
#include <asm/lv1call.h>
#endif
#define CREATE_TRACE_POINTS
#include <asm/trace.h>

DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
EXPORT_PER_CPU_SYMBOL(irq_stat);

int __irq_offset_value;

#ifdef CONFIG_PPC32
EXPORT_SYMBOL(__irq_offset_value);
atomic_t ppc_n_lost_interrupts;

#ifdef CONFIG_TAU_INT
extern int tau_initialized;
extern int tau_interrupts(int);
#endif
#endif /* CONFIG_PPC32 */

#ifdef CONFIG_PPC64

#ifndef CONFIG_SPARSE_IRQ
EXPORT_SYMBOL(irq_desc);
#endif

int distribute_irqs = 1;

static inline notrace unsigned long get_hard_enabled(void)
{
	unsigned long enabled;

	__asm__ __volatile__("lbz %0,%1(13)"
	: "=r" (enabled) : "i" (offsetof(struct paca_struct, hard_enabled)));

	return enabled;
}

static inline notrace void set_soft_enabled(unsigned long enable)
{
	__asm__ __volatile__("stb %0,%1(13)"
	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
}

notrace void arch_local_irq_restore(unsigned long en)
{
	/*
	 * get_paca()->soft_enabled = en;
	 * Is it ever valid to use local_irq_restore(0) when soft_enabled is 1?
	 * That was allowed before, and in such a case we do need to take care
	 * that gcc will set soft_enabled directly via r13, not choose to use
	 * an intermediate register, lest we're preempted to a different cpu.
	 */
	set_soft_enabled(en);
	if (!en)
		return;

#ifdef CONFIG_PPC_STD_MMU_64
	if (firmware_has_feature(FW_FEATURE_ISERIES)) {
		/*
		 * Do we need to disable preemption here?  Not really: in the
		 * unlikely event that we're preempted to a different cpu in
		 * between getting r13, loading its lppaca_ptr, and loading
		 * its any_int, we might call iseries_handle_interrupts without
		 * an interrupt pending on the new cpu, but that's no disaster,
		 * is it?  And the business of preempting us off the old cpu
		 * would itself involve a local_irq_restore which handles the
		 * interrupt to that cpu.
		 *
		 * But use "local_paca->lppaca_ptr" instead of "get_lppaca()"
		 * to avoid any preemption checking added into get_paca().
		 */
		if (local_paca->lppaca_ptr->int_dword.any_int)
			iseries_handle_interrupts();
	}
#endif /* CONFIG_PPC_STD_MMU_64 */

	/*
	 * if (get_paca()->hard_enabled) return;
	 * But again we need to take care that gcc gets hard_enabled directly
	 * via r13, not choose to use an intermediate register, lest we're
	 * preempted to a different cpu in between the two instructions.
	 */
	if (get_hard_enabled())
		return;

	/*
	 * Need to hard-enable interrupts here.  Since currently disabled,
	 * no need to take further asm precautions against preemption; but
	 * use local_paca instead of get_paca() to avoid preemption checking.
	 */
	local_paca->hard_enabled = en;

#ifndef CONFIG_BOOKE
	/* On server, re-trigger the decrementer if it went negative since
	 * some processors only trigger on edge transitions of the sign bit.
	 *
	 * BookE has a level sensitive decrementer (latches in TSR) so we
	 * don't need that.
	 */
	if ((int)mfspr(SPRN_DEC) < 0)
		mtspr(SPRN_DEC, 1);
#endif /* CONFIG_BOOKE */

	/*
	 * Force the delivery of pending soft-disabled interrupts on PS3.
	 * Any HV call will have this side effect.
	 */
	if (firmware_has_feature(FW_FEATURE_PS3_LV1)) {
		u64 tmp;
		lv1_get_version_info(&tmp);
	}

	__hard_irq_enable();
}
EXPORT_SYMBOL(arch_local_irq_restore);
#endif /* CONFIG_PPC64 */

int arch_show_interrupts(struct seq_file *p, int prec)
{
	int j;

#if defined(CONFIG_PPC32) && defined(CONFIG_TAU_INT)
	if (tau_initialized) {
		seq_printf(p, "%*s: ", prec, "TAU");
		for_each_online_cpu(j)
			seq_printf(p, "%10u ", tau_interrupts(j));
		seq_puts(p, "  PowerPC             Thermal Assist (cpu temp)\n");
	}
#endif /* CONFIG_PPC32 && CONFIG_TAU_INT */

	seq_printf(p, "%*s: ", prec, "LOC");
	for_each_online_cpu(j)
		seq_printf(p, "%10u ", per_cpu(irq_stat, j).timer_irqs);
	seq_printf(p, "  Local timer interrupts\n");

	seq_printf(p, "%*s: ", prec, "SPU");
	for_each_online_cpu(j)
		seq_printf(p, "%10u ", per_cpu(irq_stat, j).spurious_irqs);
	seq_printf(p, "  Spurious interrupts\n");

	seq_printf(p, "%*s: ", prec, "CNT");
	for_each_online_cpu(j)
		seq_printf(p, "%10u ", per_cpu(irq_stat, j).pmu_irqs);
	seq_printf(p, "  Performance monitoring interrupts\n");

	seq_printf(p, "%*s: ", prec, "MCE");
	for_each_online_cpu(j)
		seq_printf(p, "%10u ", per_cpu(irq_stat, j).mce_exceptions);
	seq_printf(p, "  Machine check exceptions\n");

	return 0;
}

/*
 * /proc/stat helpers
 */
u64 arch_irq_stat_cpu(unsigned int cpu)
{
	u64 sum = per_cpu(irq_stat, cpu).timer_irqs;

	sum += per_cpu(irq_stat, cpu).pmu_irqs;
	sum += per_cpu(irq_stat, cpu).mce_exceptions;
	sum += per_cpu(irq_stat, cpu).spurious_irqs;

	return sum;
}

#ifdef CONFIG_HOTPLUG_CPU
void migrate_irqs(void)
{
	struct irq_desc *desc;
	unsigned int irq;
	static int warned;
	cpumask_var_t mask;
	const struct cpumask *map = cpu_online_mask;

	alloc_cpumask_var(&mask, GFP_KERNEL);

	for_each_irq(irq) {
		struct irq_data *data;
		struct irq_chip *chip;

		desc = irq_to_desc(irq);
		if (!desc)
			continue;

		data = irq_desc_get_irq_data(desc);
		if (irqd_is_per_cpu(data))
			continue;

		chip = irq_data_get_irq_chip(data);

		cpumask_and(mask, data->affinity, map);
		if (cpumask_any(mask) >= nr_cpu_ids) {
			printk("Breaking affinity for irq %i\n", irq);
			cpumask_copy(mask, map);
		}
		if (chip->irq_set_affinity)
			chip->irq_set_affinity(data, mask, true);
		else if (desc->action && !(warned++))
			printk("Cannot set affinity for irq %i\n", irq);
	}

	free_cpumask_var(mask);

	local_irq_enable();
	mdelay(1);
	local_irq_disable();
}
#endif

static inline void handle_one_irq(unsigned int irq)
{
	struct thread_info *curtp, *irqtp;
	unsigned long saved_sp_limit;
	struct irq_desc *desc;

	desc = irq_to_desc(irq);
	if (!desc)
		return;

	/* Switch to the irq stack to handle this */
	curtp = current_thread_info();
	irqtp = hardirq_ctx[smp_processor_id()];

	if (curtp == irqtp) {
		/* We're already on the irq stack, just handle it */
		desc->handle_irq(irq, desc);
		return;
	}

	saved_sp_limit = current->thread.ksp_limit;

	irqtp->task = curtp->task;
	irqtp->flags = 0;

	/* Copy the softirq bits in preempt_count so that the
	 * softirq checks work in the hardirq context. */
	irqtp->preempt_count = (irqtp->preempt_count & ~SOFTIRQ_MASK) |
			       (curtp->preempt_count & SOFTIRQ_MASK);

	current->thread.ksp_limit = (unsigned long)irqtp +
		_ALIGN_UP(sizeof(struct thread_info), 16);

	call_handle_irq(irq, desc, irqtp, desc->handle_irq);
	current->thread.ksp_limit = saved_sp_limit;
	irqtp->task = NULL;

	/* Set any flag that may have been set on the
	 * alternate stack
	 */
	if (irqtp->flags)
		set_bits(irqtp->flags, &curtp->flags);
}

static inline void check_stack_overflow(void)
{
#ifdef CONFIG_DEBUG_STACKOVERFLOW
	long sp;

	sp = __get_SP() & (THREAD_SIZE-1);

	/* check for stack overflow: is there less than 2KB free? */
	if (unlikely(sp < (sizeof(struct thread_info) + 2048))) {
		printk("do_IRQ: stack overflow: %ld\n",
			sp - sizeof(struct thread_info));
		dump_stack();
	}
#endif
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
void do_IRQ(struct pt_regs *regs)
|
|
|
|
{
|
IRQ: Maintain regs pointer globally rather than passing to IRQ handlers
Maintain a per-CPU global "struct pt_regs *" variable which can be used instead
of passing regs around manually through all ~1800 interrupt handlers in the
Linux kernel.
The regs pointer is used in few places, but it potentially costs both stack
space and code to pass it around. On the FRV arch, removing the regs parameter
from all the genirq function results in a 20% speed up of the IRQ exit path
(ie: from leaving timer_interrupt() to leaving do_IRQ()).
Where appropriate, an arch may override the generic storage facility and do
something different with the variable. On FRV, for instance, the address is
maintained in GR28 at all times inside the kernel as part of general exception
handling.
Having looked over the code, it appears that the parameter may be handed down
through up to twenty or so layers of functions. Consider a USB character
device attached to a USB hub, attached to a USB controller that posts its
interrupts through a cascaded auxiliary interrupt controller. A character
device driver may want to pass regs to the sysrq handler through the input
layer which adds another few layers of parameter passing.
I've built this code with allyesconfig for x86_64 and i386. I've runtested the
main part of the code on FRV and i386, though I can't test most of the drivers.
I've also done partial conversion for powerpc and MIPS - these at least compile
with minimal configurations.
This will affect all archs. Mostly the changes should be relatively easy.
Take do_IRQ(), store the regs pointer at the beginning, saving the old one:
struct pt_regs *old_regs = set_irq_regs(regs);
And put the old one back at the end:
set_irq_regs(old_regs);
Don't pass regs through to generic_handle_irq() or __do_IRQ().
In timer_interrupt(), this sort of change will be necessary:
- update_process_times(user_mode(regs));
- profile_tick(CPU_PROFILING, regs);
+ update_process_times(user_mode(get_irq_regs()));
+ profile_tick(CPU_PROFILING);
I'd like to move update_process_times()'s use of get_irq_regs() into itself,
except that i386, alone of the archs, uses something other than user_mode().
Some notes on the interrupt handling in the drivers:
(*) input_dev() is now gone entirely. The regs pointer is no longer stored in
the input_dev struct.
(*) finish_unlinks() in drivers/usb/host/ohci-q.c needs checking. It does
something different depending on whether it's been supplied with a regs
pointer or not.
(*) Various IRQ handler function pointers have been moved to type
irq_handler_t.
Signed-off-by: David Howells <dhowells@redhat.com>
(cherry picked from 1b16e7ac850969f38b375e511e3fa2f474a33867 commit)
2006-10-05 21:55:46 +08:00
|
|
|
struct pt_regs *old_regs = set_irq_regs(regs);
|
2006-07-03 19:36:01 +08:00
|
|
|
unsigned int irq;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-10-27 02:47:42 +08:00
|
|
|
trace_irq_entry(regs);
|
|
|
|
|
2007-08-21 00:36:19 +08:00
|
|
|
irq_enter();
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-04-22 23:31:37 +08:00
|
|
|
check_stack_overflow();
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-10-07 20:08:26 +08:00
|
|
|
irq = ppc_md.get_irq();
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-04-28 09:57:43 +08:00
|
|
|
if (irq != NO_IRQ && irq != NO_IRQ_IGNORE)
|
|
|
|
handle_one_irq(irq);
|
|
|
|
else if (irq != NO_IRQ_IGNORE)
|
2010-02-01 04:34:36 +08:00
|
|
|
__get_cpu_var(irq_stat).spurious_irqs++;
|
2005-11-16 15:53:29 +08:00
|
|
|
|
2007-08-21 00:36:19 +08:00
|
|
|
irq_exit();
|
2006-10-05 21:55:46 +08:00
|
|
|
set_irq_regs(old_regs);
|
2005-11-09 15:07:45 +08:00
|
|
|
|
2005-11-16 15:53:29 +08:00
|
|
|
#ifdef CONFIG_PPC_ISERIES
|
2006-11-21 11:16:13 +08:00
|
|
|
if (firmware_has_feature(FW_FEATURE_ISERIES) &&
|
|
|
|
get_lppaca()->int_dword.fields.decr_int) {
|
2006-01-13 07:26:42 +08:00
|
|
|
get_lppaca()->int_dword.fields.decr_int = 0;
|
|
|
|
/* Signal a fake decrementer interrupt */
|
|
|
|
timer_interrupt(regs);
|
2005-11-16 15:53:29 +08:00
|
|
|
}
|
|
|
|
#endif
|
2009-10-27 02:47:42 +08:00
|
|
|
|
|
|
|
trace_irq_exit(regs);
|
2005-11-16 15:53:29 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
void __init init_IRQ(void)
|
|
|
|
{
|
2007-07-10 01:31:44 +08:00
|
|
|
if (ppc_md.init_IRQ)
|
|
|
|
ppc_md.init_IRQ();
|
2008-04-30 16:49:55 +08:00
|
|
|
|
|
|
|
exc_lvl_ctx_init();
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
irq_ctx_init();
|
|
|
|
}
|
|
|
|
|
2008-04-30 16:49:55 +08:00
|
|
|
#if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
|
|
|
|
struct thread_info *critirq_ctx[NR_CPUS] __read_mostly;
|
|
|
|
struct thread_info *dbgirq_ctx[NR_CPUS] __read_mostly;
|
|
|
|
struct thread_info *mcheckirq_ctx[NR_CPUS] __read_mostly;
|
|
|
|
|
|
|
|
void exc_lvl_ctx_init(void)
|
|
|
|
{
|
|
|
|
struct thread_info *tp;
|
2011-04-15 06:32:04 +08:00
|
|
|
int i, cpu_nr;
|
2008-04-30 16:49:55 +08:00
|
|
|
|
|
|
|
for_each_possible_cpu(i) {
|
2011-04-15 06:32:04 +08:00
|
|
|
#ifdef CONFIG_PPC64
|
|
|
|
cpu_nr = i;
|
|
|
|
#else
|
|
|
|
cpu_nr = get_hard_smp_processor_id(i);
|
|
|
|
#endif
|
|
|
|
memset((void *)critirq_ctx[cpu_nr], 0, THREAD_SIZE);
|
|
|
|
tp = critirq_ctx[cpu_nr];
|
|
|
|
tp->cpu = cpu_nr;
|
2008-04-30 16:49:55 +08:00
|
|
|
tp->preempt_count = 0;
|
|
|
|
|
|
|
|
#ifdef CONFIG_BOOKE
|
2011-04-15 06:32:04 +08:00
|
|
|
memset((void *)dbgirq_ctx[cpu_nr], 0, THREAD_SIZE);
|
|
|
|
tp = dbgirq_ctx[cpu_nr];
|
|
|
|
tp->cpu = cpu_nr;
|
2008-04-30 16:49:55 +08:00
|
|
|
tp->preempt_count = 0;
|
|
|
|
|
2011-04-15 06:32:04 +08:00
|
|
|
memset((void *)mcheckirq_ctx[cpu_nr], 0, THREAD_SIZE);
|
|
|
|
tp = mcheckirq_ctx[cpu_nr];
|
|
|
|
tp->cpu = cpu_nr;
|
2008-04-30 16:49:55 +08:00
|
|
|
tp->preempt_count = HARDIRQ_OFFSET;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-06-23 17:05:30 +08:00
|
|
|
struct thread_info *softirq_ctx[NR_CPUS] __read_mostly;
|
|
|
|
struct thread_info *hardirq_ctx[NR_CPUS] __read_mostly;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
void irq_ctx_init(void)
|
|
|
|
{
|
|
|
|
struct thread_info *tp;
|
|
|
|
int i;
|
|
|
|
|
2006-03-29 06:50:51 +08:00
|
|
|
for_each_possible_cpu(i) {
|
2005-04-17 06:20:36 +08:00
|
|
|
memset((void *)softirq_ctx[i], 0, THREAD_SIZE);
|
|
|
|
tp = softirq_ctx[i];
|
|
|
|
tp->cpu = i;
|
2008-04-09 15:21:28 +08:00
|
|
|
tp->preempt_count = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
memset((void *)hardirq_ctx[i], 0, THREAD_SIZE);
|
|
|
|
tp = hardirq_ctx[i];
|
|
|
|
tp->cpu = i;
|
|
|
|
tp->preempt_count = HARDIRQ_OFFSET;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
powerpc: Implement accurate task and CPU time accounting
This implements accurate task and cpu time accounting for 64-bit
powerpc kernels. Instead of accounting a whole jiffy of time to a
task on a timer interrupt because that task happened to be running at
the time, we now account time in units of timebase ticks according to
the actual time spent by the task in user mode and kernel mode. We
also count the time spent processing hardware and software interrupts
accurately. This is conditional on CONFIG_VIRT_CPU_ACCOUNTING. If
that is not set, we do tick-based approximate accounting as before.
To get this accurate information, we read either the PURR (processor
utilization of resources register) on POWER5 machines, or the timebase
on other machines on
* each entry to the kernel from usermode
* each exit to usermode
* transitions between process context, hard irq context and soft irq
context in kernel mode
* context switches.
On POWER5 systems with shared-processor logical partitioning we also
read both the PURR and the timebase at each timer interrupt and
context switch in order to determine how much time has been taken by
the hypervisor to run other partitions ("steal" time). Unfortunately,
since we need values of the PURR on both threads at the same time to
accurately calculate the steal time, and since we can only calculate
steal time on a per-core basis, the apportioning of the steal time
between idle time (time which we ceded to the hypervisor in the idle
loop) and actual stolen time is somewhat approximate at the moment.
This is all based quite heavily on what s390 does, and it uses the
generic interfaces that were added by the s390 developers,
i.e. account_system_time(), account_user_time(), etc.
This patch doesn't add any new interfaces between the kernel and
userspace, and doesn't change the units in which time is reported to
userspace by things such as /proc/stat, /proc/<pid>/stat, getrusage(),
times(), etc. Internally the various task and cpu times are stored in
timebase units, but they are converted to USER_HZ units (1/100th of a
second) when reported to userspace. Some precision is therefore lost
but there should not be any accumulating error, since the internal
accumulation is at full precision.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-24 07:06:59 +08:00
|
|
|
static inline void do_softirq_onstack(void)
|
|
|
|
{
|
|
|
|
struct thread_info *curtp, *irqtp;
|
2008-04-28 14:21:22 +08:00
|
|
|
unsigned long saved_sp_limit = current->thread.ksp_limit;
|
2006-02-24 07:06:59 +08:00
|
|
|
|
|
|
|
curtp = current_thread_info();
|
|
|
|
irqtp = softirq_ctx[smp_processor_id()];
|
|
|
|
irqtp->task = curtp->task;
|
2011-07-19 01:17:22 +08:00
|
|
|
irqtp->flags = 0;
|
2008-04-28 14:21:22 +08:00
|
|
|
current->thread.ksp_limit = (unsigned long)irqtp +
|
|
|
|
_ALIGN_UP(sizeof(struct thread_info), 16);
|
2006-02-24 07:06:59 +08:00
|
|
|
call_do_softirq(irqtp);
|
2008-04-28 14:21:22 +08:00
|
|
|
current->thread.ksp_limit = saved_sp_limit;
|
2006-02-24 07:06:59 +08:00
|
|
|
irqtp->task = NULL;
|
2011-07-19 01:17:22 +08:00
|
|
|
|
|
|
|
/* Set any flag that may have been set on the
|
|
|
|
* alternate stack
|
|
|
|
*/
|
|
|
|
if (irqtp->flags)
|
|
|
|
set_bits(irqtp->flags, &curtp->flags);
|
2006-02-24 07:06:59 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
void do_softirq(void)
|
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
if (in_interrupt())
|
|
|
|
return;
|
|
|
|
|
|
|
|
local_irq_save(flags);
|
|
|
|
|
2006-07-04 06:28:34 +08:00
|
|
|
if (local_softirq_pending())
|
2006-02-24 07:06:59 +08:00
|
|
|
do_softirq_onstack();
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
local_irq_restore(flags);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
2006-07-03 19:36:01 +08:00
|
|
|
* IRQ controller and virtual interrupts
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
|
2011-05-04 13:02:15 +08:00
|
|
|
/* The main irq map itself is an array of NR_IRQS entries containing the
|
|
|
|
* associated host and irq number. An entry with a host of NULL is free.
|
|
|
|
* An entry can be allocated if it's free; the allocator always then sets
|
|
|
|
* hwirq first to the host's invalid irq number and then fills ops.
|
|
|
|
*/
|
|
|
|
struct irq_map_entry {
|
|
|
|
irq_hw_number_t hwirq;
|
|
|
|
struct irq_host *host;
|
|
|
|
};
|
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
static LIST_HEAD(irq_hosts);
|
2010-02-18 10:22:24 +08:00
|
|
|
static DEFINE_RAW_SPINLOCK(irq_big_lock);
|
2008-09-04 20:37:08 +08:00
|
|
|
static DEFINE_MUTEX(revmap_trees_mutex);
|
2011-05-04 13:02:15 +08:00
|
|
|
static struct irq_map_entry irq_map[NR_IRQS];
|
2006-07-03 19:36:01 +08:00
|
|
|
static unsigned int irq_virq_count = NR_IRQS;
|
|
|
|
static struct irq_host *irq_default_host;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-05-04 13:02:15 +08:00
|
|
|
irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
|
|
|
|
{
|
|
|
|
return irq_map[d->irq].hwirq;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(irqd_to_hwirq);
|
|
|
|
|
2007-06-04 12:47:04 +08:00
|
|
|
irq_hw_number_t virq_to_hw(unsigned int virq)
|
|
|
|
{
|
|
|
|
return irq_map[virq].hwirq;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(virq_to_hw);
|
|
|
|
|
2011-05-11 03:30:36 +08:00
|
|
|
bool virq_is_host(unsigned int virq, struct irq_host *host)
|
|
|
|
{
|
|
|
|
return irq_map[virq].host == host;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(virq_is_host);
|
|
|
|
|
2007-08-28 16:47:55 +08:00
|
|
|
static int default_irq_host_match(struct irq_host *h, struct device_node *np)
|
|
|
|
{
|
|
|
|
return h->of_node != NULL && h->of_node == np;
|
|
|
|
}
|
|
|
|
|
2007-10-02 11:37:53 +08:00
|
|
|
struct irq_host *irq_alloc_host(struct device_node *of_node,
|
2007-08-28 16:47:54 +08:00
|
|
|
unsigned int revmap_type,
|
|
|
|
unsigned int revmap_arg,
|
|
|
|
struct irq_host_ops *ops,
|
|
|
|
irq_hw_number_t inval_irq)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2006-07-03 19:36:01 +08:00
|
|
|
struct irq_host *host;
|
|
|
|
unsigned int size = sizeof(struct irq_host);
|
|
|
|
unsigned int i;
|
|
|
|
unsigned int *rmap;
|
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
/* Allocate structure and revmap table if using linear mapping */
|
|
|
|
if (revmap_type == IRQ_HOST_MAP_LINEAR)
|
|
|
|
size += revmap_arg * sizeof(unsigned int);
|
2011-05-11 03:29:53 +08:00
|
|
|
host = kzalloc(size, GFP_KERNEL);
|
2006-07-03 19:36:01 +08:00
|
|
|
if (host == NULL)
|
|
|
|
return NULL;
|
2006-04-04 12:49:48 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
/* Fill structure */
|
|
|
|
host->revmap_type = revmap_type;
|
|
|
|
host->inval_irq = inval_irq;
|
|
|
|
host->ops = ops;
|
2008-05-26 10:12:32 +08:00
|
|
|
host->of_node = of_node_get(of_node);
|
2006-04-04 12:49:48 +08:00
|
|
|
|
2007-08-28 16:47:55 +08:00
|
|
|
if (host->ops->match == NULL)
|
|
|
|
host->ops->match = default_irq_host_match;
|
2006-04-04 12:49:48 +08:00
|
|
|
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_lock_irqsave(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
|
|
|
|
/* If it's a legacy controller, check for duplicates and
|
|
|
|
* mark it as allocated (we use the irq 0 host pointer for that)
|
|
|
|
*/
|
|
|
|
if (revmap_type == IRQ_HOST_MAP_LEGACY) {
|
|
|
|
if (irq_map[0].host != NULL) {
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
2011-05-25 04:34:17 +08:00
|
|
|
of_node_put(host->of_node);
|
|
|
|
kfree(host);
|
2006-07-03 19:36:01 +08:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
irq_map[0].host = host;
|
|
|
|
}
|
|
|
|
|
|
|
|
list_add(&host->link, &irq_hosts);
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
|
|
|
|
/* Additional setups per revmap type */
|
|
|
|
switch(revmap_type) {
|
|
|
|
case IRQ_HOST_MAP_LEGACY:
|
|
|
|
/* 0 is always the invalid number for legacy */
|
|
|
|
host->inval_irq = 0;
|
|
|
|
/* setup us as the host for all legacy interrupts */
|
|
|
|
for (i = 1; i < NUM_ISA_INTERRUPTS; i++) {
|
2007-08-28 16:47:56 +08:00
|
|
|
irq_map[i].hwirq = i;
|
2006-07-03 19:36:01 +08:00
|
|
|
smp_wmb();
|
|
|
|
irq_map[i].host = host;
|
|
|
|
smp_wmb();
|
|
|
|
|
|
|
|
/* Legacy flags are left to default at this point,
|
|
|
|
* one can then use irq_create_mapping() to
|
2007-10-20 05:22:55 +08:00
|
|
|
* explicitly change them
|
2006-07-03 19:36:01 +08:00
|
|
|
*/
|
[PATCH] powerpc: fix trigger handling in the new irq code
This patch slightly reworks the new irq code to fix a small design error. I
removed the passing of the trigger to the map() calls entirely, it was not a
good idea to have one call do two different things. It also fixes a couple of
corner cases.
Mapping a linux virtual irq to a physical irq now does only that. Setting the
trigger is a different action which has a different call.
The main changes are:
- I no longer call host->ops->map() for an already mapped irq; I just return
the virtual number that was already mapped. It was called before to give an
opportunity to change the trigger, but that was causing issues as that could
happen while the interrupt was in use by a device, and because of the
trigger change, map would potentially muck around with things in a racy way.
That was placing a heavy burden on a given controller's implementation of
map() to get it right. This is much simpler now. map() is only called on
the initial mapping of an irq, meaning that you know that this irq is _not_
being used. You can initialize the hardware if you want (though you don't
have to).
- Controllers that can handle different type of triggers (level/edge/etc...)
now implement the standard irq_chip->set_type() call as defined by the
generic code. That means that you can use the standard set_irq_type() to
configure an irq line manually if you wish or (though I don't like that
interface), pass explicit trigger flags to request_irq() as defined by the
generic kernel interfaces. Also, using those interfaces guarantees that
your controller set_type callback is called with the descriptor lock held,
thus providing locking against activity on the same interrupt (including
mask/unmask/etc...) automatically. A result is that, for example, MPIC's
own map() implementation calls irq_set_type(NONE) to configure the hardware
to the default triggers.
- To allow the above, the irq_map array entry for the new mapped interrupt
is now set before map() callback is called for the controller.
- The irq_create_of_mapping() (also used by irq_of_parse_and_map()) function
for mapping interrupts from the device-tree now also call the separate
set_irq_type(), and only does so if there is a change in the trigger type.
- While I was at it, I changed pci_read_irq_line() (which is the helper I
would expect most archs to use in their pcibios_fixup() to get the PCI
interrupt routing from the device tree) to also handle a fallback when the
DT mapping fails consisting of reading the PCI_INTERRUPT_PIN to know whether
the device has an interrupt at all, and the PCI_INTERRUPT_LINE to get an
interrupt number from the device. That number is then mapped using the
default controller, and the trigger is set to level low. That default
behaviour works for several platforms that don't have a proper interrupt
tree like Pegasos. If it doesn't work for your platform, then either
provide a proper interrupt tree from the firmware so that fallback isn't
needed, or don't call pci_read_irq_line()
- Add back a bit that got dropped by my main rework patch for properly
clearing pending IPIs on pSeries when using a kexec
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-10 19:44:42 +08:00
|
|
|
ops->map(host, i, i);
|
2011-05-11 03:30:44 +08:00
|
|
|
|
|
|
|
/* Clear norequest flags */
|
|
|
|
irq_clear_status_flags(i, IRQ_NOREQUEST);
|
2006-07-03 19:36:01 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case IRQ_HOST_MAP_LINEAR:
|
|
|
|
rmap = (unsigned int *)(host + 1);
|
|
|
|
for (i = 0; i < revmap_arg; i++)
|
2007-06-01 15:23:26 +08:00
|
|
|
rmap[i] = NO_IRQ;
|
2006-07-03 19:36:01 +08:00
|
|
|
host->revmap_data.linear.size = revmap_arg;
|
|
|
|
smp_wmb();
|
|
|
|
host->revmap_data.linear.revmap = rmap;
|
|
|
|
break;
|
2011-05-11 03:29:53 +08:00
|
|
|
case IRQ_HOST_MAP_TREE:
|
|
|
|
INIT_RADIX_TREE(&host->revmap_data.tree, GFP_KERNEL);
|
|
|
|
break;
|
2006-07-03 19:36:01 +08:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
pr_debug("irq: Allocated host of type %d @0x%p\n", revmap_type, host);
|
|
|
|
|
|
|
|
return host;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
struct irq_host *irq_find_host(struct device_node *node)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2006-07-03 19:36:01 +08:00
|
|
|
struct irq_host *h, *found = NULL;
|
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
/* We might want to match the legacy controller last since
|
|
|
|
* it might potentially be set to match all interrupts in
|
|
|
|
* the absence of a device node. This isn't a problem so far
|
|
|
|
* though...
|
|
|
|
*/
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_lock_irqsave(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
list_for_each_entry(h, &irq_hosts, link)
|
2007-08-28 16:47:55 +08:00
|
|
|
if (h->ops->match(h, node)) {
|
2006-07-03 19:36:01 +08:00
|
|
|
found = h;
|
|
|
|
break;
|
|
|
|
}
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
return found;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(irq_find_host);
|
|
|
|
|
|
|
|
void irq_set_default_host(struct irq_host *host)
|
|
|
|
{
|
|
|
|
pr_debug("irq: Default host set to @0x%p\n", host);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
irq_default_host = host;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
void irq_set_virq_count(unsigned int count)
|
|
|
|
{
|
|
|
|
pr_debug("irq: Trying to set virq count to %d\n", count);
|
2005-06-23 07:43:37 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
BUG_ON(count < NUM_ISA_INTERRUPTS);
|
|
|
|
if (count < NR_IRQS)
|
|
|
|
irq_virq_count = count;
|
|
|
|
}
|
|
|
|
|
2007-06-04 20:59:59 +08:00
|
|
|
static int irq_setup_virq(struct irq_host *host, unsigned int virq,
|
|
|
|
irq_hw_number_t hwirq)
|
|
|
|
{
|
2011-01-21 14:12:30 +08:00
|
|
|
int res;
|
2009-10-14 03:45:03 +08:00
|
|
|
|
2011-01-21 14:12:30 +08:00
|
|
|
res = irq_alloc_desc_at(virq, 0);
|
|
|
|
if (res != virq) {
|
2009-10-14 03:45:03 +08:00
|
|
|
pr_debug("irq: -> allocating desc failed\n");
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
2007-06-04 20:59:59 +08:00
|
|
|
/* map it */
|
|
|
|
smp_wmb();
|
|
|
|
irq_map[virq].hwirq = hwirq;
|
|
|
|
smp_mb();
|
|
|
|
|
|
|
|
if (host->ops->map(host, virq, hwirq)) {
|
|
|
|
pr_debug("irq: -> mapping failed, freeing\n");
|
2011-01-21 14:12:30 +08:00
|
|
|
goto errdesc;
|
2007-06-04 20:59:59 +08:00
|
|
|
}
|
|
|
|
|
2011-05-11 03:30:44 +08:00
|
|
|
irq_clear_status_flags(virq, IRQ_NOREQUEST);
|
|
|
|
|
2007-06-04 20:59:59 +08:00
|
|
|
return 0;
|
2009-10-14 03:45:03 +08:00
|
|
|
|
2011-01-21 14:12:30 +08:00
|
|
|
errdesc:
|
|
|
|
irq_free_descs(virq, 1);
|
2009-10-14 03:45:03 +08:00
|
|
|
error:
|
|
|
|
irq_free_virt(virq, 1);
|
|
|
|
return -1;
|
2007-06-04 20:59:59 +08:00
|
|
|
}
|
2006-08-28 09:17:37 +08:00
|
|
|
|
2007-06-04 21:00:00 +08:00
|
|
|
unsigned int irq_create_direct_mapping(struct irq_host *host)
|
|
|
|
{
|
|
|
|
unsigned int virq;
|
|
|
|
|
|
|
|
if (host == NULL)
|
|
|
|
host = irq_default_host;
|
|
|
|
|
|
|
|
BUG_ON(host == NULL);
|
|
|
|
WARN_ON(host->revmap_type != IRQ_HOST_MAP_NOMAP);
|
|
|
|
|
|
|
|
virq = irq_alloc_virt(host, 1, 0);
|
|
|
|
if (virq == NO_IRQ) {
|
|
|
|
pr_debug("irq: create_direct virq allocation failed\n");
|
|
|
|
return NO_IRQ;
|
|
|
|
}
|
|
|
|
|
|
|
|
pr_debug("irq: create_direct obtained virq %d\n", virq);
|
|
|
|
|
|
|
|
if (irq_setup_virq(host, virq, virq))
|
|
|
|
return NO_IRQ;
|
|
|
|
|
|
|
|
return virq;
|
|
|
|
}
|
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
unsigned int irq_create_mapping(struct irq_host *host,
|
2006-07-10 19:44:42 +08:00
|
|
|
irq_hw_number_t hwirq)
|
2006-07-03 19:36:01 +08:00
|
|
|
{
|
|
|
|
unsigned int virq, hint;
|
|
|
|
|
2006-07-10 19:44:42 +08:00
|
|
|
pr_debug("irq: irq_create_mapping(0x%p, 0x%lx)\n", host, hwirq);
|
2006-07-03 19:36:01 +08:00
|
|
|
|
|
|
|
/* Look for default host if necessary */
|
|
|
|
if (host == NULL)
|
|
|
|
host = irq_default_host;
|
|
|
|
if (host == NULL) {
|
|
|
|
printk(KERN_WARNING "irq_create_mapping called for"
|
|
|
|
" NULL host, hwirq=%lx\n", hwirq);
|
|
|
|
WARN_ON(1);
|
|
|
|
return NO_IRQ;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-07-03 19:36:01 +08:00
|
|
|
pr_debug("irq: -> using host @%p\n", host);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-05-25 04:34:18 +08:00
|
|
|
/* Check if mapping already exists */
|
2006-07-03 19:36:01 +08:00
|
|
|
virq = irq_find_mapping(host, hwirq);
|
2007-06-01 15:23:26 +08:00
|
|
|
if (virq != NO_IRQ) {
|
2006-07-03 19:36:01 +08:00
|
|
|
pr_debug("irq: -> existing mapping on virq %d\n", virq);
|
|
|
|
return virq;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
/* Get a virtual interrupt number */
|
|
|
|
if (host->revmap_type == IRQ_HOST_MAP_LEGACY) {
|
|
|
|
/* Handle legacy */
|
|
|
|
virq = (unsigned int)hwirq;
|
|
|
|
if (virq == 0 || virq >= NUM_ISA_INTERRUPTS)
|
|
|
|
return NO_IRQ;
|
|
|
|
return virq;
|
|
|
|
} else {
|
|
|
|
/* Allocate a virtual interrupt number */
|
|
|
|
hint = hwirq % irq_virq_count;
|
|
|
|
virq = irq_alloc_virt(host, 1, hint);
|
|
|
|
if (virq == NO_IRQ) {
|
|
|
|
pr_debug("irq: -> virq allocation failed\n");
|
|
|
|
return NO_IRQ;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-06-04 20:59:59 +08:00
|
|
|
if (irq_setup_virq(host, virq, hwirq))
|
2006-07-03 19:36:01 +08:00
|
|
|
return NO_IRQ;
|
2007-06-04 20:59:59 +08:00
|
|
|
|
2011-07-08 04:35:38 +08:00
|
|
|
pr_debug("irq: irq %lu on host %s mapped to virtual irq %u\n",
|
2009-04-06 00:05:02 +08:00
|
|
|
hwirq, host->of_node ? host->of_node->full_name : "null", virq);
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
return virq;
|
2006-07-03 19:36:01 +08:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(irq_create_mapping);
|
|
|
|
|
2006-10-09 23:22:09 +08:00
|
|
|
unsigned int irq_create_of_mapping(struct device_node *controller,
|
2009-12-08 10:39:50 +08:00
|
|
|
const u32 *intspec, unsigned int intsize)
|
2006-07-03 19:36:01 +08:00
|
|
|
{
|
|
|
|
struct irq_host *host;
|
|
|
|
irq_hw_number_t hwirq;
|
2006-07-10 19:44:42 +08:00
|
|
|
unsigned int type = IRQ_TYPE_NONE;
|
|
|
|
unsigned int virq;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
if (controller == NULL)
|
|
|
|
host = irq_default_host;
|
|
|
|
else
|
|
|
|
host = irq_find_host(controller);
|
2006-07-10 19:44:42 +08:00
|
|
|
if (host == NULL) {
|
|
|
|
printk(KERN_WARNING "irq: no irq host found for %s !\n",
|
|
|
|
controller->full_name);
|
2006-07-03 19:36:01 +08:00
|
|
|
return NO_IRQ;
|
2006-07-10 19:44:42 +08:00
|
|
|
}
|
2006-07-03 19:36:01 +08:00
|
|
|
|
|
|
|
/* If host has no translation, then we assume interrupt line */
|
|
|
|
if (host->ops->xlate == NULL)
|
|
|
|
hwirq = intspec[0];
|
|
|
|
else {
|
|
|
|
if (host->ops->xlate(host, controller, intspec, intsize,
|
2006-07-10 19:44:42 +08:00
|
|
|
&hwirq, &type))
|
2006-07-03 19:36:01 +08:00
|
|
|
return NO_IRQ;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-07-03 19:36:01 +08:00
|
|
|
|
2006-07-10 19:44:42 +08:00
|
|
|
/* Create mapping */
|
|
|
|
virq = irq_create_mapping(host, hwirq);
|
|
|
|
if (virq == NO_IRQ)
|
|
|
|
return virq;
|
|
|
|
|
|
|
|
/* Set type if specified and different than the current one */
|
|
|
|
if (type != IRQ_TYPE_NONE &&
|
2011-03-25 23:36:35 +08:00
|
|
|
type != (irqd_get_trigger_type(irq_get_irq_data(virq))))
|
2011-03-25 23:45:20 +08:00
|
|
|
irq_set_irq_type(virq, type);
|
2006-07-10 19:44:42 +08:00
|
|
|
return virq;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-07-03 19:36:01 +08:00
|
|
|
EXPORT_SYMBOL_GPL(irq_create_of_mapping);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
void irq_dispose_mapping(unsigned int virq)
|
|
|
|
{
|
2006-10-24 11:37:34 +08:00
|
|
|
struct irq_host *host;
|
2006-07-03 19:36:01 +08:00
|
|
|
irq_hw_number_t hwirq;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-10-24 11:37:34 +08:00
|
|
|
if (virq == NO_IRQ)
|
|
|
|
return;
|
|
|
|
|
|
|
|
host = irq_map[virq].host;
|
2011-05-11 03:29:57 +08:00
|
|
|
if (WARN_ON(host == NULL))
|
2006-07-03 19:36:01 +08:00
|
|
|
return;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
/* Never unmap legacy interrupts */
|
|
|
|
if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
|
|
|
|
return;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-05-11 03:30:44 +08:00
|
|
|
irq_set_status_flags(virq, IRQ_NOREQUEST);
|
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
/* remove chip and handler */
|
2011-03-25 23:45:20 +08:00
|
|
|
irq_set_chip_and_handler(virq, NULL, NULL);
|
2006-07-03 19:36:01 +08:00
|
|
|
|
|
|
|
/* Make sure it's completed */
|
|
|
|
synchronize_irq(virq);
|
|
|
|
|
|
|
|
/* Tell the PIC about it */
|
|
|
|
if (host->ops->unmap)
|
|
|
|
host->ops->unmap(host, virq);
|
|
|
|
smp_mb();
|
|
|
|
|
|
|
|
/* Clear reverse map */
|
|
|
|
hwirq = irq_map[virq].hwirq;
|
|
|
|
switch (host->revmap_type) {
|
|
|
|
case IRQ_HOST_MAP_LINEAR:
|
|
|
|
if (hwirq < host->revmap_data.linear.size)
|
2007-06-01 15:23:26 +08:00
|
|
|
host->revmap_data.linear.revmap[hwirq] = NO_IRQ;
|
2006-07-03 19:36:01 +08:00
|
|
|
break;
|
|
|
|
case IRQ_HOST_MAP_TREE:
|
2008-09-04 20:37:08 +08:00
|
|
|
mutex_lock(&revmap_trees_mutex);
|
2006-07-03 19:36:01 +08:00
|
|
|
radix_tree_delete(&host->revmap_data.tree, hwirq);
|
2008-09-04 20:37:08 +08:00
|
|
|
mutex_unlock(&revmap_trees_mutex);
|
2006-07-03 19:36:01 +08:00
|
|
|
break;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
/* Destroy map */
|
|
|
|
smp_mb();
|
|
|
|
irq_map[virq].hwirq = host->inval_irq;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-01-21 14:12:30 +08:00
|
|
|
irq_free_descs(virq, 1);
|
2006-07-03 19:36:01 +08:00
|
|
|
/* Free it */
|
|
|
|
irq_free_virt(virq, 1);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-07-03 19:36:01 +08:00
|
|
|
EXPORT_SYMBOL_GPL(irq_dispose_mapping);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
unsigned int irq_find_mapping(struct irq_host *host,
|
|
|
|
irq_hw_number_t hwirq)
|
|
|
|
{
|
|
|
|
unsigned int i;
|
|
|
|
unsigned int hint = hwirq % irq_virq_count;
|
|
|
|
|
|
|
|
/* Look for default host if necessary */
|
|
|
|
if (host == NULL)
|
|
|
|
host = irq_default_host;
|
|
|
|
if (host == NULL)
|
|
|
|
return NO_IRQ;
|
|
|
|
|
|
|
|
/* legacy -> bail early */
|
|
|
|
if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
|
|
|
|
return hwirq;
|
|
|
|
|
|
|
|
/* Slow path does a linear search of the map */
|
|
|
|
if (hint < NUM_ISA_INTERRUPTS)
|
|
|
|
hint = NUM_ISA_INTERRUPTS;
|
|
|
|
i = hint;
|
|
|
|
do {
|
|
|
|
if (irq_map[i].host == host &&
|
|
|
|
irq_map[i].hwirq == hwirq)
|
|
|
|
return i;
|
|
|
|
i++;
|
|
|
|
if (i >= irq_virq_count)
|
|
|
|
i = NUM_ISA_INTERRUPTS;
|
|
|
|
} while (i != hint);
|
|
|
|
return NO_IRQ;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(irq_find_mapping);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-05-19 21:54:26 +08:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
int irq_choose_cpu(const struct cpumask *mask)
|
|
|
|
{
|
|
|
|
int cpuid;
|
|
|
|
|
|
|
|
if (cpumask_equal(mask, cpu_all_mask)) {
|
|
|
|
static int irq_rover;
|
|
|
|
static DEFINE_RAW_SPINLOCK(irq_rover_lock);
|
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
/* Round-robin distribution... */
|
|
|
|
do_round_robin:
|
|
|
|
raw_spin_lock_irqsave(&irq_rover_lock, flags);
|
|
|
|
|
|
|
|
irq_rover = cpumask_next(irq_rover, cpu_online_mask);
|
|
|
|
if (irq_rover >= nr_cpu_ids)
|
|
|
|
irq_rover = cpumask_first(cpu_online_mask);
|
|
|
|
|
|
|
|
cpuid = irq_rover;
|
|
|
|
|
|
|
|
raw_spin_unlock_irqrestore(&irq_rover_lock, flags);
|
|
|
|
} else {
|
|
|
|
cpuid = cpumask_first_and(mask, cpu_online_mask);
|
|
|
|
if (cpuid >= nr_cpu_ids)
|
|
|
|
goto do_round_robin;
|
|
|
|
}
|
|
|
|
|
|
|
|
return get_hard_smp_processor_id(cpuid);
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
int irq_choose_cpu(const struct cpumask *mask)
|
|
|
|
{
|
|
|
|
return hard_smp_processor_id();
|
|
|
|
}
|
|
|
|
#endif
|
2006-07-03 19:36:01 +08:00
|
|
|
|
2008-09-04 20:37:07 +08:00
|
|
|
unsigned int irq_radix_revmap_lookup(struct irq_host *host,
|
|
|
|
irq_hw_number_t hwirq)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2006-07-03 19:36:01 +08:00
|
|
|
struct irq_map_entry *ptr;
|
|
|
|
unsigned int virq;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-05-11 03:29:57 +08:00
|
|
|
if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_TREE))
|
|
|
|
return irq_find_mapping(host, hwirq);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-09-04 20:37:08 +08:00
|
|
|
/*
|
2011-05-25 04:34:18 +08:00
|
|
|
* The ptr returned references the static global irq_map,
|
|
|
|
* but freeing an irq can delete nodes along the path used to
|
|
|
|
* do the lookup; deletion is deferred via call_rcu, hence the RCU read lock.
|
2008-09-04 20:37:08 +08:00
|
|
|
*/
|
2011-05-25 04:34:18 +08:00
|
|
|
rcu_read_lock();
|
2008-09-04 20:37:07 +08:00
|
|
|
ptr = radix_tree_lookup(&host->revmap_data.tree, hwirq);
|
2011-05-25 04:34:18 +08:00
|
|
|
rcu_read_unlock();
|
2006-08-28 09:17:37 +08:00
|
|
|
|
2008-09-04 20:37:07 +08:00
|
|
|
/*
|
|
|
|
* If found in radix tree, then fine.
|
|
|
|
* Else fallback to linear lookup - this should not happen in practice
|
|
|
|
* as it means that we failed to insert the node in the radix tree.
|
|
|
|
*/
|
|
|
|
if (ptr)
|
2006-07-03 19:36:01 +08:00
|
|
|
virq = ptr - irq_map;
|
2008-09-04 20:37:07 +08:00
|
|
|
else
|
|
|
|
virq = irq_find_mapping(host, hwirq);
|
|
|
|
|
|
|
|
return virq;
|
|
|
|
}
|
|
|
|
|
|
|
|
void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
|
|
|
|
irq_hw_number_t hwirq)
|
|
|
|
{
|
2011-05-11 03:29:57 +08:00
|
|
|
if (WARN_ON(host->revmap_type != IRQ_HOST_MAP_TREE))
|
|
|
|
return;
|
2008-09-04 20:37:07 +08:00
|
|
|
|
2006-08-28 09:17:37 +08:00
|
|
|
if (virq != NO_IRQ) {
|
2008-09-04 20:37:08 +08:00
|
|
|
mutex_lock(&revmap_trees_mutex);
|
2008-09-04 20:37:07 +08:00
|
|
|
radix_tree_insert(&host->revmap_data.tree, hwirq,
|
|
|
|
&irq_map[virq]);
|
2008-09-04 20:37:08 +08:00
|
|
|
mutex_unlock(&revmap_trees_mutex);
|
2006-08-28 09:17:37 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
unsigned int irq_linear_revmap(struct irq_host *host,
|
|
|
|
irq_hw_number_t hwirq)
|
powerpc: Implement accurate task and CPU time accounting
This implements accurate task and cpu time accounting for 64-bit
powerpc kernels. Instead of accounting a whole jiffy of time to a
task on a timer interrupt because that task happened to be running at
the time, we now account time in units of timebase ticks according to
the actual time spent by the task in user mode and kernel mode. We
also count the time spent processing hardware and software interrupts
accurately. This is conditional on CONFIG_VIRT_CPU_ACCOUNTING. If
that is not set, we do tick-based approximate accounting as before.
To get this accurate information, we read either the PURR (processor
utilization of resources register) on POWER5 machines, or the timebase
on other machines on
* each entry to the kernel from usermode
* each exit to usermode
* transitions between process context, hard irq context and soft irq
context in kernel mode
* context switches.
On POWER5 systems with shared-processor logical partitioning we also
read both the PURR and the timebase at each timer interrupt and
context switch in order to determine how much time has been taken by
the hypervisor to run other partitions ("steal" time). Unfortunately,
since we need values of the PURR on both threads at the same time to
accurately calculate the steal time, and since we can only calculate
steal time on a per-core basis, the apportioning of the steal time
between idle time (time which we ceded to the hypervisor in the idle
loop) and actual stolen time is somewhat approximate at the moment.
This is all based quite heavily on what s390 does, and it uses the
generic interfaces that were added by the s390 developers,
i.e. account_system_time(), account_user_time(), etc.
This patch doesn't add any new interfaces between the kernel and
userspace, and doesn't change the units in which time is reported to
userspace by things such as /proc/stat, /proc/<pid>/stat, getrusage(),
times(), etc. Internally the various task and cpu times are stored in
timebase units, but they are converted to USER_HZ units (1/100th of a
second) when reported to userspace. Some precision is therefore lost
but there should not be any accumulating error, since the internal
accumulation is at full precision.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-24 07:06:59 +08:00
|
|
|
{
|
2006-07-03 19:36:01 +08:00
|
|
|
unsigned int *revmap;
|
2006-02-24 07:06:59 +08:00
|
|
|
|
2011-05-11 03:29:57 +08:00
|
|
|
if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_LINEAR))
|
|
|
|
return irq_find_mapping(host, hwirq);
|
2006-07-03 19:36:01 +08:00
|
|
|
|
|
|
|
/* Check revmap bounds */
|
|
|
|
if (unlikely(hwirq >= host->revmap_data.linear.size))
|
|
|
|
return irq_find_mapping(host, hwirq);
|
|
|
|
|
|
|
|
/* Check if revmap was allocated */
|
|
|
|
revmap = host->revmap_data.linear.revmap;
|
|
|
|
if (unlikely(revmap == NULL))
|
|
|
|
return irq_find_mapping(host, hwirq);
|
|
|
|
|
|
|
|
/* Fill up revmap with slow path if no mapping found */
|
|
|
|
if (unlikely(revmap[hwirq] == NO_IRQ))
|
|
|
|
revmap[hwirq] = irq_find_mapping(host, hwirq);
|
|
|
|
|
|
|
|
return revmap[hwirq];
|
2006-02-24 07:06:59 +08:00
|
|
|
}
|
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
unsigned int irq_alloc_virt(struct irq_host *host,
|
|
|
|
unsigned int count,
|
|
|
|
unsigned int hint)
|
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
unsigned int i, j, found = NO_IRQ;
|
2006-02-24 07:06:59 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
if (count == 0 || count > (irq_virq_count - NUM_ISA_INTERRUPTS))
|
|
|
|
return NO_IRQ;
|
|
|
|
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_lock_irqsave(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
|
|
|
|
/* Use hint for 1 interrupt if any */
|
|
|
|
if (count == 1 && hint >= NUM_ISA_INTERRUPTS &&
|
|
|
|
hint < irq_virq_count && irq_map[hint].host == NULL) {
|
|
|
|
found = hint;
|
|
|
|
goto hint_found;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Look for count consecutive numbers in the allocatable
|
|
|
|
* (non-legacy) space
|
|
|
|
*/
|
2006-08-02 08:48:50 +08:00
|
|
|
for (i = NUM_ISA_INTERRUPTS, j = 0; i < irq_virq_count; i++) {
|
|
|
|
if (irq_map[i].host != NULL)
|
|
|
|
j = 0;
|
|
|
|
else
|
|
|
|
j++;
|
|
|
|
|
|
|
|
if (j == count) {
|
|
|
|
found = i - count + 1;
|
|
|
|
break;
|
|
|
|
}
|
2006-07-03 19:36:01 +08:00
|
|
|
}
|
|
|
|
if (found == NO_IRQ) {
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
return NO_IRQ;
|
|
|
|
}
|
|
|
|
hint_found:
|
|
|
|
for (i = found; i < (found + count); i++) {
|
|
|
|
irq_map[i].hwirq = host->inval_irq;
|
|
|
|
smp_wmb();
|
|
|
|
irq_map[i].host = host;
|
|
|
|
}
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
return found;
|
|
|
|
}
|
|
|
|
|
|
|
|
void irq_free_virt(unsigned int virq, unsigned int count)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
unsigned long flags;
|
2006-07-03 19:36:01 +08:00
|
|
|
unsigned int i;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
WARN_ON(virq < NUM_ISA_INTERRUPTS);
|
|
|
|
WARN_ON(count == 0 || (virq + count) > irq_virq_count);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-05-25 04:34:18 +08:00
|
|
|
if (virq < NUM_ISA_INTERRUPTS) {
|
|
|
|
if (virq + count < NUM_ISA_INTERRUPTS)
|
|
|
|
return;
|
|
|
|
count -= NUM_ISA_INTERRUPTS - virq;
|
|
|
|
virq = NUM_ISA_INTERRUPTS;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (count > irq_virq_count || virq > irq_virq_count - count) {
|
|
|
|
if (virq > irq_virq_count)
|
|
|
|
return;
|
|
|
|
count = irq_virq_count - virq;
|
|
|
|
}
|
|
|
|
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_lock_irqsave(&irq_big_lock, flags);
|
2006-07-03 19:36:01 +08:00
|
|
|
for (i = virq; i < (virq + count); i++) {
|
|
|
|
struct irq_host *host;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-07-03 19:36:01 +08:00
|
|
|
host = irq_map[i].host;
|
|
|
|
irq_map[i].hwirq = host->inval_irq;
|
|
|
|
smp_wmb();
|
|
|
|
irq_map[i].host = NULL;
|
|
|
|
}
|
2010-02-18 10:22:24 +08:00
|
|
|
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-07-03 19:36:01 +08:00
|
|
|
|
2009-10-14 03:45:03 +08:00
|
|
|
int arch_early_irq_init(void)
|
2006-07-03 19:36:01 +08:00
|
|
|
{
|
2009-10-14 03:45:03 +08:00
|
|
|
return 0;
|
2006-07-03 19:36:01 +08:00
|
|
|
}
|
|
|
|
|
2007-08-28 16:47:57 +08:00
|
|
|
#ifdef CONFIG_VIRQ_DEBUG
|
|
|
|
static int virq_debug_show(struct seq_file *m, void *private)
|
|
|
|
{
|
|
|
|
unsigned long flags;
|
2009-03-10 22:45:54 +08:00
|
|
|
struct irq_desc *desc;
|
2007-08-28 16:47:57 +08:00
|
|
|
const char *p;
|
2010-09-13 17:47:40 +08:00
|
|
|
static const char none[] = "none";
|
2011-04-11 04:26:15 +08:00
|
|
|
void *data;
|
2007-08-28 16:47:57 +08:00
|
|
|
int i;
|
|
|
|
|
2011-04-11 04:26:15 +08:00
|
|
|
seq_printf(m, "%-5s %-7s %-15s %-18s %s\n", "virq", "hwirq",
|
|
|
|
"chip name", "chip data", "host name");
|
2007-08-28 16:47:57 +08:00
|
|
|
|
2009-10-14 03:44:56 +08:00
|
|
|
for (i = 1; i < nr_irqs; i++) {
|
2009-10-14 03:44:51 +08:00
|
|
|
desc = irq_to_desc(i);
|
2009-10-14 03:44:56 +08:00
|
|
|
if (!desc)
|
|
|
|
continue;
|
|
|
|
|
2009-11-17 23:46:45 +08:00
|
|
|
raw_spin_lock_irqsave(&desc->lock, flags);
|
2007-08-28 16:47:57 +08:00
|
|
|
|
|
|
|
if (desc->action && desc->action->handler) {
|
2011-03-07 22:00:20 +08:00
|
|
|
struct irq_chip *chip;
|
|
|
|
|
2007-08-28 16:47:57 +08:00
|
|
|
seq_printf(m, "%5d ", i);
|
2011-05-04 13:02:15 +08:00
|
|
|
seq_printf(m, "0x%05lx ", irq_map[i].hwirq);
|
2007-08-28 16:47:57 +08:00
|
|
|
|
2011-03-25 23:45:20 +08:00
|
|
|
chip = irq_desc_get_chip(desc);
|
2011-03-07 22:00:20 +08:00
|
|
|
if (chip && chip->name)
|
|
|
|
p = chip->name;
|
2007-08-28 16:47:57 +08:00
|
|
|
else
|
|
|
|
p = none;
|
|
|
|
seq_printf(m, "%-15s ", p);
|
|
|
|
|
2011-04-11 04:26:15 +08:00
|
|
|
data = irq_desc_get_chip_data(desc);
|
|
|
|
seq_printf(m, "0x%16p ", data);
|
|
|
|
|
2007-08-28 16:47:57 +08:00
|
|
|
if (irq_map[i].host && irq_map[i].host->of_node)
|
|
|
|
p = irq_map[i].host->of_node->full_name;
|
|
|
|
else
|
|
|
|
p = none;
|
|
|
|
seq_printf(m, "%s\n", p);
|
|
|
|
}
|
|
|
|
|
2009-11-17 23:46:45 +08:00
|
|
|
raw_spin_unlock_irqrestore(&desc->lock, flags);
|
2007-08-28 16:47:57 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int virq_debug_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
return single_open(file, virq_debug_show, inode->i_private);
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations virq_debug_fops = {
|
|
|
|
.open = virq_debug_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = single_release,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int __init irq_debugfs_init(void)
|
|
|
|
{
|
|
|
|
if (debugfs_create_file("virq_mapping", S_IRUGO, powerpc_debugfs_root,
|
2008-05-23 03:49:22 +08:00
|
|
|
NULL, &virq_debug_fops) == NULL)
|
2007-08-28 16:47:57 +08:00
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
__initcall(irq_debugfs_init);
|
|
|
|
#endif /* CONFIG_VIRQ_DEBUG */
|
|
|
|
|
2006-02-24 07:06:59 +08:00
|
|
|
#ifdef CONFIG_PPC64
|
2005-04-17 06:20:36 +08:00
|
|
|
static int __init setup_noirqdistrib(char *str)
|
|
|
|
{
|
|
|
|
distribute_irqs = 0;
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
__setup("noirqdistrib", setup_noirqdistrib);
|
2005-11-09 15:07:45 +08:00
|
|
|
#endif /* CONFIG_PPC64 */
|