Merge branch 'master'

This commit is contained in:
Steven Whitehouse 2006-07-05 08:27:42 -04:00
commit cf57a30843
662 changed files with 27205 additions and 9469 deletions

View File

@ -109,7 +109,7 @@
for most of the implementations. These functions can be replaced by the
board driver if necessary. Those functions are called via pointers in the
NAND chip description structure. The board driver can set the functions which
should be replaced by board dependend functions before calling nand_scan().
should be replaced by board dependent functions before calling nand_scan().
If the function pointer is NULL on entry to nand_scan() then the pointer
is set to the default function which is suitable for the detected chip type.
</para></listitem>
@ -133,7 +133,7 @@
[REPLACEABLE]</para><para>
Replaceable members hold hardware related functions which can be
provided by the board driver. The board driver can set the functions which
should be replaced by board dependend functions before calling nand_scan().
should be replaced by board dependent functions before calling nand_scan().
If the function pointer is NULL on entry to nand_scan() then the pointer
is set to the default function which is suitable for the detected chip type.
</para></listitem>
@ -156,9 +156,8 @@
<title>Basic board driver</title>
<para>
For most boards it will be sufficient to provide just the
basic functions and fill out some really board dependend
basic functions and fill out some really board dependent
members in the nand chip description structure.
See drivers/mtd/nand/skeleton for reference.
</para>
<sect1>
<title>Basic defines</title>
@ -1295,7 +1294,9 @@ in this page</entry>
</para>
!Idrivers/mtd/nand/nand_base.c
!Idrivers/mtd/nand/nand_bbt.c
!Idrivers/mtd/nand/nand_ecc.c
<!-- No internal functions for kernel-doc:
X!Idrivers/mtd/nand/nand_ecc.c
-->
</chapter>
<chapter id="credits">

View File

@ -0,0 +1,57 @@
IRQ-flags state tracing
started by Ingo Molnar <mingo@redhat.com>
the "irq-flags tracing" feature "traces" hardirq and softirq state, in
that it gives interested subsystems an opportunity to be notified of
every hardirqs-off/hardirqs-on, softirqs-off/softirqs-on event that
happens in the kernel.
CONFIG_TRACE_IRQFLAGS_SUPPORT is needed for CONFIG_PROVE_SPIN_LOCKING
and CONFIG_PROVE_RW_LOCKING to be offered by the generic lock debugging
code. Otherwise only CONFIG_PROVE_MUTEX_LOCKING and
CONFIG_PROVE_RWSEM_LOCKING will be offered on an architecture - these
are locking APIs that are not used in IRQ context. (the one exception
for rwsems is worked around)
Architecture support for this is certainly not in the "trivial"
category, because lots of low-level assembly code deals with irq-flags
state changes. But an architecture can be irq-flags-tracing enabled in a
rather straightforward and risk-free manner.
Architectures that want to support this need to do a couple of
code-organizational changes first:
- move their irq-flags manipulation code from their asm/system.h header
to asm/irqflags.h
- rename local_irq_disable()/etc. to raw_local_irq_disable()/etc., so that
the linux/irqflags.h code can inject callbacks and construct the
real local_irq_disable()/etc. APIs (see the sketch after these lists).
- add and enable TRACE_IRQFLAGS_SUPPORT in their arch level Kconfig file
and then a couple of functional changes are needed as well to implement
irq-flags-tracing support:
- in low-level entry code add (build-conditional) calls to the
trace_hardirqs_off()/trace_hardirqs_on() functions. The lock validator
closely guards whether the 'real' irq-flags state matches the 'virtual'
irq-flags state, and complains loudly (and turns itself off) if the
two do not match. Usually most of the time spent on arch support for
irq-flags-tracing goes into this stage: look at the lockdep
complaint, try to figure out the assembly code we did not cover yet,
fix and repeat. Once the system has booted up and works without a
lockdep complaint in the irq-flags-tracing functions, arch support is
complete.
- if the architecture has non-maskable interrupts then those need to be
excluded from the irq-tracing [and lock validation] mechanism via
lockdep_off()/lockdep_on().
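As a rough sketch of the renaming step above: once TRACE_IRQFLAGS_SUPPORT
is enabled, linux/irqflags.h builds the real APIs on top of the raw arch
ops approximately like this (simplified, not the verbatim header):

#ifdef CONFIG_TRACE_IRQFLAGS
#define local_irq_disable() \
	do { raw_local_irq_disable(); trace_hardirqs_off(); } while (0)
#define local_irq_enable() \
	do { trace_hardirqs_on(); raw_local_irq_enable(); } while (0)
#else
#define local_irq_disable()	raw_local_irq_disable()
#define local_irq_enable()	raw_local_irq_enable()
#endif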
In general there is no risk from having an incomplete irq-flags-tracing
implementation in an architecture: lockdep will detect that and will
turn itself off. I.e. the lock validator will still be reliable. There
should be no crashes due to irq-tracing bugs (except if the assembly
changes break other code by modifying conditions or registers that
shouldn't be modified).

View File

@ -435,6 +435,15 @@ running once the system is up.
debug [KNL] Enable kernel debugging (events log level).
debug_locks_verbose=
[KNL] verbose self-tests
Format: <0|1>
Print debugging info while doing the locking API
self-tests.
We default to 0 (no extra messages); setting it to
1 will print _a lot_ more information, normally
only useful to kernel developers.
decnet= [HW,NET]
Format: <area>[,<node>]
See also Documentation/networking/decnet.txt.

View File

@ -0,0 +1,197 @@
Runtime locking correctness validator
=====================================
started by Ingo Molnar <mingo@redhat.com>
additions by Arjan van de Ven <arjan@linux.intel.com>
Lock-class
----------
The basic object the validator operates upon is a 'class' of locks.
A class of locks is a group of locks that are logically the same with
respect to locking rules, even if the locks may have multiple (possibly
tens of thousands of) instantiations. For example, a lock in the inode
struct is one class, while each inode has its own instantiation of that
lock class.
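Schematically (illustrative fragment only; the real struct inode has many
more members):

struct inode {
	/* ... other members ... */
	spinlock_t	i_lock;	/* one lock-class, one instance per inode */
};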
The validator tracks the 'state' of lock-classes, and it tracks
dependencies between different lock-classes. The validator maintains a
rolling proof that the state and the dependencies are correct.
Unlike a lock instantiation, the lock-class itself never goes away: when
a lock-class is used for the first time after bootup it gets registered,
and all subsequent uses of that lock-class will be attached to this
lock-class.
State
-----
The validator tracks lock-class usage history into 5 separate state bits:
- 'ever held in hardirq context' [ == hardirq-safe ]
- 'ever held in softirq context' [ == softirq-safe ]
- 'ever held with hardirqs enabled' [ == hardirq-unsafe ]
- 'ever held with softirqs and hardirqs enabled' [ == softirq-unsafe ]
- 'ever used' [ == !unused ]
Single-lock state rules:
------------------------
A softirq-unsafe lock-class is automatically hardirq-unsafe as well. The
following states are exclusive, and only one of them is allowed to be
set for any lock-class:
<hardirq-safe> and <hardirq-unsafe>
<softirq-safe> and <softirq-unsafe>
The validator detects and reports lock usage that violates these
single-lock state rules.
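A minimal hypothetical fragment that triggers such a report, by using one
lock-class both in hardirq context and with hardirqs enabled (all names
here are made up for illustration):

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(demo_lock);

/* Process context with hardirqs enabled: the class becomes hardirq-unsafe. */
static void demo_process_path(void)
{
	spin_lock(&demo_lock);
	/* ... critical section ... */
	spin_unlock(&demo_lock);
}

/* Hardirq context: the same class becomes hardirq-safe, so the validator
 * reports the conflicting <hardirq-safe>/<hardirq-unsafe> usage. */
static irqreturn_t demo_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	spin_lock(&demo_lock);
	spin_unlock(&demo_lock);
	return IRQ_HANDLED;
}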
Multi-lock dependency rules:
----------------------------
The same lock-class must not be acquired twice, because this could lead
to lock recursion deadlocks.
Furthermore, two locks may not be taken in opposite orders:
<L1> -> <L2>
<L2> -> <L1>
because this could lead to lock inversion deadlocks. (The validator
finds such dependencies in arbitrary complexity, i.e. there can be any
other locking sequence between the acquire-lock operations, the
validator will still track all dependencies between locks.)
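For example, the classic AB-BA pattern (hypothetical code; the two paths
never need to actually race for the dependency to be recorded and the
inversion reported):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);
static DEFINE_SPINLOCK(lock_b);

static void path_one(void)	/* records the dependency A -> B */
{
	spin_lock(&lock_a);
	spin_lock(&lock_b);
	spin_unlock(&lock_b);
	spin_unlock(&lock_a);
}

static void path_two(void)	/* records B -> A: reported as an inversion */
{
	spin_lock(&lock_b);
	spin_lock(&lock_a);
	spin_unlock(&lock_a);
	spin_unlock(&lock_b);
}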
Furthermore, the following usage-based lock dependencies are not allowed
between any two lock-classes:
<hardirq-safe> -> <hardirq-unsafe>
<softirq-safe> -> <softirq-unsafe>
The first rule comes from the fact that a hardirq-safe lock could be
taken by a hardirq context, interrupting a hardirq-unsafe lock - and
thus could result in a lock inversion deadlock. Likewise, a softirq-safe
lock could be taken by a softirq context, interrupting a softirq-unsafe
lock.
The above rules are enforced for any locking sequence that occurs in the
kernel: when acquiring a new lock, the validator checks whether there is
any rule violation between the new lock and any of the held locks.
When a lock-class changes its state, the following aspects of the above
dependency rules are enforced:
- if a new hardirq-safe lock is discovered, we check whether it
took any hardirq-unsafe lock in the past.
- if a new softirq-safe lock is discovered, we check whether it took
any softirq-unsafe lock in the past.
- if a new hardirq-unsafe lock is discovered, we check whether any
hardirq-safe lock took it in the past.
- if a new softirq-unsafe lock is discovered, we check whether any
softirq-safe lock took it in the past.
(Again, we also do these checks on the basis that an interrupt context
could interrupt _any_ of the irq-unsafe or hardirq-unsafe locks, which
could lead to a lock inversion deadlock - even if that lock scenario did
not trigger in practice yet.)
Exception: Nested data dependencies leading to nested locking
-------------------------------------------------------------
There are a few cases where the Linux kernel acquires more than one
instance of the same lock-class. Such cases typically happen when there
is some sort of hierarchy within objects of the same type. In these
cases there is an inherent "natural" ordering between the two objects
(defined by the properties of the hierarchy), and the kernel grabs the
locks in this fixed order on each of the objects.
An example of such an object hierarchy that results in "nested locking"
is that of a "whole disk" block-dev object and a "partition" block-dev
object; the partition is "part of" the whole device and as long as one
always takes the whole disk lock as a higher lock than the partition
lock, the lock ordering is fully correct. The validator does not
automatically detect this natural ordering, as the locking rule behind
the ordering is not static.
In order to teach the validator about this correct usage model, new
versions of the various locking primitives were added that allow you to
specify a "nesting level". An example call, for the block device mutex,
looks like this:
enum bdev_bd_mutex_lock_class
{
BD_MUTEX_NORMAL,
BD_MUTEX_WHOLE,
BD_MUTEX_PARTITION
};
mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);
In this case the locking is done on a bdev object that is known to be a
partition.
The validator treats a lock that is taken in such a nested fashion as a
separate (sub)class for the purposes of validation.
Note: When changing code to use the _nested() primitives, be careful and
check really thoroughly that the hierarchy is correctly mapped; otherwise
you can get false positives or false negatives.
Proof of 100% correctness:
--------------------------
The validator achieves perfect, mathematical 'closure' (proof of locking
correctness) in the sense that for every simple, standalone single-task
locking sequence that occurred at least once during the lifetime of the
kernel, the validator proves with 100% certainty that no
combination and timing of these locking sequences can cause any class of
lock related deadlock. [*]
I.e. complex multi-CPU and multi-task locking scenarios do not have to
occur in practice to prove a deadlock: only the simple 'component'
locking chains have to occur at least once (anytime, in any
task/context) for the validator to be able to prove correctness. (For
example, complex deadlocks that would normally need more than 3 CPUs and
a very unlikely constellation of tasks, irq-contexts and timings to
occur, can be detected on a plain, lightly loaded single-CPU system as
well!)
This radically decreases the complexity of locking related QA of the
kernel: what has to be done during QA is to trigger as many "simple"
single-task locking dependencies in the kernel as possible, at least
once, to prove locking correctness - instead of having to trigger every
possible combination of locking interaction between CPUs, combined with
every possible hardirq and softirq nesting scenario (which is impossible
to do in practice).
[*] assuming that the validator itself is 100% correct, and no other
part of the system corrupts the state of the validator in any way.
We also assume that all NMI/SMM paths [which could interrupt
even hardirq-disabled codepaths] are correct and do not interfere
with the validator. We also assume that the 64-bit 'chain hash'
value is unique for every lock-chain in the system. Also, lock
recursion must not be higher than 20.
Performance:
------------
The above rules require _massive_ amounts of runtime checking. If we did
that for every lock taken and for every irqs-enable event, it would
render the system practically unusably slow. The complexity of checking
is O(N^2), so even with just a few hundred lock-classes we'd have to do
tens of thousands of checks for every event.
This problem is solved by checking any given 'locking scenario' (unique
sequence of locks taken after each other) only once. A simple stack of
held locks is maintained, and a lightweight 64-bit hash value is
calculated, which is unique for every lock chain. The hash value,
when the chain is validated for the first time, is then put into a hash
table, which can be checked in a lock-free manner. If the
locking chain occurs again later on, the hash table tells us that we
don't have to validate the chain again.
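The idea behind the chain hash, as an illustrative sketch only (the real
lockdep hash function and data structures differ in detail):

#include <linux/types.h>

/* Fold the class keys of the currently held locks into one 64-bit
 * value identifying this locking sequence: */
static u64 demo_chain_hash(const unsigned long *class_keys, unsigned int depth)
{
	u64 hash = 0;
	unsigned int i;

	for (i = 0; i < depth; i++)
		hash = (hash << 11) ^ (hash >> 53) ^ class_keys[i];
	return hash;
}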

View File

@ -0,0 +1,143 @@
/proc/sys/net/ipv4/vs/* Variables:
am_droprate - INTEGER
default 10
It sets the always mode drop rate, which is used in the mode 3
of the drop_rate defense.
amemthresh - INTEGER
default 1024
It sets the available memory threshold (in pages), which is
used in the automatic modes of defense. When there is not
enough available memory, the respective strategy will be
enabled and the variable is automatically set to 2; otherwise
the strategy is disabled and the variable is set to 1.
cache_bypass - BOOLEAN
0 - disabled (default)
not 0 - enabled
If it is enabled, packets are forwarded to the original destination
directly when no cache server is available and the destination
address is not local (iph->daddr is RTN_UNICAST). It is mostly
used in transparent web cache clusters.
debug_level - INTEGER
0 - transmission error messages (default)
1 - non-fatal error messages
2 - configuration
3 - destination trash
4 - drop entry
5 - service lookup
6 - scheduling
7 - connection new/expire, lookup and synchronization
8 - state transition
9 - binding destination, template checks and applications
10 - IPVS packet transmission
11 - IPVS packet handling (ip_vs_in/ip_vs_out)
12 or more - packet traversal
Only available when IPVS is compiled with CONFIG_IPVS_DEBUG enabled.
Higher debugging levels include the messages for lower debugging
levels, so setting debug level 2 includes level 0, 1 and 2
messages. Thus, logging becomes more and more verbose the higher
the level.
drop_entry - INTEGER
0 - disabled (default)
The drop_entry defense is to randomly drop entries in the
connection hash table, just in order to collect back some
memory for new connections. In the current code, the
drop_entry procedure can be activated every second; it then
randomly scans 1/32 of the whole table and drops entries that are in
the SYN-RECV/SYNACK state, which should be effective against
syn-flooding attacks.
The valid values of drop_entry are from 0 to 3, where 0 means
that this strategy is always disabled, 1 and 2 mean automatic
modes (when there is not enough available memory, the strategy
is enabled and the variable is automatically set to 2;
otherwise the strategy is disabled and the variable is set to
1), and 3 means that the strategy is always enabled.
drop_packet - INTEGER
0 - disabled (default)
The drop_packet defense is designed to drop 1/rate packets
before forwarding them to real servers. If the rate is 1, then
drop all the incoming packets.
The value definition is the same as that of drop_entry. In
the automatic mode, the rate is determined by the following
formula: rate = amemthresh / (amemthresh - available_memory)
when available memory is less than the available memory
threshold. When mode 3 is set, the always mode drop rate
is controlled by /proc/sys/net/ipv4/vs/am_droprate.
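For example, with amemthresh at its default of 1024 pages and 768 pages
available, rate = 1024 / (1024 - 768) = 4, so one in every four packets
is dropped.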
expire_nodest_conn - BOOLEAN
0 - disabled (default)
not 0 - enabled
With the default value of 0, the load balancer will silently drop
packets when their destination server is not available. This may
be useful when a user-space monitoring program deletes the
destination server (because of server overload or wrong
detection) and adds the server back later, so that the connections
to the server can continue.
If this feature is enabled, the load balancer will expire the
connection immediately when a packet arrives and its
destination server is not available, then the client program
will be notified that the connection is closed. This is
equivalent to the feature some people require: flushing
connections when their destination is not available.
expire_quiescent_template - BOOLEAN
0 - disabled (default)
not 0 - enabled
When set to a non-zero value, the load balancer will expire
persistent templates when the destination server is quiescent.
This may be useful when a user makes a destination server
quiescent by setting its weight to 0 and it is desired that
subsequent otherwise persistent connections are sent to a
different destination server. By default new persistent
connections are allowed to quiescent destination servers.
If this feature is enabled, the load balancer will expire the
persistence template if it is to be used to schedule a new
connection and the destination server is quiescent.
nat_icmp_send - BOOLEAN
0 - disabled (default)
not 0 - enabled
It controls sending ICMP error messages (ICMP_DEST_UNREACH)
for VS/NAT when the load balancer receives packets from real
servers but the connection entries don't exist.
secure_tcp - INTEGER
0 - disabled (default)
The secure_tcp defense is to use a more complicated state
transition table and some possible short timeouts of each
state. In VS/NAT, it delays entering the ESTABLISHED state
until the real server starts to send data and ACK packets
(after the 3-way handshake).
The value definition is the same as that of drop_entry or
drop_packet.
sync_threshold - INTEGER
default 3
It sets the synchronization threshold, which is the minimum number
of incoming packets that a connection needs to receive before
the connection will be synchronized. A connection will be
synchronized every time the number of its incoming packets
modulo 50 equals the threshold. The range of the threshold is
from 0 to 49.
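As a sketch of that rule (hypothetical names; the actual check lives in
the IPVS connection handling code):

/* Synchronize a connection whenever its packet count modulo 50
 * hits the threshold: */
if (cp->in_pkts % 50 == sync_threshold)
	ip_vs_sync_conn(cp);	/* illustrative call */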

View File

@ -1436,9 +1436,9 @@ platforms are moved over to use the flattened-device-tree model.
interrupts = <1d 3>;
interrupt-parent = <40000>;
num-channels = <4>;
channel-fifo-len = <24>;
channel-fifo-len = <18>;
exec-units-mask = <000000fe>;
descriptor-types-mask = <073f1127>;
descriptor-types-mask = <012b0ebf>;
};

View File

@ -1,4 +1,20 @@
1 Release Date : Sun May 14 22:49:52 PDT 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.03.01
3 Older Version : 00.00.02.04
i. Added support for ZCR controller.
New device id 0x413 added.
ii. Bug fix : Disable controller interrupt before firing INIT cmd to FW.
Interrupt is enabled after required initialization is over.
This is done to ensure that the driver is ready to handle interrupts when
they are generated by the controller.
-Sumant Patro <Sumant.Patro@lsil.com>
1 Release Date : Wed Feb 03 14:31:44 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.04

View File

@ -28,6 +28,7 @@ Currently, these files are in /proc/sys/vm:
- block_dump
- drop-caches
- zone_reclaim_mode
- min_unmapped_ratio
- panic_on_oom
==============================================================
@ -168,6 +169,19 @@ in all nodes of the system.
=============================================================
min_unmapped_ratio:
This is available only on NUMA kernels.
A percentage of the file backed pages in each zone. Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to ensure that a minimal amount of local pages is still available for
file I/O even if the node is overallocated.
The default is 1 percent.
=============================================================
panic_on_oom
This enables or disables panic on out-of-memory feature. If this is set to 1,

View File

@ -871,6 +871,8 @@ S: Maintained
DOCBOOK FOR DOCUMENTATION
P: Martin Waitz
M: tali@admingilde.org
P: Randy Dunlap
M: rdunlap@xenotime.net
T: git http://tali.admingilde.org/git/linux-docbook.git
S: Maintained
@ -2316,6 +2318,14 @@ M: promise@pnd-pc.demon.co.uk
W: http://www.pnd-pc.demon.co.uk/promise/
S: Maintained
PVRUSB2 VIDEO4LINUX DRIVER
P: Mike Isely
M: isely@pobox.com
L: pvrusb2@isely.net
L: video4linux-list@redhat.com
W: http://www.isely.net/pvrusb2/
S: Maintained
PXA2xx SUPPORT
P: Nicolas Pitre
M: nico@cam.org

View File

@ -309,6 +309,9 @@ CPPFLAGS := -D__KERNEL__ $(LINUXINCLUDE)
CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -fno-common
# Force gcc to behave correctly even for buggy distributions
CFLAGS += $(call cc-option, -fno-stack-protector-all \
-fno-stack-protector)
AFLAGS := -D__ASSEMBLY__
# Read KERNELRELEASE from include/config/kernel.release (if it exists)
@ -809,8 +812,8 @@ endif
# prepare2 creates a makefile if using a separate output directory
prepare2: prepare3 outputmakefile
prepare1: prepare2 include/linux/version.h include/asm \
include/config/auto.conf
prepare1: prepare2 include/linux/version.h include/linux/utsrelease.h \
include/asm include/config/auto.conf
ifneq ($(KBUILD_MODULES),)
$(Q)mkdir -p $(MODVERDIR)
$(Q)rm -f $(MODVERDIR)/*
@ -845,27 +848,47 @@ include/asm:
# needs to be updated, so this check is forced on all builds
uts_len := 64
define filechk_version.h
define filechk_utsrelease.h
if [ `echo -n "$(KERNELRELEASE)" | wc -c ` -gt $(uts_len) ]; then \
echo '"$(KERNELRELEASE)" exceeds $(uts_len) characters' >&2; \
exit 1; \
fi; \
(echo \#define UTS_RELEASE \"$(KERNELRELEASE)\"; \
echo \#define LINUX_VERSION_CODE `expr $(VERSION) \\* 65536 + $(PATCHLEVEL) \\* 256 + $(SUBLEVEL)`; \
echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))'; \
)
echo '"$(KERNELRELEASE)" exceeds $(uts_len) characters' >&2; \
exit 1; \
fi; \
(echo \#define UTS_RELEASE \"$(KERNELRELEASE)\";)
endef
include/linux/version.h: $(srctree)/Makefile include/config/kernel.release FORCE
define filechk_version.h
(echo \#define LINUX_VERSION_CODE $(shell \
expr $(VERSION) \* 65536 + $(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))';)
endef
include/linux/version.h: $(srctree)/Makefile FORCE
$(call filechk,version.h)
include/linux/utsrelease.h: include/config/kernel.release FORCE
$(call filechk,utsrelease.h)
# ---------------------------------------------------------------------------
PHONY += depend dep
depend dep:
@echo '*** Warning: make $@ is unnecessary now.'
# ---------------------------------------------------------------------------
# Kernel headers
INSTALL_HDR_PATH=$(MODLIB)/abi
export INSTALL_HDR_PATH
PHONY += headers_install
headers_install: include/linux/version.h
$(Q)unifdef -Ux /dev/null
$(Q)rm -rf $(INSTALL_HDR_PATH)/include
$(Q)$(MAKE) -rR -f $(srctree)/scripts/Makefile.headersinst obj=include
PHONY += headers_check
headers_check: headers_install
$(Q)$(MAKE) -rR -f $(srctree)/scripts/Makefile.headersinst obj=include HDRCHECK=1
# ---------------------------------------------------------------------------
# Modules
@ -952,7 +975,8 @@ CLEAN_FILES += vmlinux System.map \
# Directories & files removed with 'make mrproper'
MRPROPER_DIRS += include/config include2
MRPROPER_FILES += .config .config.old include/asm .version .old_version \
include/linux/autoconf.h include/linux/version.h \
include/linux/autoconf.h include/linux/version.h \
include/linux/utsrelease.h \
Module.symvers tags TAGS cscope*
# clean - Delete most, but leave enough to build external modules
@ -1039,6 +1063,8 @@ help:
@echo ' cscope - Generate cscope index'
@echo ' kernelrelease - Output the release version string'
@echo ' kernelversion - Output the version stored in Makefile'
@echo ' headers_install - Install sanitised kernel headers to INSTALL_HDR_PATH'
@echo ' (default: /lib/modules/$$VERSION/abi)'
@echo ''
@echo 'Static analysers'
@echo ' checkstack - Generate a list of stack hogs'

View File

@ -9,7 +9,7 @@
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <linux/mm.h>
#include <asm/system.h>

View File

@ -11,7 +11,7 @@
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <linux/mm.h>
#include <asm/system.h>

View File

@ -7,7 +7,7 @@
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <linux/mm.h>
#include <asm/system.h>

View File

@ -474,7 +474,7 @@ out:
*/
unsigned long
thread_saved_pc(task_t *t)
thread_saved_pc(struct task_struct *t)
{
unsigned long base = (unsigned long)task_stack_page(t);
unsigned long fp, sp = task_thread_info(t)->pcb.ksp;

View File

@ -883,7 +883,7 @@ static ssize_t ecard_show_resources(struct device *dev, struct device_attribute
int i;
for (i = 0; i < ECARD_NUM_RESOURCES; i++)
str += sprintf(str, "%08lx %08lx %08lx\n",
str += sprintf(str, "%08x %08x %08lx\n",
ec->resource[i].start,
ec->resource[i].end,
ec->resource[i].flags);

View File

@ -344,7 +344,7 @@ static void __init setup_processor(void)
cpu_cache = *list->cache;
#endif
printk("CPU: %s [%08x] revision %d (ARMv%s), cr=%08x\n",
printk("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
cpu_name, processor_id, (int)processor_id & 15,
proc_arch[cpu_architecture()], cr_alignment);

View File

@ -98,9 +98,22 @@ isa_irq_handler(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
desc_handle_irq(isa_irq, desc, regs);
}
static struct irqaction irq_cascade = { .handler = no_action, .name = "cascade", };
static struct resource pic1_resource = { "pic1", 0x20, 0x3f };
static struct resource pic2_resource = { "pic2", 0xa0, 0xbf };
static struct irqaction irq_cascade = {
.handler = no_action,
.name = "cascade",
};
static struct resource pic1_resource = {
.name = "pic1",
.start = 0x20,
.end = 0x3f,
};
static struct resource pic2_resource = {
.name = "pic2",
.start = 0xa0,
.end = 0xbf,
};
void __init isa_init_irq(unsigned int host_irq)
{

View File

@ -303,7 +303,6 @@ __ioremap_pfn(unsigned long pfn, unsigned long offset, size_t size,
int err;
unsigned long addr;
struct vm_struct * area;
unsigned int cr = get_cr();
/*
* High mappings must be supersection aligned
@ -317,7 +316,7 @@ __ioremap_pfn(unsigned long pfn, unsigned long offset, size_t size,
addr = (unsigned long)area->addr;
#ifndef CONFIG_SMP
if ((((cpu_architecture() >= CPU_ARCH_ARMv6) && (cr & CR_XP)) ||
if ((((cpu_architecture() >= CPU_ARCH_ARMv6) && (get_cr() & CR_XP)) ||
cpu_is_xsc3()) &&
!((__pfn_to_phys(pfn) | size | addr) & ~SUPERSECTION_MASK)) {
area->flags |= VM_ARM_SECTION_MAPPING;
@ -369,6 +368,7 @@ void __iounmap(void __iomem *addr)
addr = (void __iomem *)(PAGE_MASK & (unsigned long)addr);
#ifndef CONFIG_SMP
/*
* If this is a section based mapping we need to handle it
specially as the VM subsystem does not know how to handle
@ -390,6 +390,7 @@ void __iounmap(void __iomem *addr)
}
}
write_unlock(&vmlist_lock);
#endif
if (!section_mapping)
vunmap(addr);

View File

@ -34,6 +34,8 @@
#include <asm/procinfo.h>
#include <asm/ptrace.h>
#include "proc-macros.S"
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger

View File

@ -34,6 +34,8 @@
#include <asm/procinfo.h>
#include <asm/ptrace.h>
#include "proc-macros.S"
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger

View File

@ -23,6 +23,8 @@
#include <asm/procinfo.h>
#include <asm/ptrace.h>
#include "proc-macros.S"
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger

View File

@ -23,6 +23,8 @@
#include <asm/procinfo.h>
#include <asm/ptrace.h>
#include "proc-macros.S"
/*
* This is the maximum size of an area which will be invalidated
* using the single invalidate entry instructions. Anything larger

View File

@ -454,7 +454,8 @@ __arm925_setup:
mcr p15, 7, r0, c15, c0, 0
#endif
adr r5, {r5, r6}
adr r5, arm925_crval
ldmia r5, {r5, r6}
mrc p15, 0, r0, c1, c0 @ get control register v4
bic r0, r0, r5
orr r0, r0, r6

View File

@ -10,7 +10,7 @@
* 2 of the License, or (at your option) any later version.
*/
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/delay.h>

View File

@ -18,6 +18,14 @@ config GENERIC_TIME
bool
default y
config LOCKDEP_SUPPORT
bool
default y
config STACKTRACE_SUPPORT
bool
default y
config SEMAPHORE_SLEEPERS
bool
default y

View File

@ -1,5 +1,9 @@
menu "Kernel hacking"
config TRACE_IRQFLAGS_SUPPORT
bool
default y
source "lib/Kconfig.debug"
config EARLY_PRINTK
@ -31,15 +35,6 @@ config DEBUG_STACK_USAGE
This option will slow down process creation somewhat.
config STACK_BACKTRACE_COLS
int "Stack backtraces per line" if DEBUG_KERNEL
range 1 3
default 2
help
Selects how many stack backtrace entries per line to display.
This can save screen space when displaying traces.
comment "Page alloc debug is incompatible with Software Suspend on i386"
depends on DEBUG_KERNEL && SOFTWARE_SUSPEND

View File

@ -47,7 +47,7 @@
*/
#include <asm/segment.h>
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <linux/compile.h>
#include <asm/boot.h>
#include <asm/e820.h>

View File

@ -9,6 +9,7 @@ obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
pci-dma.o i386_ksyms.o i387.o bootflag.o \
quirks.o i8237.o topology.o alternative.o i8253.o tsc.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += cpu/
obj-y += acpi/
obj-$(CONFIG_X86_BIOS_REBOOT) += reboot.o

View File

@ -303,6 +303,16 @@ void alternatives_smp_switch(int smp)
struct smp_alt_module *mod;
unsigned long flags;
#ifdef CONFIG_LOCKDEP
/*
* A not yet fixed binutils section handling bug prevents
* alternatives-replacement from working reliably, so turn
* it off:
*/
printk("lockdep: not fixing up alternatives.\n");
return;
#endif
if (no_replacement || smp_alt_once)
return;
BUG_ON(!smp && (num_online_cpus() > 1));

View File

@ -167,6 +167,7 @@ static int cpuid_class_device_create(int i)
return err;
}
#ifdef CONFIG_HOTPLUG_CPU
static int cpuid_class_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
@ -186,6 +187,7 @@ static struct notifier_block __cpuinitdata cpuid_class_cpu_notifier =
{
.notifier_call = cpuid_class_cpu_callback,
};
#endif /* !CONFIG_HOTPLUG_CPU */
static int __init cpuid_init(void)
{
@ -208,7 +210,7 @@ static int __init cpuid_init(void)
if (err != 0)
goto out_class;
}
register_cpu_notifier(&cpuid_class_cpu_notifier);
register_hotcpu_notifier(&cpuid_class_cpu_notifier);
err = 0;
goto out;
@ -233,7 +235,7 @@ static void __exit cpuid_exit(void)
class_device_destroy(cpuid_class, MKDEV(CPUID_MAJOR, cpu));
class_destroy(cpuid_class);
unregister_chrdev(CPUID_MAJOR, "cpu/cpuid");
unregister_cpu_notifier(&cpuid_class_cpu_notifier);
unregister_hotcpu_notifier(&cpuid_class_cpu_notifier);
}
module_init(cpuid_init);

View File

@ -42,6 +42,7 @@
#include <linux/linkage.h>
#include <asm/thread_info.h>
#include <asm/irqflags.h>
#include <asm/errno.h>
#include <asm/segment.h>
#include <asm/smp.h>
@ -76,12 +77,21 @@ NT_MASK = 0x00004000
VM_MASK = 0x00020000
#ifdef CONFIG_PREEMPT
#define preempt_stop cli
#define preempt_stop cli; TRACE_IRQS_OFF
#else
#define preempt_stop
#define resume_kernel restore_nocheck
#endif
.macro TRACE_IRQS_IRET
#ifdef CONFIG_TRACE_IRQFLAGS
testl $IF_MASK,EFLAGS(%esp) # interrupts off?
jz 1f
TRACE_IRQS_ON
1:
#endif
.endm
#ifdef CONFIG_VM86
#define resume_userspace_sig check_userspace
#else
@ -257,6 +267,10 @@ ENTRY(sysenter_entry)
CFI_REGISTER esp, ebp
movl TSS_sysenter_esp0(%esp),%esp
sysenter_past_esp:
/*
* No need to follow this irqs on/off section: the syscall
* disabled irqs and here we enable it straight after entry:
*/
sti
pushl $(__USER_DS)
CFI_ADJUST_CFA_OFFSET 4
@ -303,6 +317,7 @@ sysenter_past_esp:
call *sys_call_table(,%eax,4)
movl %eax,EAX(%esp)
cli
TRACE_IRQS_OFF
movl TI_flags(%ebp), %ecx
testw $_TIF_ALLWORK_MASK, %cx
jne syscall_exit_work
@ -310,6 +325,7 @@ sysenter_past_esp:
movl EIP(%esp), %edx
movl OLDESP(%esp), %ecx
xorl %ebp,%ebp
TRACE_IRQS_ON
sti
sysexit
CFI_ENDPROC
@ -339,6 +355,7 @@ syscall_exit:
cli # make sure we don't miss an interrupt
# setting need_resched or sigpending
# between sampling and the iret
TRACE_IRQS_OFF
movl TI_flags(%ebp), %ecx
testw $_TIF_ALLWORK_MASK, %cx # current->work
jne syscall_exit_work
@ -355,12 +372,15 @@ restore_all:
CFI_REMEMBER_STATE
je ldt_ss # returning to user-space with LDT SS
restore_nocheck:
TRACE_IRQS_IRET
restore_nocheck_notrace:
RESTORE_REGS
addl $4, %esp
CFI_ADJUST_CFA_OFFSET -4
1: iret
.section .fixup,"ax"
iret_exc:
TRACE_IRQS_ON
sti
pushl $0 # no error code
pushl $do_iret_error
@ -386,11 +406,13 @@ ldt_ss:
subl $8, %esp # reserve space for switch16 pointer
CFI_ADJUST_CFA_OFFSET 8
cli
TRACE_IRQS_OFF
movl %esp, %eax
/* Set up the 16bit stack frame with switch32 pointer on top,
* and a switch16 pointer on top of the current frame. */
call setup_x86_bogus_stack
CFI_ADJUST_CFA_OFFSET -8 # frame has moved
TRACE_IRQS_IRET
RESTORE_REGS
lss 20+4(%esp), %esp # switch to 16bit stack
1: iret
@ -411,6 +433,7 @@ work_resched:
cli # make sure we don't miss an interrupt
# setting need_resched or sigpending
# between sampling and the iret
TRACE_IRQS_OFF
movl TI_flags(%ebp), %ecx
andl $_TIF_WORK_MASK, %ecx # is there any work to be done other
# than syscall tracing?
@ -462,6 +485,7 @@ syscall_trace_entry:
syscall_exit_work:
testb $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SINGLESTEP), %cl
jz work_pending
TRACE_IRQS_ON
sti # could let do_syscall_trace() call
# schedule() instead
movl %esp, %eax
@ -535,9 +559,14 @@ ENTRY(irq_entries_start)
vector=vector+1
.endr
/*
* the CPU automatically disables interrupts when executing an IRQ vector,
* so IRQ-flags tracing has to follow that:
*/
ALIGN
common_interrupt:
SAVE_ALL
TRACE_IRQS_OFF
movl %esp,%eax
call do_IRQ
jmp ret_from_intr
@ -549,9 +578,10 @@ ENTRY(name) \
pushl $~(nr); \
CFI_ADJUST_CFA_OFFSET 4; \
SAVE_ALL; \
TRACE_IRQS_OFF \
movl %esp,%eax; \
call smp_/**/name; \
jmp ret_from_intr; \
jmp ret_from_intr; \
CFI_ENDPROC
/* The include is where all of the SMP etc. interrupts come from */
@ -726,7 +756,7 @@ nmi_stack_correct:
xorl %edx,%edx # zero error code
movl %esp,%eax # pt_regs pointer
call do_nmi
jmp restore_all
jmp restore_nocheck_notrace
CFI_ENDPROC
nmi_stack_fixup:

View File

@ -166,7 +166,7 @@ void irq_ctx_init(int cpu)
irqctx->tinfo.task = NULL;
irqctx->tinfo.exec_domain = NULL;
irqctx->tinfo.cpu = cpu;
irqctx->tinfo.preempt_count = SOFTIRQ_OFFSET;
irqctx->tinfo.preempt_count = 0;
irqctx->tinfo.addr_limit = MAKE_MM_SEG(0);
softirq_ctx[cpu] = irqctx;
@ -211,6 +211,10 @@ asmlinkage void do_softirq(void)
: "0"(isp)
: "memory", "cc", "edx", "ecx", "eax"
);
/*
* Shouldn't happen, we returned above if in_interrupt():
*/
WARN_ON_ONCE(softirq_count());
}
local_irq_restore(flags);

View File

@ -107,7 +107,7 @@ int nmi_active;
static __init void nmi_cpu_busy(void *data)
{
volatile int *endflag = data;
local_irq_enable();
local_irq_enable_in_hardirq();
/* Intentionally don't use cpu_relax here. This is
to make sure that the performance counter really ticks,
even if there is a simulator or similar that catches the

View File

@ -0,0 +1,98 @@
/*
* arch/i386/kernel/stacktrace.c
*
* Stack trace management functions
*
* Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
*/
#include <linux/sched.h>
#include <linux/stacktrace.h>
static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
{
return p > (void *)tinfo &&
p < (void *)tinfo + THREAD_SIZE - 3;
}
/*
* Save stack-backtrace addresses into a stack_trace buffer:
*/
static inline unsigned long
save_context_stack(struct stack_trace *trace, unsigned int skip,
struct thread_info *tinfo, unsigned long *stack,
unsigned long ebp)
{
unsigned long addr;
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
if (!skip)
trace->entries[trace->nr_entries++] = addr;
else
skip--;
if (trace->nr_entries >= trace->max_entries)
break;
/*
* break out of recursive entries (such as
* end_of_stack_stop_unwind_function):
*/
if (ebp == *(unsigned long *)ebp)
break;
ebp = *(unsigned long *)ebp;
}
#else
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack++;
if (__kernel_text_address(addr)) {
if (!skip)
trace->entries[trace->nr_entries++] = addr;
else
skip--;
if (trace->nr_entries >= trace->max_entries)
break;
}
}
#endif
return ebp;
}
/*
* Save stack-backtrace addresses into a stack_trace buffer.
* If all_contexts is set, all contexts (hardirq, softirq and process)
* are saved. If not set then only the current context is saved.
*/
void save_stack_trace(struct stack_trace *trace,
struct task_struct *task, int all_contexts,
unsigned int skip)
{
unsigned long ebp;
unsigned long *stack = &ebp;
WARN_ON(trace->nr_entries || !trace->max_entries);
if (!task || task == current) {
/* Grab ebp right from our regs: */
asm ("movl %%ebp, %0" : "=r" (ebp));
} else {
/* ebp is the last reg pushed by switch_to(): */
ebp = *(unsigned long *) task->thread.esp;
}
while (1) {
struct thread_info *context = (struct thread_info *)
((unsigned long)stack & (~(THREAD_SIZE - 1)));
ebp = save_context_stack(trace, skip, context, stack, ebp);
stack = (unsigned long *)context->previous_esp;
if (!all_contexts || !stack ||
trace->nr_entries >= trace->max_entries)
break;
trace->entries[trace->nr_entries++] = ULONG_MAX;
if (trace->nr_entries >= trace->max_entries)
break;
}
}
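A hypothetical caller of this interface might look as follows (struct
stack_trace here matches include/linux/stacktrace.h in this tree; the
function and buffer names are made up):

#include <linux/stacktrace.h>
#include <linux/kallsyms.h>

#define DEMO_TRACE_DEPTH 16

static void demo_dump_current_stack(void)
{
	unsigned long entries[DEMO_TRACE_DEPTH];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= DEMO_TRACE_DEPTH,
		.entries	= entries,
	};
	unsigned int i;

	/* Current task, current context only, skip no entries: */
	save_stack_trace(&trace, NULL, 0, 0);

	for (i = 0; i < trace.nr_entries; i++)
		print_symbol("%s\n", trace.entries[i]);
}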

View File

@ -115,28 +115,13 @@ static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
}
/*
* Print CONFIG_STACK_BACKTRACE_COLS address/symbol entries per line.
* Print one address/symbol entries per line.
*/
static inline int print_addr_and_symbol(unsigned long addr, char *log_lvl,
int printed)
static inline void print_addr_and_symbol(unsigned long addr, char *log_lvl)
{
if (!printed)
printk(log_lvl);
#if CONFIG_STACK_BACKTRACE_COLS == 1
printk(" [<%08lx>] ", addr);
#else
printk(" <%08lx> ", addr);
#endif
print_symbol("%s", addr);
printed = (printed + 1) % CONFIG_STACK_BACKTRACE_COLS;
if (printed)
printk(" ");
else
printk("\n");
return printed;
print_symbol("%s\n", addr);
}
static inline unsigned long print_context_stack(struct thread_info *tinfo,
@ -144,12 +129,11 @@ static inline unsigned long print_context_stack(struct thread_info *tinfo,
char *log_lvl)
{
unsigned long addr;
int printed = 0; /* nr of entries already printed on current line */
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
printed = print_addr_and_symbol(addr, log_lvl, printed);
print_addr_and_symbol(addr, log_lvl);
/*
* break out of recursive entries (such as
* end_of_stack_stop_unwind_function):
@ -162,28 +146,23 @@ static inline unsigned long print_context_stack(struct thread_info *tinfo,
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack++;
if (__kernel_text_address(addr))
printed = print_addr_and_symbol(addr, log_lvl, printed);
print_addr_and_symbol(addr, log_lvl);
}
#endif
if (printed)
printk("\n");
return ebp;
}
static asmlinkage int show_trace_unwind(struct unwind_frame_info *info, void *log_lvl)
static asmlinkage int
show_trace_unwind(struct unwind_frame_info *info, void *log_lvl)
{
int n = 0;
int printed = 0; /* nr of entries already printed on current line */
while (unwind(info) == 0 && UNW_PC(info)) {
++n;
printed = print_addr_and_symbol(UNW_PC(info), log_lvl, printed);
n++;
print_addr_and_symbol(UNW_PC(info), log_lvl);
if (arch_unw_user_mode(info))
break;
}
if (printed)
printk("\n");
return n;
}

View File

@ -50,7 +50,7 @@ static acpi_status hp_ccsr_locate(acpi_handle obj, u64 *base, u64 *length)
memcpy(length, vendor->byte_data + 8, sizeof(*length));
exit:
acpi_os_free(buffer.pointer);
kfree(buffer.pointer);
return status;
}

View File

@ -856,7 +856,7 @@ int acpi_map_lsapic(acpi_handle handle, int *pcpu)
obj = buffer.pointer;
if (obj->type != ACPI_TYPE_BUFFER ||
obj->buffer.length < sizeof(*lsapic)) {
acpi_os_free(buffer.pointer);
kfree(buffer.pointer);
return -EINVAL;
}
@ -864,13 +864,13 @@ int acpi_map_lsapic(acpi_handle handle, int *pcpu)
if ((lsapic->header.type != ACPI_MADT_LSAPIC) ||
(!lsapic->flags.enabled)) {
acpi_os_free(buffer.pointer);
kfree(buffer.pointer);
return -EINVAL;
}
physid = ((lsapic->id << 8) | (lsapic->eid));
acpi_os_free(buffer.pointer);
kfree(buffer.pointer);
buffer.length = ACPI_ALLOCATE_BUFFER;
buffer.pointer = NULL;
@ -934,20 +934,20 @@ acpi_map_iosapic(acpi_handle handle, u32 depth, void *context, void **ret)
obj = buffer.pointer;
if (obj->type != ACPI_TYPE_BUFFER ||
obj->buffer.length < sizeof(*iosapic)) {
acpi_os_free(buffer.pointer);
kfree(buffer.pointer);
return AE_OK;
}
iosapic = (struct acpi_table_iosapic *)obj->buffer.pointer;
if (iosapic->header.type != ACPI_MADT_IOSAPIC) {
acpi_os_free(buffer.pointer);
kfree(buffer.pointer);
return AE_OK;
}
gsi_base = iosapic->global_irq_base;
acpi_os_free(buffer.pointer);
kfree(buffer.pointer);
/*
* OK, it's an IOSAPIC MADT entry, look for a _PXM value to tell

View File

@ -678,7 +678,7 @@ copy_reg(const u64 *fr, u64 fnat, u64 *tr, u64 *tnat)
*/
static void
ia64_mca_modify_comm(const task_t *previous_current)
ia64_mca_modify_comm(const struct task_struct *previous_current)
{
char *p, comm[sizeof(current->comm)];
if (previous_current->pid)
@ -709,7 +709,7 @@ ia64_mca_modify_comm(const task_t *previous_current)
* that we can do backtrace on the MCA/INIT handler code itself.
*/
static task_t *
static struct task_struct *
ia64_mca_modify_original_stack(struct pt_regs *regs,
const struct switch_stack *sw,
struct ia64_sal_os_state *sos,
@ -719,7 +719,7 @@ ia64_mca_modify_original_stack(struct pt_regs *regs,
ia64_va va;
extern char ia64_leave_kernel[]; /* Need asm address, not function descriptor */
const pal_min_state_area_t *ms = sos->pal_min_state;
task_t *previous_current;
struct task_struct *previous_current;
struct pt_regs *old_regs;
struct switch_stack *old_sw;
unsigned size = sizeof(struct pt_regs) +
@ -1023,7 +1023,7 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw,
pal_processor_state_info_t *psp = (pal_processor_state_info_t *)
&sos->proc_state_param;
int recover, cpu = smp_processor_id();
task_t *previous_current;
struct task_struct *previous_current;
struct ia64_mca_notify_die nd =
{ .sos = sos, .monarch_cpu = &monarch_cpu };
@ -1352,7 +1352,7 @@ ia64_init_handler(struct pt_regs *regs, struct switch_stack *sw,
{
static atomic_t slaves;
static atomic_t monarchs;
task_t *previous_current;
struct task_struct *previous_current;
int cpu = smp_processor_id();
struct ia64_mca_notify_die nd =
{ .sos = sos, .monarch_cpu = &monarch_cpu };

View File

@ -124,7 +124,7 @@ extern void __devinit calibrate_delay (void);
extern void start_ap (void);
extern unsigned long ia64_iobase;
task_t *task_for_booting_cpu;
struct task_struct *task_for_booting_cpu;
/*
* State for each CPU

View File

@ -313,9 +313,19 @@ static void __meminit scatter_node_data(void)
pg_data_t **dst;
int node;
for_each_online_node(node) {
dst = LOCAL_DATA_ADDR(pgdat_list[node])->pg_data_ptrs;
memcpy(dst, pgdat_list, sizeof(pgdat_list));
/*
* for_each_online_node() can't be used here.
* node_online_map is not set for hot-added nodes at this time,
* because we are halfway through initialization of the new node's
* structures. If for_each_online_node() is used, a new node's
* pg_data_ptrs will not be initialized. Instead of using it,
* pgdat_list[] is checked.
*/
for_each_node(node) {
if (pgdat_list[node]) {
dst = LOCAL_DATA_ADDR(pgdat_list[node])->pg_data_ptrs;
memcpy(dst, pgdat_list, sizeof(pgdat_list));
}
}
}

View File

@ -65,7 +65,7 @@ need_resched:
#endif
FEXPORT(ret_from_fork)
jal schedule_tail # a0 = task_t *prev
jal schedule_tail # a0 = struct task_struct *prev
FEXPORT(syscall_exit)
local_irq_disable # make sure need_resched and

View File

@ -47,7 +47,7 @@ unsigned long mt_fpemul_threshold = 0;
* used in sys_sched_set/getaffinity() in kernel/sched.c, so
* cloned here.
*/
static inline task_t *find_process_by_pid(pid_t pid)
static inline struct task_struct *find_process_by_pid(pid_t pid)
{
return pid ? find_task_by_pid(pid) : current;
}
@ -62,7 +62,7 @@ asmlinkage long mipsmt_sys_sched_setaffinity(pid_t pid, unsigned int len,
cpumask_t new_mask;
cpumask_t effective_mask;
int retval;
task_t *p;
struct task_struct *p;
if (len < sizeof(new_mask))
return -EINVAL;
@ -127,7 +127,7 @@ asmlinkage long mipsmt_sys_sched_getaffinity(pid_t pid, unsigned int len,
unsigned int real_len;
cpumask_t mask;
int retval;
task_t *p;
struct task_struct *p;
real_len = sizeof(mask);
if (len < real_len)

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@ -111,7 +111,7 @@ void __init btext_setup_display(int width, int height, int depth, int pitch,
logicalDisplayBase = (unsigned char *)address;
dispDeviceBase = (unsigned char *)address;
dispDeviceRowBytes = pitch;
dispDeviceDepth = depth;
dispDeviceDepth = depth == 15 ? 16 : depth;
dispDeviceRect[0] = dispDeviceRect[1] = 0;
dispDeviceRect[2] = width;
dispDeviceRect[3] = height;
@ -160,20 +160,28 @@ int btext_initialize(struct device_node *np)
unsigned long address = 0;
u32 *prop;
prop = (u32 *)get_property(np, "width", NULL);
prop = (u32 *)get_property(np, "linux,bootx-width", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "width", NULL);
if (prop == NULL)
return -EINVAL;
width = *prop;
prop = (u32 *)get_property(np, "height", NULL);
prop = (u32 *)get_property(np, "linux,bootx-height", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "height", NULL);
if (prop == NULL)
return -EINVAL;
height = *prop;
prop = (u32 *)get_property(np, "depth", NULL);
prop = (u32 *)get_property(np, "linux,bootx-depth", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "depth", NULL);
if (prop == NULL)
return -EINVAL;
depth = *prop;
pitch = width * ((depth + 7) / 8);
prop = (u32 *)get_property(np, "linebytes", NULL);
prop = (u32 *)get_property(np, "linux,bootx-linebytes", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "linebytes", NULL);
if (prop)
pitch = *prop;
if (pitch == 1)
@ -194,7 +202,7 @@ int btext_initialize(struct device_node *np)
g_max_loc_Y = height / 16;
dispDeviceBase = (unsigned char *)address;
dispDeviceRowBytes = pitch;
dispDeviceDepth = depth;
dispDeviceDepth = depth == 15 ? 16 : depth;
dispDeviceRect[0] = dispDeviceRect[1] = 0;
dispDeviceRect[2] = width;
dispDeviceRect[3] = height;

View File

@ -323,13 +323,11 @@ int ibmebus_request_irq(struct ibmebus_dev *dev,
unsigned long irq_flags, const char * devname,
void *dev_id)
{
unsigned int irq = virt_irq_create_mapping(ist);
unsigned int irq = irq_create_mapping(NULL, ist, 0);
if (irq == NO_IRQ)
return -EINVAL;
irq = irq_offset_up(irq);
return request_irq(irq, handler,
irq_flags, devname, dev_id);
}
@ -337,12 +335,9 @@ EXPORT_SYMBOL(ibmebus_request_irq);
void ibmebus_free_irq(struct ibmebus_dev *dev, u32 ist, void *dev_id)
{
unsigned int irq = virt_irq_create_mapping(ist);
unsigned int irq = irq_find_mapping(NULL, ist);
irq = irq_offset_up(irq);
free_irq(irq, dev_id);
return;
}
EXPORT_SYMBOL(ibmebus_free_irq);

View File

@ -29,6 +29,8 @@
* to reduce code space and undefined function references.
*/
#undef DEBUG
#include <linux/module.h>
#include <linux/threads.h>
#include <linux/kernel_stat.h>
@ -46,7 +48,10 @@
#include <linux/cpumask.h>
#include <linux/profile.h>
#include <linux/bitops.h>
#include <linux/pci.h>
#include <linux/list.h>
#include <linux/radix-tree.h>
#include <linux/mutex.h>
#include <linux/bootmem.h>
#include <asm/uaccess.h>
#include <asm/system.h>
@ -57,39 +62,38 @@
#include <asm/prom.h>
#include <asm/ptrace.h>
#include <asm/machdep.h>
#include <asm/udbg.h>
#ifdef CONFIG_PPC_ISERIES
#include <asm/paca.h>
#endif
int __irq_offset_value;
#ifdef CONFIG_PPC32
EXPORT_SYMBOL(__irq_offset_value);
#endif
static int ppc_spurious_interrupts;
#ifdef CONFIG_PPC32
#define NR_MASK_WORDS ((NR_IRQS + 31) / 32)
unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
EXPORT_SYMBOL(__irq_offset_value);
atomic_t ppc_n_lost_interrupts;
#ifndef CONFIG_PPC_MERGE
#define NR_MASK_WORDS ((NR_IRQS + 31) / 32)
unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
#endif
#ifdef CONFIG_TAU_INT
extern int tau_initialized;
extern int tau_interrupts(int);
#endif
#endif /* CONFIG_PPC32 */
#if defined(CONFIG_SMP) && !defined(CONFIG_PPC_MERGE)
extern atomic_t ipi_recv;
extern atomic_t ipi_sent;
#endif
#endif /* CONFIG_PPC32 */
#ifdef CONFIG_PPC64
EXPORT_SYMBOL(irq_desc);
int distribute_irqs = 1;
u64 ppc64_interrupt_controller;
#endif /* CONFIG_PPC64 */
int show_interrupts(struct seq_file *p, void *v)
@ -182,7 +186,7 @@ void fixup_irqs(cpumask_t map)
void do_IRQ(struct pt_regs *regs)
{
int irq;
unsigned int irq;
#ifdef CONFIG_IRQSTACKS
struct thread_info *curtp, *irqtp;
#endif
@ -213,22 +217,26 @@ void do_IRQ(struct pt_regs *regs)
*/
irq = ppc_md.get_irq(regs);
if (irq >= 0) {
if (irq != NO_IRQ && irq != NO_IRQ_IGNORE) {
#ifdef CONFIG_IRQSTACKS
/* Switch to the irq stack to handle this */
curtp = current_thread_info();
irqtp = hardirq_ctx[smp_processor_id()];
if (curtp != irqtp) {
struct irq_desc *desc = irq_desc + irq;
void *handler = desc->handle_irq;
if (handler == NULL)
handler = &__do_IRQ;
irqtp->task = curtp->task;
irqtp->flags = 0;
call___do_IRQ(irq, regs, irqtp);
call_handle_irq(irq, desc, regs, irqtp, handler);
irqtp->task = NULL;
if (irqtp->flags)
set_bits(irqtp->flags, &curtp->flags);
} else
#endif
__do_IRQ(irq, regs);
} else if (irq != -2)
generic_handle_irq(irq, regs);
} else if (irq != NO_IRQ_IGNORE)
/* That's not SMP safe ... but who cares ? */
ppc_spurious_interrupts++;
@ -245,138 +253,12 @@ void do_IRQ(struct pt_regs *regs)
void __init init_IRQ(void)
{
#ifdef CONFIG_PPC64
static int once = 0;
if (once)
return;
once++;
#endif
ppc_md.init_IRQ();
#ifdef CONFIG_PPC64
irq_ctx_init();
#endif
}
#ifdef CONFIG_PPC64
/*
* Virtual IRQ mapping code, used on systems with XICS interrupt controllers.
*/
#define UNDEFINED_IRQ 0xffffffff
unsigned int virt_irq_to_real_map[NR_IRQS];
/*
* Don't use virtual irqs 0, 1, 2 for devices.
* The pcnet32 driver considers interrupt numbers < 2 to be invalid,
* and 2 is the XICS IPI interrupt.
We limit virtual irqs to __irq_offset_value less than virt_irq_max so
* that when we offset them we don't end up with an interrupt
* number >= virt_irq_max.
*/
#define MIN_VIRT_IRQ 3
unsigned int virt_irq_max;
static unsigned int max_virt_irq;
static unsigned int nr_virt_irqs;
void
virt_irq_init(void)
{
int i;
if ((virt_irq_max == 0) || (virt_irq_max > (NR_IRQS - 1)))
virt_irq_max = NR_IRQS - 1;
max_virt_irq = virt_irq_max - __irq_offset_value;
nr_virt_irqs = max_virt_irq - MIN_VIRT_IRQ + 1;
for (i = 0; i < NR_IRQS; i++)
virt_irq_to_real_map[i] = UNDEFINED_IRQ;
}
/* Create a mapping for a real_irq if it doesn't already exist.
* Return the virtual irq as a convenience.
*/
int virt_irq_create_mapping(unsigned int real_irq)
{
unsigned int virq, first_virq;
static int warned;
if (ppc64_interrupt_controller == IC_OPEN_PIC)
return real_irq; /* no mapping for openpic (for now) */
if (ppc64_interrupt_controller == IC_CELL_PIC)
return real_irq; /* no mapping for iic either */
/* don't map interrupts < MIN_VIRT_IRQ */
if (real_irq < MIN_VIRT_IRQ) {
virt_irq_to_real_map[real_irq] = real_irq;
return real_irq;
}
/* map to a number between MIN_VIRT_IRQ and max_virt_irq */
virq = real_irq;
if (virq > max_virt_irq)
virq = (virq % nr_virt_irqs) + MIN_VIRT_IRQ;
/* search for this number or a free slot */
first_virq = virq;
while (virt_irq_to_real_map[virq] != UNDEFINED_IRQ) {
if (virt_irq_to_real_map[virq] == real_irq)
return virq;
if (++virq > max_virt_irq)
virq = MIN_VIRT_IRQ;
if (virq == first_virq)
goto nospace; /* oops, no free slots */
}
virt_irq_to_real_map[virq] = real_irq;
return virq;
nospace:
if (!warned) {
printk(KERN_CRIT "Interrupt table is full\n");
printk(KERN_CRIT "Increase virt_irq_max (currently %d) "
"in your kernel sources and rebuild.\n", virt_irq_max);
warned = 1;
}
return NO_IRQ;
}
/*
* In most cases will get a hit on the very first slot checked in the
* virt_irq_to_real_map. Only when there are a large number of
* IRQs will this be expensive.
*/
unsigned int real_irq_to_virt_slowpath(unsigned int real_irq)
{
unsigned int virq;
unsigned int first_virq;
virq = real_irq;
if (virq > max_virt_irq)
virq = (virq % nr_virt_irqs) + MIN_VIRT_IRQ;
first_virq = virq;
do {
if (virt_irq_to_real_map[virq] == real_irq)
return virq;
virq++;
if (virq >= max_virt_irq)
virq = 0;
} while (first_virq != virq);
return NO_IRQ;
}
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_IRQSTACKS
struct thread_info *softirq_ctx[NR_CPUS] __read_mostly;
@ -424,18 +306,510 @@ void do_softirq(void)
local_irq_save(flags);
if (local_softirq_pending()) {
account_system_vtime(current);
local_bh_disable();
if (local_softirq_pending())
do_softirq_onstack();
account_system_vtime(current);
__local_bh_enable();
}
local_irq_restore(flags);
}
EXPORT_SYMBOL(do_softirq);
/*
* IRQ controller and virtual interrupts
*/
#ifdef CONFIG_PPC_MERGE
static LIST_HEAD(irq_hosts);
static spinlock_t irq_big_lock = SPIN_LOCK_UNLOCKED;
struct irq_map_entry irq_map[NR_IRQS];
static unsigned int irq_virq_count = NR_IRQS;
static struct irq_host *irq_default_host;
struct irq_host *irq_alloc_host(unsigned int revmap_type,
unsigned int revmap_arg,
struct irq_host_ops *ops,
irq_hw_number_t inval_irq)
{
struct irq_host *host;
unsigned int size = sizeof(struct irq_host);
unsigned int i;
unsigned int *rmap;
unsigned long flags;
/* Allocate structure and revmap table if using linear mapping */
if (revmap_type == IRQ_HOST_MAP_LINEAR)
size += revmap_arg * sizeof(unsigned int);
if (mem_init_done)
host = kzalloc(size, GFP_KERNEL);
else {
host = alloc_bootmem(size);
if (host)
memset(host, 0, size);
}
if (host == NULL)
return NULL;
/* Fill structure */
host->revmap_type = revmap_type;
host->inval_irq = inval_irq;
host->ops = ops;
spin_lock_irqsave(&irq_big_lock, flags);
/* If it's a legacy controller, check for duplicates and
* mark it as allocated (we use irq 0 host pointer for that)
*/
if (revmap_type == IRQ_HOST_MAP_LEGACY) {
if (irq_map[0].host != NULL) {
spin_unlock_irqrestore(&irq_big_lock, flags);
/* If we are early boot, we can't free the structure,
* too bad...
* this will be fixed once slab is made available early
* instead of the current cruft
*/
if (mem_init_done)
kfree(host);
return NULL;
}
irq_map[0].host = host;
}
list_add(&host->link, &irq_hosts);
spin_unlock_irqrestore(&irq_big_lock, flags);
/* Additional setups per revmap type */
switch(revmap_type) {
case IRQ_HOST_MAP_LEGACY:
/* 0 is always the invalid number for legacy */
host->inval_irq = 0;
/* setup us as the host for all legacy interrupts */
for (i = 1; i < NUM_ISA_INTERRUPTS; i++) {
irq_map[i].hwirq = 0;
smp_wmb();
irq_map[i].host = host;
smp_wmb();
/* Clear some flags */
get_irq_desc(i)->status
&= ~(IRQ_NOREQUEST | IRQ_LEVEL);
/* Legacy flags are left to default at this point,
* one can then use irq_create_mapping() to
* explicitly change them
*/
ops->map(host, i, i, 0);
}
break;
case IRQ_HOST_MAP_LINEAR:
rmap = (unsigned int *)(host + 1);
for (i = 0; i < revmap_arg; i++)
rmap[i] = IRQ_NONE;
host->revmap_data.linear.size = revmap_arg;
smp_wmb();
host->revmap_data.linear.revmap = rmap;
break;
default:
break;
}
pr_debug("irq: Allocated host of type %d @0x%p\n", revmap_type, host);
return host;
}
struct irq_host *irq_find_host(struct device_node *node)
{
struct irq_host *h, *found = NULL;
unsigned long flags;
/* We might want to match the legacy controller last since
* it might potentially be set to match all interrupts in
* the absence of a device node. This isn't a problem so far,
* though...
*/
spin_lock_irqsave(&irq_big_lock, flags);
list_for_each_entry(h, &irq_hosts, link)
if (h->ops->match == NULL || h->ops->match(h, node)) {
found = h;
break;
}
spin_unlock_irqrestore(&irq_big_lock, flags);
return found;
}
EXPORT_SYMBOL_GPL(irq_find_host);
void irq_set_default_host(struct irq_host *host)
{
pr_debug("irq: Default host set to @0x%p\n", host);
irq_default_host = host;
}
void irq_set_virq_count(unsigned int count)
{
pr_debug("irq: Trying to set virq count to %d\n", count);
BUG_ON(count < NUM_ISA_INTERRUPTS);
if (count < NR_IRQS)
irq_virq_count = count;
}
unsigned int irq_create_mapping(struct irq_host *host,
irq_hw_number_t hwirq,
unsigned int flags)
{
unsigned int virq, hint;
pr_debug("irq: irq_create_mapping(0x%p, 0x%lx, 0x%x)\n",
host, hwirq, flags);
/* Look for default host if necessary */
if (host == NULL)
host = irq_default_host;
if (host == NULL) {
printk(KERN_WARNING "irq_create_mapping called for"
" NULL host, hwirq=%lx\n", hwirq);
WARN_ON(1);
return NO_IRQ;
}
pr_debug("irq: -> using host @%p\n", host);
/* Check if a mapping already exists; if it does, call
* host->ops->map() to update the flags
*/
virq = irq_find_mapping(host, hwirq);
if (virq != IRQ_NONE) {
pr_debug("irq: -> existing mapping on virq %d\n", virq);
host->ops->map(host, virq, hwirq, flags);
return virq;
}
/* Get a virtual interrupt number */
if (host->revmap_type == IRQ_HOST_MAP_LEGACY) {
/* Handle legacy */
virq = (unsigned int)hwirq;
if (virq == 0 || virq >= NUM_ISA_INTERRUPTS)
return NO_IRQ;
return virq;
} else {
/* Allocate a virtual interrupt number */
hint = hwirq % irq_virq_count;
virq = irq_alloc_virt(host, 1, hint);
if (virq == NO_IRQ) {
pr_debug("irq: -> virq allocation failed\n");
return NO_IRQ;
}
}
pr_debug("irq: -> obtained virq %d\n", virq);
/* Clear some flags */
get_irq_desc(virq)->status &= ~(IRQ_NOREQUEST | IRQ_LEVEL);
/* map it */
if (host->ops->map(host, virq, hwirq, flags)) {
pr_debug("irq: -> mapping failed, freeing\n");
irq_free_virt(virq, 1);
return NO_IRQ;
}
smp_wmb();
irq_map[virq].hwirq = hwirq;
smp_mb();
return virq;
}
EXPORT_SYMBOL_GPL(irq_create_mapping);
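/*
 * Illustrative sketch: setup code that already knows the hardware source
 * number of, say, a cascade input can call irq_create_mapping() directly.
 * my_pic_host is from the sketch above; the hwirq value 3 and the action
 * handler are hypothetical.
 */
static irqreturn_t my_cascade_action(int irq, void *dev_id,
				     struct pt_regs *regs);

static void __init my_map_cascade(void)
{
	unsigned int virq = irq_create_mapping(my_pic_host, 3, 0);

	if (virq == NO_IRQ)
		return;
	if (request_irq(virq, my_cascade_action, 0, "cascade", NULL))
		printk(KERN_ERR "failed to request cascade irq\n");
}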
unsigned int irq_create_of_mapping(struct device_node *controller,
u32 *intspec, unsigned int intsize)
{
struct irq_host *host;
irq_hw_number_t hwirq;
unsigned int flags = IRQ_TYPE_NONE;
if (controller == NULL)
host = irq_default_host;
else
host = irq_find_host(controller);
if (host == NULL)
return NO_IRQ;
/* If host has no translation, then we assume interrupt line */
if (host->ops->xlate == NULL)
hwirq = intspec[0];
else {
if (host->ops->xlate(host, controller, intspec, intsize,
&hwirq, &flags))
return NO_IRQ;
}
return irq_create_mapping(host, hwirq, flags);
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);
unsigned int irq_of_parse_and_map(struct device_node *dev, int index)
{
struct of_irq oirq;
if (of_irq_map_one(dev, index, &oirq))
return NO_IRQ;
return irq_create_of_mapping(oirq.controller, oirq.specifier,
oirq.size);
}
EXPORT_SYMBOL_GPL(irq_of_parse_and_map);
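/*
 * Illustrative sketch: the usual driver-side pattern on top of
 * irq_of_parse_and_map(). The node, the handler and the name are
 * hypothetical; irq_dispose_mapping() undoes the mapping on failure.
 */
static irqreturn_t my_dev_interrupt(int irq, void *dev_id,
				    struct pt_regs *regs);

static int my_dev_probe(struct device_node *np)
{
	unsigned int virq = irq_of_parse_and_map(np, 0);
	int rc;

	if (virq == NO_IRQ)
		return -ENODEV;
	rc = request_irq(virq, my_dev_interrupt, 0, "my-dev", NULL);
	if (rc)
		irq_dispose_mapping(virq);
	return rc;
}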
void irq_dispose_mapping(unsigned int virq)
{
struct irq_host *host = irq_map[virq].host;
irq_hw_number_t hwirq;
unsigned long flags;
WARN_ON(host == NULL);
if (host == NULL)
return;
/* Never unmap legacy interrupts */
if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
return;
/* remove chip and handler */
set_irq_chip_and_handler(virq, NULL, NULL);
/* Make sure it's completed */
synchronize_irq(virq);
/* Tell the PIC about it */
if (host->ops->unmap)
host->ops->unmap(host, virq);
smp_mb();
/* Clear reverse map */
hwirq = irq_map[virq].hwirq;
switch (host->revmap_type) {
case IRQ_HOST_MAP_LINEAR:
if (hwirq < host->revmap_data.linear.size)
host->revmap_data.linear.revmap[hwirq] = IRQ_NONE;
break;
case IRQ_HOST_MAP_TREE:
/* Check if radix tree allocated yet */
if (host->revmap_data.tree.gfp_mask == 0)
break;
/* XXX radix tree not safe ! Remove lock when it becomes safe
* and use some RCU sync to make sure everything is ok before we
* can re-use that map entry
*/
spin_lock_irqsave(&irq_big_lock, flags);
radix_tree_delete(&host->revmap_data.tree, hwirq);
spin_unlock_irqrestore(&irq_big_lock, flags);
break;
}
/* Destroy map */
smp_mb();
irq_map[virq].hwirq = host->inval_irq;
/* Set some flags */
get_irq_desc(virq)->status |= IRQ_NOREQUEST;
/* Free it */
irq_free_virt(virq, 1);
}
EXPORT_SYMBOL_GPL(irq_dispose_mapping);
unsigned int irq_find_mapping(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int i;
unsigned int hint = hwirq % irq_virq_count;
/* Look for default host if necessary */
if (host == NULL)
host = irq_default_host;
if (host == NULL)
return NO_IRQ;
/* legacy -> bail early */
if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
return hwirq;
/* Slow path does a linear search of the map */
if (hint < NUM_ISA_INTERRUPTS)
hint = NUM_ISA_INTERRUPTS;
i = hint;
do {
if (irq_map[i].host == host &&
irq_map[i].hwirq == hwirq)
return i;
i++;
if (i >= irq_virq_count)
i = NUM_ISA_INTERRUPTS;
} while (i != hint);
return NO_IRQ;
}
EXPORT_SYMBOL_GPL(irq_find_mapping);
unsigned int irq_radix_revmap(struct irq_host *host,
irq_hw_number_t hwirq)
{
struct radix_tree_root *tree;
struct irq_map_entry *ptr;
unsigned int virq;
unsigned long flags;
WARN_ON(host->revmap_type != IRQ_HOST_MAP_TREE);
/* Check if the radix tree exists yet. We test the value of
* the gfp_mask for that. Sneaky but saves another int in the
* structure. If not, we fall back to slow mode
*/
tree = &host->revmap_data.tree;
if (tree->gfp_mask == 0)
return irq_find_mapping(host, hwirq);
/* XXX Current radix trees are NOT SMP safe !!! Remove that lock
* when that is fixed (when Nick's patch gets in)
*/
spin_lock_irqsave(&irq_big_lock, flags);
/* Now try to resolve */
ptr = radix_tree_lookup(tree, hwirq);
/* Found it, return */
if (ptr) {
virq = ptr - irq_map;
goto bail;
}
/* If not there, try to insert it */
virq = irq_find_mapping(host, hwirq);
if (virq != NO_IRQ)
radix_tree_insert(tree, virq, &irq_map[virq]);
bail:
spin_unlock_irqrestore(&irq_big_lock, flags);
return virq;
}
unsigned int irq_linear_revmap(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int *revmap;
WARN_ON(host->revmap_type != IRQ_HOST_MAP_LINEAR);
/* Check revmap bounds */
if (unlikely(hwirq >= host->revmap_data.linear.size))
return irq_find_mapping(host, hwirq);
/* Check if revmap was allocated */
revmap = host->revmap_data.linear.revmap;
if (unlikely(revmap == NULL))
return irq_find_mapping(host, hwirq);
/* Fill up revmap with slow path if no mapping found */
if (unlikely(revmap[hwirq] == NO_IRQ))
revmap[hwirq] = irq_find_mapping(host, hwirq);
return revmap[hwirq];
}
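/*
 * Illustrative sketch: a PIC's exception-time decoder would use the
 * linear reverse map as its fast path. my_pic_host is from the sketch
 * above; my_pic_read_pending() is a hypothetical register read that
 * returns the pending hw source, or a value >= 64 when nothing pends.
 */
static unsigned long my_pic_read_pending(void);	/* hypothetical */

static unsigned int my_pic_get_irq(struct pt_regs *regs)
{
	unsigned long hw = my_pic_read_pending();

	if (hw >= 64)
		return NO_IRQ;
	return irq_linear_revmap(my_pic_host, hw);
}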
unsigned int irq_alloc_virt(struct irq_host *host,
unsigned int count,
unsigned int hint)
{
unsigned long flags;
unsigned int i, j, found = NO_IRQ;
unsigned int limit = irq_virq_count - count;
if (count == 0 || count > (irq_virq_count - NUM_ISA_INTERRUPTS))
return NO_IRQ;
spin_lock_irqsave(&irq_big_lock, flags);
/* Use hint for 1 interrupt if any */
if (count == 1 && hint >= NUM_ISA_INTERRUPTS &&
hint < irq_virq_count && irq_map[hint].host == NULL) {
found = hint;
goto hint_found;
}
/* Look for count consecutive numbers in the allocatable
* (non-legacy) space
*/
for (i = NUM_ISA_INTERRUPTS; i <= limit; ) {
/* Check whether the whole range [i, i + count) is free */
for (j = i; j < (i + count); j++)
if (irq_map[j].host != NULL)
break;
if (j == (i + count)) {
found = i;
break;
}
/* Slot j is occupied, resume the scan just past it */
i = j + 1;
}
if (found == NO_IRQ) {
spin_unlock_irqrestore(&irq_big_lock, flags);
return NO_IRQ;
}
hint_found:
for (i = found; i < (found + count); i++) {
irq_map[i].hwirq = host->inval_irq;
smp_wmb();
irq_map[i].host = host;
}
spin_unlock_irqrestore(&irq_big_lock, flags);
return found;
}
void irq_free_virt(unsigned int virq, unsigned int count)
{
unsigned long flags;
unsigned int i;
WARN_ON(virq < NUM_ISA_INTERRUPTS);
WARN_ON(count == 0 || (virq + count) > irq_virq_count);
spin_lock_irqsave(&irq_big_lock, flags);
for (i = virq; i < (virq + count); i++) {
struct irq_host *host;
if (i < NUM_ISA_INTERRUPTS || i >= irq_virq_count)
continue;
host = irq_map[i].host;
irq_map[i].hwirq = host->inval_irq;
smp_wmb();
irq_map[i].host = NULL;
}
spin_unlock_irqrestore(&irq_big_lock, flags);
}
void irq_early_init(void)
{
unsigned int i;
for (i = 0; i < NR_IRQS; i++)
get_irq_desc(i)->status |= IRQ_NOREQUEST;
}
/* We need to create the radix trees late */
static int irq_late_init(void)
{
struct irq_host *h;
unsigned long flags;
spin_lock_irqsave(&irq_big_lock, flags);
list_for_each_entry(h, &irq_hosts, link) {
if (h->revmap_type == IRQ_HOST_MAP_TREE)
INIT_RADIX_TREE(&h->revmap_data.tree, GFP_ATOMIC);
}
spin_unlock_irqrestore(&irq_big_lock, flags);
return 0;
}
arch_initcall(irq_late_init);
#endif /* CONFIG_PPC_MERGE */
#ifdef CONFIG_PCI_MSI
int pci_enable_msi(struct pci_dev * pdev)
{
@ -28,6 +28,7 @@ static struct legacy_serial_info {
struct device_node *np;
unsigned int speed;
unsigned int clock;
int irq_check_parent;
phys_addr_t taddr;
} legacy_serial_infos[MAX_LEGACY_SERIAL_PORTS];
static unsigned int legacy_serial_count;
@ -36,7 +37,7 @@ static int legacy_serial_console = -1;
static int __init add_legacy_port(struct device_node *np, int want_index,
int iotype, phys_addr_t base,
phys_addr_t taddr, unsigned long irq,
upf_t flags)
upf_t flags, int irq_check_parent)
{
u32 *clk, *spd, clock = BASE_BAUD * 16;
int index;
@ -68,7 +69,7 @@ static int __init add_legacy_port(struct device_node *np, int want_index,
if (legacy_serial_infos[index].np != 0) {
/* if we still have some room, move it, else override */
if (legacy_serial_count < MAX_LEGACY_SERIAL_PORTS) {
printk(KERN_INFO "Moved legacy port %d -> %d\n",
printk(KERN_DEBUG "Moved legacy port %d -> %d\n",
index, legacy_serial_count);
legacy_serial_ports[legacy_serial_count] =
legacy_serial_ports[index];
@ -76,7 +77,7 @@ static int __init add_legacy_port(struct device_node *np, int want_index,
legacy_serial_infos[index];
legacy_serial_count++;
} else {
printk(KERN_INFO "Replacing legacy port %d\n", index);
printk(KERN_DEBUG "Replacing legacy port %d\n", index);
}
}
@ -95,10 +96,11 @@ static int __init add_legacy_port(struct device_node *np, int want_index,
legacy_serial_infos[index].np = of_node_get(np);
legacy_serial_infos[index].clock = clock;
legacy_serial_infos[index].speed = spd ? *spd : 0;
legacy_serial_infos[index].irq_check_parent = irq_check_parent;
printk(KERN_INFO "Found legacy serial port %d for %s\n",
printk(KERN_DEBUG "Found legacy serial port %d for %s\n",
index, np->full_name);
printk(KERN_INFO " %s=%llx, taddr=%llx, irq=%lx, clk=%d, speed=%d\n",
printk(KERN_DEBUG " %s=%llx, taddr=%llx, irq=%lx, clk=%d, speed=%d\n",
(iotype == UPIO_PORT) ? "port" : "mem",
(unsigned long long)base, (unsigned long long)taddr, irq,
legacy_serial_ports[index].uartclk,
@ -126,11 +128,13 @@ static int __init add_legacy_soc_port(struct device_node *np,
return -1;
addr = of_translate_address(soc_dev, addrp);
if (addr == OF_BAD_ADDR)
return -1;
/* Add port, irq will be dealt with later. We passed a translated
* IO port value. It will be fixed up later along with the irq
*/
return add_legacy_port(np, -1, UPIO_MEM, addr, addr, NO_IRQ, flags);
return add_legacy_port(np, -1, UPIO_MEM, addr, addr, NO_IRQ, flags, 0);
}
static int __init add_legacy_isa_port(struct device_node *np,
@ -141,6 +145,8 @@ static int __init add_legacy_isa_port(struct device_node *np,
int index = -1;
phys_addr_t taddr;
DBG(" -> add_legacy_isa_port(%s)\n", np->full_name);
/* Get the ISA port number */
reg = (u32 *)get_property(np, "reg", NULL);
if (reg == NULL)
@ -161,9 +167,12 @@ static int __init add_legacy_isa_port(struct device_node *np,
/* Translate ISA address */
taddr = of_translate_address(np, reg);
if (taddr == OF_BAD_ADDR)
return -1;
/* Add port, irq will be dealt with later */
return add_legacy_port(np, index, UPIO_PORT, reg[1], taddr, NO_IRQ, UPF_BOOT_AUTOCONF);
return add_legacy_port(np, index, UPIO_PORT, reg[1], taddr,
NO_IRQ, UPF_BOOT_AUTOCONF, 0);
}
@ -176,6 +185,8 @@ static int __init add_legacy_pci_port(struct device_node *np,
unsigned int flags;
int iotype, index = -1, lindex = 0;
DBG(" -> add_legacy_pci_port(%s)\n", np->full_name);
/* We only support ports that have a clock frequency properly
* encoded in the device-tree (that is have an fcode). Anything
* else can't be used that early and will be normally probed by
@ -194,6 +205,8 @@ static int __init add_legacy_pci_port(struct device_node *np,
/* We only support BAR 0 for now */
iotype = (flags & IORESOURCE_MEM) ? UPIO_MEM : UPIO_PORT;
addr = of_translate_address(pci_dev, addrp);
if (addr == OF_BAD_ADDR)
return -1;
/* Set the IO base to the same as the translated address for MMIO,
* or to the domain local IO base for PIO (it will be fixed up later)
@ -231,7 +244,8 @@ static int __init add_legacy_pci_port(struct device_node *np,
/* Add port, irq will be dealt with later. We passed a translated
* IO port value. It will be fixed up later along with the irq
*/
return add_legacy_port(np, index, iotype, base, addr, NO_IRQ, UPF_BOOT_AUTOCONF);
return add_legacy_port(np, index, iotype, base, addr, NO_IRQ,
UPF_BOOT_AUTOCONF, np != pci_dev);
}
#endif
@ -362,27 +376,22 @@ static void __init fixup_port_irq(int index,
struct device_node *np,
struct plat_serial8250_port *port)
{
unsigned int virq;
DBG("fixup_port_irq(%d)\n", index);
/* Check for interrupts in that node */
if (np->n_intrs > 0) {
port->irq = np->intrs[0].line;
DBG(" port %d (%s), irq=%d\n",
index, np->full_name, port->irq);
return;
virq = irq_of_parse_and_map(np, 0);
if (virq == NO_IRQ && legacy_serial_infos[index].irq_check_parent) {
np = of_get_parent(np);
if (np == NULL)
return;
virq = irq_of_parse_and_map(np, 0);
of_node_put(np);
}
/* Check for interrupts in the parent */
np = of_get_parent(np);
if (np == NULL)
if (virq == NO_IRQ)
return;
if (np->n_intrs > 0) {
port->irq = np->intrs[0].line;
DBG(" port %d (%s), irq=%d\n",
index, np->full_name, port->irq);
}
of_node_put(np);
port->irq = virq;
}
static void __init fixup_port_pio(int index,
@ -51,12 +51,14 @@ _GLOBAL(call_do_softirq)
mtlr r0
blr
_GLOBAL(call___do_IRQ)
_GLOBAL(call_handle_irq)
ld r8,0(r7)
mflr r0
std r0,16(r1)
stdu r1,THREAD_SIZE-112(r5)
mr r1,r5
bl .__do_IRQ
mtctr r8
stdu r1,THREAD_SIZE-112(r6)
mr r1,r6
bctrl
ld r1,0(r1)
ld r0,16(r1)
mtlr r0
@ -1404,6 +1404,43 @@ pcibios_update_irq(struct pci_dev *dev, int irq)
/* XXX FIXME - update OF device tree node interrupt property */
}
#ifdef CONFIG_PPC_MERGE
/* XXX This is a copy of the ppc64 version. This is temporary until we start
* merging the 2 PCI layers
*/
/*
* Reads the interrupt pin to determine if the interrupt is used by
* the card. If so, gets the interrupt line from Open Firmware and
* sets it in the pci_dev and in the PCI config space.
*/
int pci_read_irq_line(struct pci_dev *pci_dev)
{
struct of_irq oirq;
unsigned int virq;
DBG("Try to map irq for %s...\n", pci_name(pci_dev));
if (of_irq_map_pci(pci_dev, &oirq)) {
DBG(" -> failed !\n");
return -1;
}
DBG(" -> got one, spec %d cells (0x%08x...) on %s\n",
oirq.size, oirq.specifier[0], oirq.controller->full_name);
virq = irq_create_of_mapping(oirq.controller, oirq.specifier, oirq.size);
if (virq == NO_IRQ) {
DBG(" -> failed to map !\n");
return -1;
}
pci_dev->irq = virq;
pci_write_config_byte(pci_dev, PCI_INTERRUPT_LINE, virq);
return 0;
}
EXPORT_SYMBOL(pci_read_irq_line);
#endif /* CONFIG_PPC_MERGE */
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
u16 cmd, old_cmd;
@ -398,12 +398,8 @@ struct pci_dev *of_create_pci_dev(struct device_node *node,
} else {
dev->hdr_type = PCI_HEADER_TYPE_NORMAL;
dev->rom_base_reg = PCI_ROM_ADDRESS;
/* Maybe do a default OF mapping here */
dev->irq = NO_IRQ;
if (node->n_intrs > 0) {
dev->irq = node->intrs[0].line;
pci_write_config_byte(dev, PCI_INTERRUPT_LINE,
dev->irq);
}
}
pci_parse_of_addrs(node, dev);
@ -1288,23 +1284,26 @@ EXPORT_SYMBOL(pcibios_fixup_bus);
*/
int pci_read_irq_line(struct pci_dev *pci_dev)
{
u8 intpin;
struct device_node *node;
struct of_irq oirq;
unsigned int virq;
pci_read_config_byte(pci_dev, PCI_INTERRUPT_PIN, &intpin);
if (intpin == 0)
return 0;
DBG("Try to map irq for %s...\n", pci_name(pci_dev));
node = pci_device_to_OF_node(pci_dev);
if (node == NULL)
if (of_irq_map_pci(pci_dev, &oirq)) {
DBG(" -> failed !\n");
return -1;
}
if (node->n_intrs == 0)
DBG(" -> got one, spec %d cells (0x%08x...) on %s\n",
oirq.size, oirq.specifier[0], oirq.controller->full_name);
virq = irq_create_of_mapping(oirq.controller, oirq.specifier, oirq.size);
if (virq == NO_IRQ) {
DBG(" -> failed to map !\n");
return -1;
pci_dev->irq = node->intrs[0].line;
pci_write_config_byte(pci_dev, PCI_INTERRUPT_LINE, pci_dev->irq);
}
pci_dev->irq = virq;
pci_write_config_byte(pci_dev, PCI_INTERRUPT_LINE, virq);
return 0;
}
@ -30,6 +30,7 @@
#include <linux/module.h>
#include <linux/kexec.h>
#include <linux/debugfs.h>
#include <linux/irq.h>
#include <asm/prom.h>
#include <asm/rtas.h>
@ -86,424 +87,6 @@ static DEFINE_RWLOCK(devtree_lock);
/* export that to outside world */
struct device_node *of_chosen;
struct device_node *dflt_interrupt_controller;
int num_interrupt_controllers;
/*
* Wrapper for allocating memory for various data that needs to be
* attached to device nodes as they are processed at boot or when
* added to the device tree later (e.g. DLPAR). At boot there is
* already a region reserved so we just increment *mem_start by size;
* otherwise we call kmalloc.
*/
static void * prom_alloc(unsigned long size, unsigned long *mem_start)
{
unsigned long tmp;
if (!mem_start)
return kmalloc(size, GFP_KERNEL);
tmp = *mem_start;
*mem_start += size;
return (void *)tmp;
}
/*
* Find the device_node with a given phandle.
*/
static struct device_node * find_phandle(phandle ph)
{
struct device_node *np;
for (np = allnodes; np != 0; np = np->allnext)
if (np->linux_phandle == ph)
return np;
return NULL;
}
/*
* Find the interrupt parent of a node.
*/
static struct device_node * __devinit intr_parent(struct device_node *p)
{
phandle *parp;
parp = (phandle *) get_property(p, "interrupt-parent", NULL);
if (parp == NULL)
return p->parent;
p = find_phandle(*parp);
if (p != NULL)
return p;
/*
* On a powermac booted with BootX, we don't get to know the
* phandles for any nodes, so find_phandle will return NULL.
* Fortunately these machines only have one interrupt controller
* so there isn't in fact any ambiguity. -- paulus
*/
if (num_interrupt_controllers == 1)
p = dflt_interrupt_controller;
return p;
}
/*
* Find out the size of each entry of the interrupts property
* for a node.
*/
int __devinit prom_n_intr_cells(struct device_node *np)
{
struct device_node *p;
unsigned int *icp;
for (p = np; (p = intr_parent(p)) != NULL; ) {
icp = (unsigned int *)
get_property(p, "#interrupt-cells", NULL);
if (icp != NULL)
return *icp;
if (get_property(p, "interrupt-controller", NULL) != NULL
|| get_property(p, "interrupt-map", NULL) != NULL) {
printk("oops, node %s doesn't have #interrupt-cells\n",
p->full_name);
return 1;
}
}
#ifdef DEBUG_IRQ
printk("prom_n_intr_cells failed for %s\n", np->full_name);
#endif
return 1;
}
/*
* Map an interrupt from a device up to the platform interrupt
* descriptor.
*/
static int __devinit map_interrupt(unsigned int **irq, struct device_node **ictrler,
struct device_node *np, unsigned int *ints,
int nintrc)
{
struct device_node *p, *ipar;
unsigned int *imap, *imask, *ip;
int i, imaplen, match;
int newintrc = 0, newaddrc = 0;
unsigned int *reg;
int naddrc;
reg = (unsigned int *) get_property(np, "reg", NULL);
naddrc = prom_n_addr_cells(np);
p = intr_parent(np);
while (p != NULL) {
if (get_property(p, "interrupt-controller", NULL) != NULL)
/* this node is an interrupt controller, stop here */
break;
imap = (unsigned int *)
get_property(p, "interrupt-map", &imaplen);
if (imap == NULL) {
p = intr_parent(p);
continue;
}
imask = (unsigned int *)
get_property(p, "interrupt-map-mask", NULL);
if (imask == NULL) {
printk("oops, %s has interrupt-map but no mask\n",
p->full_name);
return 0;
}
imaplen /= sizeof(unsigned int);
match = 0;
ipar = NULL;
while (imaplen > 0 && !match) {
/* check the child-interrupt field */
match = 1;
for (i = 0; i < naddrc && match; ++i)
match = ((reg[i] ^ imap[i]) & imask[i]) == 0;
for (; i < naddrc + nintrc && match; ++i)
match = ((ints[i-naddrc] ^ imap[i]) & imask[i]) == 0;
imap += naddrc + nintrc;
imaplen -= naddrc + nintrc;
/* grab the interrupt parent */
ipar = find_phandle((phandle) *imap++);
--imaplen;
if (ipar == NULL && num_interrupt_controllers == 1)
/* cope with BootX not giving us phandles */
ipar = dflt_interrupt_controller;
if (ipar == NULL) {
printk("oops, no int parent %x in map of %s\n",
imap[-1], p->full_name);
return 0;
}
/* find the parent's # addr and intr cells */
ip = (unsigned int *)
get_property(ipar, "#interrupt-cells", NULL);
if (ip == NULL) {
printk("oops, no #interrupt-cells on %s\n",
ipar->full_name);
return 0;
}
newintrc = *ip;
ip = (unsigned int *)
get_property(ipar, "#address-cells", NULL);
newaddrc = (ip == NULL)? 0: *ip;
imap += newaddrc + newintrc;
imaplen -= newaddrc + newintrc;
}
if (imaplen < 0) {
printk("oops, error decoding int-map on %s, len=%d\n",
p->full_name, imaplen);
return 0;
}
if (!match) {
#ifdef DEBUG_IRQ
printk("oops, no match in %s int-map for %s\n",
p->full_name, np->full_name);
#endif
return 0;
}
p = ipar;
naddrc = newaddrc;
nintrc = newintrc;
ints = imap - nintrc;
reg = ints - naddrc;
}
if (p == NULL) {
#ifdef DEBUG_IRQ
printk("hmmm, int tree for %s doesn't have ctrler\n",
np->full_name);
#endif
return 0;
}
*irq = ints;
*ictrler = p;
return nintrc;
}
static unsigned char map_isa_senses[4] = {
IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE,
IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE,
IRQ_SENSE_EDGE | IRQ_POLARITY_NEGATIVE,
IRQ_SENSE_EDGE | IRQ_POLARITY_POSITIVE
};
static unsigned char map_mpic_senses[4] = {
IRQ_SENSE_EDGE | IRQ_POLARITY_POSITIVE,
IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE,
/* 2 seems to be used for the 8259 cascade... */
IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE,
IRQ_SENSE_EDGE | IRQ_POLARITY_NEGATIVE,
};
static int __devinit finish_node_interrupts(struct device_node *np,
unsigned long *mem_start,
int measure_only)
{
unsigned int *ints;
int intlen, intrcells, intrcount;
int i, j, n, sense;
unsigned int *irq, virq;
struct device_node *ic;
int trace = 0;
//#define TRACE(fmt...) do { if (trace) { printk(fmt); mdelay(1000); } } while(0)
#define TRACE(fmt...)
if (!strcmp(np->name, "smu-doorbell"))
trace = 1;
TRACE("Finishing SMU doorbell ! num_interrupt_controllers = %d\n",
num_interrupt_controllers);
if (num_interrupt_controllers == 0) {
/*
* Old machines just have a list of interrupt numbers
* and no interrupt-controller nodes.
*/
ints = (unsigned int *) get_property(np, "AAPL,interrupts",
&intlen);
/* XXX old interpret_pci_props looked in parent too */
/* XXX old interpret_macio_props looked for interrupts
before AAPL,interrupts */
if (ints == NULL)
ints = (unsigned int *) get_property(np, "interrupts",
&intlen);
if (ints == NULL)
return 0;
np->n_intrs = intlen / sizeof(unsigned int);
np->intrs = prom_alloc(np->n_intrs * sizeof(np->intrs[0]),
mem_start);
if (!np->intrs)
return -ENOMEM;
if (measure_only)
return 0;
for (i = 0; i < np->n_intrs; ++i) {
np->intrs[i].line = *ints++;
np->intrs[i].sense = IRQ_SENSE_LEVEL
| IRQ_POLARITY_NEGATIVE;
}
return 0;
}
ints = (unsigned int *) get_property(np, "interrupts", &intlen);
TRACE("ints=%p, intlen=%d\n", ints, intlen);
if (ints == NULL)
return 0;
intrcells = prom_n_intr_cells(np);
intlen /= intrcells * sizeof(unsigned int);
TRACE("intrcells=%d, new intlen=%d\n", intrcells, intlen);
np->intrs = prom_alloc(intlen * sizeof(*(np->intrs)), mem_start);
if (!np->intrs)
return -ENOMEM;
if (measure_only)
return 0;
intrcount = 0;
for (i = 0; i < intlen; ++i, ints += intrcells) {
n = map_interrupt(&irq, &ic, np, ints, intrcells);
TRACE("map, irq=%d, ic=%p, n=%d\n", irq, ic, n);
if (n <= 0)
continue;
/* don't map IRQ numbers under a cascaded 8259 controller */
if (ic && device_is_compatible(ic, "chrp,iic")) {
np->intrs[intrcount].line = irq[0];
sense = (n > 1)? (irq[1] & 3): 3;
np->intrs[intrcount].sense = map_isa_senses[sense];
} else {
virq = virt_irq_create_mapping(irq[0]);
TRACE("virq=%d\n", virq);
#ifdef CONFIG_PPC64
if (virq == NO_IRQ) {
printk(KERN_CRIT "Could not allocate interrupt"
" number for %s\n", np->full_name);
continue;
}
#endif
np->intrs[intrcount].line = irq_offset_up(virq);
sense = (n > 1)? (irq[1] & 3): 1;
/* Apple uses bits in there in a different way, let's
* only keep the real sense bit on macs
*/
if (machine_is(powermac))
sense &= 0x1;
np->intrs[intrcount].sense = map_mpic_senses[sense];
}
#ifdef CONFIG_PPC64
/* We offset irq numbers for the u3 MPIC by 128 in PowerMac */
if (machine_is(powermac) && ic && ic->parent) {
char *name = get_property(ic->parent, "name", NULL);
if (name && !strcmp(name, "u3"))
np->intrs[intrcount].line += 128;
else if (!(name && (!strcmp(name, "mac-io") ||
!strcmp(name, "u4"))))
/* ignore other cascaded controllers, such as
the k2-sata-root */
break;
}
#endif /* CONFIG_PPC64 */
if (n > 2) {
printk("hmmm, got %d intr cells for %s:", n,
np->full_name);
for (j = 0; j < n; ++j)
printk(" %d", irq[j]);
printk("\n");
}
++intrcount;
}
np->n_intrs = intrcount;
return 0;
}
static int __devinit finish_node(struct device_node *np,
unsigned long *mem_start,
int measure_only)
{
struct device_node *child;
int rc = 0;
rc = finish_node_interrupts(np, mem_start, measure_only);
if (rc)
goto out;
for (child = np->child; child != NULL; child = child->sibling) {
rc = finish_node(child, mem_start, measure_only);
if (rc)
goto out;
}
out:
return rc;
}
static void __init scan_interrupt_controllers(void)
{
struct device_node *np;
int n = 0;
char *name, *ic;
int iclen;
for (np = allnodes; np != NULL; np = np->allnext) {
ic = get_property(np, "interrupt-controller", &iclen);
name = get_property(np, "name", NULL);
/* checking iclen makes sure we don't get a false
match on /chosen.interrupt_controller */
if ((name != NULL
&& strcmp(name, "interrupt-controller") == 0)
|| (ic != NULL && iclen == 0
&& strcmp(name, "AppleKiwi"))) {
if (n == 0)
dflt_interrupt_controller = np;
++n;
}
}
num_interrupt_controllers = n;
}
/**
* finish_device_tree is called once things are running normally
* (i.e. with text and data mapped to the address they were linked at).
* It traverses the device tree and fills in some of the additional,
* fields in each node like {n_}addrs and {n_}intrs, the virt interrupt
* mapping is also initialized at this point.
*/
void __init finish_device_tree(void)
{
unsigned long start, end, size = 0;
DBG(" -> finish_device_tree\n");
#ifdef CONFIG_PPC64
/* Initialize virtual IRQ map */
virt_irq_init();
#endif
scan_interrupt_controllers();
/*
* Finish device-tree (pre-parsing some properties etc...)
* We do this in 2 passes. One with "measure_only" set, which
* will only measure the amount of memory needed, then we can
* allocate that memory, and call finish_node again. However,
* we must be careful as most routines will fail nowadays when
* prom_alloc() returns 0, so we must make sure our first pass
* doesn't start at 0. We pre-initialize size to 16 for that
* reason and then remove those additional 16 bytes
*/
size = 16;
finish_node(allnodes, &size, 1);
size -= 16;
if (0 == size)
end = start = 0;
else
end = start = (unsigned long)__va(lmb_alloc(size, 128));
finish_node(allnodes, &end, 0);
BUG_ON(end != start + size);
DBG(" <- finish_device_tree\n");
}
static inline char *find_flat_dt_string(u32 offset)
{
return ((char *)initial_boot_params) +
@ -1388,27 +971,6 @@ prom_n_size_cells(struct device_node* np)
}
EXPORT_SYMBOL(prom_n_size_cells);
/**
* Work out the sense (active-low level / active-high edge)
* of each interrupt from the device tree.
*/
void __init prom_get_irq_senses(unsigned char *senses, int off, int max)
{
struct device_node *np;
int i, j;
/* default to level-triggered */
memset(senses, IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE, max - off);
for (np = allnodes; np != 0; np = np->allnext) {
for (j = 0; j < np->n_intrs; j++) {
i = np->intrs[j].line;
if (i >= off && i < max)
senses[i-off] = np->intrs[j].sense;
}
}
}
/**
* Construct and return a list of the device_nodes with a given name.
*/
@ -1808,7 +1370,6 @@ static void of_node_release(struct kref *kref)
node->deadprops = NULL;
}
}
kfree(node->intrs);
kfree(node->full_name);
kfree(node->data);
kfree(node);
@ -1881,13 +1442,7 @@ void of_detach_node(const struct device_node *np)
#ifdef CONFIG_PPC_PSERIES
/*
* Fix up the uninitialized fields in a new device node:
* name, type, n_addrs, addrs, n_intrs, intrs, and pci-specific fields
*
* A lot of boot-time code is duplicated here, because functions such
* as finish_node_interrupts, interpret_pci_props, etc. cannot use the
* slab allocator.
*
* This should probably be split up into smaller chunks.
* name, type and pci-specific fields
*/
static int of_finish_dynamic_node(struct device_node *node)
@ -1928,8 +1483,6 @@ static int prom_reconfig_notifier(struct notifier_block *nb,
switch (action) {
case PSERIES_RECONFIG_ADD:
err = of_finish_dynamic_node(node);
if (!err)
finish_node(node, NULL, 0);
if (err < 0) {
printk(KERN_ERR "finish_node returned %d\n", err);
err = NOTIFY_BAD;
@ -1975,8 +1528,7 @@ struct property *of_find_property(struct device_node *np, const char *name,
* Find a property with a given name for a given node
* and return the value.
*/
unsigned char *get_property(struct device_node *np, const char *name,
int *lenp)
void *get_property(struct device_node *np, const char *name, int *lenp)
{
struct property *pp = of_find_property(np,name,lenp);
return pp ? pp->value : NULL;
@ -1990,12 +1990,22 @@ static void __init flatten_device_tree(void)
static void __init fixup_device_tree_maple(void)
{
phandle isa;
u32 rloc = 0x01002000; /* IO space; PCI device = 4 */
u32 isa_ranges[6];
char *name;
isa = call_prom("finddevice", 1, 1, ADDR("/ht@0/isa@4"));
name = "/ht@0/isa@4";
isa = call_prom("finddevice", 1, 1, ADDR(name));
if (!PHANDLE_VALID(isa)) {
name = "/ht@0/isa@6";
isa = call_prom("finddevice", 1, 1, ADDR(name));
rloc = 0x01003000; /* IO space; PCI device = 6 */
}
if (!PHANDLE_VALID(isa))
return;
if (prom_getproplen(isa, "ranges") != 12)
return;
if (prom_getprop(isa, "ranges", isa_ranges, sizeof(isa_ranges))
== PROM_ERROR)
return;
@ -2005,15 +2015,15 @@ static void __init fixup_device_tree_maple(void)
isa_ranges[2] != 0x00010000)
return;
prom_printf("fixing up bogus ISA range on Maple...\n");
prom_printf("Fixing up bogus ISA range on Maple/Apache...\n");
isa_ranges[0] = 0x1;
isa_ranges[1] = 0x0;
isa_ranges[2] = 0x01002000; /* IO space; PCI device = 4 */
isa_ranges[2] = rloc;
isa_ranges[3] = 0x0;
isa_ranges[4] = 0x0;
isa_ranges[5] = 0x00010000;
prom_setprop(isa, "/ht@0/isa@4", "ranges",
prom_setprop(isa, name, "ranges",
isa_ranges, sizeof(isa_ranges));
}
#else
@ -38,14 +38,6 @@ static void of_dump_addr(const char *s, u32 *addr, int na)
static void of_dump_addr(const char *s, u32 *addr, int na) { }
#endif
/* Read a big address */
static inline u64 of_read_addr(u32 *cell, int size)
{
u64 r = 0;
while (size--)
r = (r << 32) | *(cell++);
return r;
}
/* Callbacks for bus specific translators */
struct of_bus {
@ -77,9 +69,9 @@ static u64 of_bus_default_map(u32 *addr, u32 *range, int na, int ns, int pna)
{
u64 cp, s, da;
cp = of_read_addr(range, na);
s = of_read_addr(range + na + pna, ns);
da = of_read_addr(addr, na);
cp = of_read_number(range, na);
s = of_read_number(range + na + pna, ns);
da = of_read_number(addr, na);
DBG("OF: default map, cp="PRu64", s="PRu64", da="PRu64"\n",
cp, s, da);
@ -91,7 +83,7 @@ static u64 of_bus_default_map(u32 *addr, u32 *range, int na, int ns, int pna)
static int of_bus_default_translate(u32 *addr, u64 offset, int na)
{
u64 a = of_read_addr(addr, na);
u64 a = of_read_number(addr, na);
memset(addr, 0, na * 4);
a += offset;
if (na > 1)
@ -135,9 +127,9 @@ static u64 of_bus_pci_map(u32 *addr, u32 *range, int na, int ns, int pna)
return OF_BAD_ADDR;
/* Read address values, skipping high cell */
cp = of_read_addr(range + 1, na - 1);
s = of_read_addr(range + na + pna, ns);
da = of_read_addr(addr + 1, na - 1);
cp = of_read_number(range + 1, na - 1);
s = of_read_number(range + na + pna, ns);
da = of_read_number(addr + 1, na - 1);
DBG("OF: PCI map, cp="PRu64", s="PRu64", da="PRu64"\n", cp, s, da);
@ -195,9 +187,9 @@ static u64 of_bus_isa_map(u32 *addr, u32 *range, int na, int ns, int pna)
return OF_BAD_ADDR;
/* Read address values, skipping high cell */
cp = of_read_addr(range + 1, na - 1);
s = of_read_addr(range + na + pna, ns);
da = of_read_addr(addr + 1, na - 1);
cp = of_read_number(range + 1, na - 1);
s = of_read_number(range + na + pna, ns);
da = of_read_number(addr + 1, na - 1);
DBG("OF: ISA map, cp="PRu64", s="PRu64", da="PRu64"\n", cp, s, da);
@ -295,7 +287,7 @@ static int of_translate_one(struct device_node *parent, struct of_bus *bus,
*/
ranges = (u32 *)get_property(parent, "ranges", &rlen);
if (ranges == NULL || rlen == 0) {
offset = of_read_addr(addr, na);
offset = of_read_number(addr, na);
memset(addr, 0, pna * 4);
DBG("OF: no ranges, 1:1 translation\n");
goto finish;
@ -378,7 +370,7 @@ u64 of_translate_address(struct device_node *dev, u32 *in_addr)
/* If root, we have finished */
if (parent == NULL) {
DBG("OF: reached root node\n");
result = of_read_addr(addr, na);
result = of_read_number(addr, na);
break;
}
@ -442,7 +434,7 @@ u32 *of_get_address(struct device_node *dev, int index, u64 *size,
for (i = 0; psize >= onesize; psize -= onesize, prop += onesize, i++)
if (i == index) {
if (size)
*size = of_read_addr(prop + na, ns);
*size = of_read_number(prop + na, ns);
if (flags)
*flags = bus->get_flags(prop);
return prop;
@ -484,7 +476,7 @@ u32 *of_get_pci_address(struct device_node *dev, int bar_no, u64 *size,
for (i = 0; psize >= onesize; psize -= onesize, prop += onesize, i++)
if ((prop[0] & 0xff) == ((bar_no * 4) + PCI_BASE_ADDRESS_0)) {
if (size)
*size = of_read_addr(prop + na, ns);
*size = of_read_number(prop + na, ns);
if (flags)
*flags = bus->get_flags(prop);
return prop;
@ -565,11 +557,414 @@ void of_parse_dma_window(struct device_node *dn, unsigned char *dma_window_prop,
prop = get_property(dn, "#address-cells", NULL);
cells = prop ? *(u32 *)prop : prom_n_addr_cells(dn);
*phys = of_read_addr(dma_window, cells);
*phys = of_read_number(dma_window, cells);
dma_window += cells;
prop = get_property(dn, "ibm,#dma-size-cells", NULL);
cells = prop ? *(u32 *)prop : prom_n_size_cells(dn);
*size = of_read_addr(dma_window, cells);
*size = of_read_number(dma_window, cells);
}
/*
* Interrupt remapper
*/
static unsigned int of_irq_workarounds;
static struct device_node *of_irq_dflt_pic;
static struct device_node *of_irq_find_parent(struct device_node *child)
{
struct device_node *p;
phandle *parp;
if (!of_node_get(child))
return NULL;
do {
parp = (phandle *)get_property(child, "interrupt-parent", NULL);
if (parp == NULL)
p = of_get_parent(child);
else {
if (of_irq_workarounds & OF_IMAP_NO_PHANDLE)
p = of_node_get(of_irq_dflt_pic);
else
p = of_find_node_by_phandle(*parp);
}
of_node_put(child);
child = p;
} while (p && get_property(p, "#interrupt-cells", NULL) == NULL);
return p;
}
static u8 of_irq_pci_swizzle(u8 slot, u8 pin)
{
return (((pin - 1) + slot) % 4) + 1;
}
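/*
 * Worked example for the swizzle above: a device in slot 3 raising
 * INTA (pin 1) shows up on the parent bus as ((1 - 1) + 3) % 4 + 1 = 4,
 * i.e. INTD, which is the standard PCI-to-PCI bridge swizzling rule.
 */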
/* This doesn't need to be called if you don't have any special workaround
* flags to pass
*/
void of_irq_map_init(unsigned int flags)
{
of_irq_workarounds = flags;
/* OldWorld, don't bother looking at other things */
if (flags & OF_IMAP_OLDWORLD_MAC)
return;
/* If we don't have phandles, let's try to locate a default interrupt
* controller (happens when booting with BootX). We take the first
* match here; hopefully that only ever happens on machines with a
* single controller.
*/
if (flags & OF_IMAP_NO_PHANDLE) {
struct device_node *np;
for (np = NULL; (np = of_find_all_nodes(np)) != NULL;) {
if (get_property(np, "interrupt-controller", NULL)
== NULL)
continue;
/* Skip /chosen/interrupt-controller */
if (strcmp(np->name, "chosen") == 0)
continue;
/* It seems like at least one person on this planet wants
* to use BootX on a machine with an AppleKiwi controller
* which happens to pretend to be an interrupt
* controller too.
*/
if (strcmp(np->name, "AppleKiwi") == 0)
continue;
/* I think we found one ! */
of_irq_dflt_pic = np;
break;
}
}
}
int of_irq_map_raw(struct device_node *parent, u32 *intspec, u32 *addr,
struct of_irq *out_irq)
{
struct device_node *ipar, *tnode, *old = NULL, *newpar = NULL;
u32 *tmp, *imap, *imask;
u32 intsize = 1, addrsize, newintsize = 0, newaddrsize = 0;
int imaplen, match, i;
ipar = of_node_get(parent);
/* First get the #interrupt-cells property of the current cursor
* that tells us how to interpret the passed-in intspec. If there
* is none, we are nice and just walk up the tree
*/
do {
tmp = (u32 *)get_property(ipar, "#interrupt-cells", NULL);
if (tmp != NULL) {
intsize = *tmp;
break;
}
tnode = ipar;
ipar = of_irq_find_parent(ipar);
of_node_put(tnode);
} while (ipar);
if (ipar == NULL) {
DBG(" -> no parent found !\n");
goto fail;
}
DBG("of_irq_map_raw: ipar=%s, size=%d\n", ipar->full_name, intsize);
/* Look for this #address-cells. We have to implement the old linux
* trick of looking for the parent here as some device-trees rely on it
*/
old = of_node_get(ipar);
do {
tmp = (u32 *)get_property(old, "#address-cells", NULL);
tnode = of_get_parent(old);
of_node_put(old);
old = tnode;
} while (old && tmp == NULL);
of_node_put(old);
old = NULL;
addrsize = (tmp == NULL) ? 2 : *tmp;
DBG(" -> addrsize=%d\n", addrsize);
/* Now start the actual "proper" walk of the interrupt tree */
while (ipar != NULL) {
/* Now check if cursor is an interrupt-controller and if it is
* then we are done
*/
if (get_property(ipar, "interrupt-controller", NULL) != NULL) {
DBG(" -> got it !\n");
memcpy(out_irq->specifier, intspec,
intsize * sizeof(u32));
out_irq->size = intsize;
out_irq->controller = ipar;
of_node_put(old);
return 0;
}
/* Now look for an interrupt-map */
imap = (u32 *)get_property(ipar, "interrupt-map", &imaplen);
/* No interrupt map, check for an interrupt parent */
if (imap == NULL) {
DBG(" -> no map, getting parent\n");
newpar = of_irq_find_parent(ipar);
goto skiplevel;
}
imaplen /= sizeof(u32);
/* Look for a mask */
imask = (u32 *)get_property(ipar, "interrupt-map-mask", NULL);
/* If we were passed no "reg" property and we attempt to parse
* an interrupt-map, then #address-cells must be 0.
* Fail if it's not.
*/
if (addr == NULL && addrsize != 0) {
DBG(" -> no reg passed in when needed !\n");
goto fail;
}
/* Parse interrupt-map */
match = 0;
while (imaplen > (addrsize + intsize + 1) && !match) {
/* Compare specifiers */
match = 1;
for (i = 0; i < addrsize && match; ++i) {
u32 mask = imask ? imask[i] : 0xffffffffu;
match = ((addr[i] ^ imap[i]) & mask) == 0;
}
for (; i < (addrsize + intsize) && match; ++i) {
u32 mask = imask ? imask[i] : 0xffffffffu;
match =
((intspec[i-addrsize] ^ imap[i]) & mask) == 0;
}
imap += addrsize + intsize;
imaplen -= addrsize + intsize;
DBG(" -> match=%d (imaplen=%d)\n", match, imaplen);
/* Get the interrupt parent */
if (of_irq_workarounds & OF_IMAP_NO_PHANDLE)
newpar = of_node_get(of_irq_dflt_pic);
else
newpar = of_find_node_by_phandle((phandle)*imap);
imap++;
--imaplen;
/* Check if not found */
if (newpar == NULL) {
DBG(" -> imap parent not found !\n");
goto fail;
}
/* Get #interrupt-cells and #address-cells of new
* parent
*/
tmp = (u32 *)get_property(newpar, "#interrupt-cells",
NULL);
if (tmp == NULL) {
DBG(" -> parent lacks #interrupt-cells !\n");
goto fail;
}
newintsize = *tmp;
tmp = (u32 *)get_property(newpar, "#address-cells",
NULL);
newaddrsize = (tmp == NULL) ? 0 : *tmp;
DBG(" -> newintsize=%d, newaddrsize=%d\n",
newintsize, newaddrsize);
/* Check for malformed properties */
if (imaplen < (newaddrsize + newintsize))
goto fail;
imap += newaddrsize + newintsize;
imaplen -= newaddrsize + newintsize;
DBG(" -> imaplen=%d\n", imaplen);
}
if (!match)
goto fail;
of_node_put(old);
old = of_node_get(newpar);
addrsize = newaddrsize;
intsize = newintsize;
intspec = imap - intsize;
addr = intspec - addrsize;
skiplevel:
/* Iterate again with new parent */
DBG(" -> new parent: %s\n", newpar ? newpar->full_name : "<>");
of_node_put(ipar);
ipar = newpar;
newpar = NULL;
}
fail:
of_node_put(ipar);
of_node_put(old);
of_node_put(newpar);
return -EINVAL;
}
EXPORT_SYMBOL_GPL(of_irq_map_raw);
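/*
 * Schematic example of the matching loop above, assuming the child bus
 * has #address-cells = 1 and #interrupt-cells = 1: an interrupt-map
 * entry of the form
 *	<0x0800  1   &pic  2 1>
 * matches a child whose masked "reg" is 0x0800 asserting intspec <1>,
 * and restarts the walk at the "pic" node (with #interrupt-cells = 2)
 * using <2 1> as the new specifier.
 */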
#if defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)
static int of_irq_map_oldworld(struct device_node *device, int index,
struct of_irq *out_irq)
{
u32 *ints;
int intlen;
/*
* Old machines just have a list of interrupt numbers
* and no interrupt-controller nodes.
*/
ints = (u32 *) get_property(device, "AAPL,interrupts", &intlen);
if (ints == NULL)
return -EINVAL;
intlen /= sizeof(u32);
if (index >= intlen)
return -EINVAL;
out_irq->controller = NULL;
out_irq->specifier[0] = ints[index];
out_irq->size = 1;
return 0;
}
#else /* defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32) */
static int of_irq_map_oldworld(struct device_node *device, int index,
struct of_irq *out_irq)
{
return -EINVAL;
}
#endif /* !(defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)) */
int of_irq_map_one(struct device_node *device, int index, struct of_irq *out_irq)
{
struct device_node *p;
u32 *intspec, *tmp, intsize, intlen, *addr;
int res;
DBG("of_irq_map_one: dev=%s, index=%d\n", device->full_name, index);
/* OldWorld mac stuff is "special", handle out of line */
if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
return of_irq_map_oldworld(device, index, out_irq);
/* Get the interrupts property */
intspec = (u32 *)get_property(device, "interrupts", &intlen);
if (intspec == NULL)
return -EINVAL;
intlen /= sizeof(u32);
/* Get the reg property (if any) */
addr = (u32 *)get_property(device, "reg", NULL);
/* Look for the interrupt parent. */
p = of_irq_find_parent(device);
if (p == NULL)
return -EINVAL;
/* Get size of interrupt specifier */
tmp = (u32 *)get_property(p, "#interrupt-cells", NULL);
if (tmp == NULL) {
of_node_put(p);
return -EINVAL;
}
intsize = *tmp;
/* Check index */
if (index * intsize >= intlen) {
of_node_put(p);
return -EINVAL;
}
/* Get new specifier and map it */
res = of_irq_map_raw(p, intspec + index * intsize, addr, out_irq);
of_node_put(p);
return res;
}
EXPORT_SYMBOL_GPL(of_irq_map_one);
int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq)
{
struct device_node *dn, *ppnode;
struct pci_dev *ppdev;
u32 lspec;
u32 laddr[3];
u8 pin;
int rc;
/* Check if we have a device node; if yes, fall back to standard OF
* parsing
*/
dn = pci_device_to_OF_node(pdev);
if (dn)
return of_irq_map_one(dn, 0, out_irq);
/* OK, we don't have one; time to have fun. Let's start by building up
* an interrupt spec. We assume #interrupt-cells is 1, which is standard
* for PCI. If you do differently, then don't use this routine.
*/
rc = pci_read_config_byte(pdev, PCI_INTERRUPT_PIN, &pin);
if (rc != 0)
return rc;
/* No pin, exit */
if (pin == 0)
return -ENODEV;
/* Now we walk up the PCI tree */
lspec = pin;
for (;;) {
/* Get the pci_dev of our parent */
ppdev = pdev->bus->self;
/* Ouch, it's a host bridge... */
if (ppdev == NULL) {
#ifdef CONFIG_PPC64
ppnode = pci_bus_to_OF_node(pdev->bus);
#else
struct pci_controller *host;
host = pci_bus_to_host(pdev->bus);
ppnode = host ? host->arch_data : NULL;
#endif
/* No node for the host bridge? Give up */
if (ppnode == NULL)
return -EINVAL;
} else
/* We found a P2P bridge, check if it has a node */
ppnode = pci_device_to_OF_node(ppdev);
/* Ok, we have found a parent with a device-node, hand over to
* the OF parsing code.
* We build a unit address from the linux device to be used for
* resolution. Note that we use the linux bus number which may
* not match your firmware bus numbering.
* Fortunately, in most cases, interrupt-map-mask doesn't include
* the bus number as part of the matching.
* You should still be careful about that though, if you intend
* to rely on this function (i.e. you ship firmware that doesn't
* create device nodes for all PCI devices).
*/
if (ppnode)
break;
/* We can only get here if we hit a P2P bridge with no node,
* let's do standard swizzling and try again
*/
lspec = of_irq_pci_swizzle(PCI_SLOT(pdev->devfn), lspec);
pdev = ppdev;
}
laddr[0] = (pdev->bus->number << 16)
| (pdev->devfn << 8);
laddr[1] = laddr[2] = 0;
return of_irq_map_raw(ppnode, &lspec, laddr, out_irq);
}
EXPORT_SYMBOL_GPL(of_irq_map_pci);
@ -297,19 +297,9 @@ unsigned long __init find_and_init_phbs(void)
struct device_node *node;
struct pci_controller *phb;
unsigned int index;
unsigned int root_size_cells = 0;
unsigned int *opprop = NULL;
struct device_node *root = of_find_node_by_path("/");
if (ppc64_interrupt_controller == IC_OPEN_PIC) {
opprop = (unsigned int *)get_property(root,
"platform-open-pic", NULL);
}
root_size_cells = prom_n_size_cells(root);
index = 0;
for (node = of_get_next_child(root, NULL);
node != NULL;
node = of_get_next_child(root, node)) {
@ -324,13 +314,6 @@ unsigned long __init find_and_init_phbs(void)
setup_phb(node, phb);
pci_process_bridge_OF_ranges(phb, node, 0);
pci_setup_phb_io(phb, index == 0);
#ifdef CONFIG_PPC_PSERIES
/* XXX This code need serious fixing ... --BenH */
if (ppc64_interrupt_controller == IC_OPEN_PIC && pSeries_mpic) {
int addr = root_size_cells * (index + 2) - 1;
mpic_assign_isu(pSeries_mpic, index, opprop[addr]);
}
#endif
index++;
}
@ -51,7 +51,6 @@
extern void bootx_init(unsigned long r4, unsigned long phys);
boot_infos_t *boot_infos;
struct ide_machdep_calls ppc_ide_md;
int boot_cpuid;
@ -240,7 +239,6 @@ void __init setup_arch(char **cmdline_p)
ppc_md.init_early();
find_legacy_serial_ports();
finish_device_tree();
smp_setup_cpu_maps();
@ -361,12 +361,15 @@ void __init setup_system(void)
/*
* Fill the ppc64_caches & systemcfg structures with informations
* retrieved from the device-tree. Need to be called before
* finish_device_tree() since the later requires some of the
* informations filled up here to properly parse the interrupt tree.
* retrieved from the device-tree.
*/
initialize_cache_info();
/*
* Initialize irq remapping subsystem
*/
irq_early_init();
#ifdef CONFIG_PPC_RTAS
/*
* Initialize RTAS if available
@ -393,12 +396,6 @@ void __init setup_system(void)
*/
find_legacy_serial_ports();
/*
* "Finish" the device-tree, that is do the actual parsing of
* some of the properties like the interrupt map
*/
finish_device_tree();
/*
* Initialize xmon
*/
@ -427,8 +424,6 @@ void __init setup_system(void)
printk("-----------------------------------------------------\n");
printk("ppc64_pft_size = 0x%lx\n", ppc64_pft_size);
printk("ppc64_interrupt_controller = 0x%ld\n",
ppc64_interrupt_controller);
printk("physicalMemorySize = 0x%lx\n", lmb_phys_mem_size());
printk("ppc64_caches.dcache_line_size = 0x%x\n",
ppc64_caches.dline_size);
@ -218,7 +218,6 @@ struct vio_dev * __devinit vio_register_device_node(struct device_node *of_node)
{
struct vio_dev *viodev;
unsigned int *unit_address;
unsigned int *irq_p;
/* we need the 'device_type' property, in order to match with drivers */
if (of_node->type == NULL) {
@ -243,16 +242,7 @@ struct vio_dev * __devinit vio_register_device_node(struct device_node *of_node)
viodev->dev.platform_data = of_node_get(of_node);
viodev->irq = NO_IRQ;
irq_p = (unsigned int *)get_property(of_node, "interrupts", NULL);
if (irq_p) {
int virq = virt_irq_create_mapping(*irq_p);
if (virq == NO_IRQ) {
printk(KERN_ERR "Unable to allocate interrupt "
"number for %s\n", of_node->full_name);
} else
viodev->irq = irq_offset_up(virq);
}
viodev->irq = irq_of_parse_and_map(of_node, 0);
snprintf(viodev->dev.bus_id, BUS_ID_SIZE, "%x", *unit_address);
viodev->name = of_node->name;
@ -16,12 +16,21 @@ config MPC834x_SYS
3 PCI slots. The PIB's PCI initialization is the bootloader's
responsibility.
config MPC834x_ITX
bool "Freescale MPC834x ITX"
select DEFAULT_UIMAGE
help
This option enables support for the MPC 834x ITX evaluation board.
Be aware that PCI initialization is the bootloader's
responsibility.
endchoice
config MPC834x
bool
select PPC_UDBG_16550
select PPC_INDIRECT_PCI
default y if MPC834x_SYS
default y if MPC834x_SYS || MPC834x_ITX
endmenu
@ -4,3 +4,4 @@
obj-y := misc.o
obj-$(CONFIG_PCI) += pci.o
obj-$(CONFIG_MPC834x_SYS) += mpc834x_sys.o
obj-$(CONFIG_MPC834x_ITX) += mpc834x_itx.o
@ -0,0 +1,156 @@
/*
* arch/powerpc/platforms/83xx/mpc834x_itx.c
*
* MPC834x ITX board specific routines
*
* Maintainer: Kumar Gala <galak@kernel.crashing.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*/
#include <linux/config.h>
#include <linux/stddef.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/reboot.h>
#include <linux/pci.h>
#include <linux/kdev_t.h>
#include <linux/major.h>
#include <linux/console.h>
#include <linux/delay.h>
#include <linux/seq_file.h>
#include <linux/root_dev.h>
#include <asm/system.h>
#include <asm/atomic.h>
#include <asm/time.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/ipic.h>
#include <asm/bootinfo.h>
#include <asm/irq.h>
#include <asm/prom.h>
#include <asm/udbg.h>
#include <sysdev/fsl_soc.h>
#include "mpc83xx.h"
#include <platforms/83xx/mpc834x_sys.h>
#ifndef CONFIG_PCI
unsigned long isa_io_base = 0;
unsigned long isa_mem_base = 0;
#endif
#ifdef CONFIG_PCI
static int
mpc83xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
{
static char pci_irq_table[][4] =
/*
* PCI IDSEL/INTPIN->INTLINE
* A B C D
*/
{
{PIRQB, PIRQC, PIRQD, PIRQA}, /* idsel 0x0e */
{PIRQA, PIRQB, PIRQC, PIRQD}, /* idsel 0x0f */
{PIRQC, PIRQD, PIRQA, PIRQB}, /* idsel 0x10 */
};
const long min_idsel = 0x0e, max_idsel = 0x10, irqs_per_slot = 4;
return PCI_IRQ_TABLE_LOOKUP;
}
#endif /* CONFIG_PCI */
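/*
 * Worked example for the table above: PCI_IRQ_TABLE_LOOKUP indexes
 * pci_irq_table[idsel - min_idsel][pin - 1], so idsel 0x0f with pin B
 * (2) yields pci_irq_table[1][1] = PIRQB, i.e. MPC83xx_IRQ_EXT5 on
 * this board (see mpc834x_itx.h further down).
 */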
/* ************************************************************************
*
* Setup the architecture
*
*/
static void __init mpc834x_itx_setup_arch(void)
{
struct device_node *np;
if (ppc_md.progress)
ppc_md.progress("mpc834x_itx_setup_arch()", 0);
np = of_find_node_by_type(NULL, "cpu");
if (np != 0) {
unsigned int *fp =
(int *)get_property(np, "clock-frequency", NULL);
if (fp != 0)
loops_per_jiffy = *fp / HZ;
else
loops_per_jiffy = 50000000 / HZ;
of_node_put(np);
}
#ifdef CONFIG_PCI
for (np = NULL; (np = of_find_node_by_type(np, "pci")) != NULL;)
add_bridge(np);
ppc_md.pci_swizzle = common_swizzle;
ppc_md.pci_map_irq = mpc83xx_map_irq;
ppc_md.pci_exclude_device = mpc83xx_exclude_device;
#endif
#ifdef CONFIG_ROOT_NFS
ROOT_DEV = Root_NFS;
#else
ROOT_DEV = Root_HDA1;
#endif
}
void __init mpc834x_itx_init_IRQ(void)
{
u8 senses[8] = {
0, /* EXT 0 */
IRQ_SENSE_LEVEL, /* EXT 1 */
IRQ_SENSE_LEVEL, /* EXT 2 */
0, /* EXT 3 */
#ifdef CONFIG_PCI
IRQ_SENSE_LEVEL, /* EXT 4 */
IRQ_SENSE_LEVEL, /* EXT 5 */
IRQ_SENSE_LEVEL, /* EXT 6 */
IRQ_SENSE_LEVEL, /* EXT 7 */
#else
0, /* EXT 4 */
0, /* EXT 5 */
0, /* EXT 6 */
0, /* EXT 7 */
#endif
};
ipic_init(get_immrbase() + 0x00700, 0, 0, senses, 8);
/* Initialize the default interrupt mapping priorities,
* in case the boot rom changed something on us.
*/
ipic_set_default_priority();
}
/*
* Called very early, MMU is off, device-tree isn't unflattened
*/
static int __init mpc834x_itx_probe(void)
{
/* We always match for now, eventually we should look at the flat
dev tree to ensure this is the board we are supposed to run on
*/
return 1;
}
define_machine(mpc834x_itx) {
.name = "MPC834x ITX",
.probe = mpc834x_itx_probe,
.setup_arch = mpc834x_itx_setup_arch,
.init_IRQ = mpc834x_itx_init_IRQ,
.get_irq = ipic_get_irq,
.restart = mpc83xx_restart,
.time_init = mpc83xx_time_init,
.calibrate_decr = generic_calibrate_decr,
.progress = udbg_progress,
};
@ -0,0 +1,23 @@
/*
* arch/powerpc/platforms/83xx/mpc834x_itx.h
*
* MPC834X ITX common board definitions
*
* Maintainer: Kumar Gala <galak@kernel.crashing.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*/
#ifndef __MACH_MPC83XX_ITX_H__
#define __MACH_MPC83XX_ITX_H__
#define PIRQA MPC83xx_IRQ_EXT4
#define PIRQB MPC83xx_IRQ_EXT5
#define PIRQC MPC83xx_IRQ_EXT6
#define PIRQD MPC83xx_IRQ_EXT7
#endif /* __MACH_MPC83XX_ITX_H__ */
@ -1,6 +1,9 @@
/*
* Cell Internal Interrupt Controller
*
* Copyright (C) 2006 Benjamin Herrenschmidt (benh@kernel.crashing.org)
* IBM, Corp.
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
*
* Author: Arnd Bergmann <arndb@de.ibm.com>
@ -25,11 +28,13 @@
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/ioport.h>
#include <asm/io.h>
#include <asm/pgtable.h>
#include <asm/prom.h>
#include <asm/ptrace.h>
#include <asm/machdep.h>
#include "interrupt.h"
#include "cbe_regs.h"
@ -37,231 +42,65 @@
struct iic {
struct cbe_iic_thread_regs __iomem *regs;
u8 target_id;
u8 eoi_stack[16];
int eoi_ptr;
struct irq_host *host;
};
static DEFINE_PER_CPU(struct iic, iic);
#define IIC_NODE_COUNT 2
static struct irq_host *iic_hosts[IIC_NODE_COUNT];
void iic_local_enable(void)
/* Convert between "pending" bits and hw irq number */
static irq_hw_number_t iic_pending_to_hwnum(struct cbe_iic_pending_bits bits)
{
unsigned char unit = bits.source & 0xf;
if (bits.flags & CBE_IIC_IRQ_IPI)
return IIC_IRQ_IPI0 | (bits.prio >> 4);
else if (bits.class <= 3)
return (bits.class << 4) | unit;
else
return IIC_IRQ_INVALID;
}
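/*
 * Examples of the encoding above: an IPI pending at priority 0x50
 * yields IIC_IRQ_IPI0 | 5, while a class-2 external source with unit
 * nibble 0xb encodes as (2 << 4) | 0xb = 0x2b.
 */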
static void iic_mask(unsigned int irq)
{
}
static void iic_unmask(unsigned int irq)
{
}
static void iic_eoi(unsigned int irq)
{
struct iic *iic = &__get_cpu_var(iic);
u64 tmp;
/*
* There seems to be a bug that is present in DD2.x CPUs
* and still only partially fixed in DD3.1.
* This bug causes a value written to the priority register
* not to make it there, resulting in a system hang unless we
* write it again.
* Masking with 0xf0 is done because the Cell BE does not
* implement the lower four bits of the interrupt priority,
* they always read back as zeroes, although future CPUs
* might implement different bits.
*/
do {
out_be64(&iic->regs->prio, 0xff);
tmp = in_be64(&iic->regs->prio);
} while ((tmp & 0xf0) != 0xf0);
out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]);
BUG_ON(iic->eoi_ptr < 0);
}
void iic_local_disable(void)
{
out_be64(&__get_cpu_var(iic).regs->prio, 0x0);
}
static unsigned int iic_startup(unsigned int irq)
{
return 0;
}
static void iic_enable(unsigned int irq)
{
iic_local_enable();
}
static void iic_disable(unsigned int irq)
{
}
static void iic_end(unsigned int irq)
{
iic_local_enable();
}
static struct hw_interrupt_type iic_pic = {
static struct irq_chip iic_chip = {
.typename = " CELL-IIC ",
.startup = iic_startup,
.enable = iic_enable,
.disable = iic_disable,
.end = iic_end,
.mask = iic_mask,
.unmask = iic_unmask,
.eoi = iic_eoi,
};
static int iic_external_get_irq(struct cbe_iic_pending_bits pending)
{
int irq;
unsigned char node, unit;
node = pending.source >> 4;
unit = pending.source & 0xf;
irq = -1;
/*
* This mapping is specific to the Cell Broadband
* Engine. We might need to get the numbers
* from the device tree to support future CPUs.
*/
switch (unit) {
case 0x00:
case 0x0b:
/*
* One of these units can be connected
* to an external interrupt controller.
*/
if (pending.class != 2)
break;
irq = IIC_EXT_OFFSET
+ spider_get_irq(node)
+ node * IIC_NODE_STRIDE;
break;
case 0x01 ... 0x04:
case 0x07 ... 0x0a:
/*
* These units are connected to the SPEs
*/
if (pending.class > 2)
break;
irq = IIC_SPE_OFFSET
+ pending.class * IIC_CLASS_STRIDE
+ node * IIC_NODE_STRIDE
+ unit;
break;
}
if (irq == -1)
printk(KERN_WARNING "Unexpected interrupt class %02x, "
"source %02x, prio %02x, cpu %02x\n", pending.class,
pending.source, pending.prio, smp_processor_id());
return irq;
}
/* Get an IRQ number from the pending state register of the IIC */
int iic_get_irq(struct pt_regs *regs)
static unsigned int iic_get_irq(struct pt_regs *regs)
{
struct iic *iic;
int irq;
struct cbe_iic_pending_bits pending;
struct cbe_iic_pending_bits pending;
struct iic *iic;
iic = &__get_cpu_var(iic);
*(unsigned long *) &pending =
in_be64((unsigned long __iomem *) &iic->regs->pending_destr);
irq = -1;
if (pending.flags & CBE_IIC_IRQ_VALID) {
if (pending.flags & CBE_IIC_IRQ_IPI) {
irq = IIC_IPI_OFFSET + (pending.prio >> 4);
/*
if (irq > 0x80)
printk(KERN_WARNING "Unexpected IPI prio %02x"
"on CPU %02x\n", pending.prio,
smp_processor_id());
*/
} else {
irq = iic_external_get_irq(pending);
}
}
return irq;
}
/* hardcoded part to be compatible with older firmware */
static int setup_iic_hardcoded(void)
{
struct device_node *np;
int nodeid, cpu;
unsigned long regs;
struct iic *iic;
for_each_possible_cpu(cpu) {
iic = &per_cpu(iic, cpu);
nodeid = cpu/2;
for (np = of_find_node_by_type(NULL, "cpu");
np;
np = of_find_node_by_type(np, "cpu")) {
if (nodeid == *(int *)get_property(np, "node-id", NULL))
break;
}
if (!np) {
printk(KERN_WARNING "IIC: CPU %d not found\n", cpu);
iic->regs = NULL;
iic->target_id = 0xff;
return -ENODEV;
}
regs = *(long *)get_property(np, "iic", NULL);
/* hack until we have decided on the devtree info */
regs += 0x400;
if (cpu & 1)
regs += 0x20;
printk(KERN_INFO "IIC for CPU %d at %lx\n", cpu, regs);
iic->regs = ioremap(regs, sizeof(struct cbe_iic_thread_regs));
iic->target_id = (nodeid << 4) + ((cpu & 1) ? 0xf : 0xe);
}
return 0;
}
static int setup_iic(void)
{
struct device_node *dn;
unsigned long *regs;
char *compatible;
unsigned *np, found = 0;
struct iic *iic = NULL;
for (dn = NULL; (dn = of_find_node_by_name(dn, "interrupt-controller"));) {
compatible = (char *)get_property(dn, "compatible", NULL);
if (!compatible) {
printk(KERN_WARNING "no compatible property found !\n");
continue;
}
if (strstr(compatible, "IBM,CBEA-Internal-Interrupt-Controller"))
regs = (unsigned long *)get_property(dn,"reg", NULL);
else
continue;
if (!regs)
printk(KERN_WARNING "IIC: no reg property\n");
np = (unsigned int *)get_property(dn, "ibm,interrupt-server-ranges", NULL);
if (!np) {
printk(KERN_WARNING "IIC: CPU association not found\n");
iic->regs = NULL;
iic->target_id = 0xff;
return -ENODEV;
}
iic = &per_cpu(iic, np[0]);
iic->regs = ioremap(regs[0], sizeof(struct cbe_iic_thread_regs));
iic->target_id = ((np[0] & 2) << 3) + ((np[0] & 1) ? 0xf : 0xe);
printk("IIC for CPU %d at %lx mapped to %p\n", np[0], regs[0], iic->regs);
iic = &per_cpu(iic, np[1]);
iic->regs = ioremap(regs[2], sizeof(struct cbe_iic_thread_regs));
iic->target_id = ((np[1] & 2) << 3) + ((np[1] & 1) ? 0xf : 0xe);
printk("IIC for CPU %d at %lx mapped to %p\n", np[1], regs[2], iic->regs);
found++;
}
if (found)
return 0;
else
return -ENODEV;
iic = &__get_cpu_var(iic);
*(unsigned long *) &pending =
in_be64((unsigned long __iomem *) &iic->regs->pending_destr);
iic->eoi_stack[++iic->eoi_ptr] = pending.prio;
BUG_ON(iic->eoi_ptr > 15);
if (pending.flags & CBE_IIC_IRQ_VALID)
return irq_linear_revmap(iic->host,
iic_pending_to_hwnum(pending));
return NO_IRQ;
}
#ifdef CONFIG_SMP
@ -269,12 +108,12 @@ static int setup_iic(void)
/* Use the highest interrupt priorities for IPI */
static inline int iic_ipi_to_irq(int ipi)
{
return IIC_IPI_OFFSET + IIC_NUM_IPIS - 1 - ipi;
return IIC_IRQ_IPI0 + IIC_NUM_IPIS - 1 - ipi;
}
static inline int iic_irq_to_ipi(int irq)
{
return IIC_NUM_IPIS - 1 - (irq - IIC_IPI_OFFSET);
return IIC_NUM_IPIS - 1 - (irq - IIC_IRQ_IPI0);
}
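A quick sanity check of the replacement mapping (IIC_IRQ_IPI0 = 0x40 and IIC_NUM_IPIS = 0x10, per the header hunk below): IPI 0 gets the highest hardware number, 0x4f, and the two helpers invert each other. A standalone check:

#include <assert.h>

enum { IIC_IRQ_IPI0 = 0x40, IIC_NUM_IPIS = 0x10 };

static int iic_ipi_to_irq(int ipi)
{
	return IIC_IRQ_IPI0 + IIC_NUM_IPIS - 1 - ipi;
}

static int iic_irq_to_ipi(int irq)
{
	return IIC_NUM_IPIS - 1 - (irq - IIC_IRQ_IPI0);
}

int main(void)
{
	int ipi;

	assert(iic_ipi_to_irq(0) == 0x4f);	/* highest priority for IPI 0 */
	for (ipi = 0; ipi < IIC_NUM_IPIS; ipi++)
		assert(iic_irq_to_ipi(iic_ipi_to_irq(ipi)) == ipi);
	return 0;
}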
void iic_setup_cpu(void)
@ -293,22 +132,51 @@ u8 iic_get_target_id(int cpu)
}
EXPORT_SYMBOL_GPL(iic_get_target_id);
struct irq_host *iic_get_irq_host(int node)
{
if (node < 0 || node >= IIC_NODE_COUNT)
return NULL;
return iic_hosts[node];
}
EXPORT_SYMBOL_GPL(iic_get_irq_host);
static irqreturn_t iic_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
{
smp_message_recv(iic_irq_to_ipi(irq), regs);
int ipi = (int)(long)dev_id;
smp_message_recv(ipi, regs);
return IRQ_HANDLED;
}
static void iic_request_ipi(int ipi, const char *name)
{
int irq;
int node, virq;
irq = iic_ipi_to_irq(ipi);
/* IPIs are marked IRQF_DISABLED as they must run with irqs
* disabled */
get_irq_desc(irq)->chip = &iic_pic;
get_irq_desc(irq)->status |= IRQ_PER_CPU;
request_irq(irq, iic_ipi_action, IRQF_DISABLED, name, NULL);
for (node = 0; node < IIC_NODE_COUNT; node++) {
char *rname;
if (iic_hosts[node] == NULL)
continue;
virq = irq_create_mapping(iic_hosts[node],
iic_ipi_to_irq(ipi), 0);
if (virq == NO_IRQ) {
printk(KERN_ERR
"iic: failed to map IPI %s on node %d\n",
name, node);
continue;
}
rname = kzalloc(strlen(name) + 16, GFP_KERNEL);
if (rname)
sprintf(rname, "%s node %d", name, node);
else
rname = (char *)name;
if (request_irq(virq, iic_ipi_action, IRQF_DISABLED,
rname, (void *)(long)ipi))
printk(KERN_ERR
"iic: failed to request IPI %s on node %d\n",
name, node);
}
}
void iic_request_IPIs(void)
@ -319,34 +187,119 @@ void iic_request_IPIs(void)
iic_request_ipi(PPC_MSG_DEBUGGER_BREAK, "IPI-debug");
#endif /* CONFIG_DEBUGGER */
}
#endif /* CONFIG_SMP */
static void iic_setup_spe_handlers(void)
{
int be, isrc;
/* Assume two threads per BE are present */
for (be=0; be < num_present_cpus() / 2; be++) {
for (isrc = 0; isrc < IIC_CLASS_STRIDE * 3; isrc++) {
int irq = IIC_NODE_STRIDE * be + IIC_SPE_OFFSET + isrc;
get_irq_desc(irq)->chip = &iic_pic;
static int iic_host_match(struct irq_host *h, struct device_node *node)
{
return h->host_data != NULL && node == h->host_data;
}
static int iic_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
if (hw < IIC_IRQ_IPI0)
set_irq_chip_and_handler(virq, &iic_chip, handle_fasteoi_irq);
else
set_irq_chip_and_handler(virq, &iic_chip, handle_percpu_irq);
return 0;
}
static int iic_host_xlate(struct irq_host *h, struct device_node *ct,
u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_flags)
{
/* Currently, we don't translate anything. That needs to be fixed as
* we get better defined device-trees. iic interrupts have to be
* explicitly mapped by whoever needs them
*/
return -ENODEV;
}
static struct irq_host_ops iic_host_ops = {
.match = iic_host_match,
.map = iic_host_map,
.xlate = iic_host_xlate,
};
static void __init init_one_iic(unsigned int hw_cpu, unsigned long addr,
struct irq_host *host)
{
/* XXX FIXME: should locate the linux CPU number from the HW cpu
* number properly. We are lucky for now
*/
struct iic *iic = &per_cpu(iic, hw_cpu);
iic->regs = ioremap(addr, sizeof(struct cbe_iic_thread_regs));
BUG_ON(iic->regs == NULL);
iic->target_id = ((hw_cpu & 2) << 3) | ((hw_cpu & 1) ? 0xf : 0xe);
iic->eoi_stack[0] = 0xff;
iic->host = host;
out_be64(&iic->regs->prio, 0);
printk(KERN_INFO "IIC for CPU %d at %lx mapped to %p, target id 0x%x\n",
hw_cpu, addr, iic->regs, iic->target_id);
}
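The target_id computed in init_one_iic() packs the node into bit 4 and the thread into the low nibble (0xe for thread 0, 0xf for thread 1). A standalone illustration of the values this produces for the four hardware threads of a two-node blade:

#include <stdio.h>

/* Mirrors the target_id computation in init_one_iic(). */
static unsigned int target_id(unsigned int hw_cpu)
{
	return ((hw_cpu & 2) << 3) | ((hw_cpu & 1) ? 0xf : 0xe);
}

int main(void)
{
	unsigned int cpu;

	/* Expected: cpu 0 -> 0x0e, 1 -> 0x0f, 2 -> 0x1e, 3 -> 0x1f */
	for (cpu = 0; cpu < 4; cpu++)
		printf("hw_cpu %u -> target id 0x%02x\n", cpu, target_id(cpu));
	return 0;
}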
static int __init setup_iic(void)
{
struct device_node *dn;
struct resource r0, r1;
struct irq_host *host;
int found = 0;
u32 *np;
for (dn = NULL;
(dn = of_find_node_by_name(dn,"interrupt-controller")) != NULL;) {
if (!device_is_compatible(dn,
"IBM,CBEA-Internal-Interrupt-Controller"))
continue;
np = (u32 *)get_property(dn, "ibm,interrupt-server-ranges",
NULL);
if (np == NULL) {
printk(KERN_WARNING "IIC: CPU association not found\n");
of_node_put(dn);
return -ENODEV;
}
if (of_address_to_resource(dn, 0, &r0) ||
of_address_to_resource(dn, 1, &r1)) {
printk(KERN_WARNING "IIC: Can't resolve addresses\n");
of_node_put(dn);
return -ENODEV;
}
host = NULL;
if (found < IIC_NODE_COUNT) {
host = irq_alloc_host(IRQ_HOST_MAP_LINEAR,
IIC_SOURCE_COUNT,
&iic_host_ops,
IIC_IRQ_INVALID);
iic_hosts[found] = host;
BUG_ON(iic_hosts[found] == NULL);
iic_hosts[found]->host_data = of_node_get(dn);
found++;
}
init_one_iic(np[0], r0.start, host);
init_one_iic(np[1], r1.start, host);
}
if (found)
return 0;
else
return -ENODEV;
}
void iic_init_IRQ(void)
void __init iic_init_IRQ(void)
{
int cpu, irq_offset;
struct iic *iic;
/* Discover and initialize iics */
if (setup_iic() < 0)
setup_iic_hardcoded();
panic("IIC: Failed to initialize !\n");
irq_offset = 0;
for_each_possible_cpu(cpu) {
iic = &per_cpu(iic, cpu);
if (iic->regs)
out_be64(&iic->regs->prio, 0xff);
}
iic_setup_spe_handlers();
/* Set master interrupt handling function */
ppc_md.get_irq = iic_get_irq;
/* Enable on current CPU */
iic_setup_cpu();
}

View File

@ -37,27 +37,24 @@
*/
enum {
IIC_EXT_OFFSET = 0x00, /* Start of south bridge IRQs */
IIC_NUM_EXT = 0x40, /* Number of south bridge IRQs */
IIC_SPE_OFFSET = 0x40, /* Start of SPE interrupts */
IIC_CLASS_STRIDE = 0x10, /* SPE IRQs per class */
IIC_IPI_OFFSET = 0x70, /* Start of IPI IRQs */
IIC_NUM_IPIS = 0x10, /* IRQs reserved for IPI */
IIC_NODE_STRIDE = 0x80, /* Total IRQs per node */
IIC_IRQ_INVALID = 0xff,
IIC_IRQ_MAX = 0x3f,
IIC_IRQ_EXT_IOIF0 = 0x20,
IIC_IRQ_EXT_IOIF1 = 0x2b,
IIC_IRQ_IPI0 = 0x40,
IIC_NUM_IPIS = 0x10, /* IRQs reserved for IPI */
IIC_SOURCE_COUNT = 0x50,
};
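To make the new numbering concrete: hardware numbers below IIC_IRQ_IPI0 appear to carry the interrupt class in bits 5:4 and the unit in the low bits (this encoding is assumed from the "0x20 | unit" and "0x10 | isrc" arithmetic used by the spider cascade and spu_map_interrupts elsewhere in this commit), IPIs occupy 0x40-0x4f, and IIC_SOURCE_COUNT sizes the linear revmap. A hedged sketch:

#include <stdio.h>

enum {
	IIC_IRQ_IPI0     = 0x40,
	IIC_SOURCE_COUNT = 0x50,
};

/* Class-in-bits-5:4 encoding, assumed from its use elsewhere in
 * this commit (spu_map_interrupts(), spider cascade mapping). */
static unsigned int iic_hwirq(unsigned int class, unsigned int unit)
{
	return (class << 4) | (unit & 0xf);
}

int main(void)
{
	printf("SPE isrc 0x7, class 2 -> hwirq 0x%02x\n", iic_hwirq(2, 0x7));
	printf("IPI block starts at 0x%02x, total sources 0x%02x\n",
	       IIC_IRQ_IPI0, IIC_SOURCE_COUNT);
	return 0;
}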
extern void iic_init_IRQ(void);
extern int iic_get_irq(struct pt_regs *regs);
extern void iic_cause_IPI(int cpu, int mesg);
extern void iic_request_IPIs(void);
extern void iic_setup_cpu(void);
extern void iic_local_enable(void);
extern void iic_local_disable(void);
extern u8 iic_get_target_id(int cpu);
extern struct irq_host *iic_get_irq_host(int node);
extern void spider_init_IRQ(void);
extern int spider_get_irq(int node);
#endif
#endif /* ASM_CELL_PIC_H */

View File

@ -49,6 +49,7 @@
#include <asm/irq.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/udbg.h>
#include "interrupt.h"
#include "iommu.h"
@ -79,10 +80,22 @@ static void cell_progress(char *s, unsigned short hex)
printk("*** %04x : %s\n", hex, s ? s : "");
}
static void __init cell_pcibios_fixup(void)
{
struct pci_dev *dev = NULL;
for_each_pci_dev(dev)
pci_read_irq_line(dev);
}
static void __init cell_init_irq(void)
{
iic_init_IRQ();
spider_init_IRQ();
}
static void __init cell_setup_arch(void)
{
ppc_md.init_IRQ = iic_init_IRQ;
ppc_md.get_irq = iic_get_irq;
#ifdef CONFIG_SPU_BASE
spu_priv1_ops = &spu_priv1_mmio_ops;
#endif
@ -108,7 +121,6 @@ static void __init cell_setup_arch(void)
/* Find and initialize PCI host bridges */
init_pci_config_tokens();
find_and_init_phbs();
spider_init_IRQ();
cbe_pervasive_init();
#ifdef CONFIG_DUMMY_CONSOLE
conswitchp = &dummy_con;
@ -126,8 +138,6 @@ static void __init cell_init_early(void)
cell_init_iommu();
ppc64_interrupt_controller = IC_CELL_PIC;
DBG(" <- cell_init_early()\n");
}
@ -173,6 +183,8 @@ define_machine(cell) {
.calibrate_decr = generic_calibrate_decr,
.check_legacy_ioport = cell_check_legacy_ioport,
.progress = cell_progress,
.init_IRQ = cell_init_irq,
.pcibios_fixup = cell_pcibios_fixup,
#ifdef CONFIG_KEXEC
.machine_kexec = default_machine_kexec,
.machine_kexec_prepare = default_machine_kexec_prepare,

View File

@ -22,6 +22,7 @@
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/ioport.h>
#include <asm/pgtable.h>
#include <asm/prom.h>
@ -56,184 +57,313 @@ enum {
REISWAITEN = 0x508, /* Reissue Wait Control */
};
static void __iomem *spider_pics[4];
#define SPIDER_CHIP_COUNT 4
#define SPIDER_SRC_COUNT 64
#define SPIDER_IRQ_INVALID 63
static void __iomem *spider_get_pic(int irq)
struct spider_pic {
struct irq_host *host;
struct device_node *of_node;
void __iomem *regs;
unsigned int node_id;
};
static struct spider_pic spider_pics[SPIDER_CHIP_COUNT];
static struct spider_pic *spider_virq_to_pic(unsigned int virq)
{
int node = irq / IIC_NODE_STRIDE;
irq %= IIC_NODE_STRIDE;
if (irq >= IIC_EXT_OFFSET &&
irq < IIC_EXT_OFFSET + IIC_NUM_EXT &&
spider_pics)
return spider_pics[node];
return NULL;
return irq_map[virq].host->host_data;
}
static int spider_get_nr(unsigned int irq)
static void __iomem *spider_get_irq_config(struct spider_pic *pic,
unsigned int src)
{
return (irq % IIC_NODE_STRIDE) - IIC_EXT_OFFSET;
return pic->regs + TIR_CFGA + 8 * src;
}
static void __iomem *spider_get_irq_config(int irq)
static void spider_unmask_irq(unsigned int virq)
{
void __iomem *pic;
pic = spider_get_pic(irq);
return pic + TIR_CFGA + 8 * spider_get_nr(irq);
struct spider_pic *pic = spider_virq_to_pic(virq);
void __iomem *cfg = spider_get_irq_config(pic, irq_map[virq].hwirq);
/* We use no locking as we should be covered by the descriptor lock
* for access to individual source configuration registers
*/
out_be32(cfg, in_be32(cfg) | 0x30000000u);
}
static void spider_enable_irq(unsigned int irq)
static void spider_mask_irq(unsigned int virq)
{
int nodeid = (irq / IIC_NODE_STRIDE) * 0x10;
void __iomem *cfg = spider_get_irq_config(irq);
irq = spider_get_nr(irq);
out_be32(cfg, (in_be32(cfg) & ~0xf0)| 0x3107000eu | nodeid);
out_be32(cfg + 4, in_be32(cfg + 4) | 0x00020000u | irq);
}
static void spider_disable_irq(unsigned int irq)
{
void __iomem *cfg = spider_get_irq_config(irq);
irq = spider_get_nr(irq);
struct spider_pic *pic = spider_virq_to_pic(virq);
void __iomem *cfg = spider_get_irq_config(pic, irq_map[virq].hwirq);
/* We use no locking as we should be covered by the descriptor lock
* for access to individual source configuration registers
*/
out_be32(cfg, in_be32(cfg) & ~0x30000000u);
}
static unsigned int spider_startup_irq(unsigned int irq)
static void spider_ack_irq(unsigned int virq)
{
spider_enable_irq(irq);
struct spider_pic *pic = spider_virq_to_pic(virq);
unsigned int src = irq_map[virq].hwirq;
/* Reset edge detection logic if necessary
*/
if (get_irq_desc(virq)->status & IRQ_LEVEL)
return;
/* Only interrupts 47 to 50 can be set to edge */
if (src < 47 || src > 50)
return;
/* Perform the clear of the edge logic */
out_be32(pic->regs + TIR_EDC, 0x100 | (src & 0xf));
}
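A side note on the edge-clear write above: only sources 47-50 are edge-capable, and TIR_EDC takes the source's low nibble, so the values written are 0x10f, 0x100, 0x101 and 0x102 respectively. A standalone check of that arithmetic:

#include <stdio.h>

/* Value written to TIR_EDC by spider_ack_irq() for an edge source. */
static unsigned int edc_value(unsigned int src)
{
	return 0x100 | (src & 0xf);
}

int main(void)
{
	unsigned int src;

	/* Only sources 47..50 can be configured for edge detection. */
	for (src = 47; src <= 50; src++)
		printf("src %u -> TIR_EDC write 0x%03x\n", src, edc_value(src));
	return 0;
}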
static struct irq_chip spider_pic = {
.typename = " SPIDER ",
.unmask = spider_unmask_irq,
.mask = spider_mask_irq,
.ack = spider_ack_irq,
};
static int spider_host_match(struct irq_host *h, struct device_node *node)
{
struct spider_pic *pic = h->host_data;
return node == pic->of_node;
}
static int spider_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
unsigned int sense = flags & IRQ_TYPE_SENSE_MASK;
struct spider_pic *pic = h->host_data;
void __iomem *cfg = spider_get_irq_config(pic, hw);
int level = 0;
u32 ic;
/* Note that only level high is supported for most interrupts */
if (sense != IRQ_TYPE_NONE && sense != IRQ_TYPE_LEVEL_HIGH &&
(hw < 47 || hw > 50))
return -EINVAL;
/* Decode sense type */
switch(sense) {
case IRQ_TYPE_EDGE_RISING:
ic = 0x3;
break;
case IRQ_TYPE_EDGE_FALLING:
ic = 0x2;
break;
case IRQ_TYPE_LEVEL_LOW:
ic = 0x0;
level = 1;
break;
case IRQ_TYPE_LEVEL_HIGH:
case IRQ_TYPE_NONE:
ic = 0x1;
level = 1;
break;
default:
return -EINVAL;
}
/* Configure the source. One gross hack that was there before and
* that I've kept around is the priority to the BE which I set to
* be the same as the interrupt source number. I don't know whether
* that's supposed to make any kind of sense; however, we'll have to
* decide that, but for now, I'm not changing the behaviour.
*/
out_be32(cfg, (ic << 24) | (0x7 << 16) | (pic->node_id << 4) | 0xe);
out_be32(cfg + 4, (0x2 << 16) | (hw & 0xff));
if (level)
get_irq_desc(virq)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(virq, &spider_pic, handle_level_irq);
return 0;
}
static void spider_shutdown_irq(unsigned int irq)
static int spider_host_xlate(struct irq_host *h, struct device_node *ct,
u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_flags)
{
spider_disable_irq(irq);
/* Spider interrupts have 2 cells, first is the interrupt source,
* second, well, I don't know for sure yet ... We mask the top bits
* because old device-trees encode a node number in there
*/
*out_hwirq = intspec[0] & 0x3f;
*out_flags = IRQ_TYPE_LEVEL_HIGH;
return 0;
}
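So a two-cell specifier such as <0x6c 0x0> from one of those old device-trees resolves to hardware source 0x2c once the node bits are masked off. A minimal model (hypothetical specifier values):

#include <stdio.h>

/* Models spider_host_xlate(): keep the low 6 bits of the first cell. */
static unsigned long spider_xlate(const unsigned int *intspec)
{
	return intspec[0] & 0x3f;
}

int main(void)
{
	/* Old trees encode a node number in the top bits: 0x6c -> 0x2c */
	unsigned int spec[2] = { 0x6c, 0x0 };

	printf("hwirq = 0x%lx\n", spider_xlate(spec));
	return 0;
}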
static void spider_end_irq(unsigned int irq)
{
spider_enable_irq(irq);
}
static void spider_ack_irq(unsigned int irq)
{
spider_disable_irq(irq);
iic_local_enable();
}
static struct hw_interrupt_type spider_pic = {
.typename = " SPIDER ",
.startup = spider_startup_irq,
.shutdown = spider_shutdown_irq,
.enable = spider_enable_irq,
.disable = spider_disable_irq,
.ack = spider_ack_irq,
.end = spider_end_irq,
static struct irq_host_ops spider_host_ops = {
.match = spider_host_match,
.map = spider_host_map,
.xlate = spider_host_xlate,
};
int spider_get_irq(int node)
static void spider_irq_cascade(unsigned int irq, struct irq_desc *desc,
struct pt_regs *regs)
{
unsigned long cs;
void __iomem *regs = spider_pics[node];
struct spider_pic *pic = desc->handler_data;
unsigned int cs, virq;
cs = in_be32(regs + TIR_CS) >> 24;
if (cs == 63)
return -1;
cs = in_be32(pic->regs + TIR_CS) >> 24;
if (cs == SPIDER_IRQ_INVALID)
virq = NO_IRQ;
else
return cs;
virq = irq_linear_revmap(pic->host, cs);
if (virq != NO_IRQ)
generic_handle_irq(virq, regs);
desc->chip->eoi(irq);
}
/* hardcoded part to be compatible with older firmware */
void spider_init_IRQ_hardcoded(void)
/* For hooking up the cascade we have a problem. Our device-tree is
* crap and we don't know on which BE iic interrupt we are hooked, at
* least not the "standard" way. We can reconstitute it based on two
* pieces of information though: which BE node we are connected to and
* whether we are connected to IOIF0 or IOIF1. Right now, we really only
* care about the IBM cell blade and we know that its firmware gives us
* an interrupt-map property which is pretty strange.
*/
static unsigned int __init spider_find_cascade_and_node(struct spider_pic *pic)
{
int node;
long spiderpic;
long pics[] = { 0x24000008000, 0x34000008000 };
int n;
unsigned int virq;
u32 *imap, *tmp;
int imaplen, intsize, unit;
struct device_node *iic;
struct irq_host *iic_host;
pr_debug("%s(%d): Using hardcoded defaults\n", __FUNCTION__, __LINE__);
for (node = 0; node < num_present_cpus()/2; node++) {
spiderpic = pics[node];
printk(KERN_DEBUG "SPIDER addr: %lx\n", spiderpic);
spider_pics[node] = ioremap(spiderpic, 0x800);
for (n = 0; n < IIC_NUM_EXT; n++) {
int irq = n + IIC_EXT_OFFSET + node * IIC_NODE_STRIDE;
get_irq_desc(irq)->chip = &spider_pic;
}
/* do not mask any interrupts because of level */
out_be32(spider_pics[node] + TIR_MSK, 0x0);
/* disable edge detection clear */
/* out_be32(spider_pics[node] + TIR_EDC, 0x0); */
/* enable interrupt packets to be output */
out_be32(spider_pics[node] + TIR_PIEN,
in_be32(spider_pics[node] + TIR_PIEN) | 0x1);
/* Enable the interrupt detection enable bit. Do this last! */
out_be32(spider_pics[node] + TIR_DEN,
in_be32(spider_pics[node] + TIR_DEN) | 0x1);
#if 0 /* Enable that when we have a way to retrieve the node as well */
/* First, we check whether we have a real "interrupts" property in the
* device tree in case the device-tree is ever fixed
*/
struct of_irq oirq;
if (of_irq_map_one(pic->of_node, 0, &oirq) == 0) {
virq = irq_create_of_mapping(oirq.controller, oirq.specifier,
oirq.size);
goto bail;
}
#endif
/* Now do the horrible hacks */
tmp = (u32 *)get_property(pic->of_node, "#interrupt-cells", NULL);
if (tmp == NULL)
return NO_IRQ;
intsize = *tmp;
imap = (u32 *)get_property(pic->of_node, "interrupt-map", &imaplen);
if (imap == NULL || imaplen < (intsize + 1))
return NO_IRQ;
iic = of_find_node_by_phandle(imap[intsize]);
if (iic == NULL)
return NO_IRQ;
imap += intsize + 1;
tmp = (u32 *)get_property(iic, "#interrupt-cells", NULL);
if (tmp == NULL)
return NO_IRQ;
intsize = *tmp;
/* Assume unit is last entry of interrupt specifier */
unit = imap[intsize - 1];
/* Ok, we have a unit, now let's try to get the node */
tmp = (u32 *)get_property(iic, "ibm,interrupt-server-ranges", NULL);
if (tmp == NULL) {
of_node_put(iic);
return NO_IRQ;
}
/* ugly as hell but works for now */
pic->node_id = (*tmp) >> 1;
of_node_put(iic);
/* Ok, now let's get cracking. You may ask me why I just didn't match
* the iic host from the iic OF node, but that way I'm still compatible
* with really, really old firmware for which we don't have a node
*/
iic_host = iic_get_irq_host(pic->node_id);
if (iic_host == NULL)
return NO_IRQ;
/* Manufacture an IIC interrupt number of class 2 */
virq = irq_create_mapping(iic_host, 0x20 | unit, 0);
if (virq == NO_IRQ)
printk(KERN_ERR "spider_pic: failed to map cascade !");
return virq;
}
void spider_init_IRQ(void)
static void __init spider_init_one(struct device_node *of_node, int chip,
unsigned long addr)
{
long spider_reg;
struct spider_pic *pic = &spider_pics[chip];
int i, virq;
/* Map registers */
pic->regs = ioremap(addr, 0x1000);
if (pic->regs == NULL)
panic("spider_pic: can't map registers !");
/* Allocate a host */
pic->host = irq_alloc_host(IRQ_HOST_MAP_LINEAR, SPIDER_SRC_COUNT,
&spider_host_ops, SPIDER_IRQ_INVALID);
if (pic->host == NULL)
panic("spider_pic: can't allocate irq host !");
pic->host->host_data = pic;
/* Fill out other bits */
pic->of_node = of_node_get(of_node);
/* Go through all sources and disable them */
for (i = 0; i < SPIDER_SRC_COUNT; i++) {
void __iomem *cfg = pic->regs + TIR_CFGA + 8 * i;
out_be32(cfg, in_be32(cfg) & ~0x30000000u);
}
/* do not mask any interrupts because of level */
out_be32(pic->regs + TIR_MSK, 0x0);
/* enable interrupt packets to be output */
out_be32(pic->regs + TIR_PIEN, in_be32(pic->regs + TIR_PIEN) | 0x1);
/* Hook up the cascade interrupt to the iic and nodeid */
virq = spider_find_cascade_and_node(pic);
if (virq == NO_IRQ)
return;
set_irq_data(virq, pic);
set_irq_chained_handler(virq, spider_irq_cascade);
printk(KERN_INFO "spider_pic: node %d, addr: 0x%lx %s\n",
pic->node_id, addr, of_node->full_name);
/* Enable the interrupt detection enable bit. Do this last! */
out_be32(pic->regs + TIR_DEN, in_be32(pic->regs + TIR_DEN) | 0x1);
}
void __init spider_init_IRQ(void)
{
struct resource r;
struct device_node *dn;
char *compatible;
int n, node = 0;
int chip = 0;
for (dn = NULL; (dn = of_find_node_by_name(dn, "interrupt-controller"));) {
compatible = (char *)get_property(dn, "compatible", NULL);
if (!compatible)
continue;
if (strstr(compatible, "CBEA,platform-spider-pic"))
spider_reg = *(long *)get_property(dn,"reg", NULL);
else if (strstr(compatible, "sti,platform-spider-pic")) {
spider_init_IRQ_hardcoded();
return;
/* XXX node numbers are totally bogus. We _hope_ we get the device
* nodes in the right order here but that's definitely not guaranteed;
* we need to get the node from the device tree instead.
* There is currently no proper property for it (but our whole
* device-tree is bogus anyway) so all we can do is pray or maybe test
* the address and deduce the node-id
*/
for (dn = NULL;
(dn = of_find_node_by_name(dn, "interrupt-controller"));) {
if (device_is_compatible(dn, "CBEA,platform-spider-pic")) {
if (of_address_to_resource(dn, 0, &r)) {
printk(KERN_WARNING "spider-pic: Failed\n");
continue;
}
} else if (device_is_compatible(dn, "sti,platform-spider-pic")
&& (chip < 2)) {
static long hard_coded_pics[] =
{ 0x24000008000, 0x34000008000 };
r.start = hard_coded_pics[chip];
} else
continue;
if (!spider_reg)
printk("interrupt controller does not have reg property !\n");
n = prom_n_addr_cells(dn);
if ( n != 2)
printk("reg property with invalid number of elements \n");
spider_pics[node] = ioremap(spider_reg, 0x800);
printk("SPIDER addr: %lx with %i addr_cells mapped to %p\n",
spider_reg, n, spider_pics[node]);
for (n = 0; n < IIC_NUM_EXT; n++) {
int irq = n + IIC_EXT_OFFSET + node * IIC_NODE_STRIDE;
get_irq_desc(irq)->chip = &spider_pic;
}
/* do not mask any interrupts because of level */
out_be32(spider_pics[node] + TIR_MSK, 0x0);
/* disable edge detection clear */
/* out_be32(spider_pics[node] + TIR_EDC, 0x0); */
/* enable interrupt packets to be output */
out_be32(spider_pics[node] + TIR_PIEN,
in_be32(spider_pics[node] + TIR_PIEN) | 0x1);
/* Enable the interrupt detection enable bit. Do this last! */
out_be32(spider_pics[node] + TIR_DEN,
in_be32(spider_pics[node] + TIR_DEN) | 0x1);
node++;
spider_init_one(dn, chip++, r.start);
}
}

View File

@ -264,51 +264,57 @@ spu_irq_class_2(int irq, void *data, struct pt_regs *regs)
return stat ? IRQ_HANDLED : IRQ_NONE;
}
static int
spu_request_irqs(struct spu *spu)
static int spu_request_irqs(struct spu *spu)
{
int ret;
int irq_base;
int ret = 0;
irq_base = IIC_NODE_STRIDE * spu->node + IIC_SPE_OFFSET;
if (spu->irqs[0] != NO_IRQ) {
snprintf(spu->irq_c0, sizeof (spu->irq_c0), "spe%02d.0",
spu->number);
ret = request_irq(spu->irqs[0], spu_irq_class_0,
IRQF_DISABLED,
spu->irq_c0, spu);
if (ret)
goto bail0;
}
if (spu->irqs[1] != NO_IRQ) {
snprintf(spu->irq_c1, sizeof (spu->irq_c1), "spe%02d.1",
spu->number);
ret = request_irq(spu->irqs[1], spu_irq_class_1,
IRQF_DISABLED,
spu->irq_c1, spu);
if (ret)
goto bail1;
}
if (spu->irqs[2] != NO_IRQ) {
snprintf(spu->irq_c2, sizeof (spu->irq_c2), "spe%02d.2",
spu->number);
ret = request_irq(spu->irqs[2], spu_irq_class_2,
IRQF_DISABLED,
spu->irq_c2, spu);
if (ret)
goto bail2;
}
return 0;
snprintf(spu->irq_c0, sizeof (spu->irq_c0), "spe%02d.0", spu->number);
ret = request_irq(irq_base + spu->isrc,
spu_irq_class_0, IRQF_DISABLED, spu->irq_c0, spu);
if (ret)
goto out;
snprintf(spu->irq_c1, sizeof (spu->irq_c1), "spe%02d.1", spu->number);
ret = request_irq(irq_base + IIC_CLASS_STRIDE + spu->isrc,
spu_irq_class_1, IRQF_DISABLED, spu->irq_c1, spu);
if (ret)
goto out1;
snprintf(spu->irq_c2, sizeof (spu->irq_c2), "spe%02d.2", spu->number);
ret = request_irq(irq_base + 2*IIC_CLASS_STRIDE + spu->isrc,
spu_irq_class_2, IRQF_DISABLED, spu->irq_c2, spu);
if (ret)
goto out2;
goto out;
out2:
free_irq(irq_base + IIC_CLASS_STRIDE + spu->isrc, spu);
out1:
free_irq(irq_base + spu->isrc, spu);
out:
bail2:
if (spu->irqs[1] != NO_IRQ)
free_irq(spu->irqs[1], spu);
bail1:
if (spu->irqs[0] != NO_IRQ)
free_irq(spu->irqs[0], spu);
bail0:
return ret;
}
static void
spu_free_irqs(struct spu *spu)
static void spu_free_irqs(struct spu *spu)
{
int irq_base;
irq_base = IIC_NODE_STRIDE * spu->node + IIC_SPE_OFFSET;
free_irq(irq_base + spu->isrc, spu);
free_irq(irq_base + IIC_CLASS_STRIDE + spu->isrc, spu);
free_irq(irq_base + 2*IIC_CLASS_STRIDE + spu->isrc, spu);
if (spu->irqs[0] != NO_IRQ)
free_irq(spu->irqs[0], spu);
if (spu->irqs[1] != NO_IRQ)
free_irq(spu->irqs[1], spu);
if (spu->irqs[2] != NO_IRQ)
free_irq(spu->irqs[2], spu);
}
static LIST_HEAD(spu_list);
@ -559,17 +565,38 @@ static void spu_unmap(struct spu *spu)
iounmap((u8 __iomem *)spu->local_store);
}
/* This function shall be abstracted for HV platforms */
static int __init spu_map_interrupts(struct spu *spu, struct device_node *np)
{
struct irq_host *host;
unsigned int isrc;
u32 *tmp;
host = iic_get_irq_host(spu->node);
if (host == NULL)
return -ENODEV;
/* Get the interrupt source from the device-tree */
tmp = (u32 *)get_property(np, "isrc", NULL);
if (!tmp)
return -ENODEV;
spu->isrc = isrc = tmp[0];
/* Now map interrupts of all 3 classes */
spu->irqs[0] = irq_create_mapping(host, 0x00 | isrc, 0);
spu->irqs[1] = irq_create_mapping(host, 0x10 | isrc, 0);
spu->irqs[2] = irq_create_mapping(host, 0x20 | isrc, 0);
/* Right now, we only fail if class 2 failed */
return spu->irqs[2] == NO_IRQ ? -EINVAL : 0;
}
static int __init spu_map_device(struct spu *spu, struct device_node *node)
{
char *prop;
int ret;
ret = -ENODEV;
prop = get_property(node, "isrc", NULL);
if (!prop)
goto out;
spu->isrc = *(unsigned int *)prop;
spu->name = get_property(node, "name", NULL);
if (!spu->name)
goto out;
@ -636,7 +663,8 @@ static int spu_create_sysdev(struct spu *spu)
return ret;
}
sysdev_create_file(&spu->sysdev, &attr_isrc);
if (spu->isrc != 0)
sysdev_create_file(&spu->sysdev, &attr_isrc);
sysfs_add_device_to_node(&spu->sysdev, spu->nid);
return 0;
@ -668,6 +696,9 @@ static int __init create_spu(struct device_node *spe)
spu->nid = of_node_to_nid(spe);
if (spu->nid == -1)
spu->nid = 0;
ret = spu_map_interrupts(spu, spe);
if (ret)
goto out_unmap;
spin_lock_init(&spu->register_lock);
spu_mfc_sdr_set(spu, mfspr(SPRN_SDR1));
spu_mfc_sr1_set(spu, 0x33);

View File

@ -18,7 +18,6 @@
#include <asm/machdep.h>
#include <asm/sections.h>
#include <asm/pci-bridge.h>
#include <asm/open_pic.h>
#include <asm/grackle.h>
#include <asm/rtas.h>
@ -161,15 +160,9 @@ void __init
chrp_pcibios_fixup(void)
{
struct pci_dev *dev = NULL;
struct device_node *np;
/* PCI interrupts are controlled by the OpenPIC */
for_each_pci_dev(dev) {
np = pci_device_to_OF_node(dev);
if ((np != 0) && (np->n_intrs > 0) && (np->intrs[0].line != 0))
dev->irq = np->intrs[0].line;
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
}
for_each_pci_dev(dev)
pci_read_irq_line(dev);
}
#define PRG_CL_RESET_VALID 0x00010000

View File

@ -24,7 +24,7 @@
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <linux/adb.h>
#include <linux/module.h>
#include <linux/delay.h>
@ -59,7 +59,7 @@ void rtas_indicator_progress(char *, unsigned short);
int _chrp_type;
EXPORT_SYMBOL(_chrp_type);
struct mpic *chrp_mpic;
static struct mpic *chrp_mpic;
/* Used for doing CHRP event-scans */
DEFINE_PER_CPU(struct timer_list, heartbeat_timer);
@ -315,24 +315,32 @@ chrp_event_scan(unsigned long unused)
jiffies + event_scan_interval);
}
static void chrp_8259_cascade(unsigned int irq, struct irq_desc *desc,
struct pt_regs *regs)
{
unsigned int cascade_irq = i8259_irq(regs);
if (cascade_irq != NO_IRQ)
generic_handle_irq(cascade_irq, regs);
desc->chip->eoi(irq);
}
/*
* Finds the open-pic node and sets up the mpic driver.
*/
static void __init chrp_find_openpic(void)
{
struct device_node *np, *root;
int len, i, j, irq_count;
int len, i, j;
int isu_size, idu_size;
unsigned int *iranges, *opprop = NULL;
int oplen = 0;
unsigned long opaddr;
int na = 1;
unsigned char init_senses[NR_IRQS - NUM_8259_INTERRUPTS];
np = find_type_devices("open-pic");
np = of_find_node_by_type(NULL, "open-pic");
if (np == NULL)
return;
root = find_path_device("/");
root = of_find_node_by_path("/");
if (root) {
opprop = (unsigned int *) get_property
(root, "platform-open-pic", &oplen);
@ -343,19 +351,15 @@ static void __init chrp_find_openpic(void)
oplen /= na * sizeof(unsigned int);
} else {
struct resource r;
if (of_address_to_resource(np, 0, &r))
return;
if (of_address_to_resource(np, 0, &r)) {
goto bail;
}
opaddr = r.start;
oplen = 0;
}
printk(KERN_INFO "OpenPIC at %lx\n", opaddr);
irq_count = NR_IRQS - NUM_ISA_INTERRUPTS - 4; /* leave room for IPIs */
prom_get_irq_senses(init_senses, NUM_ISA_INTERRUPTS, NR_IRQS - 4);
/* i8259 cascade is always positive level */
init_senses[0] = IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE;
iranges = (unsigned int *) get_property(np, "interrupt-ranges", &len);
if (iranges == NULL)
len = 0; /* non-distributed mpic */
@ -382,15 +386,12 @@ static void __init chrp_find_openpic(void)
if (len > 1)
isu_size = iranges[3];
chrp_mpic = mpic_alloc(opaddr, MPIC_PRIMARY,
isu_size, NUM_ISA_INTERRUPTS, irq_count,
NR_IRQS - 4, init_senses, irq_count,
" MPIC ");
chrp_mpic = mpic_alloc(np, opaddr, MPIC_PRIMARY,
isu_size, 0, " MPIC ");
if (chrp_mpic == NULL) {
printk(KERN_ERR "Failed to allocate MPIC structure\n");
return;
goto bail;
}
j = na - 1;
for (i = 1; i < len; ++i) {
iranges += 2;
@ -402,7 +403,10 @@ static void __init chrp_find_openpic(void)
}
mpic_init(chrp_mpic);
mpic_setup_cascade(NUM_ISA_INTERRUPTS, i8259_irq_cascade, NULL);
ppc_md.get_irq = mpic_get_irq;
bail:
of_node_put(root);
of_node_put(np);
}
#if defined(CONFIG_VT) && defined(CONFIG_INPUT_ADBHID) && defined(XMON)
@ -413,14 +417,34 @@ static struct irqaction xmon_irqaction = {
};
#endif
void __init chrp_init_IRQ(void)
static void __init chrp_find_8259(void)
{
struct device_node *np;
struct device_node *np, *pic = NULL;
unsigned long chrp_int_ack = 0;
#if defined(CONFIG_VT) && defined(CONFIG_INPUT_ADBHID) && defined(XMON)
struct device_node *kbd;
#endif
unsigned int cascade_irq;
/* Look for cascade */
for_each_node_by_type(np, "interrupt-controller")
if (device_is_compatible(np, "chrp,iic")) {
pic = np;
break;
}
/* Ok, 8259 wasn't found. We need to handle the case where
* we have a Pegasos that claims to be CHRP but doesn't have
* a proper interrupt tree
*/
if (pic == NULL && chrp_mpic != NULL) {
printk(KERN_ERR "i8259: Not found in device-tree"
" assuming no legacy interrupts\n");
return;
}
/* Look for intack. In a perfect world, we would look for it on
* the ISA bus that holds the 8259 but heh... Works that way. If
* we ever see a problem, we can try to re-use the pSeries code here.
* Also, Pegasos-type platforms don't have a proper node to start
* from anyway
*/
for (np = find_devices("pci"); np != NULL; np = np->next) {
unsigned int *addrp = (unsigned int *)
get_property(np, "8259-interrupt-acknowledge", NULL);
@ -431,11 +455,29 @@ void __init chrp_init_IRQ(void)
break;
}
if (np == NULL)
printk(KERN_ERR "Cannot find PCI interrupt acknowledge address\n");
printk(KERN_WARNING "Cannot find PCI interrupt acknowledge"
" address, polling\n");
i8259_init(pic, chrp_int_ack);
if (ppc_md.get_irq == NULL)
ppc_md.get_irq = i8259_irq;
if (chrp_mpic != NULL) {
cascade_irq = irq_of_parse_and_map(pic, 0);
if (cascade_irq == NO_IRQ)
printk(KERN_ERR "i8259: failed to map cascade irq\n");
else
set_irq_chained_handler(cascade_irq,
chrp_8259_cascade);
}
}
void __init chrp_init_IRQ(void)
{
#if defined(CONFIG_VT) && defined(CONFIG_INPUT_ADBHID) && defined(XMON)
struct device_node *kbd;
#endif
chrp_find_openpic();
i8259_init(chrp_int_ack, 0);
chrp_find_8259();
if (_chrp_type == _CHRP_Pegasos)
ppc_md.get_irq = i8259_irq;
@ -520,10 +562,6 @@ static int __init chrp_probe(void)
DMA_MODE_READ = 0x44;
DMA_MODE_WRITE = 0x48;
isa_io_base = CHRP_ISA_IO_BASE; /* default value */
ppc_do_canonicalize_irqs = 1;
/* Assume we have an 8259... */
__irq_offset_value = NUM_ISA_INTERRUPTS;
return 1;
}
@ -535,7 +573,6 @@ define_machine(chrp) {
.init = chrp_init2,
.show_cpuinfo = chrp_show_cpuinfo,
.init_IRQ = chrp_init_IRQ,
.get_irq = mpic_get_irq,
.pcibios_fixup = chrp_pcibios_fixup,
.restart = rtas_restart,
.power_off = rtas_power_off,

View File

@ -29,7 +29,6 @@
#include <asm/smp.h>
#include <asm/residual.h>
#include <asm/time.h>
#include <asm/open_pic.h>
#include <asm/machdep.h>
#include <asm/smp.h>
#include <asm/mpic.h>

View File

@ -162,27 +162,6 @@ static void pci_event_handler(struct HvLpEvent *event, struct pt_regs *regs)
printk(KERN_ERR "pci_event_handler: NULL event received\n");
}
/*
* This is called by init_IRQ. set in ppc_md.init_IRQ by iSeries_setup.c
* It must be called before the bus walk.
*/
void __init iSeries_init_IRQ(void)
{
/* Register PCI event handler and open an event path */
int ret;
ret = HvLpEvent_registerHandler(HvLpEvent_Type_PciIo,
&pci_event_handler);
if (ret == 0) {
ret = HvLpEvent_openPath(HvLpEvent_Type_PciIo, 0);
if (ret != 0)
printk(KERN_ERR "iseries_init_IRQ: open event path "
"failed with rc 0x%x\n", ret);
} else
printk(KERN_ERR "iseries_init_IRQ: register handler "
"failed with rc 0x%x\n", ret);
}
#define REAL_IRQ_TO_SUBBUS(irq) (((irq) >> 14) & 0xff)
#define REAL_IRQ_TO_BUS(irq) ((((irq) >> 6) & 0xff) + 1)
#define REAL_IRQ_TO_IDSEL(irq) ((((irq) >> 3) & 7) + 1)
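These decode macros invert the packing done by iSeries_allocate_IRQ() further down in this hunk. REAL_IRQ_TO_FUNC is not visible here; the sketch below assumes it is (irq & 7), which is what the packing formula implies. A standalone round-trip check under that assumption:

#include <assert.h>

#define REAL_IRQ_TO_SUBBUS(irq)	(((irq) >> 14) & 0xff)
#define REAL_IRQ_TO_BUS(irq)	((((irq) >> 6) & 0xff) + 1)
#define REAL_IRQ_TO_IDSEL(irq)	((((irq) >> 3) & 7) + 1)
#define REAL_IRQ_TO_FUNC(irq)	((irq) & 7)	/* assumed, not in this hunk */

/* Same packing as iSeries_allocate_IRQ() below. */
static unsigned int encode(unsigned int sub_bus, unsigned int bus,
			   unsigned int idsel, unsigned int function)
{
	return (((((sub_bus << 8) + (bus - 1)) << 3) + (idsel - 1)) << 3)
	       + function;
}

int main(void)
{
	unsigned int rirq = encode(2, 5, 3, 6);

	assert(REAL_IRQ_TO_SUBBUS(rirq) == 2);
	assert(REAL_IRQ_TO_BUS(rirq) == 5);
	assert(REAL_IRQ_TO_IDSEL(rirq) == 3);
	assert(REAL_IRQ_TO_FUNC(rirq) == 6);
	return 0;
}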
@ -196,7 +175,7 @@ static void iseries_enable_IRQ(unsigned int irq)
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
unsigned int rirq = virt_irq_to_real_map[irq];
unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
/* The IRQ has already been locked by the caller */
bus = REAL_IRQ_TO_BUS(rirq);
@ -213,7 +192,7 @@ static unsigned int iseries_startup_IRQ(unsigned int irq)
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
unsigned int rirq = virt_irq_to_real_map[irq];
unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
bus = REAL_IRQ_TO_BUS(rirq);
function = REAL_IRQ_TO_FUNC(rirq);
@ -254,7 +233,7 @@ static void iseries_shutdown_IRQ(unsigned int irq)
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
unsigned int rirq = virt_irq_to_real_map[irq];
unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
/* irq should be locked by the caller */
bus = REAL_IRQ_TO_BUS(rirq);
@ -277,7 +256,7 @@ static void iseries_disable_IRQ(unsigned int irq)
{
u32 bus, dev_id, function, mask;
const u32 sub_bus = 0;
unsigned int rirq = virt_irq_to_real_map[irq];
unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
/* The IRQ has already been locked by the caller */
bus = REAL_IRQ_TO_BUS(rirq);
@ -291,19 +270,19 @@ static void iseries_disable_IRQ(unsigned int irq)
static void iseries_end_IRQ(unsigned int irq)
{
unsigned int rirq = virt_irq_to_real_map[irq];
unsigned int rirq = (unsigned int)irq_map[irq].hwirq;
HvCallPci_eoi(REAL_IRQ_TO_BUS(rirq), REAL_IRQ_TO_SUBBUS(rirq),
(REAL_IRQ_TO_IDSEL(rirq) << 4) + REAL_IRQ_TO_FUNC(rirq));
}
static hw_irq_controller iSeries_IRQ_handler = {
.typename = "iSeries irq controller",
.startup = iseries_startup_IRQ,
.shutdown = iseries_shutdown_IRQ,
.enable = iseries_enable_IRQ,
.disable = iseries_disable_IRQ,
.end = iseries_end_IRQ
static struct irq_chip iseries_pic = {
.typename = "iSeries irq controller",
.startup = iseries_startup_IRQ,
.shutdown = iseries_shutdown_IRQ,
.unmask = iseries_enable_IRQ,
.mask = iseries_disable_IRQ,
.eoi = iseries_end_IRQ
};
/*
@ -314,17 +293,14 @@ static hw_irq_controller iSeries_IRQ_handler = {
int __init iSeries_allocate_IRQ(HvBusNumber bus,
HvSubBusNumber sub_bus, u32 bsubbus)
{
int virtirq;
unsigned int realirq;
u8 idsel = ISERIES_GET_DEVICE_FROM_SUBBUS(bsubbus);
u8 function = ISERIES_GET_FUNCTION_FROM_SUBBUS(bsubbus);
realirq = (((((sub_bus << 8) + (bus - 1)) << 3) + (idsel - 1)) << 3)
+ function;
virtirq = virt_irq_create_mapping(realirq);
irq_desc[virtirq].chip = &iSeries_IRQ_handler;
return virtirq;
return irq_create_mapping(NULL, realirq, IRQ_TYPE_NONE);
}
#endif /* CONFIG_PCI */
@ -332,10 +308,9 @@ int __init iSeries_allocate_IRQ(HvBusNumber bus,
/*
* Get the next pending IRQ.
*/
int iSeries_get_irq(struct pt_regs *regs)
unsigned int iSeries_get_irq(struct pt_regs *regs)
{
/* -2 means ignore this interrupt */
int irq = -2;
int irq = NO_IRQ_IGNORE;
#ifdef CONFIG_SMP
if (get_lppaca()->int_dword.fields.ipi_cnt) {
@ -358,9 +333,57 @@ int iSeries_get_irq(struct pt_regs *regs)
}
spin_unlock(&pending_irqs_lock);
if (irq >= NR_IRQS)
irq = -2;
irq = NO_IRQ_IGNORE;
}
#endif
return irq;
}
static int iseries_irq_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
set_irq_chip_and_handler(virq, &iseries_pic, handle_fasteoi_irq);
return 0;
}
static struct irq_host_ops iseries_irq_host_ops = {
.map = iseries_irq_host_map,
};
/*
* This is called by init_IRQ. set in ppc_md.init_IRQ by iSeries_setup.c
* It must be called before the bus walk.
*/
void __init iSeries_init_IRQ(void)
{
/* Register PCI event handler and open an event path */
struct irq_host *host;
int ret;
/*
* The Hypervisor only allows us up to 256 interrupt
* sources (the irq number is passed in a u8).
*/
irq_set_virq_count(256);
/* Create irq host. No need for a revmap since HV will give us
* back our virtual irq number
*/
host = irq_alloc_host(IRQ_HOST_MAP_NOMAP, 0, &iseries_irq_host_ops, 0);
BUG_ON(host == NULL);
irq_set_default_host(host);
ret = HvLpEvent_registerHandler(HvLpEvent_Type_PciIo,
&pci_event_handler);
if (ret == 0) {
ret = HvLpEvent_openPath(HvLpEvent_Type_PciIo, 0);
if (ret != 0)
printk(KERN_ERR "iseries_init_IRQ: open event path "
"failed with rc 0x%x\n", ret);
} else
printk(KERN_ERR "iseries_init_IRQ: register handler "
"failed with rc 0x%x\n", ret);
}

View File

@ -4,6 +4,6 @@
extern void iSeries_init_IRQ(void);
extern int iSeries_allocate_IRQ(HvBusNumber, HvSubBusNumber, u32);
extern void iSeries_activate_IRQs(void);
extern int iSeries_get_irq(struct pt_regs *);
extern unsigned int iSeries_get_irq(struct pt_regs *);
#endif /* _ISERIES_IRQ_H */

View File

@ -294,8 +294,6 @@ static void __init iSeries_init_early(void)
{
DBG(" -> iSeries_init_early()\n");
ppc64_interrupt_controller = IC_ISERIES;
#if defined(CONFIG_BLK_DEV_INITRD)
/*
* If the init RAM disk has been configured and there is
@ -659,12 +657,6 @@ static int __init iseries_probe(void)
powerpc_firmware_features |= FW_FEATURE_ISERIES;
powerpc_firmware_features |= FW_FEATURE_LPAR;
/*
* The Hypervisor only allows us up to 256 interrupt
* sources (the irq number is passed in a u8).
*/
virt_irq_max = 255;
hpte_init_iSeries();
return 1;

View File

@ -443,18 +443,23 @@ void __init maple_pci_init(void)
int maple_pci_get_legacy_ide_irq(struct pci_dev *pdev, int channel)
{
struct device_node *np;
int irq = channel ? 15 : 14;
unsigned int defirq = channel ? 15 : 14;
unsigned int irq;
if (pdev->vendor != PCI_VENDOR_ID_AMD ||
pdev->device != PCI_DEVICE_ID_AMD_8111_IDE)
return irq;
return defirq;
np = pci_device_to_OF_node(pdev);
if (np == NULL)
return irq;
if (np->n_intrs < 2)
return irq;
return np->intrs[channel & 0x1].line;
return defirq;
irq = irq_of_parse_and_map(np, channel & 0x1);
if (irq == NO_IRQ) {
printk("Failed to map onboard IDE interrupt for channel %d\n",
channel);
return defirq;
}
return irq;
}
/* XXX: To remove once all firmwares are ok */

View File

@ -11,7 +11,7 @@
*
*/
#define DEBUG
#undef DEBUG
#include <linux/init.h>
#include <linux/errno.h>
@ -198,50 +198,81 @@ static void __init maple_init_early(void)
{
DBG(" -> maple_init_early\n");
/* Setup interrupt mapping options */
ppc64_interrupt_controller = IC_OPEN_PIC;
iommu_init_early_dart();
DBG(" <- maple_init_early\n");
}
static __init void maple_init_IRQ(void)
/*
* This is almost identical to pSeries and CHRP. We need to make that
* code generic at one point, with appropriate bits in the device-tree to
* identify the presence of an HT APIC
*/
static void __init maple_init_IRQ(void)
{
struct device_node *root;
struct device_node *root, *np, *mpic_node = NULL;
unsigned int *opprop;
unsigned long opic_addr;
unsigned long openpic_addr = 0;
int naddr, n, i, opplen, has_isus = 0;
struct mpic *mpic;
unsigned char senses[128];
int n;
unsigned int flags = MPIC_PRIMARY;
DBG(" -> maple_init_IRQ\n");
/* Locate MPIC in the device-tree. Note that there is a bug
* in Maple device-tree where the type of the controller is
* open-pic and not interrupt-controller
*/
for_each_node_by_type(np, "open-pic") {
mpic_node = np;
break;
}
if (mpic_node == NULL) {
printk(KERN_ERR
"Failed to locate the MPIC interrupt controller\n");
return;
}
/* XXX: Non standard, replace that with a proper openpic/mpic node
* in the device-tree. Find the Open PIC if present */
/* Find address list in /platform-open-pic */
root = of_find_node_by_path("/");
opprop = (unsigned int *) get_property(root,
"platform-open-pic", NULL);
if (opprop == 0)
panic("OpenPIC not found !\n");
n = prom_n_addr_cells(root);
for (opic_addr = 0; n > 0; --n)
opic_addr = (opic_addr << 32) + *opprop++;
naddr = prom_n_addr_cells(root);
opprop = (unsigned int *) get_property(root, "platform-open-pic",
&opplen);
if (opprop != 0) {
openpic_addr = of_read_number(opprop, naddr);
has_isus = (opplen > naddr);
printk(KERN_DEBUG "OpenPIC addr: %lx, has ISUs: %d\n",
openpic_addr, has_isus);
}
of_node_put(root);
/* Obtain sense values from device-tree */
prom_get_irq_senses(senses, 0, 128);
BUG_ON(openpic_addr == 0);
mpic = mpic_alloc(opic_addr,
MPIC_PRIMARY | MPIC_BIG_ENDIAN |
MPIC_BROKEN_U3 | MPIC_WANTS_RESET,
0, 0, 128, 128, senses, 128, "U3-MPIC");
/* Check for a big endian MPIC */
if (get_property(np, "big-endian", NULL) != NULL)
flags |= MPIC_BIG_ENDIAN;
/* XXX Maple specific bits */
flags |= MPIC_BROKEN_U3 | MPIC_WANTS_RESET;
/* Setup the openpic driver. More device-tree junk; we hard-code no
* ISUs for now. I'll have to revisit some of this with the folks doing
* the firmware for those
*/
mpic = mpic_alloc(mpic_node, openpic_addr, flags,
/*has_isus ? 16 :*/ 0, 0, " MPIC ");
BUG_ON(mpic == NULL);
mpic_init(mpic);
DBG(" <- maple_init_IRQ\n");
/* Add ISUs */
opplen /= sizeof(u32);
for (n = 0, i = naddr; i < opplen; i += naddr, n++) {
unsigned long isuaddr = of_read_number(opprop + i, naddr);
mpic_assign_isu(mpic, n, isuaddr);
}
/* All ISUs are setup, complete initialization */
mpic_init(mpic);
ppc_md.get_irq = mpic_get_irq;
of_node_put(mpic_node);
of_node_put(root);
}
static void __init maple_progress(char *s, unsigned short hex)
@ -256,7 +287,9 @@ static void __init maple_progress(char *s, unsigned short hex)
static int __init maple_probe(void)
{
unsigned long root = of_get_flat_dt_root();
if (!of_flat_dt_is_compatible(root, "Momentum,Maple"))
if (!of_flat_dt_is_compatible(root, "Momentum,Maple") &&
!of_flat_dt_is_compatible(root, "Momentum,Apache"))
return 0;
/*
* On U3, the DART (iommu) must be allocated now since it
@ -277,7 +310,6 @@ define_machine(maple_md) {
.setup_arch = maple_setup_arch,
.init_early = maple_init_early,
.init_IRQ = maple_init_IRQ,
.get_irq = mpic_get_irq,
.pcibios_fixup = maple_pcibios_fixup,
.pci_get_legacy_ide_irq = maple_pci_get_legacy_ide_irq,
.restart = maple_restart,

View File

@ -12,7 +12,7 @@
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <asm/sections.h>
#include <asm/prom.h>
#include <asm/page.h>
@ -162,6 +162,8 @@ static void __init bootx_add_chosen_props(unsigned long base,
{
u32 val;
bootx_dt_add_prop("linux,bootx", NULL, 0, mem_end);
if (bootx_info->kernelParamsOffset) {
char *args = (char *)((unsigned long)bootx_info) +
bootx_info->kernelParamsOffset;
@ -181,8 +183,25 @@ static void __init bootx_add_chosen_props(unsigned long base,
static void __init bootx_add_display_props(unsigned long base,
unsigned long *mem_end)
{
boot_infos_t *bi = bootx_info;
u32 tmp;
bootx_dt_add_prop("linux,boot-display", NULL, 0, mem_end);
bootx_dt_add_prop("linux,opened", NULL, 0, mem_end);
tmp = bi->dispDeviceDepth;
bootx_dt_add_prop("linux,bootx-depth", &tmp, 4, mem_end);
tmp = bi->dispDeviceRect[2] - bi->dispDeviceRect[0];
bootx_dt_add_prop("linux,bootx-width", &tmp, 4, mem_end);
tmp = bi->dispDeviceRect[3] - bi->dispDeviceRect[1];
bootx_dt_add_prop("linux,bootx-height", &tmp, 4, mem_end);
tmp = bi->dispDeviceRowBytes;
bootx_dt_add_prop("linux,bootx-linebytes", &tmp, 4, mem_end);
tmp = (u32)bi->dispDeviceBase;
if (tmp == 0)
tmp = (u32)bi->logicalDisplayBase;
tmp += bi->dispDeviceRect[1] * bi->dispDeviceRowBytes;
tmp += bi->dispDeviceRect[0] * ((bi->dispDeviceDepth + 7) / 8);
bootx_dt_add_prop("linux,bootx-addr", &tmp, 4, mem_end);
}
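The linux,bootx-addr value above (and the CONFIG_BOOTX_TEXT block later in this hunk) offsets the framebuffer base by the top-left corner of the display rectangle: top * rowbytes for the rows plus left * bytes-per-pixel for the columns, with bytes-per-pixel being (depth + 7) / 8. A standalone example with illustrative values:

#include <stdio.h>

/* Pixel offset of the display rectangle's top-left corner, as computed
 * for linux,bootx-addr. Values below are illustrative only. */
static unsigned long fb_offset(unsigned int left, unsigned int top,
			       unsigned int rowbytes, unsigned int depth)
{
	return (unsigned long)top * rowbytes + left * ((depth + 7) / 8);
}

int main(void)
{
	/* 15 bpp, 2048 bytes per row, rectangle starting at (8, 32) */
	printf("offset = %lu bytes\n", fb_offset(8, 32, 2048, 15));
	return 0;
}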
static void __init bootx_dt_add_string(char *s, unsigned long *mem_end)
@ -211,7 +230,7 @@ static void __init bootx_scan_dt_build_strings(unsigned long base,
if (!strcmp(namep, "/chosen")) {
DBG(" detected /chosen ! adding properties names !\n");
bootx_dt_add_string("linux,platform", mem_end);
bootx_dt_add_string("linux,bootx", mem_end);
bootx_dt_add_string("linux,stdout-path", mem_end);
bootx_dt_add_string("linux,initrd-start", mem_end);
bootx_dt_add_string("linux,initrd-end", mem_end);
@ -222,6 +241,11 @@ static void __init bootx_scan_dt_build_strings(unsigned long base,
DBG(" detected display ! adding properties names !\n");
bootx_dt_add_string("linux,boot-display", mem_end);
bootx_dt_add_string("linux,opened", mem_end);
bootx_dt_add_string("linux,bootx-depth", mem_end);
bootx_dt_add_string("linux,bootx-width", mem_end);
bootx_dt_add_string("linux,bootx-height", mem_end);
bootx_dt_add_string("linux,bootx-linebytes", mem_end);
bootx_dt_add_string("linux,bootx-addr", mem_end);
strncpy(bootx_disp_path, namep, 255);
}
@ -443,7 +467,14 @@ void __init bootx_init(unsigned long r3, unsigned long r4)
if (!BOOT_INFO_IS_V2_COMPATIBLE(bi))
bi->logicalDisplayBase = bi->dispDeviceBase;
/* Fixup depth 16 -> 15 as that's what MacOS calls 16bpp */
if (bi->dispDeviceDepth == 16)
bi->dispDeviceDepth = 15;
#ifdef CONFIG_BOOTX_TEXT
ptr = (unsigned long)bi->logicalDisplayBase;
ptr += bi->dispDeviceRect[1] * bi->dispDeviceRowBytes;
ptr += bi->dispDeviceRect[0] * ((bi->dispDeviceDepth + 7) / 8);
btext_setup_display(bi->dispDeviceRect[2] - bi->dispDeviceRect[0],
bi->dispDeviceRect[3] - bi->dispDeviceRect[1],
bi->dispDeviceDepth, bi->dispDeviceRowBytes,

View File

@ -522,10 +522,11 @@ static struct pmac_i2c_host_kw *__init kw_i2c_host_init(struct device_node *np)
host->speed = KW_I2C_MODE_25KHZ;
break;
}
if (np->n_intrs > 0)
host->irq = np->intrs[0].line;
else
host->irq = NO_IRQ;
host->irq = irq_of_parse_and_map(np, 0);
if (host->irq == NO_IRQ)
printk(KERN_WARNING
"low_i2c: Failed to map interrupt for %s\n",
np->full_name);
host->base = ioremap((*addrp), 0x1000);
if (host->base == NULL) {

View File

@ -29,6 +29,8 @@
#include <asm/machdep.h>
#include <asm/nvram.h>
#include "pmac.h"
#define DEBUG
#ifdef DEBUG
@ -80,9 +82,6 @@ static int nvram_partitions[3];
// XXX Turn that into a sem
static DEFINE_SPINLOCK(nv_lock);
extern int pmac_newworld;
extern int system_running;
static int (*core99_write_bank)(int bank, u8* datas);
static int (*core99_erase_bank)(int bank);

View File

@ -46,6 +46,9 @@ static int has_uninorth;
static struct pci_controller *u3_agp;
static struct pci_controller *u4_pcie;
static struct pci_controller *u3_ht;
#define has_second_ohare 0
#else
static int has_second_ohare;
#endif /* CONFIG_PPC64 */
extern u8 pci_cache_line_size;
@ -647,6 +650,33 @@ static void __init init_p2pbridge(void)
early_write_config_word(hose, bus, devfn, PCI_BRIDGE_CONTROL, val);
}
static void __init init_second_ohare(void)
{
struct device_node *np = of_find_node_by_name(NULL, "pci106b,7");
unsigned char bus, devfn;
unsigned short cmd;
if (np == NULL)
return;
/* This must run before we initialize the PICs since the second
* ohare hosts a PIC that will be accessed there.
*/
if (pci_device_from_OF_node(np, &bus, &devfn) == 0) {
struct pci_controller* hose =
pci_find_hose_for_OF_device(np);
if (!hose) {
printk(KERN_ERR "Can't find PCI hose for OHare2 !\n");
return;
}
early_read_config_word(hose, bus, devfn, PCI_COMMAND, &cmd);
cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
cmd &= ~PCI_COMMAND_IO;
early_write_config_word(hose, bus, devfn, PCI_COMMAND, cmd);
}
has_second_ohare = 1;
}
/*
* Some Apple desktop machines have a NEC PD720100A USB2 controller
* on the motherboard. Open Firmware, on these, will disable the
@ -688,9 +718,6 @@ static void __init fixup_nec_usb2(void)
" EHCI, fixing up...\n");
data &= ~1UL;
early_write_config_dword(hose, bus, devfn, 0xe4, data);
early_write_config_byte(hose, bus,
devfn | 2, PCI_INTERRUPT_LINE,
nec->intrs[0].line);
}
}
}
@ -958,30 +985,26 @@ static int __init add_bridge(struct device_node *dev)
return 0;
}
static void __init pcibios_fixup_OF_interrupts(void)
void __init pmac_pcibios_fixup(void)
{
struct pci_dev* dev = NULL;
/*
* Open Firmware often doesn't initialize the
* PCI_INTERRUPT_LINE config register properly, so we
* should find the device node and apply the interrupt
* obtained from the OF device-tree
*/
for_each_pci_dev(dev) {
struct device_node *node;
node = pci_device_to_OF_node(dev);
/* this is the node, see if it has interrupts */
if (node && node->n_intrs > 0)
dev->irq = node->intrs[0].line;
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
}
}
/* Read interrupt from the device-tree */
pci_read_irq_line(dev);
void __init pmac_pcibios_fixup(void)
{
/* Fixup interrupts according to OF tree */
pcibios_fixup_OF_interrupts();
/* Fixup interrupt for the modem/ethernet combo controller
* on machines with a second ohare chip.
* The number in the device tree (27) is bogus (correct for
* the ethernet-only board but not the combo ethernet/modem
* board). The real interrupt is 28 on the second controller
* -> 28+32 = 60.
*/
if (has_second_ohare &&
dev->vendor == PCI_VENDOR_ID_DEC &&
dev->device == PCI_DEVICE_ID_DEC_TULIP_PLUS)
dev->irq = irq_create_mapping(NULL, 60, 0);
}
}
#ifdef CONFIG_PPC64
@ -1071,6 +1094,7 @@ void __init pmac_pci_init(void)
#else /* CONFIG_PPC64 */
init_p2pbridge();
init_second_ohare();
fixup_nec_usb2();
/* We are still having some issues with the Xserve G4, enabling

View File

@ -24,19 +24,18 @@ static irqreturn_t macio_gpio_irq(int irq, void *data, struct pt_regs *regs)
static int macio_do_gpio_irq_enable(struct pmf_function *func)
{
if (func->node->n_intrs < 1)
unsigned int irq = irq_of_parse_and_map(func->node, 0);
if (irq == NO_IRQ)
return -EINVAL;
return request_irq(func->node->intrs[0].line, macio_gpio_irq, 0,
func->node->name, func);
return request_irq(irq, macio_gpio_irq, 0, func->node->name, func);
}
static int macio_do_gpio_irq_disable(struct pmf_function *func)
{
if (func->node->n_intrs < 1)
unsigned int irq = irq_of_parse_and_map(func->node, 0);
if (irq == NO_IRQ)
return -EINVAL;
free_irq(func->node->intrs[0].line, func);
free_irq(irq, func);
return 0;
}

View File

@ -65,39 +65,36 @@ static u32 level_mask[4];
static DEFINE_SPINLOCK(pmac_pic_lock);
#define GATWICK_IRQ_POOL_SIZE 10
static struct interrupt_info gatwick_int_pool[GATWICK_IRQ_POOL_SIZE];
#define NR_MASK_WORDS ((NR_IRQS + 31) / 32)
static unsigned long ppc_lost_interrupts[NR_MASK_WORDS];
static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS];
static int pmac_irq_cascade = -1;
static struct irq_host *pmac_pic_host;
/*
* Mark an irq as "lost". This is only used on the pmac
* since it can lose interrupts (see pmac_set_irq_mask).
* -- Cort
*/
void __set_lost(unsigned long irq_nr, int nokick)
static void __pmac_retrigger(unsigned int irq_nr)
{
if (!test_and_set_bit(irq_nr, ppc_lost_interrupts)) {
if (irq_nr >= max_real_irqs && pmac_irq_cascade > 0) {
__set_bit(irq_nr, ppc_lost_interrupts);
irq_nr = pmac_irq_cascade;
mb();
}
if (!__test_and_set_bit(irq_nr, ppc_lost_interrupts)) {
atomic_inc(&ppc_n_lost_interrupts);
if (!nokick)
set_dec(1);
set_dec(1);
}
}
static void pmac_mask_and_ack_irq(unsigned int irq_nr)
static void pmac_mask_and_ack_irq(unsigned int virq)
{
unsigned long bit = 1UL << (irq_nr & 0x1f);
int i = irq_nr >> 5;
unsigned int src = irq_map[virq].hwirq;
unsigned long bit = 1UL << (virq & 0x1f);
int i = virq >> 5;
unsigned long flags;
if ((unsigned)irq_nr >= max_irqs)
return;
clear_bit(irq_nr, ppc_cached_irq_mask);
if (test_and_clear_bit(irq_nr, ppc_lost_interrupts))
atomic_dec(&ppc_n_lost_interrupts);
spin_lock_irqsave(&pmac_pic_lock, flags);
__clear_bit(src, ppc_cached_irq_mask);
if (__test_and_clear_bit(src, ppc_lost_interrupts))
atomic_dec(&ppc_n_lost_interrupts);
out_le32(&pmac_irq_hw[i]->enable, ppc_cached_irq_mask[i]);
out_le32(&pmac_irq_hw[i]->ack, bit);
do {
@ -109,16 +106,29 @@ static void pmac_mask_and_ack_irq(unsigned int irq_nr)
spin_unlock_irqrestore(&pmac_pic_lock, flags);
}
static void pmac_set_irq_mask(unsigned int irq_nr, int nokicklost)
static void pmac_ack_irq(unsigned int virq)
{
unsigned int src = irq_map[virq].hwirq;
unsigned long bit = 1UL << (src & 0x1f);
int i = src >> 5;
unsigned long flags;
spin_lock_irqsave(&pmac_pic_lock, flags);
if (__test_and_clear_bit(src, ppc_lost_interrupts))
atomic_dec(&ppc_n_lost_interrupts);
out_le32(&pmac_irq_hw[i]->ack, bit);
(void)in_le32(&pmac_irq_hw[i]->ack);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
}
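Throughout these handlers a source number is split into a 32-bit mask-word index (src >> 5) and a bit within that word (src & 0x1f); source 37, for instance, lives in word 1 as bit 5. A standalone illustration:

#include <stdio.h>

int main(void)
{
	unsigned int src = 37;
	unsigned long bit = 1UL << (src & 0x1f);	/* bit within the word */
	int word = src >> 5;				/* which 32-bit mask word */

	printf("src %u -> word %d, bit mask 0x%lx\n", src, word, bit);
	return 0;
}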
static void __pmac_set_irq_mask(unsigned int irq_nr, int nokicklost)
{
unsigned long bit = 1UL << (irq_nr & 0x1f);
int i = irq_nr >> 5;
unsigned long flags;
if ((unsigned)irq_nr >= max_irqs)
return;
spin_lock_irqsave(&pmac_pic_lock, flags);
/* enable unmasked interrupts */
out_le32(&pmac_irq_hw[i]->enable, ppc_cached_irq_mask[i]);
@ -135,71 +145,78 @@ static void pmac_set_irq_mask(unsigned int irq_nr, int nokicklost)
* the bit in the flag register or request another interrupt.
*/
if (bit & ppc_cached_irq_mask[i] & in_le32(&pmac_irq_hw[i]->level))
__set_lost((ulong)irq_nr, nokicklost);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
__pmac_retrigger(irq_nr);
}
/* When an irq gets requested for the first client, if it's an
* edge interrupt, we clear any previous one on the controller
*/
static unsigned int pmac_startup_irq(unsigned int irq_nr)
static unsigned int pmac_startup_irq(unsigned int virq)
{
unsigned long bit = 1UL << (irq_nr & 0x1f);
int i = irq_nr >> 5;
unsigned long flags;
unsigned int src = irq_map[virq].hwirq;
unsigned long bit = 1UL << (src & 0x1f);
int i = src >> 5;
if ((irq_desc[irq_nr].status & IRQ_LEVEL) == 0)
spin_lock_irqsave(&pmac_pic_lock, flags);
if ((irq_desc[virq].status & IRQ_LEVEL) == 0)
out_le32(&pmac_irq_hw[i]->ack, bit);
set_bit(irq_nr, ppc_cached_irq_mask);
pmac_set_irq_mask(irq_nr, 0);
__set_bit(src, ppc_cached_irq_mask);
__pmac_set_irq_mask(src, 0);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
return 0;
}
static void pmac_mask_irq(unsigned int irq_nr)
static void pmac_mask_irq(unsigned int virq)
{
clear_bit(irq_nr, ppc_cached_irq_mask);
pmac_set_irq_mask(irq_nr, 0);
mb();
unsigned long flags;
unsigned int src = irq_map[virq].hwirq;
spin_lock_irqsave(&pmac_pic_lock, flags);
__clear_bit(src, ppc_cached_irq_mask);
__pmac_set_irq_mask(src, 0);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
}
static void pmac_unmask_irq(unsigned int irq_nr)
static void pmac_unmask_irq(unsigned int virq)
{
set_bit(irq_nr, ppc_cached_irq_mask);
pmac_set_irq_mask(irq_nr, 0);
unsigned long flags;
unsigned int src = irq_map[virq].hwirq;
spin_lock_irqsave(&pmac_pic_lock, flags);
__set_bit(src, ppc_cached_irq_mask);
__pmac_set_irq_mask(src, 0);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
}
static void pmac_end_irq(unsigned int irq_nr)
static int pmac_retrigger(unsigned int virq)
{
if (!(irq_desc[irq_nr].status & (IRQ_DISABLED|IRQ_INPROGRESS))
&& irq_desc[irq_nr].action) {
set_bit(irq_nr, ppc_cached_irq_mask);
pmac_set_irq_mask(irq_nr, 1);
}
unsigned long flags;
spin_lock_irqsave(&pmac_pic_lock, flags);
__pmac_retrigger(irq_map[virq].hwirq);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
return 1;
}
struct hw_interrupt_type pmac_pic = {
static struct irq_chip pmac_pic = {
.typename = " PMAC-PIC ",
.startup = pmac_startup_irq,
.enable = pmac_unmask_irq,
.disable = pmac_mask_irq,
.ack = pmac_mask_and_ack_irq,
.end = pmac_end_irq,
};
struct hw_interrupt_type gatwick_pic = {
.typename = " GATWICK ",
.startup = pmac_startup_irq,
.enable = pmac_unmask_irq,
.disable = pmac_mask_irq,
.ack = pmac_mask_and_ack_irq,
.end = pmac_end_irq,
.mask = pmac_mask_irq,
.ack = pmac_ack_irq,
.mask_ack = pmac_mask_and_ack_irq,
.unmask = pmac_unmask_irq,
.retrigger = pmac_retrigger,
};
static irqreturn_t gatwick_action(int cpl, void *dev_id, struct pt_regs *regs)
{
unsigned long flags;
int irq, bits;
int rc = IRQ_NONE;
spin_lock_irqsave(&pmac_pic_lock, flags);
for (irq = max_irqs; (irq -= 32) >= max_real_irqs; ) {
int i = irq >> 5;
bits = in_le32(&pmac_irq_hw[i]->event) | ppc_lost_interrupts[i];
@ -209,17 +226,20 @@ static irqreturn_t gatwick_action(int cpl, void *dev_id, struct pt_regs *regs)
if (bits == 0)
continue;
irq += __ilog2(bits);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
__do_IRQ(irq, regs);
return IRQ_HANDLED;
spin_lock_irqsave(&pmac_pic_lock, flags);
rc = IRQ_HANDLED;
}
printk("gatwick irq not from gatwick pic\n");
return IRQ_NONE;
spin_unlock_irqrestore(&pmac_pic_lock, flags);
return rc;
}
static int pmac_get_irq(struct pt_regs *regs)
static unsigned int pmac_pic_get_irq(struct pt_regs *regs)
{
int irq;
unsigned long bits = 0;
unsigned long flags;
#ifdef CONFIG_SMP
void psurge_smp_message_recv(struct pt_regs *);
@@ -227,9 +247,10 @@ static int pmac_get_irq(struct pt_regs *regs)
/* IPI's are a hack on the powersurge -- Cort */
if ( smp_processor_id() != 0 ) {
psurge_smp_message_recv(regs);
return -2; /* ignore, already handled */
return NO_IRQ_IGNORE; /* ignore, already handled */
}
#endif /* CONFIG_SMP */
spin_lock_irqsave(&pmac_pic_lock, flags);
for (irq = max_real_irqs; (irq -= 32) >= 0; ) {
int i = irq >> 5;
bits = in_le32(&pmac_irq_hw[i]->event) | ppc_lost_interrupts[i];
@@ -241,133 +262,10 @@ static int pmac_get_irq(struct pt_regs *regs)
irq += __ilog2(bits);
break;
}
return irq;
}
/* This routine will fix some missing interrupt values in the device tree
* on the gatwick mac-io controller used by some PowerBooks
*
* Walking of OF nodes could use a bit more fixing up here, but it's not
* very important as this is all boot time code on static portions of the
* device-tree.
*
* However, the modifications done to "intrs" will have to be removed and
* replaced with proper updates of the "interrupts" properties or
* AAPL,interrupts, yet to be decided, once the dynamic parsing is there.
*/
static void __init pmac_fix_gatwick_interrupts(struct device_node *gw,
int irq_base)
{
struct device_node *node;
int count;
memset(gatwick_int_pool, 0, sizeof(gatwick_int_pool));
count = 0;
for (node = NULL; (node = of_get_next_child(gw, node)) != NULL;) {
/* Fix SCC */
if ((strcasecmp(node->name, "escc") == 0) && node->child) {
if (node->child->n_intrs < 3) {
node->child->intrs = &gatwick_int_pool[count];
count += 3;
}
node->child->n_intrs = 3;
node->child->intrs[0].line = 15+irq_base;
node->child->intrs[1].line = 4+irq_base;
node->child->intrs[2].line = 5+irq_base;
printk(KERN_INFO "irq: fixed SCC on gatwick"
" (%d,%d,%d)\n",
node->child->intrs[0].line,
node->child->intrs[1].line,
node->child->intrs[2].line);
}
/* Fix media-bay & left SWIM */
if (strcasecmp(node->name, "media-bay") == 0) {
struct device_node* ya_node;
if (node->n_intrs == 0)
node->intrs = &gatwick_int_pool[count++];
node->n_intrs = 1;
node->intrs[0].line = 29+irq_base;
printk(KERN_INFO "irq: fixed media-bay on gatwick"
" (%d)\n", node->intrs[0].line);
ya_node = node->child;
while(ya_node) {
if (strcasecmp(ya_node->name, "floppy") == 0) {
if (ya_node->n_intrs < 2) {
ya_node->intrs = &gatwick_int_pool[count];
count += 2;
}
ya_node->n_intrs = 2;
ya_node->intrs[0].line = 19+irq_base;
ya_node->intrs[1].line = 1+irq_base;
printk(KERN_INFO "irq: fixed floppy on second controller (%d,%d)\n",
ya_node->intrs[0].line, ya_node->intrs[1].line);
}
if (strcasecmp(ya_node->name, "ata4") == 0) {
if (ya_node->n_intrs < 2) {
ya_node->intrs = &gatwick_int_pool[count];
count += 2;
}
ya_node->n_intrs = 2;
ya_node->intrs[0].line = 14+irq_base;
ya_node->intrs[1].line = 3+irq_base;
printk(KERN_INFO "irq: fixed ide on second controller (%d,%d)\n",
ya_node->intrs[0].line, ya_node->intrs[1].line);
}
ya_node = ya_node->sibling;
}
}
}
if (count > 10) {
printk("WARNING !! Gatwick interrupt pool overflow\n");
printk(" GATWICK_IRQ_POOL_SIZE = %d\n", GATWICK_IRQ_POOL_SIZE);
printk(" requested = %d\n", count);
}
}
/*
* The PowerBook 3400/2400/3500 can have a combo ethernet/modem
* card which includes an ohare chip that acts as a second interrupt
* controller. If we find this second ohare, set it up and fix the
* interrupt value in the device tree for the ethernet chip.
*/
static void __init enable_second_ohare(struct device_node *np)
{
unsigned char bus, devfn;
unsigned short cmd;
struct device_node *ether;
/* This code doesn't strictly belong here, it could be part of
* either the PCI initialisation or the feature code. It's kept
* here for historical reasons.
*/
if (pci_device_from_OF_node(np, &bus, &devfn) == 0) {
struct pci_controller* hose =
pci_find_hose_for_OF_device(np);
if (!hose) {
printk(KERN_ERR "Can't find PCI hose for OHare2 !\n");
return;
}
early_read_config_word(hose, bus, devfn, PCI_COMMAND, &cmd);
cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
cmd &= ~PCI_COMMAND_IO;
early_write_config_word(hose, bus, devfn, PCI_COMMAND, cmd);
}
/* Fix interrupt for the modem/ethernet combo controller. The number
* in the device tree (27) is bogus (correct for the ethernet-only
* board but not the combo ethernet/modem board).
* The real interrupt is 28 on the second controller -> 28+32 = 60.
*/
ether = of_find_node_by_name(NULL, "pci1011,14");
if (ether && ether->n_intrs > 0) {
ether->intrs[0].line = 60;
printk(KERN_INFO "irq: Fixed ethernet IRQ to %d\n",
ether->intrs[0].line);
}
of_node_put(ether);
spin_unlock_irqrestore(&pmac_pic_lock, flags);
if (unlikely(irq < 0))
return NO_IRQ;
return irq_linear_revmap(pmac_pic_host, irq);
}
#ifdef CONFIG_XMON
@@ -386,17 +284,60 @@ static struct irqaction gatwick_cascade_action = {
.name = "cascade",
};
static int pmac_pic_host_match(struct irq_host *h, struct device_node *node)
{
/* We match all, we don't always have a node anyway */
return 1;
}
static int pmac_pic_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
struct irq_desc *desc = get_irq_desc(virq);
int level;
if (hw >= max_irqs)
return -EINVAL;
/* Mark level interrupts, set delayed disable for edge ones and set
* handlers
*/
level = !!(level_mask[hw >> 5] & (1UL << (hw & 0x1f)));
if (level)
desc->status |= IRQ_LEVEL;
else
desc->status |= IRQ_DELAYED_DISABLE;
set_irq_chip_and_handler(virq, &pmac_pic, level ?
handle_level_irq : handle_edge_irq);
return 0;
}
static int pmac_pic_host_xlate(struct irq_host *h, struct device_node *ct,
u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq,
unsigned int *out_flags)
{
*out_hwirq = *intspec;
return 0;
}
static struct irq_host_ops pmac_pic_host_ops = {
.match = pmac_pic_host_match,
.map = pmac_pic_host_map,
.xlate = pmac_pic_host_xlate,
};
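With the irq_host registered, clients obtain Linux interrupt numbers through the OF helpers, which run .xlate and then .map above. A hypothetical user (names illustrative, not from this commit):

/* Hypothetical client: turn a node's first interrupt specifier into a
 * virq and request it. irq_of_parse_and_map() invokes the matching
 * host's .xlate, then creates the mapping, which calls .map. */
static int example_request_node_irq(struct device_node *np,
				    irqreturn_t (*handler)(int, void *,
							   struct pt_regs *))
{
	unsigned int virq = irq_of_parse_and_map(np, 0);

	if (virq == NO_IRQ)
		return -ENODEV;
	return request_irq(virq, handler, 0, "example", NULL);
}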
static void __init pmac_pic_probe_oldstyle(void)
{
int i;
int irq_cascade = -1;
struct device_node *master = NULL;
struct device_node *slave = NULL;
u8 __iomem *addr;
struct resource r;
/* Set our get_irq function */
ppc_md.get_irq = pmac_get_irq;
ppc_md.get_irq = pmac_pic_get_irq;
/*
* Find the interrupt controller type & node
@@ -414,7 +355,6 @@ static void __init pmac_pic_probe_oldstyle(void)
if (slave) {
max_irqs = 64;
level_mask[1] = OHARE_LEVEL_MASK;
enable_second_ohare(slave);
}
} else if ((master = of_find_node_by_name(NULL, "mac-io")) != NULL) {
max_irqs = max_real_irqs = 64;
@@ -438,14 +378,18 @@
max_irqs = 128;
level_mask[2] = HEATHROW_LEVEL_MASK;
level_mask[3] = 0;
pmac_fix_gatwick_interrupts(slave, max_real_irqs);
}
}
BUG_ON(master == NULL);
/* Set the handler for the main PIC */
for ( i = 0; i < max_real_irqs ; i++ )
irq_desc[i].chip = &pmac_pic;
/*
* Allocate an irq host
*/
pmac_pic_host = irq_alloc_host(IRQ_HOST_MAP_LINEAR, max_irqs,
&pmac_pic_host_ops,
max_irqs);
BUG_ON(pmac_pic_host == NULL);
irq_set_default_host(pmac_pic_host);
/* Get addresses of first controller if we have a node for it */
BUG_ON(of_address_to_resource(master, 0, &r));
@@ -472,39 +416,38 @@
pmac_irq_hw[i++] =
(volatile struct pmac_irq_hw __iomem *)
(addr + 0x10);
irq_cascade = slave->intrs[0].line;
pmac_irq_cascade = irq_of_parse_and_map(slave, 0);
printk(KERN_INFO "irq: Found slave Apple PIC %s for %d irqs"
" cascade: %d\n", slave->full_name,
max_irqs - max_real_irqs, irq_cascade);
max_irqs - max_real_irqs, pmac_irq_cascade);
}
of_node_put(slave);
/* disable all interrupts in all controllers */
/* Disable all interrupts in all controllers */
for (i = 0; i * 32 < max_irqs; ++i)
out_le32(&pmac_irq_hw[i]->enable, 0);
/* mark level interrupts */
for (i = 0; i < max_irqs; i++)
if (level_mask[i >> 5] & (1UL << (i & 0x1f)))
irq_desc[i].status = IRQ_LEVEL;
/* Hookup cascade irq */
if (slave && pmac_irq_cascade != NO_IRQ)
setup_irq(pmac_irq_cascade, &gatwick_cascade_action);
/* Setup handlers for secondary controller and hook cascade irq */
if (slave) {
for ( i = max_real_irqs ; i < max_irqs ; i++ )
irq_desc[i].chip = &gatwick_pic;
setup_irq(irq_cascade, &gatwick_cascade_action);
}
printk(KERN_INFO "irq: System has %d possible interrupts\n", max_irqs);
#ifdef CONFIG_XMON
setup_irq(20, &xmon_action);
setup_irq(irq_create_mapping(NULL, 20, 0), &xmon_action);
#endif
}
#endif /* CONFIG_PPC32 */
static int pmac_u3_cascade(struct pt_regs *regs, void *data)
static void pmac_u3_cascade(unsigned int irq, struct irq_desc *desc,
struct pt_regs *regs)
{
return mpic_get_one_irq((struct mpic *)data, regs);
struct mpic *mpic = desc->handler_data;
unsigned int cascade_irq = mpic_get_one_irq(mpic, regs);
if (cascade_irq != NO_IRQ)
generic_handle_irq(cascade_irq, regs);
desc->chip->eoi(irq);
}
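pmac_u3_cascade is the new chained-handler idiom that replaces mpic_setup_cascade(): decode the child PIC's interrupt, re-enter the generic layer, then EOI the cascade at the parent. The same skeleton fits any cascade; a generic restatement with hypothetical names:

struct child_pic;	/* hypothetical secondary controller */
extern unsigned int child_pic_get_irq(struct child_pic *, struct pt_regs *);

static void example_cascade(unsigned int irq, struct irq_desc *desc,
			    struct pt_regs *regs)
{
	struct child_pic *child = desc->handler_data;
	unsigned int cascade_irq = child_pic_get_irq(child, regs);

	if (cascade_irq != NO_IRQ)
		generic_handle_irq(cascade_irq, regs);
	desc->chip->eoi(irq);	/* EOI the cascade input at the parent */
}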
static void __init pmac_pic_setup_mpic_nmi(struct mpic *mpic)
@@ -514,21 +457,20 @@ static void __init pmac_pic_setup_mpic_nmi(struct mpic *mpic)
int nmi_irq;
pswitch = of_find_node_by_name(NULL, "programmer-switch");
if (pswitch && pswitch->n_intrs) {
nmi_irq = pswitch->intrs[0].line;
mpic_irq_set_priority(nmi_irq, 9);
setup_irq(nmi_irq, &xmon_action);
if (pswitch) {
nmi_irq = irq_of_parse_and_map(pswitch, 0);
if (nmi_irq != NO_IRQ) {
mpic_irq_set_priority(nmi_irq, 9);
setup_irq(nmi_irq, &xmon_action);
}
of_node_put(pswitch);
}
of_node_put(pswitch);
#endif /* defined(CONFIG_XMON) && defined(CONFIG_PPC32) */
}
static struct mpic * __init pmac_setup_one_mpic(struct device_node *np,
int master)
{
unsigned char senses[128];
int offset = master ? 0 : 128;
int count = master ? 128 : 124;
const char *name = master ? " MPIC 1 " : " MPIC 2 ";
struct resource r;
struct mpic *mpic;
@@ -541,8 +483,6 @@ static struct mpic * __init pmac_setup_one_mpic(struct device_node *np,
pmac_call_feature(PMAC_FTR_ENABLE_MPIC, np, 0, 0);
prom_get_irq_senses(senses, offset, offset + count);
flags |= MPIC_WANTS_RESET;
if (get_property(np, "big-endian", NULL))
flags |= MPIC_BIG_ENDIAN;
@@ -553,8 +493,7 @@ static struct mpic * __init pmac_setup_one_mpic(struct device_node *np,
if (master && (flags & MPIC_BIG_ENDIAN))
flags |= MPIC_BROKEN_U3;
mpic = mpic_alloc(r.start, flags, 0, offset, count, master ? 252 : 0,
senses, count, name);
mpic = mpic_alloc(np, r.start, flags, 0, 0, name);
if (mpic == NULL)
return NULL;
@@ -567,6 +506,7 @@ static int __init pmac_pic_probe_mpic(void)
{
struct mpic *mpic1, *mpic2;
struct device_node *np, *master = NULL, *slave = NULL;
unsigned int cascade;
/* We can have up to 2 MPICs cascaded */
for (np = NULL; (np = of_find_node_by_type(np, "open-pic"))
@@ -603,16 +543,24 @@ static int __init pmac_pic_probe_mpic(void)
of_node_put(master);
/* No slave, let's go out */
if (slave == NULL || slave->n_intrs < 1)
if (slave == NULL)
return 0;
/* Get/Map slave interrupt */
cascade = irq_of_parse_and_map(slave, 0);
if (cascade == NO_IRQ) {
printk(KERN_ERR "Failed to map cascade IRQ\n");
return 0;
}
mpic2 = pmac_setup_one_mpic(slave, 0);
if (mpic2 == NULL) {
printk(KERN_ERR "Failed to setup slave MPIC\n");
of_node_put(slave);
return 0;
}
mpic_setup_cascade(slave->intrs[0].line, pmac_u3_cascade, mpic2);
set_irq_data(cascade, mpic2);
set_irq_chained_handler(cascade, pmac_u3_cascade);
of_node_put(slave);
return 0;
@@ -621,6 +569,19 @@
void __init pmac_pic_init(void)
{
unsigned int flags = 0;
/* We configure the OF parsing based on our oldworld vs. newworld
* platform type and whether we were booted by BootX.
*/
#ifdef CONFIG_PPC32
if (!pmac_newworld)
flags |= OF_IMAP_OLDWORLD_MAC;
if (get_property(of_chosen, "linux,bootx", NULL) != NULL)
flags |= OF_IMAP_NO_PHANDLE;
of_irq_map_init(flags);
#endif /* CONFIG_PPC32 */
/* We first try to detect Apple's new Core99 chipset, since mac-io
* is quite different on those machines and contains an IBM MPIC2.
*/
@@ -643,6 +604,7 @@ unsigned long sleep_save_mask[2];
/* This used to be passed by the PMU driver but that link got
* broken with the new driver model. We use this tweak for now...
* We really want to do things differently though...
*/
static int pmacpic_find_viaint(void)
{
@@ -656,7 +618,7 @@ static int pmacpic_find_viaint(void)
np = of_find_node_by_name(NULL, "via-pmu");
if (np == NULL)
goto not_found;
viaint = np->intrs[0].line;
viaint = irq_of_parse_and_map(np, 0);
#endif /* CONFIG_ADB_PMU */
not_found:


@@ -12,6 +12,8 @@
struct rtc_time;
extern int pmac_newworld;
extern long pmac_time_init(void);
extern unsigned long pmac_get_boot_time(void);
extern void pmac_get_rtc_time(struct rtc_time *);


@@ -613,9 +613,6 @@ static void __init pmac_init_early(void)
udbg_adb_init(!!strstr(cmd_line, "btextdbg"));
#ifdef CONFIG_PPC64
/* Setup interrupt mapping options */
ppc64_interrupt_controller = IC_OPEN_PIC;
iommu_init_early_dart();
#endif
}


@@ -72,32 +72,62 @@ static irqreturn_t ras_error_interrupt(int irq, void *dev_id,
/* #define DEBUG */
static void request_ras_irqs(struct device_node *np, char *propname,
static void request_ras_irqs(struct device_node *np,
irqreturn_t (*handler)(int, void *, struct pt_regs *),
const char *name)
{
unsigned int *ireg, len, i;
int virq, n_intr;
int i, index, count = 0;
struct of_irq oirq;
u32 *opicprop;
unsigned int opicplen;
unsigned int virqs[16];
ireg = (unsigned int *)get_property(np, propname, &len);
if (ireg == NULL)
return;
n_intr = prom_n_intr_cells(np);
len /= n_intr * sizeof(*ireg);
/* Check for obsolete "open-pic-interrupt" property. If present, then
* map those interrupts using the default interrupt host and default
* trigger
*/
opicprop = (u32 *)get_property(np, "open-pic-interrupt", &opicplen);
if (opicprop) {
opicplen /= sizeof(u32);
for (i = 0; i < opicplen; i++) {
if (count > 15)
break;
virqs[count] = irq_create_mapping(NULL, *(opicprop++),
IRQ_TYPE_NONE);
if (virqs[count] == NO_IRQ)
printk(KERN_ERR "Unable to allocate interrupt "
"number for %s\n", np->full_name);
else
count++;
for (i = 0; i < len; i++) {
virq = virt_irq_create_mapping(*ireg);
if (virq == NO_IRQ) {
printk(KERN_ERR "Unable to allocate interrupt "
"number for %s\n", np->full_name);
return;
}
if (request_irq(irq_offset_up(virq), handler, 0, name, NULL)) {
}
/* Else use normal interrupt tree parsing */
else {
/* First try to do a proper OF tree parsing */
for (index = 0; of_irq_map_one(np, index, &oirq) == 0;
index++) {
if (count > 15)
break;
virqs[count] = irq_create_of_mapping(oirq.controller,
oirq.specifier,
oirq.size);
if (virqs[count] == NO_IRQ)
printk(KERN_ERR "Unable to allocate interrupt "
"number for %s\n", np->full_name);
else
count++;
}
}
/* Now request them */
for (i = 0; i < count; i++) {
if (request_irq(virqs[i], handler, 0, name, NULL)) {
printk(KERN_ERR "Unable to request interrupt %d for "
"%s\n", irq_offset_up(virq), np->full_name);
"%s\n", virqs[i], np->full_name);
return;
}
ireg += n_intr;
}
}
@@ -115,20 +145,14 @@ static int __init init_ras_IRQ(void)
/* Internal Errors */
np = of_find_node_by_path("/event-sources/internal-errors");
if (np != NULL) {
request_ras_irqs(np, "open-pic-interrupt", ras_error_interrupt,
"RAS_ERROR");
request_ras_irqs(np, "interrupts", ras_error_interrupt,
"RAS_ERROR");
request_ras_irqs(np, ras_error_interrupt, "RAS_ERROR");
of_node_put(np);
}
/* EPOW Events */
np = of_find_node_by_path("/event-sources/epow-events");
if (np != NULL) {
request_ras_irqs(np, "open-pic-interrupt", ras_epow_interrupt,
"RAS_EPOW");
request_ras_irqs(np, "interrupts", ras_epow_interrupt,
"RAS_EPOW");
request_ras_irqs(np, ras_epow_interrupt, "RAS_EPOW");
of_node_put(np);
}
@@ -162,7 +186,7 @@ ras_epow_interrupt(int irq, void *dev_id, struct pt_regs * regs)
status = rtas_call(ras_check_exception_token, 6, 1, NULL,
RAS_VECTOR_OFFSET,
virt_irq_to_real(irq_offset_down(irq)),
irq_map[irq].hwirq,
RTAS_EPOW_WARNING | RTAS_POWERMGM_EVENTS,
critical, __pa(&ras_log_buf),
rtas_get_error_log_max());
@@ -198,7 +222,7 @@ ras_error_interrupt(int irq, void *dev_id, struct pt_regs * regs)
status = rtas_call(ras_check_exception_token, 6, 1, NULL,
RAS_VECTOR_OFFSET,
virt_irq_to_real(irq_offset_down(irq)),
irq_map[irq].hwirq,
RTAS_INTERNAL_ERROR, 1 /*Time Critical */,
__pa(&ras_log_buf),
rtas_get_error_log_max());


@@ -76,6 +76,9 @@
#define DBG(fmt...)
#endif
/* move those away to a .h */
extern void smp_init_pseries_mpic(void);
extern void smp_init_pseries_xics(void);
extern void find_udbg_vterm(void);
int fwnmi_active; /* TRUE if an FWNMI handler is present */
@@ -83,7 +86,7 @@ int fwnmi_active; /* TRUE if an FWNMI handler is present */
static void pseries_shared_idle_sleep(void);
static void pseries_dedicated_idle_sleep(void);
struct mpic *pSeries_mpic;
static struct device_node *pSeries_mpic_node;
static void pSeries_show_cpuinfo(struct seq_file *m)
{
@@ -118,63 +121,92 @@ static void __init fwnmi_init(void)
fwnmi_active = 1;
}
static void __init pSeries_init_mpic(void)
void pseries_8259_cascade(unsigned int irq, struct irq_desc *desc,
struct pt_regs *regs)
{
unsigned int *addrp;
struct device_node *np;
unsigned long intack = 0;
/* All ISUs are set up, complete initialization */
mpic_init(pSeries_mpic);
/* Check what kind of cascade ACK we have */
if (!(np = of_find_node_by_name(NULL, "pci"))
|| !(addrp = (unsigned int *)
get_property(np, "8259-interrupt-acknowledge", NULL)))
printk(KERN_ERR "Cannot find pci to get ack address\n");
else
intack = addrp[prom_n_addr_cells(np)-1];
of_node_put(np);
/* Setup the legacy interrupts & controller */
i8259_init(intack, 0);
/* Hook cascade to mpic */
mpic_setup_cascade(NUM_ISA_INTERRUPTS, i8259_irq_cascade, NULL);
unsigned int cascade_irq = i8259_irq(regs);
if (cascade_irq != NO_IRQ)
generic_handle_irq(cascade_irq, regs);
desc->chip->eoi(irq);
}
static void __init pSeries_setup_mpic(void)
static void __init pseries_mpic_init_IRQ(void)
{
struct device_node *np, *old, *cascade = NULL;
unsigned int *addrp;
unsigned long intack = 0;
unsigned int *opprop;
unsigned long openpic_addr = 0;
unsigned char senses[NR_IRQS - NUM_ISA_INTERRUPTS];
struct device_node *root;
int irq_count;
unsigned int cascade_irq;
int naddr, n, i, opplen;
struct mpic *mpic;
/* Find the Open PIC if present */
root = of_find_node_by_path("/");
opprop = (unsigned int *) get_property(root, "platform-open-pic", NULL);
np = of_find_node_by_path("/");
naddr = prom_n_addr_cells(np);
opprop = (unsigned int *) get_property(np, "platform-open-pic", &opplen);
if (opprop != 0) {
int n = prom_n_addr_cells(root);
for (openpic_addr = 0; n > 0; --n)
openpic_addr = (openpic_addr << 32) + *opprop++;
openpic_addr = of_read_number(opprop, naddr);
printk(KERN_DEBUG "OpenPIC addr: %lx\n", openpic_addr);
}
of_node_put(root);
of_node_put(np);
BUG_ON(openpic_addr == 0);
/* Get the sense values from OF */
prom_get_irq_senses(senses, NUM_ISA_INTERRUPTS, NR_IRQS);
/* Setup the openpic driver */
irq_count = NR_IRQS - NUM_ISA_INTERRUPTS - 4; /* leave room for IPIs */
pSeries_mpic = mpic_alloc(openpic_addr, MPIC_PRIMARY,
16, 16, irq_count, /* isu size, irq offset, irq count */
NR_IRQS - 4, /* ipi offset */
senses, irq_count, /* sense & sense size */
" MPIC ");
mpic = mpic_alloc(pSeries_mpic_node, openpic_addr,
MPIC_PRIMARY,
16, 250, /* isu size, irq count */
" MPIC ");
BUG_ON(mpic == NULL);
/* Add ISUs */
opplen /= sizeof(u32);
for (n = 0, i = naddr; i < opplen; i += naddr, n++) {
unsigned long isuaddr = of_read_number(opprop + i, naddr);
mpic_assign_isu(mpic, n, isuaddr);
}
/* All ISUs are set up, complete initialization */
mpic_init(mpic);
/* Look for cascade */
for_each_node_by_type(np, "interrupt-controller")
if (device_is_compatible(np, "chrp,iic")) {
cascade = np;
break;
}
if (cascade == NULL)
return;
cascade_irq = irq_of_parse_and_map(cascade, 0);
if (cascade_irq == NO_IRQ) {
printk(KERN_ERR "xics: failed to map cascade interrupt");
return;
}
/* Check ACK type */
for (old = of_node_get(cascade); old != NULL ; old = np) {
np = of_get_parent(old);
of_node_put(old);
if (np == NULL)
break;
if (strcmp(np->name, "pci") != 0)
continue;
addrp = (u32 *)get_property(np, "8259-interrupt-acknowledge",
NULL);
if (addrp == NULL)
continue;
naddr = prom_n_addr_cells(np);
intack = addrp[naddr-1];
if (naddr > 1)
intack |= ((unsigned long)addrp[naddr-2]) << 32;
}
if (intack)
printk(KERN_DEBUG "mpic: PCI 8259 intack at 0x%016lx\n",
intack);
i8259_init(cascade, intack);
of_node_put(cascade);
set_irq_chained_handler(cascade_irq, pseries_8259_cascade);
}
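The ack-address walk above folds the trailing cells of the 8259-interrupt-acknowledge property into a 64-bit physical address. Restated as a standalone helper (equivalent logic, not in the commit):

/* Fold the last naddr cells of an OF address property into an
 * unsigned long, as the intack loop above does by hand. */
static unsigned long example_cells_to_addr(const u32 *addrp, int naddr)
{
	unsigned long addr = addrp[naddr - 1];

	if (naddr > 1)
		addr |= (unsigned long)addrp[naddr - 2] << 32;
	return addr;
}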
static void pseries_lpar_enable_pmcs(void)
@@ -192,23 +224,67 @@ static void pseries_lpar_enable_pmcs(void)
get_lppaca()->pmcregs_in_use = 1;
}
#ifdef CONFIG_KEXEC
static void pseries_kexec_cpu_down_mpic(int crash_shutdown, int secondary)
{
mpic_teardown_this_cpu(secondary);
}
static void pseries_kexec_cpu_down_xics(int crash_shutdown, int secondary)
{
/* Don't risk a hypervisor call if we're crashing */
if (firmware_has_feature(FW_FEATURE_SPLPAR) && !crash_shutdown) {
unsigned long vpa = __pa(get_lppaca());
if (unregister_vpa(hard_smp_processor_id(), vpa)) {
printk("VPA deregistration of cpu %u (hw_cpu_id %d) "
"failed\n", smp_processor_id(),
hard_smp_processor_id());
}
}
xics_teardown_cpu(secondary);
}
#endif /* CONFIG_KEXEC */
static void __init pseries_discover_pic(void)
{
struct device_node *np;
char *typep;
for (np = NULL; (np = of_find_node_by_name(np,
"interrupt-controller"));) {
typep = (char *)get_property(np, "compatible", NULL);
if (strstr(typep, "open-pic")) {
pSeries_mpic_node = of_node_get(np);
ppc_md.init_IRQ = pseries_mpic_init_IRQ;
ppc_md.get_irq = mpic_get_irq;
#ifdef CONFIG_KEXEC
ppc_md.kexec_cpu_down = pseries_kexec_cpu_down_mpic;
#endif
#ifdef CONFIG_SMP
smp_init_pseries_mpic();
#endif
return;
} else if (strstr(typep, "ppc-xicp")) {
ppc_md.init_IRQ = xics_init_IRQ;
#ifdef CONFIG_KEXEC
ppc_md.kexec_cpu_down = pseries_kexec_cpu_down_xics;
#endif
#ifdef CONFIG_SMP
smp_init_pseries_xics();
#endif
return;
}
}
printk(KERN_ERR "pSeries_discover_pic: failed to recognize"
" interrupt-controller\n");
}
static void __init pSeries_setup_arch(void)
{
/* Fixup ppc_md depending on the type of interrupt controller */
if (ppc64_interrupt_controller == IC_OPEN_PIC) {
ppc_md.init_IRQ = pSeries_init_mpic;
ppc_md.get_irq = mpic_get_irq;
/* Allocate the mpic now, so that find_and_init_phbs() can
* fill the ISUs */
pSeries_setup_mpic();
} else {
ppc_md.init_IRQ = xics_init_IRQ;
ppc_md.get_irq = xics_get_irq;
}
/* Discover PIC type and setup ppc_md accordingly */
pseries_discover_pic();
#ifdef CONFIG_SMP
smp_init_pSeries();
#endif
/* openpic global configuration register (64-bit format). */
/* openpic Interrupt Source Unit pointer (64-bit format). */
/* python0 facility area (mmio) (64-bit format) REAL address. */
@@ -260,41 +336,11 @@ static int __init pSeries_init_panel(void)
}
arch_initcall(pSeries_init_panel);
static void __init pSeries_discover_pic(void)
{
struct device_node *np;
char *typep;
/*
* Setup interrupt mapping options that are needed for finish_device_tree
* to properly parse the OF interrupt tree & do the virtual irq mapping
*/
__irq_offset_value = NUM_ISA_INTERRUPTS;
ppc64_interrupt_controller = IC_INVALID;
for (np = NULL; (np = of_find_node_by_name(np, "interrupt-controller"));) {
typep = (char *)get_property(np, "compatible", NULL);
if (strstr(typep, "open-pic")) {
ppc64_interrupt_controller = IC_OPEN_PIC;
break;
} else if (strstr(typep, "ppc-xicp")) {
ppc64_interrupt_controller = IC_PPC_XIC;
break;
}
}
if (ppc64_interrupt_controller == IC_INVALID)
printk("pSeries_discover_pic: failed to recognize"
" interrupt-controller\n");
}
static void pSeries_mach_cpu_die(void)
{
local_irq_disable();
idle_task_exit();
/* Some hardware requires clearing the CPPR, while other hardware does not;
* it is safe either way
*/
pSeriesLP_cppr_info(0, 0);
xics_teardown_cpu(0);
rtas_stop_self();
/* Should never get here... */
BUG();
@@ -332,8 +378,6 @@ static void __init pSeries_init_early(void)
iommu_init_early_pSeries();
pSeries_discover_pic();
DBG(" <- pSeries_init_early()\n");
}
@@ -505,27 +549,6 @@ static int pSeries_pci_probe_mode(struct pci_bus *bus)
return PCI_PROBE_NORMAL;
}
#ifdef CONFIG_KEXEC
static void pseries_kexec_cpu_down(int crash_shutdown, int secondary)
{
/* Don't risk a hypervisor call if we're crashing */
if (firmware_has_feature(FW_FEATURE_SPLPAR) && !crash_shutdown) {
unsigned long vpa = __pa(get_lppaca());
if (unregister_vpa(hard_smp_processor_id(), vpa)) {
printk("VPA deregistration of cpu %u (hw_cpu_id %d) "
"failed\n", smp_processor_id(),
hard_smp_processor_id());
}
}
if (ppc64_interrupt_controller == IC_OPEN_PIC)
mpic_teardown_this_cpu(secondary);
else
xics_teardown_cpu(secondary);
}
#endif
define_machine(pseries) {
.name = "pSeries",
.probe = pSeries_probe,
@@ -550,7 +573,6 @@ define_machine(pseries) {
.system_reset_exception = pSeries_system_reset_exception,
.machine_check_exception = pSeries_machine_check_exception,
#ifdef CONFIG_KEXEC
.kexec_cpu_down = pseries_kexec_cpu_down,
.machine_kexec = default_machine_kexec,
.machine_kexec_prepare = default_machine_kexec_prepare,
.machine_crash_shutdown = default_machine_crash_shutdown,


@@ -416,27 +416,12 @@ static struct smp_ops_t pSeries_xics_smp_ops = {
#endif
/* This is called very early */
void __init smp_init_pSeries(void)
static void __init smp_init_pseries(void)
{
int i;
DBG(" -> smp_init_pSeries()\n");
switch (ppc64_interrupt_controller) {
#ifdef CONFIG_MPIC
case IC_OPEN_PIC:
smp_ops = &pSeries_mpic_smp_ops;
break;
#endif
#ifdef CONFIG_XICS
case IC_PPC_XIC:
smp_ops = &pSeries_xics_smp_ops;
break;
#endif
default:
panic("Invalid interrupt controller");
}
#ifdef CONFIG_HOTPLUG_CPU
smp_ops->cpu_disable = pSeries_cpu_disable;
smp_ops->cpu_die = pSeries_cpu_die;
@@ -471,3 +456,18 @@ void __init smp_init_pSeries(void)
DBG(" <- smp_init_pSeries()\n");
}
#ifdef CONFIG_MPIC
void __init smp_init_pseries_mpic(void)
{
smp_ops = &pSeries_mpic_smp_ops;
smp_init_pseries();
}
#endif
void __init smp_init_pseries_xics(void)
{
smp_ops = &pSeries_xics_smp_ops;
smp_init_pseries();
}


@@ -8,6 +8,9 @@
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#undef DEBUG
#include <linux/types.h>
#include <linux/threads.h>
#include <linux/kernel.h>
@@ -19,6 +22,7 @@
#include <linux/gfp.h>
#include <linux/radix-tree.h>
#include <linux/cpu.h>
#include <asm/firmware.h>
#include <asm/prom.h>
#include <asm/io.h>
@@ -31,26 +35,6 @@
#include "xics.h"
static unsigned int xics_startup(unsigned int irq);
static void xics_enable_irq(unsigned int irq);
static void xics_disable_irq(unsigned int irq);
static void xics_mask_and_ack_irq(unsigned int irq);
static void xics_end_irq(unsigned int irq);
static void xics_set_affinity(unsigned int irq_nr, cpumask_t cpumask);
static struct hw_interrupt_type xics_pic = {
.typename = " XICS ",
.startup = xics_startup,
.enable = xics_enable_irq,
.disable = xics_disable_irq,
.ack = xics_mask_and_ack_irq,
.end = xics_end_irq,
.set_affinity = xics_set_affinity
};
/* This is used to map real irq numbers to virtual */
static struct radix_tree_root irq_map = RADIX_TREE_INIT(GFP_ATOMIC);
#define XICS_IPI 2
#define XICS_IRQ_SPURIOUS 0
@@ -81,12 +65,12 @@ struct xics_ipl {
static struct xics_ipl __iomem *xics_per_cpu[NR_CPUS];
static int xics_irq_8259_cascade = 0;
static int xics_irq_8259_cascade_real = 0;
static unsigned int default_server = 0xFF;
static unsigned int default_distrib_server = 0;
static unsigned int interrupt_server_size = 8;
static struct irq_host *xics_host;
/*
* XICS only has a single IPI, so encode the messages per CPU
*/
@@ -98,48 +82,34 @@ static int ibm_set_xive;
static int ibm_int_on;
static int ibm_int_off;
typedef struct {
int (*xirr_info_get)(int cpu);
void (*xirr_info_set)(int cpu, int val);
void (*cppr_info)(int cpu, u8 val);
void (*qirr_info)(int cpu, u8 val);
} xics_ops;
/* Direct HW low level accessors */
/* SMP */
static int pSeries_xirr_info_get(int n_cpu)
static inline unsigned int direct_xirr_info_get(int n_cpu)
{
return in_be32(&xics_per_cpu[n_cpu]->xirr.word);
}
static void pSeries_xirr_info_set(int n_cpu, int value)
static inline void direct_xirr_info_set(int n_cpu, int value)
{
out_be32(&xics_per_cpu[n_cpu]->xirr.word, value);
}
static void pSeries_cppr_info(int n_cpu, u8 value)
static inline void direct_cppr_info(int n_cpu, u8 value)
{
out_8(&xics_per_cpu[n_cpu]->xirr.bytes[0], value);
}
static void pSeries_qirr_info(int n_cpu, u8 value)
static inline void direct_qirr_info(int n_cpu, u8 value)
{
out_8(&xics_per_cpu[n_cpu]->qirr.bytes[0], value);
}
static xics_ops pSeries_ops = {
pSeries_xirr_info_get,
pSeries_xirr_info_set,
pSeries_cppr_info,
pSeries_qirr_info
};
static xics_ops *ops = &pSeries_ops;
/* LPAR low level accessors */
/* LPAR */
static inline long plpar_eoi(unsigned long xirr)
{
return plpar_hcall_norets(H_EOI, xirr);
@@ -161,7 +131,7 @@ static inline long plpar_xirr(unsigned long *xirr_ret)
return plpar_hcall(H_XIRR, 0, 0, 0, 0, xirr_ret, &dummy, &dummy);
}
static int pSeriesLP_xirr_info_get(int n_cpu)
static inline unsigned int lpar_xirr_info_get(int n_cpu)
{
unsigned long lpar_rc;
unsigned long return_value;
@@ -169,10 +139,10 @@ static int pSeriesLP_xirr_info_get(int n_cpu)
lpar_rc = plpar_xirr(&return_value);
if (lpar_rc != H_SUCCESS)
panic(" bad return code xirr - rc = %lx \n", lpar_rc);
return (int)return_value;
return (unsigned int)return_value;
}
static void pSeriesLP_xirr_info_set(int n_cpu, int value)
static inline void lpar_xirr_info_set(int n_cpu, int value)
{
unsigned long lpar_rc;
unsigned long val64 = value & 0xffffffff;
@@ -183,7 +153,7 @@ static void pSeriesLP_xirr_info_set(int n_cpu, int value)
val64);
}
void pSeriesLP_cppr_info(int n_cpu, u8 value)
static inline void lpar_cppr_info(int n_cpu, u8 value)
{
unsigned long lpar_rc;
@@ -192,7 +162,7 @@ void pSeriesLP_cppr_info(int n_cpu, u8 value)
panic("bad return code cppr - rc = %lx\n", lpar_rc);
}
static void pSeriesLP_qirr_info(int n_cpu , u8 value)
static inline void lpar_qirr_info(int n_cpu , u8 value)
{
unsigned long lpar_rc;
@@ -201,43 +171,16 @@ static void pSeriesLP_qirr_info(int n_cpu , u8 value)
panic("bad return code qirr - rc = %lx\n", lpar_rc);
}
xics_ops pSeriesLP_ops = {
pSeriesLP_xirr_info_get,
pSeriesLP_xirr_info_set,
pSeriesLP_cppr_info,
pSeriesLP_qirr_info
};
static unsigned int xics_startup(unsigned int virq)
{
unsigned int irq;
/* High level handlers and init code */
irq = irq_offset_down(virq);
if (radix_tree_insert(&irq_map, virt_irq_to_real(irq),
&virt_irq_to_real_map[irq]) == -ENOMEM)
printk(KERN_CRIT "Out of memory creating real -> virtual"
" IRQ mapping for irq %u (real 0x%x)\n",
virq, virt_irq_to_real(irq));
xics_enable_irq(virq);
return 0; /* return value is ignored */
}
static unsigned int real_irq_to_virt(unsigned int real_irq)
{
unsigned int *ptr;
ptr = radix_tree_lookup(&irq_map, real_irq);
if (ptr == NULL)
return NO_IRQ;
return ptr - virt_irq_to_real_map;
}
#ifdef CONFIG_SMP
static int get_irq_server(unsigned int irq)
static int get_irq_server(unsigned int virq)
{
unsigned int server;
/* For the moment only implement delivery to all cpus or one cpu */
cpumask_t cpumask = irq_desc[irq].affinity;
cpumask_t cpumask = irq_desc[virq].affinity;
cpumask_t tmp = CPU_MASK_NONE;
if (!distribute_irqs)
@@ -258,23 +201,28 @@ static int get_irq_server(unsigned int irq)
}
#else
static int get_irq_server(unsigned int irq)
static int get_irq_server(unsigned int virq)
{
return default_server;
}
#endif
static void xics_enable_irq(unsigned int virq)
static void xics_unmask_irq(unsigned int virq)
{
unsigned int irq;
int call_status;
unsigned int server;
irq = virt_irq_to_real(irq_offset_down(virq));
if (irq == XICS_IPI)
pr_debug("xics: unmask virq %d\n", virq);
irq = (unsigned int)irq_map[virq].hwirq;
pr_debug(" -> map to hwirq 0x%x\n", irq);
if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
return;
server = get_irq_server(virq);
call_status = rtas_call(ibm_set_xive, 3, 1, NULL, irq, server,
DEFAULT_PRIORITY);
if (call_status != 0) {
@@ -293,7 +241,7 @@ static void xics_enable_irq(unsigned int virq)
}
}
static void xics_disable_real_irq(unsigned int irq)
static void xics_mask_real_irq(unsigned int irq)
{
int call_status;
unsigned int server;
@@ -318,75 +266,86 @@ static void xics_disable_real_irq(unsigned int irq)
}
}
static void xics_disable_irq(unsigned int virq)
static void xics_mask_irq(unsigned int virq)
{
unsigned int irq;
irq = virt_irq_to_real(irq_offset_down(virq));
xics_disable_real_irq(irq);
pr_debug("xics: mask virq %d\n", virq);
irq = (unsigned int)irq_map[virq].hwirq;
if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
return;
xics_mask_real_irq(irq);
}
static void xics_end_irq(unsigned int irq)
static unsigned int xics_startup(unsigned int virq)
{
unsigned int irq;
/* force a reverse mapping of the interrupt so it gets in the cache */
irq = (unsigned int)irq_map[virq].hwirq;
irq_radix_revmap(xics_host, irq);
/* unmask it */
xics_unmask_irq(virq);
return 0;
}
static void xics_eoi_direct(unsigned int virq)
{
int cpu = smp_processor_id();
unsigned int irq = (unsigned int)irq_map[virq].hwirq;
iosync();
ops->xirr_info_set(cpu, ((0xff << 24) |
(virt_irq_to_real(irq_offset_down(irq)))));
direct_xirr_info_set(cpu, (0xff << 24) | irq);
}
static void xics_mask_and_ack_irq(unsigned int irq)
static void xics_eoi_lpar(unsigned int virq)
{
int cpu = smp_processor_id();
unsigned int irq = (unsigned int)irq_map[virq].hwirq;
if (irq < irq_offset_value()) {
i8259_pic.ack(irq);
iosync();
ops->xirr_info_set(cpu, ((0xff<<24) |
xics_irq_8259_cascade_real));
iosync();
}
iosync();
lpar_xirr_info_set(cpu, (0xff << 24) | irq);
}
int xics_get_irq(struct pt_regs *regs)
static inline unsigned int xics_remap_irq(unsigned int vec)
{
unsigned int cpu = smp_processor_id();
unsigned int vec;
int irq;
unsigned int irq;
vec = ops->xirr_info_get(cpu);
/* (vec >> 24) == old priority */
vec &= 0x00ffffff;
/* for sanity, this had better be < NR_IRQS - 16 */
if (vec == xics_irq_8259_cascade_real) {
irq = i8259_irq(regs);
xics_end_irq(irq_offset_up(xics_irq_8259_cascade));
} else if (vec == XICS_IRQ_SPURIOUS) {
irq = -1;
} else {
irq = real_irq_to_virt(vec);
if (irq == NO_IRQ)
irq = real_irq_to_virt_slowpath(vec);
if (irq == NO_IRQ) {
printk(KERN_ERR "Interrupt %u (real) is invalid,"
" disabling it.\n", vec);
xics_disable_real_irq(vec);
} else
irq = irq_offset_up(irq);
}
return irq;
if (vec == XICS_IRQ_SPURIOUS)
return NO_IRQ;
irq = irq_radix_revmap(xics_host, vec);
if (likely(irq != NO_IRQ))
return irq;
printk(KERN_ERR "Interrupt %u (real) is invalid,"
" disabling it.\n", vec);
xics_mask_real_irq(vec);
return NO_IRQ;
}
static unsigned int xics_get_irq_direct(struct pt_regs *regs)
{
unsigned int cpu = smp_processor_id();
return xics_remap_irq(direct_xirr_info_get(cpu));
}
static unsigned int xics_get_irq_lpar(struct pt_regs *regs)
{
unsigned int cpu = smp_processor_id();
return xics_remap_irq(lpar_xirr_info_get(cpu));
}
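Both get_irq flavours feed xics_remap_irq(), which depends on the XIRR register layout: the top byte is the CPPR (priority) at interrupt time, the low 24 bits the source; the EOI paths write back (0xff << 24) | source. Spelled out as helpers (illustrative only):

static inline unsigned int example_xirr_source(unsigned int xirr)
{
	return xirr & 0x00ffffff;	/* bits 23..0: interrupt source */
}

static inline unsigned int example_xirr_cppr(unsigned int xirr)
{
	return xirr >> 24;		/* bits 31..24: priority when taken */
}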
#ifdef CONFIG_SMP
static irqreturn_t xics_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
static irqreturn_t xics_ipi_dispatch(int cpu, struct pt_regs *regs)
{
int cpu = smp_processor_id();
ops->qirr_info(cpu, 0xff);
WARN_ON(cpu_is_offline(cpu));
while (xics_ipi_message[cpu].value) {
@@ -418,183 +377,42 @@ static irqreturn_t xics_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
return IRQ_HANDLED;
}
void xics_cause_IPI(int cpu)
{
ops->qirr_info(cpu, IPI_PRIORITY);
}
#endif /* CONFIG_SMP */
void xics_setup_cpu(void)
static irqreturn_t xics_ipi_action_direct(int irq, void *dev_id, struct pt_regs *regs)
{
int cpu = smp_processor_id();
ops->cppr_info(cpu, 0xff);
iosync();
direct_qirr_info(cpu, 0xff);
/*
* Put the calling processor into the GIQ. This is really only
* necessary from a secondary thread as the OF start-cpu interface
* performs this function for us on primary threads.
*
* XXX: undo of teardown on kexec needs this too, as may hotplug
*/
rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE,
(1UL << interrupt_server_size) - 1 - default_distrib_server, 1);
return xics_ipi_dispatch(cpu, regs);
}
void xics_init_IRQ(void)
static irqreturn_t xics_ipi_action_lpar(int irq, void *dev_id, struct pt_regs *regs)
{
int i;
unsigned long intr_size = 0;
struct device_node *np;
uint *ireg, ilen, indx = 0;
unsigned long intr_base = 0;
struct xics_interrupt_node {
unsigned long addr;
unsigned long size;
} intnodes[NR_CPUS];
int cpu = smp_processor_id();
ppc64_boot_msg(0x20, "XICS Init");
lpar_qirr_info(cpu, 0xff);
ibm_get_xive = rtas_token("ibm,get-xive");
ibm_set_xive = rtas_token("ibm,set-xive");
ibm_int_on = rtas_token("ibm,int-on");
ibm_int_off = rtas_token("ibm,int-off");
np = of_find_node_by_type(NULL, "PowerPC-External-Interrupt-Presentation");
if (!np)
panic("xics_init_IRQ: can't find interrupt presentation");
nextnode:
ireg = (uint *)get_property(np, "ibm,interrupt-server-ranges", NULL);
if (ireg) {
/*
* set node starting index for this node
*/
indx = *ireg;
}
ireg = (uint *)get_property(np, "reg", &ilen);
if (!ireg)
panic("xics_init_IRQ: can't find interrupt reg property");
while (ilen) {
intnodes[indx].addr = (unsigned long)*ireg++ << 32;
ilen -= sizeof(uint);
intnodes[indx].addr |= *ireg++;
ilen -= sizeof(uint);
intnodes[indx].size = (unsigned long)*ireg++ << 32;
ilen -= sizeof(uint);
intnodes[indx].size |= *ireg++;
ilen -= sizeof(uint);
indx++;
if (indx >= NR_CPUS) break;
}
np = of_find_node_by_type(np, "PowerPC-External-Interrupt-Presentation");
if ((indx < NR_CPUS) && np) goto nextnode;
/* Find the server numbers for the boot cpu. */
for (np = of_find_node_by_type(NULL, "cpu");
np;
np = of_find_node_by_type(np, "cpu")) {
ireg = (uint *)get_property(np, "reg", &ilen);
if (ireg && ireg[0] == get_hard_smp_processor_id(boot_cpuid)) {
ireg = (uint *)get_property(np, "ibm,ppc-interrupt-gserver#s",
&ilen);
i = ilen / sizeof(int);
if (ireg && i > 0) {
default_server = ireg[0];
default_distrib_server = ireg[i-1]; /* take last element */
}
ireg = (uint *)get_property(np,
"ibm,interrupt-server#-size", NULL);
if (ireg)
interrupt_server_size = *ireg;
break;
}
}
of_node_put(np);
intr_base = intnodes[0].addr;
intr_size = intnodes[0].size;
np = of_find_node_by_type(NULL, "interrupt-controller");
if (!np) {
printk(KERN_DEBUG "xics: no ISA interrupt controller\n");
xics_irq_8259_cascade_real = -1;
xics_irq_8259_cascade = -1;
} else {
ireg = (uint *) get_property(np, "interrupts", NULL);
if (!ireg)
panic("xics_init_IRQ: can't find ISA interrupts property");
xics_irq_8259_cascade_real = *ireg;
xics_irq_8259_cascade
= virt_irq_create_mapping(xics_irq_8259_cascade_real);
i8259_init(0, 0);
of_node_put(np);
}
return xics_ipi_dispatch(cpu, regs);
}
void xics_cause_IPI(int cpu)
{
if (firmware_has_feature(FW_FEATURE_LPAR))
ops = &pSeriesLP_ops;
else {
#ifdef CONFIG_SMP
for_each_possible_cpu(i) {
int hard_id;
lpar_qirr_info(cpu, IPI_PRIORITY);
else
direct_qirr_info(cpu, IPI_PRIORITY);
}
/* FIXME: Do this dynamically! --RR */
if (!cpu_present(i))
continue;
hard_id = get_hard_smp_processor_id(i);
xics_per_cpu[i] = ioremap(intnodes[hard_id].addr,
intnodes[hard_id].size);
}
#else
xics_per_cpu[0] = ioremap(intr_base, intr_size);
#endif /* CONFIG_SMP */
}
for (i = irq_offset_value(); i < NR_IRQS; ++i)
get_irq_desc(i)->chip = &xics_pic;
xics_setup_cpu();
ppc64_boot_msg(0x21, "XICS Done");
}
/*
* We can't do this in init_IRQ because we need the memory subsystem up for
* request_irq()
*/
static int __init xics_setup_i8259(void)
static void xics_set_cpu_priority(int cpu, unsigned char cppr)
{
if (ppc64_interrupt_controller == IC_PPC_XIC &&
xics_irq_8259_cascade != -1) {
if (request_irq(irq_offset_up(xics_irq_8259_cascade),
no_action, 0, "8259 cascade", NULL))
printk(KERN_ERR "xics_setup_i8259: couldn't get 8259 "
"cascade\n");
}
return 0;
if (firmware_has_feature(FW_FEATURE_LPAR))
lpar_cppr_info(cpu, cppr);
else
direct_cppr_info(cpu, cppr);
iosync();
}
arch_initcall(xics_setup_i8259);
#ifdef CONFIG_SMP
void xics_request_IPIs(void)
{
virt_irq_to_real_map[XICS_IPI] = XICS_IPI;
/*
* IPIs are marked IRQF_DISABLED as they must run with irqs
* disabled
*/
request_irq(irq_offset_up(XICS_IPI), xics_ipi_action,
IRQF_DISABLED, "IPI", NULL);
get_irq_desc(irq_offset_up(XICS_IPI))->status |= IRQ_PER_CPU;
}
#endif
static void xics_set_affinity(unsigned int virq, cpumask_t cpumask)
{
@@ -604,8 +422,8 @@ static void xics_set_affinity(unsigned int virq, cpumask_t cpumask)
unsigned long newmask;
cpumask_t tmp = CPU_MASK_NONE;
irq = virt_irq_to_real(irq_offset_down(virq));
if (irq == XICS_IPI || irq == NO_IRQ)
irq = (unsigned int)irq_map[virq].hwirq;
if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
return;
status = rtas_call(ibm_get_xive, 1, 3, xics_status, irq);
@@ -636,15 +454,333 @@ static void xics_set_affinity(unsigned int virq, cpumask_t cpumask)
}
}
void xics_teardown_cpu(int secondary)
void xics_setup_cpu(void)
{
int cpu = smp_processor_id();
ops->cppr_info(cpu, 0x00);
iosync();
xics_set_cpu_priority(cpu, 0xff);
/* Clear IPI */
ops->qirr_info(cpu, 0xff);
/*
* Put the calling processor into the GIQ. This is really only
* necessary from a secondary thread as the OF start-cpu interface
* performs this function for us on primary threads.
*
* XXX: undo of teardown on kexec needs this too, as may hotplug
*/
rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE,
(1UL << interrupt_server_size) - 1 - default_distrib_server, 1);
}
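The rtas_set_indicator() argument above encodes which global interrupt queue to join: the complement of the server number within an interrupt_server_size-bit field. As a standalone expression (restating the arithmetic; with interrupt_server_size == 8 and default_distrib_server == 1 it yields 0xfe):

static unsigned long example_giq_token(unsigned int server_size,
				       unsigned int server)
{
	return (1UL << server_size) - 1 - server;
}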
static struct irq_chip xics_pic_direct = {
.typename = " XICS ",
.startup = xics_startup,
.mask = xics_mask_irq,
.unmask = xics_unmask_irq,
.eoi = xics_eoi_direct,
.set_affinity = xics_set_affinity
};
static struct irq_chip xics_pic_lpar = {
.typename = " XICS ",
.startup = xics_startup,
.mask = xics_mask_irq,
.unmask = xics_unmask_irq,
.eoi = xics_eoi_lpar,
.set_affinity = xics_set_affinity
};
static int xics_host_match(struct irq_host *h, struct device_node *node)
{
/* IBM machines have interrupt parents of various funky types for things
* like vdevices, events, etc... The trick we use here is to match
* everything except the legacy 8259, which is compatible with "chrp,iic"
*/
return !device_is_compatible(node, "chrp,iic");
}
static int xics_host_map_direct(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
unsigned int sense = flags & IRQ_TYPE_SENSE_MASK;
pr_debug("xics: map_direct virq %d, hwirq 0x%lx, flags: 0x%x\n",
virq, hw, flags);
if (sense && sense != IRQ_TYPE_LEVEL_LOW)
printk(KERN_WARNING "xics: using unsupported sense 0x%x"
" for irq %d (h: 0x%lx)\n", flags, virq, hw);
get_irq_desc(virq)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(virq, &xics_pic_direct, handle_fasteoi_irq);
return 0;
}
static int xics_host_map_lpar(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
unsigned int sense = flags & IRQ_TYPE_SENSE_MASK;
pr_debug("xics: map_lpar virq %d, hwirq 0x%lx, flags: 0x%x\n",
virq, hw, flags);
if (sense && sense != IRQ_TYPE_LEVEL_LOW)
printk(KERN_WARNING "xics: using unsupported sense 0x%x"
" for irq %d (h: 0x%lx)\n", flags, virq, hw);
get_irq_desc(virq)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(virq, &xics_pic_lpar, handle_fasteoi_irq);
return 0;
}
static int xics_host_xlate(struct irq_host *h, struct device_node *ct,
u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_flags)
{
/* Current xics implementation translates everything
* to level. It is not technically right for MSIs but this
* is irrelevant at this point. We might get smarter in the future
*/
*out_hwirq = intspec[0];
*out_flags = IRQ_TYPE_LEVEL_LOW;
return 0;
}
static struct irq_host_ops xics_host_direct_ops = {
.match = xics_host_match,
.map = xics_host_map_direct,
.xlate = xics_host_xlate,
};
static struct irq_host_ops xics_host_lpar_ops = {
.match = xics_host_match,
.map = xics_host_map_lpar,
.xlate = xics_host_xlate,
};
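Whatever .xlate returns is consumed by the core when a mapping is created. A simplified sketch of that path — an assumption about this kernel's irq_create_of_mapping(), not code from the commit:

static unsigned int sketch_create_of_mapping(struct irq_host *host,
					     u32 *intspec, unsigned int intsize)
{
	irq_hw_number_t hwirq;
	unsigned int flags = IRQ_TYPE_NONE;

	if (host->ops->xlate(host, NULL, intspec, intsize, &hwirq, &flags))
		return NO_IRQ;
	/* irq_create_mapping() ends up calling the host's .map */
	return irq_create_mapping(host, hwirq, flags);
}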
static void __init xics_init_host(void)
{
struct irq_host_ops *ops;
if (firmware_has_feature(FW_FEATURE_LPAR))
ops = &xics_host_lpar_ops;
else
ops = &xics_host_direct_ops;
xics_host = irq_alloc_host(IRQ_HOST_MAP_TREE, 0, ops,
XICS_IRQ_SPURIOUS);
BUG_ON(xics_host == NULL);
irq_set_default_host(xics_host);
}
static void __init xics_map_one_cpu(int hw_id, unsigned long addr,
unsigned long size)
{
#ifdef CONFIG_SMP
int i;
/* This may look gross but it's good enough for now; we don't quite
* have a hard -> linux processor id mapping.
*/
for_each_possible_cpu(i) {
if (!cpu_present(i))
continue;
if (hw_id == get_hard_smp_processor_id(i)) {
xics_per_cpu[i] = ioremap(addr, size);
return;
}
}
#else
if (hw_id != 0)
return;
xics_per_cpu[0] = ioremap(addr, size);
#endif /* CONFIG_SMP */
}
static void __init xics_init_one_node(struct device_node *np,
unsigned int *indx)
{
unsigned int ilen;
u32 *ireg;
/* This code makes the theoretically broken assumption that the interrupt
* server numbers are the same as the hard CPU numbers.
* This happens to be the case so far but we are playing with fire...
* should be fixed one of these days. -BenH.
*/
ireg = (u32 *)get_property(np, "ibm,interrupt-server-ranges", NULL);
/* Does that ever happen? We'll know soon enough... but even good old
* f80 does have that property.
*/
WARN_ON(ireg == NULL);
if (ireg) {
/*
* set node starting index for this node
*/
*indx = *ireg;
}
ireg = (u32 *)get_property(np, "reg", &ilen);
if (!ireg)
panic("xics_init_IRQ: can't find interrupt reg property");
while (ilen >= (4 * sizeof(u32))) {
unsigned long addr, size;
/* XXX Use proper OF parsing code here !!! */
addr = (unsigned long)*ireg++ << 32;
ilen -= sizeof(u32);
addr |= *ireg++;
ilen -= sizeof(u32);
size = (unsigned long)*ireg++ << 32;
ilen -= sizeof(u32);
size |= *ireg++;
ilen -= sizeof(u32);
xics_map_one_cpu(*indx, addr, size);
(*indx)++;
}
}
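The XXX above asks for proper OF parsing; with of_read_number(), already used elsewhere in this commit, the loop could fold the cells instead of shifting by hand. A sketch, assuming each entry is two address cells followed by two size cells:

while (ilen >= (4 * sizeof(u32))) {
	unsigned long addr = of_read_number(ireg, 2);
	unsigned long size = of_read_number(ireg + 2, 2);

	ireg += 4;
	ilen -= 4 * sizeof(u32);
	xics_map_one_cpu(*indx, addr, size);
	(*indx)++;
}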
static void __init xics_setup_8259_cascade(void)
{
struct device_node *np, *old, *found = NULL;
int cascade, naddr;
u32 *addrp;
unsigned long intack = 0;
for_each_node_by_type(np, "interrupt-controller")
if (device_is_compatible(np, "chrp,iic")) {
found = np;
break;
}
if (found == NULL) {
printk(KERN_DEBUG "xics: no ISA interrupt controller\n");
return;
}
cascade = irq_of_parse_and_map(found, 0);
if (cascade == NO_IRQ) {
printk(KERN_ERR "xics: failed to map cascade interrupt");
return;
}
pr_debug("xics: cascade mapped to irq %d\n", cascade);
for (old = of_node_get(found); old != NULL ; old = np) {
np = of_get_parent(old);
of_node_put(old);
if (np == NULL)
break;
if (strcmp(np->name, "pci") != 0)
continue;
addrp = (u32 *)get_property(np, "8259-interrupt-acknowledge", NULL);
if (addrp == NULL)
continue;
naddr = prom_n_addr_cells(np);
intack = addrp[naddr-1];
if (naddr > 1)
intack |= ((unsigned long)addrp[naddr-2]) << 32;
}
if (intack)
printk(KERN_DEBUG "xics: PCI 8259 intack at 0x%016lx\n", intack);
i8259_init(found, intack);
of_node_put(found);
set_irq_chained_handler(cascade, pseries_8259_cascade);
}
void __init xics_init_IRQ(void)
{
int i;
struct device_node *np;
u32 *ireg, ilen, indx = 0;
int found = 0;
ppc64_boot_msg(0x20, "XICS Init");
ibm_get_xive = rtas_token("ibm,get-xive");
ibm_set_xive = rtas_token("ibm,set-xive");
ibm_int_on = rtas_token("ibm,int-on");
ibm_int_off = rtas_token("ibm,int-off");
for_each_node_by_type(np, "PowerPC-External-Interrupt-Presentation") {
found = 1;
if (firmware_has_feature(FW_FEATURE_LPAR))
break;
xics_init_one_node(np, &indx);
}
if (found == 0)
return;
xics_init_host();
/* Find the server numbers for the boot cpu. */
for (np = of_find_node_by_type(NULL, "cpu");
np;
np = of_find_node_by_type(np, "cpu")) {
ireg = (u32 *)get_property(np, "reg", &ilen);
if (ireg && ireg[0] == get_hard_smp_processor_id(boot_cpuid)) {
ireg = (u32 *)get_property(np,
"ibm,ppc-interrupt-gserver#s",
&ilen);
i = ilen / sizeof(int);
if (ireg && i > 0) {
default_server = ireg[0];
/* take last element */
default_distrib_server = ireg[i-1];
}
ireg = (u32 *)get_property(np,
"ibm,interrupt-server#-size", NULL);
if (ireg)
interrupt_server_size = *ireg;
break;
}
}
of_node_put(np);
if (firmware_has_feature(FW_FEATURE_LPAR))
ppc_md.get_irq = xics_get_irq_lpar;
else
ppc_md.get_irq = xics_get_irq_direct;
xics_setup_cpu();
xics_setup_8259_cascade();
ppc64_boot_msg(0x21, "XICS Done");
}
#ifdef CONFIG_SMP
void xics_request_IPIs(void)
{
unsigned int ipi;
ipi = irq_create_mapping(xics_host, XICS_IPI, 0);
BUG_ON(ipi == NO_IRQ);
/*
* IPIs are marked IRQF_DISABLED as they must run with irqs
* disabled
*/
set_irq_handler(ipi, handle_percpu_irq);
if (firmware_has_feature(FW_FEATURE_LPAR))
request_irq(ipi, xics_ipi_action_lpar, IRQF_DISABLED,
"IPI", NULL);
else
request_irq(ipi, xics_ipi_action_direct, IRQF_DISABLED,
"IPI", NULL);
}
#endif /* CONFIG_SMP */
void xics_teardown_cpu(int secondary)
{
int cpu = smp_processor_id();
unsigned int ipi;
struct irq_desc *desc;
xics_set_cpu_priority(cpu, 0);
/*
* we need to EOI the IPI if we got here from kexec down IPI
@@ -653,7 +789,13 @@ void xics_teardown_cpu(int secondary)
* should we be flagging idle loop instead?
* or creating some task to be scheduled?
*/
ops->xirr_info_set(cpu, XICS_IPI);
ipi = irq_find_mapping(xics_host, XICS_IPI);
if (ipi == XICS_IRQ_SPURIOUS)
return;
desc = get_irq_desc(ipi);
if (desc->chip && desc->chip->eoi)
desc->chip->eoi(ipi);
/*
* Some machines need to have at least one cpu in the GIQ,
@@ -661,8 +803,8 @@ */
*/
if (secondary)
rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE,
(1UL << interrupt_server_size) - 1 -
default_distrib_server, 0);
(1UL << interrupt_server_size) - 1 -
default_distrib_server, 0);
}
#ifdef CONFIG_HOTPLUG_CPU
@@ -674,8 +816,7 @@ void xics_migrate_irqs_away(void)
unsigned int irq, virq, cpu = smp_processor_id();
/* Reject any interrupt that was queued to us... */
ops->cppr_info(cpu, 0);
iosync();
xics_set_cpu_priority(cpu, 0);
/* remove ourselves from the global interrupt queue */
status = rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE,
@@ -683,24 +824,23 @@ void xics_migrate_irqs_away(void)
WARN_ON(status < 0);
/* Allow IPIs again... */
ops->cppr_info(cpu, DEFAULT_PRIORITY);
iosync();
xics_set_cpu_priority(cpu, DEFAULT_PRIORITY);
for_each_irq(virq) {
irq_desc_t *desc;
struct irq_desc *desc;
int xics_status[2];
unsigned long flags;
/* We can't set affinity on ISA interrupts */
if (virq < irq_offset_value())
if (virq < NUM_ISA_INTERRUPTS)
continue;
desc = get_irq_desc(virq);
irq = virt_irq_to_real(irq_offset_down(virq));
if (irq_map[virq].host != xics_host)
continue;
irq = (unsigned int)irq_map[virq].hwirq;
/* We need to get IPIs still. */
if (irq == XICS_IPI || irq == NO_IRQ)
if (irq == XICS_IPI || irq == XICS_IRQ_SPURIOUS)
continue;
desc = get_irq_desc(virq);
/* We only need to migrate enabled IRQS */
if (desc == NULL || desc->chip == NULL


@@ -14,13 +14,12 @@
#include <linux/cache.h>
void xics_init_IRQ(void);
int xics_get_irq(struct pt_regs *);
void xics_setup_cpu(void);
void xics_teardown_cpu(int secondary);
void xics_cause_IPI(int cpu);
void xics_request_IPIs(void);
void xics_migrate_irqs_away(void);
extern void xics_init_IRQ(void);
extern void xics_setup_cpu(void);
extern void xics_teardown_cpu(int secondary);
extern void xics_cause_IPI(int cpu);
extern void xics_request_IPIs(void);
extern void xics_migrate_irqs_away(void);
/* first argument is ignored for now */
void pSeriesLP_cppr_info(int n_cpu, u8 value);
@@ -31,4 +30,8 @@ struct xics_ipi_struct {
extern struct xics_ipi_struct xics_ipi_message[NR_CPUS] __cacheline_aligned;
struct irq_desc;
extern void pseries_8259_cascade(unsigned int irq, struct irq_desc *desc,
struct pt_regs *regs);
#endif /* _POWERPC_KERNEL_XICS_H */


@@ -4,7 +4,6 @@ endif
obj-$(CONFIG_MPIC) += mpic.o
obj-$(CONFIG_PPC_INDIRECT_PCI) += indirect_pci.o
obj-$(CONFIG_PPC_I8259) += i8259.o
obj-$(CONFIG_PPC_MPC106) += grackle.o
obj-$(CONFIG_BOOKE) += dcr.o
obj-$(CONFIG_40x) += dcr.o
@@ -14,3 +13,7 @@ obj-$(CONFIG_PPC_83xx) += ipic.o
obj-$(CONFIG_FSL_SOC) += fsl_soc.o
obj-$(CONFIG_PPC_TODC) += todc.o
obj-$(CONFIG_TSI108_BRIDGE) += tsi108_pci.o tsi108_dev.o
ifeq ($(CONFIG_PPC_MERGE),y)
obj-$(CONFIG_PPC_I8259) += i8259.o
endif


@@ -6,11 +6,16 @@
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#undef DEBUG
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <asm/io.h>
#include <asm/i8259.h>
#include <asm/prom.h>
static volatile void __iomem *pci_intack; /* RO, gives us the irq vector */
@@ -20,7 +25,8 @@ static unsigned char cached_8259[2] = { 0xff, 0xff };
static DEFINE_SPINLOCK(i8259_lock);
static int i8259_pic_irq_offset;
static struct device_node *i8259_node;
static struct irq_host *i8259_host;
/*
* Acknowledge the IRQ using either the PCI host bridge's interrupt
@@ -28,16 +34,18 @@ static int i8259_pic_irq_offset;
* which is called. It should be noted that polling is broken on some
* IBM and Motorola PReP boxes so we must use the int-ack feature on them.
*/
int i8259_irq(struct pt_regs *regs)
unsigned int i8259_irq(struct pt_regs *regs)
{
int irq;
spin_lock(&i8259_lock);
int lock = 0;
/* Either int-ack or poll for the IRQ */
if (pci_intack)
irq = readb(pci_intack);
else {
spin_lock(&i8259_lock);
lock = 1;
/* Perform an interrupt acknowledge cycle on controller 1. */
outb(0x0C, 0x20); /* prepare for poll */
irq = inb(0x20) & 7;
@@ -62,16 +70,13 @@ int i8259_irq(struct pt_regs *regs)
if (!pci_intack)
outb(0x0B, 0x20); /* ISR register */
if(~inb(0x20) & 0x80)
irq = -1;
}
irq = NO_IRQ;
} else if (irq == 0xff)
irq = NO_IRQ;
spin_unlock(&i8259_lock);
return irq + i8259_pic_irq_offset;
}
int i8259_irq_cascade(struct pt_regs *regs, void *unused)
{
return i8259_irq(regs);
if (lock)
spin_unlock(&i8259_lock);
return irq;
}
static void i8259_mask_and_ack_irq(unsigned int irq_nr)
@@ -79,7 +84,6 @@ static void i8259_mask_and_ack_irq(unsigned int irq_nr)
unsigned long flags;
spin_lock_irqsave(&i8259_lock, flags);
irq_nr -= i8259_pic_irq_offset;
if (irq_nr > 7) {
cached_A1 |= 1 << (irq_nr-8);
inb(0xA1); /* DUMMY */
@@ -105,8 +109,9 @@ static void i8259_mask_irq(unsigned int irq_nr)
{
unsigned long flags;
pr_debug("i8259_mask_irq(%d)\n", irq_nr);
spin_lock_irqsave(&i8259_lock, flags);
irq_nr -= i8259_pic_irq_offset;
if (irq_nr < 8)
cached_21 |= 1 << irq_nr;
else
@@ -119,8 +124,9 @@ static void i8259_unmask_irq(unsigned int irq_nr)
{
unsigned long flags;
pr_debug("i8259_unmask_irq(%d)\n", irq_nr);
spin_lock_irqsave(&i8259_lock, flags);
irq_nr -= i8259_pic_irq_offset;
if (irq_nr < 8)
cached_21 &= ~(1 << irq_nr);
else
@@ -129,19 +135,11 @@ static void i8259_unmask_irq(unsigned int irq_nr)
spin_unlock_irqrestore(&i8259_lock, flags);
}
static void i8259_end_irq(unsigned int irq)
{
if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS))
&& irq_desc[irq].action)
i8259_unmask_irq(irq);
}
struct hw_interrupt_type i8259_pic = {
.typename = " i8259 ",
.enable = i8259_unmask_irq,
.disable = i8259_mask_irq,
.ack = i8259_mask_and_ack_irq,
.end = i8259_end_irq,
static struct irq_chip i8259_pic = {
.typename = " i8259 ",
.mask = i8259_mask_irq,
.unmask = i8259_unmask_irq,
.mask_ack = i8259_mask_and_ack_irq,
};
static struct resource pic1_iores = {
@@ -165,25 +163,84 @@ static struct resource pic_edgectrl_iores = {
.flags = IORESOURCE_BUSY,
};
static struct irqaction i8259_irqaction = {
.handler = no_action,
.flags = IRQF_DISABLED,
.mask = CPU_MASK_NONE,
.name = "82c59 secondary cascade",
static int i8259_host_match(struct irq_host *h, struct device_node *node)
{
return i8259_node == NULL || i8259_node == node;
}
static int i8259_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
pr_debug("i8259_host_map(%d, 0x%lx)\n", virq, hw);
/* We block the internal cascade */
if (hw == 2)
get_irq_desc(virq)->status |= IRQ_NOREQUEST;
/* We use level handling only for now; we might want to
* be more cautious here, but that works for now
*/
get_irq_desc(virq)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(virq, &i8259_pic, handle_level_irq);
return 0;
}
static void i8259_host_unmap(struct irq_host *h, unsigned int virq)
{
/* Make sure irq is masked in hardware */
i8259_mask_irq(virq);
/* remove chip and handler */
set_irq_chip_and_handler(virq, NULL, NULL);
/* Make sure it's completed */
synchronize_irq(virq);
}
static int i8259_host_xlate(struct irq_host *h, struct device_node *ct,
u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_flags)
{
static unsigned char map_isa_senses[4] = {
IRQ_TYPE_LEVEL_LOW,
IRQ_TYPE_LEVEL_HIGH,
IRQ_TYPE_EDGE_FALLING,
IRQ_TYPE_EDGE_RISING,
};
*out_hwirq = intspec[0];
if (intsize > 1 && intspec[1] < 4)
*out_flags = map_isa_senses[intspec[1]];
else
*out_flags = IRQ_TYPE_NONE;
return 0;
}
static struct irq_host_ops i8259_host_ops = {
.match = i8259_host_match,
.map = i8259_host_map,
.unmap = i8259_host_unmap,
.xlate = i8259_host_xlate,
};
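i8259_host_xlate maps the second interrupt-specifier cell to a generic trigger type through map_isa_senses. A worked example against a hypothetical device node:

/* Hypothetical: a device with "interrupts = <8 0>" (hwirq 8, sense code
 * 0) xlates to IRQ_TYPE_LEVEL_LOW; i8259_host_map() then installs
 * handle_level_irq for it. */
static void example_xlate_usage(void)
{
	u32 intspec[2] = { 8, 0 };
	irq_hw_number_t hwirq;
	unsigned int sense;

	i8259_host_xlate(i8259_host, NULL, intspec, 2, &hwirq, &sense);
	/* now hwirq == 8 and sense == IRQ_TYPE_LEVEL_LOW */
}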
/*
* i8259_init()
* intack_addr - PCI interrupt acknowledge (real) address which will return
* the active irq from the 8259
/****
* i8259_init - Initialize the legacy controller
* @node: device node of the legacy PIC (can be NULL, but then, it will match
* all interrupts, so beware)
* @intack_addr: PCI interrupt acknowledge (real) address which will return
* the active irq from the 8259
*/
void __init i8259_init(unsigned long intack_addr, int offset)
void i8259_init(struct device_node *node, unsigned long intack_addr)
{
unsigned long flags;
int i;
/* initialize the controller */
spin_lock_irqsave(&i8259_lock, flags);
i8259_pic_irq_offset = offset;
/* Mask all first */
outb(0xff, 0xA1);
outb(0xff, 0x21);
/* init master interrupt controller */
outb(0x11, 0x20); /* Start init sequence */
@@ -197,21 +254,36 @@ void __init i8259_init(unsigned long intack_addr, int offset)
outb(0x02, 0xA1); /* edge triggered, Cascade (slave) on IRQ2 */
outb(0x01, 0xA1); /* Select 8086 mode */
/* That thing is slow */
udelay(100);
/* always read ISR */
outb(0x0B, 0x20);
outb(0x0B, 0xA0);
/* Mask all interrupts */
/* Unmask the internal cascade */
cached_21 &= ~(1 << 2);
/* Set interrupt masks */
outb(cached_A1, 0xA1);
outb(cached_21, 0x21);
spin_unlock_irqrestore(&i8259_lock, flags);
for (i = 0; i < NUM_ISA_INTERRUPTS; ++i)
irq_desc[offset + i].chip = &i8259_pic;
/* create a legacy host */
if (node)
i8259_node = of_node_get(node);
i8259_host = irq_alloc_host(IRQ_HOST_MAP_LEGACY, 0, &i8259_host_ops, 0);
if (i8259_host == NULL) {
printk(KERN_ERR "i8259: failed to allocate irq host !\n");
return;
}
/* reserve our resources */
setup_irq(offset + 2, &i8259_irqaction);
/* XXX should we continue doing that ? it seems to cause problems
* with further requesting of PCI IO resources for that range...
* need to look into it.
*/
request_resource(&ioport_resource, &pic1_iores);
request_resource(&ioport_resource, &pic2_iores);
request_resource(&ioport_resource, &pic_edgectrl_iores);
@@ -219,4 +291,5 @@ void __init i8259_init(unsigned long intack_addr, int offset)
if (intack_addr != 0)
pci_intack = ioremap(intack_addr, 1);
printk(KERN_INFO "i8259 legacy interrupt controller initialized\n");
}
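A minimal sketch of the board glue this new interface expects; the function name, the node lookup, and the choice of a zero int-ack address (which forces poll mode) are illustrative assumptions, not part of this patch:

/* hypothetical board glue for the node-based interface */
static void __init myboard_init_IRQ(void)
{
	struct device_node *np;

	np = of_find_node_by_type(NULL, "interrupt-controller");
	if (np) {
		i8259_init(np, 0);	/* 0: no int-ack reg, poll mode */
		of_node_put(np);
	}
}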


@@ -100,8 +100,8 @@ static inline u32 _mpic_cpu_read(struct mpic *mpic, unsigned int reg)
if (mpic->flags & MPIC_PRIMARY)
cpu = hard_smp_processor_id();
return _mpic_read(mpic->flags & MPIC_BIG_ENDIAN, mpic->cpuregs[cpu], reg);
return _mpic_read(mpic->flags & MPIC_BIG_ENDIAN,
mpic->cpuregs[cpu], reg);
}
static inline void _mpic_cpu_write(struct mpic *mpic, unsigned int reg, u32 value)
@@ -340,27 +340,19 @@ static void __init mpic_scan_ht_pics(struct mpic *mpic)
#endif /* CONFIG_MPIC_BROKEN_U3 */
#define mpic_irq_to_hw(virq) ((unsigned int)irq_map[virq].hwirq)
/* Find an mpic associated with a given linux interrupt */
static struct mpic *mpic_find(unsigned int irq, unsigned int *is_ipi)
{
struct mpic *mpic = mpics;
unsigned int src = mpic_irq_to_hw(irq);
while(mpic) {
/* search IPIs first since they may override the main interrupts */
if (irq >= mpic->ipi_offset && irq < (mpic->ipi_offset + 4)) {
if (is_ipi)
*is_ipi = 1;
return mpic;
}
if (irq >= mpic->irq_offset &&
irq < (mpic->irq_offset + mpic->irq_count)) {
if (is_ipi)
*is_ipi = 0;
return mpic;
}
mpic = mpic -> next;
}
return NULL;
if (irq < NUM_ISA_INTERRUPTS)
return NULL;
if (is_ipi)
*is_ipi = (src >= MPIC_VEC_IPI_0 && src <= MPIC_VEC_IPI_3);
return irq_desc[irq].chip_data;
}
/* Convert a cpu mask from logical to physical cpu numbers. */
@@ -378,14 +370,14 @@ static inline u32 mpic_physmask(u32 cpumask)
/* Get the mpic structure from the IPI number */
static inline struct mpic * mpic_from_ipi(unsigned int ipi)
{
return container_of(irq_desc[ipi].chip, struct mpic, hc_ipi);
return irq_desc[ipi].chip_data;
}
#endif
/* Get the mpic structure from the irq number */
static inline struct mpic * mpic_from_irq(unsigned int irq)
{
return container_of(irq_desc[irq].chip, struct mpic, hc_irq);
return irq_desc[irq].chip_data;
}
/* Send an EOI */
@@ -398,9 +390,7 @@ static inline void mpic_eoi(struct mpic *mpic)
#ifdef CONFIG_SMP
static irqreturn_t mpic_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
{
struct mpic *mpic = dev_id;
smp_message_recv(irq - mpic->ipi_offset, regs);
smp_message_recv(mpic_irq_to_hw(irq) - MPIC_VEC_IPI_0, regs);
return IRQ_HANDLED;
}
#endif /* CONFIG_SMP */
@@ -410,11 +400,11 @@ static irqreturn_t mpic_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
*/
static void mpic_enable_irq(unsigned int irq)
static void mpic_unmask_irq(unsigned int irq)
{
unsigned int loops = 100000;
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = irq - mpic->irq_offset;
unsigned int src = mpic_irq_to_hw(irq);
DBG("%p: %s: enable_irq: %d (src %d)\n", mpic, mpic->name, irq, src);
@@ -429,39 +419,13 @@ static void mpic_enable_irq(unsigned int irq)
break;
}
} while(mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI) & MPIC_VECPRI_MASK);
#ifdef CONFIG_MPIC_BROKEN_U3
if (mpic->flags & MPIC_BROKEN_U3) {
unsigned int src = irq - mpic->irq_offset;
if (mpic_is_ht_interrupt(mpic, src) &&
(irq_desc[irq].status & IRQ_LEVEL))
mpic_ht_end_irq(mpic, src);
}
#endif /* CONFIG_MPIC_BROKEN_U3 */
}
static unsigned int mpic_startup_irq(unsigned int irq)
{
#ifdef CONFIG_MPIC_BROKEN_U3
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = irq - mpic->irq_offset;
#endif /* CONFIG_MPIC_BROKEN_U3 */
mpic_enable_irq(irq);
#ifdef CONFIG_MPIC_BROKEN_U3
if (mpic_is_ht_interrupt(mpic, src))
mpic_startup_ht_interrupt(mpic, src, irq_desc[irq].status);
#endif /* CONFIG_MPIC_BROKEN_U3 */
return 0;
}
static void mpic_disable_irq(unsigned int irq)
static void mpic_mask_irq(unsigned int irq)
{
unsigned int loops = 100000;
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = irq - mpic->irq_offset;
unsigned int src = mpic_irq_to_hw(irq);
DBG("%s: disable_irq: %d (src %d)\n", mpic->name, irq, src);
@@ -478,20 +442,6 @@ static void mpic_disable_irq(unsigned int irq)
} while(!(mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI) & MPIC_VECPRI_MASK));
}
static void mpic_shutdown_irq(unsigned int irq)
{
#ifdef CONFIG_MPIC_BROKEN_U3
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = irq - mpic->irq_offset;
if (mpic_is_ht_interrupt(mpic, src))
mpic_shutdown_ht_interrupt(mpic, src, irq_desc[irq].status);
#endif /* CONFIG_MPIC_BROKEN_U3 */
mpic_disable_irq(irq);
}
static void mpic_end_irq(unsigned int irq)
{
struct mpic *mpic = mpic_from_irq(irq);
@@ -504,30 +454,74 @@ static void mpic_end_irq(unsigned int irq)
* latched another edge interrupt coming in anyway
*/
#ifdef CONFIG_MPIC_BROKEN_U3
if (mpic->flags & MPIC_BROKEN_U3) {
unsigned int src = irq - mpic->irq_offset;
if (mpic_is_ht_interrupt(mpic, src) &&
(irq_desc[irq].status & IRQ_LEVEL))
mpic_ht_end_irq(mpic, src);
}
#endif /* CONFIG_MPIC_BROKEN_U3 */
mpic_eoi(mpic);
}
#ifdef CONFIG_MPIC_BROKEN_U3
static void mpic_unmask_ht_irq(unsigned int irq)
{
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = mpic_irq_to_hw(irq);
mpic_unmask_irq(irq);
if (irq_desc[irq].status & IRQ_LEVEL)
mpic_ht_end_irq(mpic, src);
}
static unsigned int mpic_startup_ht_irq(unsigned int irq)
{
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = mpic_irq_to_hw(irq);
mpic_unmask_irq(irq);
mpic_startup_ht_interrupt(mpic, src, irq_desc[irq].status);
return 0;
}
static void mpic_shutdown_ht_irq(unsigned int irq)
{
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = mpic_irq_to_hw(irq);
mpic_shutdown_ht_interrupt(mpic, src, irq_desc[irq].status);
mpic_mask_irq(irq);
}
static void mpic_end_ht_irq(unsigned int irq)
{
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = mpic_irq_to_hw(irq);
#ifdef DEBUG_IRQ
DBG("%s: end_irq: %d\n", mpic->name, irq);
#endif
/* We always EOI on end_irq() even for edge interrupts since that
* should only lower the priority, the MPIC should have properly
* latched another edge interrupt coming in anyway
*/
if (irq_desc[irq].status & IRQ_LEVEL)
mpic_ht_end_irq(mpic, src);
mpic_eoi(mpic);
}
#endif /* CONFIG_MPIC_BROKEN_U3 */
#ifdef CONFIG_SMP
static void mpic_enable_ipi(unsigned int irq)
static void mpic_unmask_ipi(unsigned int irq)
{
struct mpic *mpic = mpic_from_ipi(irq);
unsigned int src = irq - mpic->ipi_offset;
unsigned int src = mpic_irq_to_hw(irq) - MPIC_VEC_IPI_0;
DBG("%s: enable_ipi: %d (ipi %d)\n", mpic->name, irq, src);
mpic_ipi_write(src, mpic_ipi_read(src) & ~MPIC_VECPRI_MASK);
}
static void mpic_disable_ipi(unsigned int irq)
static void mpic_mask_ipi(unsigned int irq)
{
/* NEVER disable an IPI... that's just plain wrong! */
}
@@ -551,29 +545,176 @@ static void mpic_end_ipi(unsigned int irq)
static void mpic_set_affinity(unsigned int irq, cpumask_t cpumask)
{
struct mpic *mpic = mpic_from_irq(irq);
unsigned int src = mpic_irq_to_hw(irq);
cpumask_t tmp;
cpus_and(tmp, cpumask, cpu_online_map);
mpic_irq_write(irq - mpic->irq_offset, MPIC_IRQ_DESTINATION,
mpic_irq_write(src, MPIC_IRQ_DESTINATION,
mpic_physmask(cpus_addr(tmp)[0]));
}
static unsigned int mpic_flags_to_vecpri(unsigned int flags, int *level)
{
unsigned int vecpri;
/* Now convert sense value */
switch(flags & IRQ_TYPE_SENSE_MASK) {
case IRQ_TYPE_EDGE_RISING:
vecpri = MPIC_VECPRI_SENSE_EDGE |
MPIC_VECPRI_POLARITY_POSITIVE;
*level = 0;
break;
case IRQ_TYPE_EDGE_FALLING:
vecpri = MPIC_VECPRI_SENSE_EDGE |
MPIC_VECPRI_POLARITY_NEGATIVE;
*level = 0;
break;
case IRQ_TYPE_LEVEL_HIGH:
vecpri = MPIC_VECPRI_SENSE_LEVEL |
MPIC_VECPRI_POLARITY_POSITIVE;
*level = 1;
break;
case IRQ_TYPE_LEVEL_LOW:
default:
vecpri = MPIC_VECPRI_SENSE_LEVEL |
MPIC_VECPRI_POLARITY_NEGATIVE;
*level = 1;
}
return vecpri;
}
static struct irq_chip mpic_irq_chip = {
.mask = mpic_mask_irq,
.unmask = mpic_unmask_irq,
.eoi = mpic_end_irq,
};
#ifdef CONFIG_SMP
static struct irq_chip mpic_ipi_chip = {
.mask = mpic_mask_ipi,
.unmask = mpic_unmask_ipi,
.eoi = mpic_end_ipi,
};
#endif /* CONFIG_SMP */
#ifdef CONFIG_MPIC_BROKEN_U3
static struct irq_chip mpic_irq_ht_chip = {
.startup = mpic_startup_ht_irq,
.shutdown = mpic_shutdown_ht_irq,
.mask = mpic_mask_irq,
.unmask = mpic_unmask_ht_irq,
.eoi = mpic_end_ht_irq,
};
#endif /* CONFIG_MPIC_BROKEN_U3 */
static int mpic_host_match(struct irq_host *h, struct device_node *node)
{
struct mpic *mpic = h->host_data;
/* Exact match, unless mpic node is NULL */
return mpic->of_node == NULL || mpic->of_node == node;
}
static int mpic_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw, unsigned int flags)
{
struct irq_desc *desc = get_irq_desc(virq);
struct irq_chip *chip;
struct mpic *mpic = h->host_data;
unsigned int vecpri = MPIC_VECPRI_SENSE_LEVEL |
MPIC_VECPRI_POLARITY_NEGATIVE;
int level;
pr_debug("mpic: map virq %d, hwirq 0x%lx, flags: 0x%x\n",
virq, hw, flags);
if (hw == MPIC_VEC_SPURRIOUS)
return -EINVAL;
#ifdef CONFIG_SMP
else if (hw >= MPIC_VEC_IPI_0) {
WARN_ON(!(mpic->flags & MPIC_PRIMARY));
pr_debug("mpic: mapping as IPI\n");
set_irq_chip_data(virq, mpic);
set_irq_chip_and_handler(virq, &mpic->hc_ipi,
handle_percpu_irq);
return 0;
}
#endif /* CONFIG_SMP */
if (hw >= mpic->irq_count)
return -EINVAL;
/* If no sense provided, check default sense array */
if (((flags & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_NONE) &&
mpic->senses && hw < mpic->senses_count)
flags |= mpic->senses[hw];
vecpri = mpic_flags_to_vecpri(flags, &level);
if (level)
desc->status |= IRQ_LEVEL;
chip = &mpic->hc_irq;
#ifdef CONFIG_MPIC_BROKEN_U3
/* Check for HT interrupts, override vecpri */
if (mpic_is_ht_interrupt(mpic, hw)) {
vecpri &= ~(MPIC_VECPRI_SENSE_MASK |
MPIC_VECPRI_POLARITY_MASK);
vecpri |= MPIC_VECPRI_POLARITY_POSITIVE;
chip = &mpic->hc_ht_irq;
}
#endif
/* Reconfigure irq */
vecpri |= MPIC_VECPRI_MASK | hw | (8 << MPIC_VECPRI_PRIORITY_SHIFT);
mpic_irq_write(hw, MPIC_IRQ_VECTOR_PRI, vecpri);
pr_debug("mpic: mapping as IRQ\n");
set_irq_chip_data(virq, mpic);
set_irq_chip_and_handler(virq, chip, handle_fasteoi_irq);
return 0;
}
static int mpic_host_xlate(struct irq_host *h, struct device_node *ct,
u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_flags)
{
static unsigned char map_mpic_senses[4] = {
IRQ_TYPE_EDGE_RISING,
IRQ_TYPE_LEVEL_LOW,
IRQ_TYPE_LEVEL_HIGH,
IRQ_TYPE_EDGE_FALLING,
};
*out_hwirq = intspec[0];
if (intsize > 1 && intspec[1] < 4)
*out_flags = map_mpic_senses[intspec[1]];
else
*out_flags = IRQ_TYPE_NONE;
return 0;
}
static struct irq_host_ops mpic_host_ops = {
.match = mpic_host_match,
.map = mpic_host_map,
.xlate = mpic_host_xlate,
};
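With the host registered, a source can also be mapped by hand as in the sketch below; the hwirq number, handler, and device name are illustrative assumptions, and most users go through irq_of_parse_and_map() instead:

/* sketch: map MPIC source 17 as active-low level, then request it;
 * my_handler and "mydev" are hypothetical
 */
unsigned int virq;

virq = irq_create_mapping(mpic->irqhost, 17, IRQ_TYPE_LEVEL_LOW);
if (virq != NO_IRQ)
	request_irq(virq, my_handler, 0, "mydev", NULL);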
/*
* Exported functions
*/
struct mpic * __init mpic_alloc(unsigned long phys_addr,
struct mpic * __init mpic_alloc(struct device_node *node,
unsigned long phys_addr,
unsigned int flags,
unsigned int isu_size,
unsigned int irq_offset,
unsigned int irq_count,
unsigned int ipi_offset,
unsigned char *senses,
unsigned int senses_count,
const char *name)
{
struct mpic *mpic;
@@ -585,33 +726,38 @@ struct mpic * __init mpic_alloc(unsigned long phys_addr,
if (mpic == NULL)
return NULL;
memset(mpic, 0, sizeof(struct mpic));
mpic->name = name;
mpic->of_node = node ? of_node_get(node) : NULL;
mpic->irqhost = irq_alloc_host(IRQ_HOST_MAP_LINEAR, 256,
&mpic_host_ops,
MPIC_VEC_SPURRIOUS);
if (mpic->irqhost == NULL) {
of_node_put(node);
return NULL;
}
mpic->irqhost->host_data = mpic;
mpic->hc_irq = mpic_irq_chip;
mpic->hc_irq.typename = name;
mpic->hc_irq.startup = mpic_startup_irq;
mpic->hc_irq.shutdown = mpic_shutdown_irq;
mpic->hc_irq.enable = mpic_enable_irq;
mpic->hc_irq.disable = mpic_disable_irq;
mpic->hc_irq.end = mpic_end_irq;
if (flags & MPIC_PRIMARY)
mpic->hc_irq.set_affinity = mpic_set_affinity;
#ifdef CONFIG_MPIC_BROKEN_U3
mpic->hc_ht_irq = mpic_irq_ht_chip;
mpic->hc_ht_irq.typename = name;
if (flags & MPIC_PRIMARY)
mpic->hc_ht_irq.set_affinity = mpic_set_affinity;
#endif /* CONFIG_MPIC_BROKEN_U3 */
#ifdef CONFIG_SMP
mpic->hc_ipi = mpic_ipi_chip;
mpic->hc_ipi.typename = name;
mpic->hc_ipi.enable = mpic_enable_ipi;
mpic->hc_ipi.disable = mpic_disable_ipi;
mpic->hc_ipi.end = mpic_end_ipi;
#endif /* CONFIG_SMP */
mpic->flags = flags;
mpic->isu_size = isu_size;
mpic->irq_offset = irq_offset;
mpic->irq_count = irq_count;
mpic->ipi_offset = ipi_offset;
mpic->num_sources = 0; /* so far */
mpic->senses = senses;
mpic->senses_count = senses_count;
/* Map the global registers */
mpic->gregs = ioremap(phys_addr + MPIC_GREG_BASE, 0x1000);
@@ -679,8 +825,10 @@ struct mpic * __init mpic_alloc(unsigned long phys_addr,
mpic->next = mpics;
mpics = mpic;
if (flags & MPIC_PRIMARY)
if (flags & MPIC_PRIMARY) {
mpic_primary = mpic;
irq_set_default_host(mpic->irqhost);
}
return mpic;
}
@@ -697,26 +845,10 @@ void __init mpic_assign_isu(struct mpic *mpic, unsigned int isu_num,
mpic->num_sources = isu_first + mpic->isu_size;
}
void __init mpic_setup_cascade(unsigned int irq, mpic_cascade_t handler,
void *data)
void __init mpic_set_default_senses(struct mpic *mpic, u8 *senses, int count)
{
struct mpic *mpic = mpic_find(irq, NULL);
unsigned long flags;
/* Synchronization here is a bit dodgy, so don't try to replace cascade
* interrupts on the fly too often ... but normally it's set up at boot.
*/
spin_lock_irqsave(&mpic_lock, flags);
if (mpic->cascade)
mpic_disable_irq(mpic->cascade_vec + mpic->irq_offset);
mpic->cascade = NULL;
wmb();
mpic->cascade_vec = irq - mpic->irq_offset;
mpic->cascade_data = data;
wmb();
mpic->cascade = handler;
mpic_enable_irq(irq);
spin_unlock_irqrestore(&mpic_lock, flags);
mpic->senses = senses;
mpic->senses_count = count;
}
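mpic_setup_cascade() disappears without a direct replacement in this file; under the irq_host scheme a secondary PIC hangs off a mapped cascade virq via a chained flow handler. A sketch of that pattern, using the i8259 as the illustrative secondary controller (the function name and cascade_virq are assumptions; the virq would come from irq_create_mapping()):

/* illustrative cascade handler chaining an i8259 below the MPIC */
static void board_8259_cascade(unsigned int irq, struct irq_desc *desc,
			       struct pt_regs *regs)
{
	unsigned int cascade_irq = i8259_irq(regs);

	if (cascade_irq != NO_IRQ)
		generic_handle_irq(cascade_irq, regs);
	desc->chip->eoi(irq);			/* EOI the MPIC source */
}

/* at init time, once cascade_virq has been mapped */
set_irq_chained_handler(cascade_virq, board_8259_cascade);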
void __init mpic_init(struct mpic *mpic)
@@ -724,6 +856,11 @@ void __init mpic_init(struct mpic *mpic)
int i;
BUG_ON(mpic->num_sources == 0);
WARN_ON(mpic->num_sources > MPIC_VEC_IPI_0);
/* Sanitize source count */
if (mpic->num_sources > MPIC_VEC_IPI_0)
mpic->num_sources = MPIC_VEC_IPI_0;
printk(KERN_INFO "mpic: Initializing for %d sources\n", mpic->num_sources);
@@ -747,12 +884,6 @@ void __init mpic_init(struct mpic *mpic)
MPIC_VECPRI_MASK |
(10 << MPIC_VECPRI_PRIORITY_SHIFT) |
(MPIC_VEC_IPI_0 + i));
#ifdef CONFIG_SMP
if (!(mpic->flags & MPIC_PRIMARY))
continue;
irq_desc[mpic->ipi_offset+i].status |= IRQ_PER_CPU;
irq_desc[mpic->ipi_offset+i].chip = &mpic->hc_ipi;
#endif /* CONFIG_SMP */
}
/* Initialize interrupt sources */
@@ -763,31 +894,21 @@ void __init mpic_init(struct mpic *mpic)
/* Do the HT PIC fixups on U3 broken mpic */
DBG("MPIC flags: %x\n", mpic->flags);
if ((mpic->flags & MPIC_BROKEN_U3) && (mpic->flags & MPIC_PRIMARY))
mpic_scan_ht_pics(mpic);
mpic_scan_ht_pics(mpic);
#endif /* CONFIG_MPIC_BROKEN_U3 */
for (i = 0; i < mpic->num_sources; i++) {
/* start with vector = source number, and masked */
u32 vecpri = MPIC_VECPRI_MASK | i | (8 << MPIC_VECPRI_PRIORITY_SHIFT);
int level = 0;
int level = 1;
/* if it's an IPI, we skip it */
if ((mpic->irq_offset + i) >= (mpic->ipi_offset + i) &&
(mpic->irq_offset + i) < (mpic->ipi_offset + i + 4))
continue;
/* do senses munging */
if (mpic->senses && i < mpic->senses_count) {
if (mpic->senses[i] & IRQ_SENSE_LEVEL)
vecpri |= MPIC_VECPRI_SENSE_LEVEL;
if (mpic->senses[i] & IRQ_POLARITY_POSITIVE)
vecpri |= MPIC_VECPRI_POLARITY_POSITIVE;
} else
if (mpic->senses && i < mpic->senses_count)
vecpri = mpic_flags_to_vecpri(mpic->senses[i],
&level);
else
vecpri |= MPIC_VECPRI_SENSE_LEVEL;
/* remember if it was a level interrupts */
level = (vecpri & MPIC_VECPRI_SENSE_LEVEL);
/* deal with broken U3 */
if (mpic->flags & MPIC_BROKEN_U3) {
#ifdef CONFIG_MPIC_BROKEN_U3
@@ -808,12 +929,6 @@ void __init mpic_init(struct mpic *mpic)
mpic_irq_write(i, MPIC_IRQ_VECTOR_PRI, vecpri);
mpic_irq_write(i, MPIC_IRQ_DESTINATION,
1 << hard_smp_processor_id());
/* init linux descriptors */
if (i < mpic->irq_count) {
irq_desc[mpic->irq_offset+i].status = level ? IRQ_LEVEL : 0;
irq_desc[mpic->irq_offset+i].chip = &mpic->hc_irq;
}
}
/* Init spurious vector */
@@ -854,19 +969,20 @@ void mpic_irq_set_priority(unsigned int irq, unsigned int pri)
{
int is_ipi;
struct mpic *mpic = mpic_find(irq, &is_ipi);
unsigned int src = mpic_irq_to_hw(irq);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&mpic_lock, flags);
if (is_ipi) {
reg = mpic_ipi_read(irq - mpic->ipi_offset) &
reg = mpic_ipi_read(src - MPIC_VEC_IPI_0) &
~MPIC_VECPRI_PRIORITY_MASK;
mpic_ipi_write(irq - mpic->ipi_offset,
mpic_ipi_write(src - MPIC_VEC_IPI_0,
reg | (pri << MPIC_VECPRI_PRIORITY_SHIFT));
} else {
reg = mpic_irq_read(irq - mpic->irq_offset,MPIC_IRQ_VECTOR_PRI)
reg = mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI)
& ~MPIC_VECPRI_PRIORITY_MASK;
mpic_irq_write(irq - mpic->irq_offset, MPIC_IRQ_VECTOR_PRI,
mpic_irq_write(src, MPIC_IRQ_VECTOR_PRI,
reg | (pri << MPIC_VECPRI_PRIORITY_SHIFT));
}
spin_unlock_irqrestore(&mpic_lock, flags);
@@ -876,14 +992,15 @@ unsigned int mpic_irq_get_priority(unsigned int irq)
{
int is_ipi;
struct mpic *mpic = mpic_find(irq, &is_ipi);
unsigned int src = mpic_irq_to_hw(irq);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&mpic_lock, flags);
if (is_ipi)
reg = mpic_ipi_read(irq - mpic->ipi_offset);
reg = mpic_ipi_read(src - MPIC_VEC_IPI_0);
else
reg = mpic_irq_read(irq - mpic->irq_offset, MPIC_IRQ_VECTOR_PRI);
reg = mpic_irq_read(src, MPIC_IRQ_VECTOR_PRI);
spin_unlock_irqrestore(&mpic_lock, flags);
return (reg & MPIC_VECPRI_PRIORITY_MASK) >> MPIC_VECPRI_PRIORITY_SHIFT;
}
@@ -978,37 +1095,20 @@ void mpic_send_ipi(unsigned int ipi_no, unsigned int cpu_mask)
mpic_physmask(cpu_mask & cpus_addr(cpu_online_map)[0]));
}
int mpic_get_one_irq(struct mpic *mpic, struct pt_regs *regs)
unsigned int mpic_get_one_irq(struct mpic *mpic, struct pt_regs *regs)
{
u32 irq;
u32 src;
irq = mpic_cpu_read(MPIC_CPU_INTACK) & MPIC_VECPRI_VECTOR_MASK;
src = mpic_cpu_read(MPIC_CPU_INTACK) & MPIC_VECPRI_VECTOR_MASK;
#ifdef DEBUG_LOW
DBG("%s: get_one_irq(): %d\n", mpic->name, irq);
DBG("%s: get_one_irq(): %d\n", mpic->name, src);
#endif
if (mpic->cascade && irq == mpic->cascade_vec) {
#ifdef DEBUG_LOW
DBG("%s: cascading ...\n", mpic->name);
#endif
irq = mpic->cascade(regs, mpic->cascade_data);
mpic_eoi(mpic);
return irq;
}
if (unlikely(irq == MPIC_VEC_SPURRIOUS))
return -1;
if (irq < MPIC_VEC_IPI_0) {
#ifdef DEBUG_IRQ
DBG("%s: irq %d\n", mpic->name, irq + mpic->irq_offset);
#endif
return irq + mpic->irq_offset;
}
#ifdef DEBUG_IPI
DBG("%s: ipi %d !\n", mpic->name, irq - MPIC_VEC_IPI_0);
#endif
return irq - MPIC_VEC_IPI_0 + mpic->ipi_offset;
if (unlikely(src == MPIC_VEC_SPURRIOUS))
return NO_IRQ;
return irq_linear_revmap(mpic->irqhost, src);
}
int mpic_get_irq(struct pt_regs *regs)
unsigned int mpic_get_irq(struct pt_regs *regs)
{
struct mpic *mpic = mpic_primary;
@@ -1022,25 +1122,27 @@ int mpic_get_irq(struct pt_regs *regs)
void mpic_request_ipis(void)
{
struct mpic *mpic = mpic_primary;
int i;
static char *ipi_names[] = {
"IPI0 (call function)",
"IPI1 (reschedule)",
"IPI2 (unused)",
"IPI3 (debugger break)",
};
BUG_ON(mpic == NULL);
printk("requesting IPIs ... \n");
/*
* IPIs are marked IRQF_DISABLED as they must run with irqs
* disabled
*/
request_irq(mpic->ipi_offset+0, mpic_ipi_action, IRQF_DISABLED,
"IPI0 (call function)", mpic);
request_irq(mpic->ipi_offset+1, mpic_ipi_action, IRQF_DISABLED,
"IPI1 (reschedule)", mpic);
request_irq(mpic->ipi_offset+2, mpic_ipi_action, IRQF_DISABLED,
"IPI2 (unused)", mpic);
request_irq(mpic->ipi_offset+3, mpic_ipi_action, IRQF_DISABLED,
"IPI3 (debugger break)", mpic);
printk(KERN_INFO "mpic: requesting IPIs ... \n");
printk("IPIs requested... \n");
for (i = 0; i < 4; i++) {
unsigned int vipi = irq_create_mapping(mpic->irqhost,
MPIC_VEC_IPI_0 + i, 0);
if (vipi == NO_IRQ) {
printk(KERN_ERR "Failed to map IPI %d\n", i);
break;
}
request_irq(vipi, mpic_ipi_action, IRQF_DISABLED,
ipi_names[i], mpic);
}
}
void smp_mpic_message_pass(int target, int msg)


@@ -104,3 +104,5 @@ obj-$(CONFIG_PPC_MPC52xx) += mpc52xx_setup.o mpc52xx_pic.o \
ifeq ($(CONFIG_PPC_MPC52xx),y)
obj-$(CONFIG_PCI) += mpc52xx_pci.o
endif
obj-$(CONFIG_PPC_I8259) += i8259.o


@@ -6,7 +6,7 @@
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/version.h>
#include <linux/utsrelease.h>
#include <asm/sections.h>
#include <asm/bootx.h>

arch/ppc/syslib/i8259.c (new file, 212 lines)

@@ -0,0 +1,212 @@
/*
* i8259 interrupt controller driver.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <asm/io.h>
#include <asm/i8259.h>
static volatile void __iomem *pci_intack; /* RO, gives us the irq vector */
static unsigned char cached_8259[2] = { 0xff, 0xff };
#define cached_A1 (cached_8259[0])
#define cached_21 (cached_8259[1])
static DEFINE_SPINLOCK(i8259_lock);
static int i8259_pic_irq_offset;
/*
* Acknowledge the IRQ using either the PCI host bridge's interrupt
* acknowledge feature or poll. How i8259_init() is called determines
* which method is used. It should be noted that polling is broken on some
* IBM and Motorola PReP boxes so we must use the int-ack feature on them.
*/
int i8259_irq(struct pt_regs *regs)
{
int irq;
spin_lock(&i8259_lock);
/* Either int-ack or poll for the IRQ */
if (pci_intack)
irq = readb(pci_intack);
else {
/* Perform an interrupt acknowledge cycle on controller 1. */
outb(0x0C, 0x20); /* prepare for poll */
irq = inb(0x20) & 7;
if (irq == 2) {
/*
* Interrupt is cascaded so perform interrupt
* acknowledge on controller 2.
*/
outb(0x0C, 0xA0); /* prepare for poll */
irq = (inb(0xA0) & 7) + 8;
}
}
if (irq == 7) {
/*
* This may be a spurious interrupt.
*
* Read the interrupt status register (ISR). If the most
* significant bit is not set then there is no valid
* interrupt.
*/
if (!pci_intack)
outb(0x0B, 0x20); /* ISR register */
if (~inb(0x20) & 0x80)
irq = -1;
}
spin_unlock(&i8259_lock);
return irq + i8259_pic_irq_offset;
}
static void i8259_mask_and_ack_irq(unsigned int irq_nr)
{
unsigned long flags;
spin_lock_irqsave(&i8259_lock, flags);
irq_nr -= i8259_pic_irq_offset;
if (irq_nr > 7) {
cached_A1 |= 1 << (irq_nr-8);
inb(0xA1); /* DUMMY */
outb(cached_A1, 0xA1);
outb(0x20, 0xA0); /* Non-specific EOI */
outb(0x20, 0x20); /* Non-specific EOI to cascade */
} else {
cached_21 |= 1 << irq_nr;
inb(0x21); /* DUMMY */
outb(cached_21, 0x21);
outb(0x20, 0x20); /* Non-specific EOI */
}
spin_unlock_irqrestore(&i8259_lock, flags);
}
static void i8259_set_irq_mask(int irq_nr)
{
outb(cached_A1,0xA1);
outb(cached_21,0x21);
}
static void i8259_mask_irq(unsigned int irq_nr)
{
unsigned long flags;
spin_lock_irqsave(&i8259_lock, flags);
irq_nr -= i8259_pic_irq_offset;
if (irq_nr < 8)
cached_21 |= 1 << irq_nr;
else
cached_A1 |= 1 << (irq_nr-8);
i8259_set_irq_mask(irq_nr);
spin_unlock_irqrestore(&i8259_lock, flags);
}
static void i8259_unmask_irq(unsigned int irq_nr)
{
unsigned long flags;
spin_lock_irqsave(&i8259_lock, flags);
irq_nr -= i8259_pic_irq_offset;
if (irq_nr < 8)
cached_21 &= ~(1 << irq_nr);
else
cached_A1 &= ~(1 << (irq_nr-8));
i8259_set_irq_mask(irq_nr);
spin_unlock_irqrestore(&i8259_lock, flags);
}
static struct irq_chip i8259_pic = {
.typename = " i8259 ",
.mask = i8259_mask_irq,
.unmask = i8259_unmask_irq,
.mask_ack = i8259_mask_and_ack_irq,
};
static struct resource pic1_iores = {
.name = "8259 (master)",
.start = 0x20,
.end = 0x21,
.flags = IORESOURCE_BUSY,
};
static struct resource pic2_iores = {
.name = "8259 (slave)",
.start = 0xa0,
.end = 0xa1,
.flags = IORESOURCE_BUSY,
};
static struct resource pic_edgectrl_iores = {
.name = "8259 edge control",
.start = 0x4d0,
.end = 0x4d1,
.flags = IORESOURCE_BUSY,
};
static struct irqaction i8259_irqaction = {
.handler = no_action,
.flags = SA_INTERRUPT,
.mask = CPU_MASK_NONE,
.name = "82c59 secondary cascade",
};
/*
* i8259_init()
* intack_addr - PCI interrupt acknowledge (real) address which will return
* the active irq from the 8259
*/
void __init i8259_init(unsigned long intack_addr, int offset)
{
unsigned long flags;
int i;
spin_lock_irqsave(&i8259_lock, flags);
i8259_pic_irq_offset = offset;
/* init master interrupt controller */
outb(0x11, 0x20); /* Start init sequence */
outb(0x00, 0x21); /* Vector base */
outb(0x04, 0x21); /* edge triggered, Cascade (slave) on IRQ2 */
outb(0x01, 0x21); /* Select 8086 mode */
/* init slave interrupt controller */
outb(0x11, 0xA0); /* Start init sequence */
outb(0x08, 0xA1); /* Vector base */
outb(0x02, 0xA1); /* edge triggered, Cascade (slave) on IRQ2 */
outb(0x01, 0xA1); /* Select 8086 mode */
/* always read ISR */
outb(0x0B, 0x20);
outb(0x0B, 0xA0);
/* Mask all interrupts */
outb(cached_A1, 0xA1);
outb(cached_21, 0x21);
spin_unlock_irqrestore(&i8259_lock, flags);
for (i = 0; i < NUM_ISA_INTERRUPTS; ++i) {
set_irq_chip_and_handler(offset + i, &i8259_pic,
handle_level_irq);
irq_desc[offset + i].status |= IRQ_LEVEL;
}
/* reserve our resources */
setup_irq(offset + 2, &i8259_irqaction);
request_resource(&ioport_resource, &pic1_iores);
request_resource(&ioport_resource, &pic2_iores);
request_resource(&ioport_resource, &pic_edgectrl_iores);
if (intack_addr != 0)
pci_intack = ioremap(intack_addr, 1);
}
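For contrast with the arch/powerpc version above, board glue for this offset-based ppc32 copy stays as it was. A minimal, hypothetical sketch (function names and the poll-mode choice are assumptions):

/* hypothetical ppc32 board glue: poll mode, IRQ offset 0 */
void __init myboard_init_IRQ(void)
{
	i8259_init(0, 0);	/* no int-ack register */
}

int myboard_get_irq(struct pt_regs *regs)
{
	/* returns vector plus offset; a spurious IRQ7 yields -1 + offset */
	return i8259_irq(regs);
}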


@@ -7,6 +7,14 @@ config MMU
bool
default y
config LOCKDEP_SUPPORT
bool
default y
config STACKTRACE_SUPPORT
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool


@@ -1,5 +1,9 @@
menu "Kernel hacking"
config TRACE_IRQFLAGS_SUPPORT
bool
default y
source "lib/Kconfig.debug"
endmenu


@@ -34,6 +34,11 @@ cflags-$(CONFIG_MARCH_G5) += $(call cc-option,-march=g5)
cflags-$(CONFIG_MARCH_Z900) += $(call cc-option,-march=z900)
cflags-$(CONFIG_MARCH_Z990) += $(call cc-option,-march=z990)
#
# Prevent tail-call optimizations, to get clearer backtraces:
#
cflags-$(CONFIG_FRAME_POINTER) += -fno-optimize-sibling-calls
# old style option for packed stacks
ifeq ($(call cc-option-yn,-mkernel-backchain),y)
cflags-$(CONFIG_PACK_STACK) += -mkernel-backchain -D__PACK_STACK


@@ -21,6 +21,7 @@ obj-$(CONFIG_COMPAT) += compat_linux.o compat_signal.o \
obj-$(CONFIG_BINFMT_ELF32) += binfmt_elf32.o
obj-$(CONFIG_VIRT_TIMER) += vtime.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
# Kexec part
S390_KEXEC_OBJS := machine_kexec.o crash.o


@@ -58,6 +58,21 @@ STACK_SIZE = 1 << STACK_SHIFT
#define BASED(name) name-system_call(%r13)
#ifdef CONFIG_TRACE_IRQFLAGS
.macro TRACE_IRQS_ON
l %r1,BASED(.Ltrace_irq_on)
basr %r14,%r1
.endm
.macro TRACE_IRQS_OFF
l %r1,BASED(.Ltrace_irq_off)
basr %r14,%r1
.endm
#else
#define TRACE_IRQS_ON
#define TRACE_IRQS_OFF
#endif
/*
* Register usage in interrupt handlers:
* R9 - pointer to current task structure
@@ -361,6 +376,7 @@ ret_from_fork:
st %r15,SP_R15(%r15) # store stack pointer for new kthread
0: l %r1,BASED(.Lschedtail)
basr %r14,%r1
TRACE_IRQS_ON
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
b BASED(sysc_return)
@@ -516,6 +532,7 @@ pgm_no_vtime3:
mvc __THREAD_per+__PER_address(4,%r1),__LC_PER_ADDRESS
mvc __THREAD_per+__PER_access_id(1,%r1),__LC_PER_ACCESS_ID
oi __TI_flags+3(%r9),_TIF_SINGLE_STEP # set TIF_SINGLE_STEP
TRACE_IRQS_ON
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
b BASED(sysc_do_svc)
@@ -539,9 +556,11 @@ io_int_handler:
io_no_vtime:
#endif
l %r9,__LC_THREAD_INFO # load pointer to thread_info struct
TRACE_IRQS_OFF
l %r1,BASED(.Ldo_IRQ) # load address of do_IRQ
la %r2,SP_PTREGS(%r15) # address of register-save area
basr %r14,%r1 # branch to standard irq handler
TRACE_IRQS_ON
io_return:
tm SP_PSW+1(%r15),0x01 # returning to user ?
@@ -651,10 +670,12 @@ ext_int_handler:
ext_no_vtime:
#endif
l %r9,__LC_THREAD_INFO # load pointer to thread_info struct
TRACE_IRQS_OFF
la %r2,SP_PTREGS(%r15) # address of register-save area
lh %r3,__LC_EXT_INT_CODE # get interruption code
l %r1,BASED(.Ldo_extint)
basr %r14,%r1
TRACE_IRQS_ON
b BASED(io_return)
__critical_end:
@@ -731,8 +752,10 @@ mcck_no_vtime:
stosm __SF_EMPTY(%r15),0x04 # turn dat on
tm __TI_flags+3(%r9),_TIF_MCCK_PENDING
bno BASED(mcck_return)
TRACE_IRQS_OFF
l %r1,BASED(.Ls390_handle_mcck)
basr %r14,%r1 # call machine check handler
TRACE_IRQS_ON
mcck_return:
mvc __LC_RETURN_MCCK_PSW(8),SP_PSW(%r15) # move return PSW
ni __LC_RETURN_MCCK_PSW+1,0xfd # clear wait state bit
@@ -1012,7 +1035,11 @@ cleanup_io_leave_insn:
.Lvfork: .long sys_vfork
.Lschedtail: .long schedule_tail
.Lsysc_table: .long sys_call_table
#ifdef CONFIG_TRACE_IRQFLAGS
.Ltrace_irq_on:.long trace_hardirqs_on
.Ltrace_irq_off:
.long trace_hardirqs_off
#endif
.Lcritical_start:
.long __critical_start + 0x80000000
.Lcritical_end:


@@ -58,6 +58,19 @@ _TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK | _TIF_NEED_RESCHED | \
#define BASED(name) name-system_call(%r13)
#ifdef CONFIG_TRACE_IRQFLAGS
.macro TRACE_IRQS_ON
brasl %r14,trace_hardirqs_on
.endm
.macro TRACE_IRQS_OFF
brasl %r14,trace_hardirqs_off
.endm
#else
#define TRACE_IRQS_ON
#define TRACE_IRQS_OFF
#endif
.macro STORE_TIMER lc_offset
#ifdef CONFIG_VIRT_CPU_ACCOUNTING
stpt \lc_offset
@@ -354,6 +367,7 @@ ret_from_fork:
jo 0f
stg %r15,SP_R15(%r15) # store stack pointer for new kthread
0: brasl %r14,schedule_tail
TRACE_IRQS_ON
stosm 24(%r15),0x03 # reenable interrupts
j sysc_return
@@ -535,6 +549,7 @@ pgm_no_vtime3:
mvc __THREAD_per+__PER_address(8,%r1),__LC_PER_ADDRESS
mvc __THREAD_per+__PER_access_id(1,%r1),__LC_PER_ACCESS_ID
oi __TI_flags+7(%r9),_TIF_SINGLE_STEP # set TIF_SINGLE_STEP
TRACE_IRQS_ON
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
j sysc_do_svc
@@ -557,8 +572,10 @@ io_int_handler:
io_no_vtime:
#endif
lg %r9,__LC_THREAD_INFO # load pointer to thread_info struct
TRACE_IRQS_OFF
la %r2,SP_PTREGS(%r15) # address of register-save area
brasl %r14,do_IRQ # call standard irq handler
TRACE_IRQS_ON
io_return:
tm SP_PSW+1(%r15),0x01 # returning to user ?
@@ -665,9 +682,11 @@ ext_int_handler:
ext_no_vtime:
#endif
lg %r9,__LC_THREAD_INFO # load pointer to thread_info struct
TRACE_IRQS_OFF
la %r2,SP_PTREGS(%r15) # address of register-save area
llgh %r3,__LC_EXT_INT_CODE # get interruption code
brasl %r14,do_extint
TRACE_IRQS_ON
j io_return
__critical_end:
@@ -743,7 +762,9 @@ mcck_no_vtime:
stosm __SF_EMPTY(%r15),0x04 # turn dat on
tm __TI_flags+7(%r9),_TIF_MCCK_PENDING
jno mcck_return
TRACE_IRQS_OFF
brasl %r14,s390_handle_mcck
TRACE_IRQS_ON
mcck_return:
mvc __LC_RETURN_MCCK_PSW(16),SP_PSW(%r15) # move return PSW
ni __LC_RETURN_MCCK_PSW+1,0xfd # clear wait state bit
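These entry-code hooks pair with the C-level wrappers in include/linux/irqflags.h; a simplified sketch of those wrappers, showing how the tracing callbacks bracket the raw flag operations:

/* simplified view of the generic irqflags wrappers (sketch) */
#define local_irq_disable()			\
	do {					\
		raw_local_irq_disable();	\
		trace_hardirqs_off();		\
	} while (0)

#define local_irq_enable()			\
	do {					\
		trace_hardirqs_on();		\
		raw_local_irq_enable();		\
	} while (0)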

Some files were not shown because too many files have changed in this diff.