// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2017 - Cambridge Greys Ltd
 * Copyright (C) 2011 - 2014 Cisco Systems Inc
 * Copyright (C) 2000 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com)
 * Derived (i.e. mostly copied) from arch/i386/kernel/irq.c:
 *	Copyright (C) 1992, 1998 Linus Torvalds, Ingo Molnar
 */

#include <linux/cpumask.h>
#include <linux/hardirq.h>
#include <linux/interrupt.h>
#include <linux/kernel_stat.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <as-layout.h>
#include <kern_util.h>
#include <os.h>
#include <irq_user.h>
#include <irq_kern.h>

extern void free_irqs(void);

/* When epoll triggers we do not know why it did so
 * we can also have different IRQs for read and write.
 * This is why we keep a small irq_reg array for each fd -
 * one entry per IRQ type
 */
struct irq_reg {
	void *id;
	int irq;
	/* it's cheaper to store this than to query it */
	int events;
	bool active;
	bool pending;
	bool wakeup;
};

struct irq_entry {
	struct list_head list;
	int fd;
	struct irq_reg reg[NUM_IRQ_TYPES];
	bool suspended;
	bool sigio_workaround;
};

static DEFINE_SPINLOCK(irq_lock);
static LIST_HEAD(active_fds);
static DECLARE_BITMAP(irqs_allocated, NR_IRQS);

static void irq_io_loop(struct irq_reg *irq, struct uml_pt_regs *regs)
{
	/*
	 * irq->active guards against reentry
	 * irq->pending accumulates pending requests
	 * if pending is raised the irq_handler is re-run
	 * until pending is cleared
	 */
	if (irq->active) {
		irq->active = false;

		do {
			irq->pending = false;
			do_IRQ(irq->irq, regs);
		} while (irq->pending);

		irq->active = true;
	} else {
		irq->pending = true;
	}
}

void sigio_handler_suspend(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs)
{
	/* nothing */
}

void sigio_handler(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs)
{
	struct irq_entry *irq_entry;
	int n, i;

	while (1) {
		/* This is now lockless - epoll keeps back-references to the irqs
		 * which have triggered it so there is no need to walk the irq
		 * list and lock it every time. We avoid locking by turning off
		 * IO for a specific fd by executing os_del_epoll_fd(fd) before
		 * we do any changes to the actual data structures
		 */
		n = os_waiting_for_events_epoll();

		if (n <= 0) {
			if (n == -EINTR)
				continue;
			else
				break;
		}

		for (i = 0; i < n ; i++) {
			enum um_irq_type t;

			irq_entry = os_epoll_get_data_pointer(i);

			for (t = 0; t < NUM_IRQ_TYPES; t++) {
				int events = irq_entry->reg[t].events;

				if (!events)
					continue;

				if (os_epoll_triggered(i, events) > 0)
					irq_io_loop(&irq_entry->reg[t], regs);
			}
		}
	}

	free_irqs();
}

static struct irq_entry *get_irq_entry_by_fd(int fd)
{
	struct irq_entry *walk;

	lockdep_assert_held(&irq_lock);

	list_for_each_entry(walk, &active_fds, list) {
		if (walk->fd == fd)
			return walk;
	}

	return NULL;
}

static void free_irq_entry(struct irq_entry *to_free, bool remove)
{
	if (!to_free)
		return;

	if (remove)
		os_del_epoll_fd(to_free->fd);
	list_del(&to_free->list);
	kfree(to_free);
}

static bool update_irq_entry(struct irq_entry *entry)
{
	enum um_irq_type i;
	int events = 0;

	for (i = 0; i < NUM_IRQ_TYPES; i++)
		events |= entry->reg[i].events;

	if (events) {
		/* will modify (instead of add) if needed */
		os_add_epoll_fd(events, entry->fd, entry);
		return true;
	}

	os_del_epoll_fd(entry->fd);
	return false;
}

static void update_or_free_irq_entry(struct irq_entry *entry)
{
	if (!update_irq_entry(entry))
		free_irq_entry(entry, false);
}

static int activate_fd(int irq, int fd, enum um_irq_type type, void *dev_id)
{
	struct irq_entry *irq_entry;
	int err, events = os_event_mask(type);
	unsigned long flags;

	err = os_set_fd_async(fd);
	if (err < 0)
		goto out;

	spin_lock_irqsave(&irq_lock, flags);
	irq_entry = get_irq_entry_by_fd(fd);
	if (irq_entry) {
		/* cannot register the same FD twice with the same type */
		if (WARN_ON(irq_entry->reg[type].events)) {
			err = -EALREADY;
			goto out_unlock;
		}

		/* temporarily disable to avoid IRQ-side locking */
		os_del_epoll_fd(fd);
	} else {
		irq_entry = kzalloc(sizeof(*irq_entry), GFP_ATOMIC);
		if (!irq_entry) {
			err = -ENOMEM;
			goto out_unlock;
		}
		irq_entry->fd = fd;
		list_add_tail(&irq_entry->list, &active_fds);
		maybe_sigio_broken(fd);
	}

	irq_entry->reg[type].id = dev_id;
	irq_entry->reg[type].irq = irq;
	irq_entry->reg[type].active = true;
	irq_entry->reg[type].events = events;

	WARN_ON(!update_irq_entry(irq_entry));
	spin_unlock_irqrestore(&irq_lock, flags);

	return 0;
out_unlock:
	spin_unlock_irqrestore(&irq_lock, flags);
out:
	return err;
}

/*
 * Remove the entry or entries for a specific FD; if you
 * don't want to remove all the possible entries then use
 * um_free_irq() or deactivate_fd() instead.
 */
void free_irq_by_fd(int fd)
{
	struct irq_entry *to_free;
	unsigned long flags;

	spin_lock_irqsave(&irq_lock, flags);
	to_free = get_irq_entry_by_fd(fd);
	free_irq_entry(to_free, true);
	spin_unlock_irqrestore(&irq_lock, flags);
}
EXPORT_SYMBOL(free_irq_by_fd);

static void free_irq_by_irq_and_dev(unsigned int irq, void *dev)
{
	struct irq_entry *entry;
	unsigned long flags;

	spin_lock_irqsave(&irq_lock, flags);
	list_for_each_entry(entry, &active_fds, list) {
		enum um_irq_type i;

		for (i = 0; i < NUM_IRQ_TYPES; i++) {
			struct irq_reg *reg = &entry->reg[i];

			if (!reg->events)
				continue;
			if (reg->irq != irq)
				continue;
			if (reg->id != dev)
				continue;

			os_del_epoll_fd(entry->fd);
			reg->events = 0;
			update_or_free_irq_entry(entry);
			goto out;
		}
	}
out:
	spin_unlock_irqrestore(&irq_lock, flags);
}

void deactivate_fd(int fd, int irqnum)
{
	struct irq_entry *entry;
	unsigned long flags;
	enum um_irq_type i;

	os_del_epoll_fd(fd);

	spin_lock_irqsave(&irq_lock, flags);
	entry = get_irq_entry_by_fd(fd);
	if (!entry)
		goto out;

	for (i = 0; i < NUM_IRQ_TYPES; i++) {
		if (!entry->reg[i].events)
			continue;
		if (entry->reg[i].irq == irqnum)
			entry->reg[i].events = 0;
	}

	update_or_free_irq_entry(entry);
out:
	spin_unlock_irqrestore(&irq_lock, flags);

	ignore_sigio_fd(fd);
}
EXPORT_SYMBOL(deactivate_fd);

/*
 * Called just before shutdown in order to provide a clean exec
 * environment in case the system is rebooting. No locking because
 * that would cause a pointless shutdown hang if something hadn't
 * released the lock.
 */
int deactivate_all_fds(void)
{
	struct irq_entry *entry;

	/* Stop IO. The IRQ loop has no lock so this is our
	 * only way of making sure we are safe to dispose
	 * of all IRQ handlers
	 */
	os_set_ioignore();

	/* we can no longer call kfree() here so just deactivate */
	list_for_each_entry(entry, &active_fds, list)
		os_del_epoll_fd(entry->fd);
	os_close_epoll_fd();
	return 0;
}

/*
 * do_IRQ handles all normal device IRQs (the special
 * SMP cross-CPU interrupts have their own specific
 * handlers).
 */
unsigned int do_IRQ(int irq, struct uml_pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs((struct pt_regs *)regs);
	irq_enter();
	generic_handle_irq(irq);
	irq_exit();
	set_irq_regs(old_regs);
	return 1;
}

void um_free_irq(int irq, void *dev)
{
	if (WARN(irq < 0 || irq > NR_IRQS, "freeing invalid irq %d", irq))
		return;

	free_irq_by_irq_and_dev(irq, dev);
	free_irq(irq, dev);
	clear_bit(irq, irqs_allocated);
}
EXPORT_SYMBOL(um_free_irq);

int um_request_irq(int irq, int fd, enum um_irq_type type,
		   irq_handler_t handler, unsigned long irqflags,
		   const char *devname, void *dev_id)
{
	int err;

	if (irq == UM_IRQ_ALLOC) {
		int i;

		for (i = UM_FIRST_DYN_IRQ; i < NR_IRQS; i++) {
			if (!test_and_set_bit(i, irqs_allocated)) {
				irq = i;
				break;
			}
		}
	}

	if (irq < 0)
		return -ENOSPC;

	if (fd != -1) {
		err = activate_fd(irq, fd, type, dev_id);
		if (err)
			goto error;
	}

	err = request_irq(irq, handler, irqflags, devname, dev_id);
	if (err < 0)
		goto error;

	return irq;

error:
	clear_bit(irq, irqs_allocated);
	return err;
}
EXPORT_SYMBOL(um_request_irq);

#ifdef CONFIG_PM_SLEEP
void um_irqs_suspend(void)
{
	struct irq_entry *entry;
	unsigned long flags;

	sig_info[SIGIO] = sigio_handler_suspend;

	spin_lock_irqsave(&irq_lock, flags);
	list_for_each_entry(entry, &active_fds, list) {
		enum um_irq_type t;
		bool wake = false;

		for (t = 0; t < NUM_IRQ_TYPES; t++) {
			if (!entry->reg[t].events)
				continue;

			/*
			 * For the SIGIO_WRITE_IRQ, which is used to handle the
			 * SIGIO workaround thread, we need special handling:
			 * enable wake for it itself, but below we tell it about
			 * any FDs that should be suspended.
			 */
			if (entry->reg[t].wakeup ||
			    entry->reg[t].irq == SIGIO_WRITE_IRQ) {
				wake = true;
				break;
			}
		}

		if (!wake) {
			entry->suspended = true;
			os_clear_fd_async(entry->fd);
			entry->sigio_workaround =
				!__ignore_sigio_fd(entry->fd);
		}
	}
	spin_unlock_irqrestore(&irq_lock, flags);
}

void um_irqs_resume(void)
{
	struct irq_entry *entry;
	unsigned long flags;

	spin_lock_irqsave(&irq_lock, flags);
	list_for_each_entry(entry, &active_fds, list) {
		if (entry->suspended) {
			int err = os_set_fd_async(entry->fd);

			WARN(err < 0, "os_set_fd_async returned %d\n", err);
			entry->suspended = false;

			if (entry->sigio_workaround) {
				err = __add_sigio_fd(entry->fd);
				WARN(err < 0, "add_sigio_returned %d\n", err);
			}
		}
	}
	spin_unlock_irqrestore(&irq_lock, flags);

	sig_info[SIGIO] = sigio_handler;
	send_sigio_to_self();
}

static int normal_irq_set_wake(struct irq_data *d, unsigned int on)
{
	struct irq_entry *entry;
	unsigned long flags;

	spin_lock_irqsave(&irq_lock, flags);
	list_for_each_entry(entry, &active_fds, list) {
		enum um_irq_type t;

		for (t = 0; t < NUM_IRQ_TYPES; t++) {
			if (!entry->reg[t].events)
				continue;

			if (entry->reg[t].irq != d->irq)
				continue;
			entry->reg[t].wakeup = on;
			goto unlock;
		}
	}
unlock:
	spin_unlock_irqrestore(&irq_lock, flags);
	return 0;
}
#else
#define normal_irq_set_wake NULL
#endif

/*
 * irq_chip must define at least enable/disable and ack when
 * the edge handler is used.
 */
static void dummy(struct irq_data *d)
{
}

/* This is used for everything other than the timer. */
static struct irq_chip normal_irq_type = {
	.name = "SIGIO",
	.irq_disable = dummy,
	.irq_enable = dummy,
	.irq_ack = dummy,
	.irq_mask = dummy,
	.irq_unmask = dummy,
	.irq_set_wake = normal_irq_set_wake,
};

static struct irq_chip alarm_irq_type = {
	.name = "SIGALRM",
	.irq_disable = dummy,
	.irq_enable = dummy,
	.irq_ack = dummy,
	.irq_mask = dummy,
	.irq_unmask = dummy,
};

void __init init_IRQ(void)
{
	int i;

	irq_set_chip_and_handler(TIMER_IRQ, &alarm_irq_type, handle_edge_irq);

	for (i = 1; i < NR_IRQS; i++)
		irq_set_chip_and_handler(i, &normal_irq_type, handle_edge_irq);
	/* Initialize EPOLL Loop */
	os_setup_epoll();
}
uml: iRQ stacks
Add a separate IRQ stack. This differs from i386 in having the entire
interrupt run on a separate stack rather than starting on the normal kernel
stack and switching over once some preparation has been done. The underlying
mechanism, is of course, sigaltstack.
Another difference is that interrupts that happen in userspace are handled on
the normal kernel stack. These cause a wait wakeup instead of a signal
delivery so there is no point in trying to switch stacks for these. There's
no other stuff on the stack, so there is no extra stack consumption.
This quirk makes it possible to have the entire interrupt run on a separate
stack - process preemption (and calls to schedule()) happens on a normal
kernel stack. If we enable CONFIG_PREEMPT, this will need to be rethought.
The IRQ stack for CPU 0 is declared in the same way as the initial kernel
stack. IRQ stacks for other CPUs will be allocated dynamically.
An extra field was added to the thread_info structure. When the active
thread_info is copied to the IRQ stack, the real_thread field points back to
the original stack. This makes it easy to tell where to copy the thread_info
struct back to when the interrupt is finished. It also serves as a marker of
a nested interrupt. It is NULL for the first interrupt on the stack, and
non-NULL for any nested interrupts.
Care is taken to behave correctly if a second interrupt comes in when the
thread_info structure is being set up or taken down. I could just disable
interrupts here, but I don't feel like giving up any of the performance gained
by not flipping signals on and off.
If an interrupt comes in during these critical periods, the handler can't run
because it has no idea what shape the stack is in. So, it sets a bit for its
signal in a global mask and returns. The outer handler will deal with this
signal itself.
Atomicity is had with xchg. A nested interrupt that needs to bail out will
xchg its signal mask into pending_mask and repeat in case yet another
interrupt hit at the same time, until the mask stabilizes.
The outermost interrupt will set up the thread_info and xchg a zero into
pending_mask when it is done. At this point, nested interrupts will look at
->real_thread and see that no setup needs to be done. They can just continue
normally.
Similar care needs to be taken when exiting the outer handler. If another
interrupt comes in while it is copying the thread_info, it will drop a bit
into pending_mask. The outer handler will check this and if it is non-zero,
will loop, set up the stack again, and handle the interrupt.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-11 13:22:34 +08:00

/*
 * IRQ stack entry and exit:
 *
 * Unlike i386, UML doesn't receive IRQs on the normal kernel stack
 * and switch over to the IRQ stack after some preparation.  We use
 * sigaltstack to receive signals on a separate stack from the start.
 * These two functions make sure the rest of the kernel won't be too
 * upset by being on a different stack.  The IRQ stack has a
 * thread_info structure at the bottom so that current et al continue
 * to work.
 *
 * to_irq_stack copies the current task's thread_info to the IRQ stack
 * thread_info and sets the task's stack to point to the IRQ stack.
 *
 * from_irq_stack copies the thread_info struct back (flags may have
 * been modified) and resets the task's stack pointer.
 *
 * Tricky bits -
 *
 * What happens when two signals race each other?  UML doesn't block
 * signals with sigprocmask, SA_DEFER, or sa_mask, so a second signal
 * could arrive while a previous one is still setting up the
 * thread_info.
 *
 * There are three cases -
 *     The first interrupt on the stack - sets up the thread_info and
 * handles the interrupt
 *     A nested interrupt interrupting the copying of the thread_info -
 * can't handle the interrupt, as the stack is in an unknown state
 *     A nested interrupt not interrupting the copying of the
 * thread_info - doesn't do any setup, just handles the interrupt
 *
 * The first job is to figure out whether we interrupted stack setup.
 * This is done by xchging the signal mask with thread_info->pending.
 * If the value that comes back is zero, then there is no setup in
 * progress, and the interrupt can be handled.  If the value is
 * non-zero, then there is stack setup in progress.  In order to have
 * the interrupt handled, we leave our signal in the mask, and it will
 * be handled by the upper handler after it has set up the stack.
 *
 * Next is to figure out whether we are the outer handler or a nested
 * one.  As part of setting up the stack, thread_info->real_thread is
 * set to non-NULL (and is reset to NULL on exit).  This is the
 * nesting indicator.  If it is non-NULL, then the stack is already
 * set up and the handler can run.
 */

static unsigned long pending_mask;

uml: fix irqstack crash
This patch fixes a crash caused by an interrupt coming in when an IRQ stack
is being torn down. When this happens, handle_signal will loop, setting up
the IRQ stack again because the tearing down had finished, and handling
whatever signals had come in.
However, to_irq_stack returns a mask of pending signals to be handled, plus
bit zero is set if the IRQ stack was already active, and thus shouldn't be
torn down. This causes a problem because when handle_signal goes around
the loop, sig will be zero, and to_irq_stack will duly set bit zero in the
returned mask, faking handle_signal into believing that it shouldn't tear
down the IRQ stack and return thread_info pointers back to their original
values.
This will eventually cause a crash, as the IRQ stack thread_info will
continue pointing to the original task_struct and an interrupt will look
into it after it has been freed.
The fix is to stop passing a signal number into to_irq_stack. Rather, the
pending signals mask is initialized beforehand with the bit for sig already
set. References to sig in to_irq_stack can be replaced with references to
the mask.
[akpm@linux-foundation.org: use UL]
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-09-19 13:46:49 +08:00
unsigned long to_irq_stack(unsigned long *mask_out)
{
	struct thread_info *ti;
	unsigned long mask, old;
	int nested;

	mask = xchg(&pending_mask, *mask_out);
	if (mask != 0) {
		/*
		 * If any interrupts come in at this point, we want to
		 * make sure that their bits aren't lost by our
		 * putting our bit in.  So, this loop accumulates bits
		 * until xchg returns the same value that we put in.
		 * When that happens, there were no new interrupts,
		 * and pending_mask contains a bit for each interrupt
		 * that came in.
		 */
		old = *mask_out;
		do {
			old |= mask;
			mask = xchg(&pending_mask, old);
		} while (mask != old);
		return 1;
	}

	ti = current_thread_info();
	nested = (ti->real_thread != NULL);
	if (!nested) {
		struct task_struct *task;
		struct thread_info *tti;

		task = cpu_tasks[ti->cpu].task;
		tti = task_thread_info(task);
|
uml: fix irqstack crash
This patch fixes a crash caused by an interrupt coming in when an IRQ stack
is being torn down. When this happens, handle_signal will loop, setting up
the IRQ stack again because the tearing down had finished, and handling
whatever signals had come in.
However, to_irq_stack returns a mask of pending signals to be handled, plus
bit zero is set if the IRQ stack was already active, and thus shouldn't be
torn down. This causes a problem because when handle_signal goes around
the loop, sig will be zero, and to_irq_stack will duly set bit zero in the
returned mask, faking handle_signal into believing that it shouldn't tear
down the IRQ stack and return thread_info pointers back to their original
values.
This will eventually cause a crash, as the IRQ stack thread_info will
continue pointing to the original task_struct and an interrupt will look
into it after it has been freed.
The fix is to stop passing a signal number into to_irq_stack. Rather, the
pending signals mask is initialized beforehand with the bit for sig already
set. References to sig in to_irq_stack can be replaced with references to
the mask.
[akpm@linux-foundation.org: use UL]
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-09-19 13:46:49 +08:00
uml: iRQ stacks
Add a separate IRQ stack. This differs from i386 in having the entire
interrupt run on a separate stack rather than starting on the normal kernel
stack and switching over once some preparation has been done. The underlying
mechanism is, of course, sigaltstack.
Another difference is that interrupts that happen in userspace are handled on
the normal kernel stack. These cause a wait wakeup instead of a signal
delivery so there is no point in trying to switch stacks for these. There's
no other stuff on the stack, so there is no extra stack consumption.
This quirk makes it possible to have the entire interrupt run on a separate
stack - process preemption (and calls to schedule()) happens on a normal
kernel stack. If we enable CONFIG_PREEMPT, this will need to be rethought.
The IRQ stack for CPU 0 is declared in the same way as the initial kernel
stack. IRQ stacks for other CPUs will be allocated dynamically.
An extra field was added to the thread_info structure. When the active
thread_info is copied to the IRQ stack, the real_thread field points back to
the original stack. This makes it easy to tell where to copy the thread_info
struct back to when the interrupt is finished. It also serves as a marker of
a nested interrupt. It is NULL for the first interrupt on the stack, and
non-NULL for any nested interrupts.
Care is taken to behave correctly if a second interrupt comes in when the
thread_info structure is being set up or taken down. I could just disable
interrupts here, but I don't feel like giving up any of the performance gained
by not flipping signals on and off.
If an interrupt comes in during these critical periods, the handler can't run
because it has no idea what shape the stack is in. So, it sets a bit for its
signal in a global mask and returns. The outer handler will deal with this
signal itself.
Atomicity is achieved with xchg. A nested interrupt that needs to bail out will
xchg its signal mask into pending_mask and repeat in case yet another
interrupt hit at the same time, until the mask stabilizes.
The outermost interrupt will set up the thread_info and xchg a zero into
pending_mask when it is done. At this point, nested interrupts will look at
->real_thread and see that no setup needs to be done. They can just continue
normally.
Similar care needs to be taken when exiting the outer handler. If another
interrupt comes in while it is copying the thread_info, it will drop a bit
into pending_mask. The outer handler will check this and if it is non-zero,
will loop, set up the stack again, and handle the interrupt.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-11 13:22:34 +08:00
		*ti = *tti;
		ti->real_thread = tti;
		task->stack = ti;
	}

	mask = xchg(&pending_mask, 0);
	*mask_out |= mask | nested;
	return 0;
}
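The "bail out and re-merge until the mask stabilizes" protocol described in the
commit message above can be sketched as a single-threaded userspace model. This
is an illustration only: `fake_xchg()` stands in for the kernel's atomic
`xchg()`, and `bail_out()` is a simulated analogue of the accumulate loop, not
the real `to_irq_stack()`.

```c
#include <assert.h>

/* Simulated global mask; the kernel operates on this with atomic xchg(). */
static unsigned long pending_mask;

/* Stand-in for xchg(): store a new value, return the old one.
 * Single-threaded model only -- no real atomicity here. */
static unsigned long fake_xchg(unsigned long *p, unsigned long v)
{
	unsigned long old = *p;
	*p = v;
	return old;
}

/* A handler that finds the stack mid-setup merges its signal bit into
 * pending_mask and bails out, re-merging until xchg hands back exactly
 * the value it last stored -- i.e. no new bits arrived meanwhile. */
static int bail_out(unsigned long my_bit)
{
	unsigned long old = my_bit;
	unsigned long mask = fake_xchg(&pending_mask, my_bit);

	if (mask == 0)
		return 0;	/* stack was free: caller proceeds with setup */

	do {
		old |= mask;
		mask = fake_xchg(&pending_mask, old);
	} while (mask != old);

	return 1;	/* bail: the outer handler will service these bits */
}
```

With the mask empty, the first caller claims the stack; a second caller sees a
nonzero previous value, folds both bits together, and bails.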
unsigned long from_irq_stack(int nested)
{
	struct thread_info *ti, *to;
	unsigned long mask;

	ti = current_thread_info();

	pending_mask = 1;

	to = ti->real_thread;
	current->stack = to;
	ti->real_thread = NULL;
	*to = *ti;

	mask = xchg(&pending_mask, 0);
	return mask & ~1;
}
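The overall flow the two commit messages describe -- pre-seed the pending mask
with the signal's bit, set up the stack, dispatch every pending signal, tear
down with bit zero reserved as the "stack still active" marker, and loop again
if bits landed during teardown -- can be modeled in plain userspace C. All
names here (`sim_*`, `late_arrival`, `fake_xchg`) are part of a simulation
stood up for illustration; they are not the kernel functions.

```c
#include <assert.h>

static unsigned long pending_mask;
static unsigned long late_arrival;	/* bits "arriving" during teardown */
static int handled[16];

/* Stand-in for the kernel's atomic xchg(); single-threaded model. */
static unsigned long fake_xchg(unsigned long *p, unsigned long v)
{
	unsigned long old = *p;
	*p = v;
	return old;
}

/* Setup: collect any bits that nested handlers parked in pending_mask. */
static void sim_to_irq_stack(unsigned long *mask_out)
{
	*mask_out |= fake_xchg(&pending_mask, 0);
}

/* Teardown: bit 0 marks the stack as still active while the thread_info
 * copy is in flight; bits that land meanwhile come back (minus the
 * marker) so the caller loops again instead of losing them. */
static unsigned long sim_from_irq_stack(void)
{
	pending_mask = 1;
	pending_mask |= fake_xchg(&late_arrival, 0);	/* simulated race */
	return fake_xchg(&pending_mask, 0) & ~1UL;
}

/* Outer handler: the pending mask is pre-seeded with sig's bit (the fix
 * described above), so the setup path never needs a signal number. */
static void sim_handle_signal(int sig)
{
	unsigned long pending = 1UL << sig;

	do {
		sim_to_irq_stack(&pending);
		while (pending) {
			int s = __builtin_ctzl(pending);

			pending &= ~(1UL << s);
			handled[s]++;	/* dispatch to the real handler */
		}
		pending = sim_from_irq_stack();
	} while (pending);
}
```

A signal that arrives during teardown is picked up by the next loop iteration
rather than being dropped, which is exactly the property the crash fix restores.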