/*
 * Intel & MS High Precision Event Timer Implementation.
 *
 * Copyright (C) 2003 Intel Corporation
 *	Venki Pallipadi
 * (c) Copyright 2004 Hewlett-Packard Development Company, L.P.
 *	Bob Picco <robert.picco@hp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/miscdevice.h>
#include <linux/major.h>
#include <linux/ioport.h>
#include <linux/fcntl.h>
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/mm.h>
#include <linux/proc_fs.h>
#include <linux/spinlock.h>
#include <linux/sysctl.h>
#include <linux/wait.h>
#include <linux/sched/signal.h>
#include <linux/bcd.h>
#include <linux/seq_file.h>
#include <linux/bitops.h>
#include <linux/compat.h>
#include <linux/clocksource.h>
#include <linux/uaccess.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/acpi.h>
#include <linux/hpet.h>
#include <asm/current.h>
#include <asm/irq.h>
#include <asm/div64.h>

/*
 * The High Precision Event Timer driver.
 * This driver is closely modelled after the rtc.c driver.
 * See HPET spec revision 1.
 */
#define	HPET_USER_FREQ	(64)
#define	HPET_DRIFT	(500)

#define HPET_RANGE_SIZE		1024	/* from HPET spec */

/* WARNING -- don't get confused.  These macros are never used
 * to write the (single) counter, and rarely to read it.
 * They're badly named; to fix, someday.
 */
#if BITS_PER_LONG == 64
#define	write_counter(V, MC)	writeq(V, MC)
#define	read_counter(MC)	readq(MC)
#else
#define	write_counter(V, MC)	writel(V, MC)
#define	read_counter(MC)	readl(MC)
#endif

static DEFINE_MUTEX(hpet_mutex); /* replaces BKL */
static u32 hpet_nhpet, hpet_max_freq = HPET_USER_FREQ;

/* This clocksource driver currently only works on ia64 */
#ifdef CONFIG_IA64
static void __iomem *hpet_mctr;

static u64 read_hpet(struct clocksource *cs)
{
	return (u64)read_counter((void __iomem *)hpet_mctr);
}

static struct clocksource clocksource_hpet = {
	.name		= "hpet",
	.rating		= 250,
	.read		= read_hpet,
	.mask		= CLOCKSOURCE_MASK(64),
	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
};

static struct clocksource *hpet_clocksource;
#endif

/* A lock for concurrent access by app and isr hpet activity. */
static DEFINE_SPINLOCK(hpet_lock);

#define	HPET_DEV_NAME	(7)

struct hpet_dev {
	struct hpets *hd_hpets;
	struct hpet __iomem *hd_hpet;
	struct hpet_timer __iomem *hd_timer;
	unsigned long hd_ireqfreq;
	unsigned long hd_irqdata;
	wait_queue_head_t hd_waitqueue;
	struct fasync_struct *hd_async_queue;
	unsigned int hd_flags;
	unsigned int hd_irq;
	unsigned int hd_hdwirq;
	char hd_name[HPET_DEV_NAME];
};

struct hpets {
	struct hpets *hp_next;
	struct hpet __iomem *hp_hpet;
	unsigned long hp_hpet_phys;
	struct clocksource *hp_clocksource;
	unsigned long long hp_tick_freq;
	unsigned long hp_delta;
	unsigned int hp_ntimer;
	unsigned int hp_which;
	struct hpet_dev hp_dev[1];
};

static struct hpets *hpets;

#define	HPET_OPEN		0x0001
#define	HPET_IE			0x0002	/* interrupt enabled */
#define	HPET_PERIODIC		0x0004
#define	HPET_SHARED_IRQ		0x0008

#ifndef readq
static inline unsigned long long readq(void __iomem *addr)
{
	return readl(addr) | (((unsigned long long)readl(addr + 4)) << 32LL);
}
#endif

#ifndef writeq
static inline void writeq(unsigned long long v, void __iomem *addr)
{
	writel(v & 0xffffffff, addr);
	writel(v >> 32, addr + 4);
}
#endif
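
/*
 * Note: these fallbacks split a 64-bit access into two 32-bit MMIO
 * operations and are therefore not atomic; on 32-bit kernels the
 * read_counter()/write_counter() macros above use 32-bit accesses
 * instead of going through them.
 */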

static irqreturn_t hpet_interrupt(int irq, void *data)
{
	struct hpet_dev *devp;
	unsigned long isr;

	devp = data;
	isr = 1 << (devp - devp->hd_hpets->hp_dev);

	if ((devp->hd_flags & HPET_SHARED_IRQ) &&
	    !(isr & readl(&devp->hd_hpet->hpet_isr)))
		return IRQ_NONE;

	spin_lock(&hpet_lock);
	devp->hd_irqdata++;

	/*
	 * For non-periodic timers, increment the accumulator.
	 * This has the effect of treating non-periodic like periodic.
	 */
	if ((devp->hd_flags & (HPET_IE | HPET_PERIODIC)) == HPET_IE) {
		unsigned long m, t, mc, base, k;
		struct hpet __iomem *hpet = devp->hd_hpet;
		struct hpets *hpetp = devp->hd_hpets;

		t = devp->hd_ireqfreq;
		m = read_counter(&devp->hd_timer->hpet_compare);
		mc = read_counter(&hpet->hpet_mc);
		/* The time for the next interrupt would logically be t + m,
		 * however, if we are very unlucky and the interrupt is delayed
		 * for longer than t then we will completely miss the next
		 * interrupt if we set t + m and an application will hang.
		 * Therefore we need to make a more complex computation assuming
		 * that there exists a k for which the following is true:
		 *
		 *   k * t + base < mc + delta
		 *   (k + 1) * t + base > mc + delta
		 *
		 * where t is the interval in hpet ticks for the given freq,
		 * base is the theoretical start value 0 < base < t,
		 * mc is the main counter value at the time of the interrupt,
		 * delta is the time it takes to write a value to the
		 * comparator.
		 * k may then be computed as (mc - base + delta) / t .
		 */
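		/*
		 * Worked example (illustrative numbers only): with an
		 * interval of t = 5 and base = 0, the interrupt armed for
		 * mc = 5 may only get serviced at mc = 11.  The comparator
		 * then still reads m = 5, and the naive m + t = 10 is
		 * already in the past.  With delta = 0 the formula gives
		 * k = 11 / 5 = 2, so the comparator is reprogrammed to
		 * (k + 1) * t + base = 15, the next interval boundary that
		 * is still in the future.
		 */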
		base = mc % t;
		k = (mc - base + hpetp->hp_delta) / t;
		write_counter(t * (k + 1) + base,
			      &devp->hd_timer->hpet_compare);
	}

	if (devp->hd_flags & HPET_SHARED_IRQ)
		writel(isr, &devp->hd_hpet->hpet_isr);
	spin_unlock(&hpet_lock);

	wake_up_interruptible(&devp->hd_waitqueue);

	kill_fasync(&devp->hd_async_queue, SIGIO, POLL_IN);

	return IRQ_HANDLED;
}

static void hpet_timer_set_irq(struct hpet_dev *devp)
{
	unsigned long v;
	int irq, gsi;
	struct hpet_timer __iomem *timer;

	spin_lock_irq(&hpet_lock);
	if (devp->hd_hdwirq) {
		spin_unlock_irq(&hpet_lock);
		return;
	}

	timer = devp->hd_timer;

	/* we prefer level triggered mode */
	v = readl(&timer->hpet_config);
	if (!(v & Tn_INT_TYPE_CNF_MASK)) {
		v |= Tn_INT_TYPE_CNF_MASK;
		writel(v, &timer->hpet_config);
	}
	spin_unlock_irq(&hpet_lock);

	v = (readq(&timer->hpet_config) & Tn_INT_ROUTE_CAP_MASK) >>
				 Tn_INT_ROUTE_CAP_SHIFT;

	/*
	 * In PIC mode, skip IRQ0-4, IRQ6-9 and IRQ12-15, which are always
	 * used by legacy devices. In IO APIC mode, we skip all the legacy
	 * IRQs.
	 */
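	/*
	 * As a bitmask, those legacy ranges are bits 0-4 (0x001f),
	 * bits 6-9 (0x03c0) and bits 12-15 (0xf000), i.e. 0xf3df in
	 * total, while 0xffff masks off IRQ0-15 entirely.
	 */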
	if (acpi_irq_model == ACPI_IRQ_MODEL_PIC)
		v &= ~0xf3df;
	else
		v &= ~0xffff;

	for_each_set_bit(irq, &v, HPET_MAX_IRQ) {
		if (irq >= nr_irqs) {
			irq = HPET_MAX_IRQ;
			break;
		}

		gsi = acpi_register_gsi(NULL, irq, ACPI_LEVEL_SENSITIVE,
					ACPI_ACTIVE_LOW);
		if (gsi > 0)
			break;

		/* FIXME: Setup interrupt source table */
	}

	if (irq < HPET_MAX_IRQ) {
		spin_lock_irq(&hpet_lock);
		v = readl(&timer->hpet_config);
		v |= irq << Tn_INT_ROUTE_CNF_SHIFT;
		writel(v, &timer->hpet_config);
		devp->hd_hdwirq = gsi;
		spin_unlock_irq(&hpet_lock);
	}
	return;
}

static int hpet_open(struct inode *inode, struct file *file)
{
	struct hpet_dev *devp;
	struct hpets *hpetp;
	int i;

	if (file->f_mode & FMODE_WRITE)
		return -EINVAL;

	mutex_lock(&hpet_mutex);
	spin_lock_irq(&hpet_lock);

	for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next)
		for (i = 0; i < hpetp->hp_ntimer; i++)
			if (hpetp->hp_dev[i].hd_flags & HPET_OPEN)
				continue;
			else {
				devp = &hpetp->hp_dev[i];
				break;
			}

	if (!devp) {
		spin_unlock_irq(&hpet_lock);
		mutex_unlock(&hpet_mutex);
		return -EBUSY;
	}

	file->private_data = devp;
	devp->hd_irqdata = 0;
	devp->hd_flags |= HPET_OPEN;
	spin_unlock_irq(&hpet_lock);
	mutex_unlock(&hpet_mutex);

	hpet_timer_set_irq(devp);

	return 0;
}

static ssize_t
hpet_read(struct file *file, char __user *buf, size_t count, loff_t * ppos)
{
	DECLARE_WAITQUEUE(wait, current);
	unsigned long data;
	ssize_t retval;
	struct hpet_dev *devp;

	devp = file->private_data;
	if (!devp->hd_ireqfreq)
		return -EIO;

	if (count < sizeof(unsigned long))
		return -EINVAL;

	add_wait_queue(&devp->hd_waitqueue, &wait);

	for ( ; ; ) {
		set_current_state(TASK_INTERRUPTIBLE);

		spin_lock_irq(&hpet_lock);
		data = devp->hd_irqdata;
		devp->hd_irqdata = 0;
		spin_unlock_irq(&hpet_lock);

		if (data)
			break;
		else if (file->f_flags & O_NONBLOCK) {
			retval = -EAGAIN;
			goto out;
		} else if (signal_pending(current)) {
			retval = -ERESTARTSYS;
			goto out;
		}
		schedule();
	}

	retval = put_user(data, (unsigned long __user *)buf);
	if (!retval)
		retval = sizeof(unsigned long);
out:
	__set_current_state(TASK_RUNNING);
	remove_wait_queue(&devp->hd_waitqueue, &wait);

	return retval;
}

static __poll_t hpet_poll(struct file *file, poll_table * wait)
{
	unsigned long v;
	struct hpet_dev *devp;

	devp = file->private_data;

	if (!devp->hd_ireqfreq)
		return 0;

	poll_wait(file, &devp->hd_waitqueue, wait);

	spin_lock_irq(&hpet_lock);
	v = devp->hd_irqdata;
	spin_unlock_irq(&hpet_lock);

	if (v != 0)
		return EPOLLIN | EPOLLRDNORM;

	return 0;
}

#ifdef CONFIG_HPET_MMAP
#ifdef CONFIG_HPET_MMAP_DEFAULT
static int hpet_mmap_enabled = 1;
#else
static int hpet_mmap_enabled = 0;
#endif

static __init int hpet_mmap_enable(char *str)
{
	get_option(&str, &hpet_mmap_enabled);
	pr_info("HPET mmap %s\n", hpet_mmap_enabled ? "enabled" : "disabled");
	return 1;
}
__setup("hpet_mmap=", hpet_mmap_enable);
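
/*
 * For example, booting with "hpet_mmap=1" on the kernel command line
 * enables mmap() of the HPET register page even when
 * CONFIG_HPET_MMAP_DEFAULT is not set, and "hpet_mmap=0" disables it.
 */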

static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct hpet_dev *devp;
	unsigned long addr;

	if (!hpet_mmap_enabled)
		return -EACCES;

	devp = file->private_data;
	addr = devp->hd_hpets->hp_hpet_phys;

	if (addr & (PAGE_SIZE - 1))
		return -ENOSYS;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	return vm_iomap_memory(vma, addr, PAGE_SIZE);
}
#else
static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
{
	return -ENOSYS;
}
#endif

static int hpet_fasync(int fd, struct file *file, int on)
{
	struct hpet_dev *devp;

	devp = file->private_data;

	if (fasync_helper(fd, file, on, &devp->hd_async_queue) >= 0)
		return 0;
	else
		return -EIO;
}

static int hpet_release(struct inode *inode, struct file *file)
{
	struct hpet_dev *devp;
	struct hpet_timer __iomem *timer;
	int irq = 0;

	devp = file->private_data;
	timer = devp->hd_timer;

	spin_lock_irq(&hpet_lock);

	writeq((readq(&timer->hpet_config) & ~Tn_INT_ENB_CNF_MASK),
	       &timer->hpet_config);

	irq = devp->hd_irq;
	devp->hd_irq = 0;

	devp->hd_ireqfreq = 0;

	if (devp->hd_flags & HPET_PERIODIC
	    && readq(&timer->hpet_config) & Tn_TYPE_CNF_MASK) {
		unsigned long v;

		v = readq(&timer->hpet_config);
		v ^= Tn_TYPE_CNF_MASK;
		writeq(v, &timer->hpet_config);
	}

	devp->hd_flags &= ~(HPET_OPEN | HPET_IE | HPET_PERIODIC);
	spin_unlock_irq(&hpet_lock);

	if (irq)
		free_irq(irq, devp);

	file->private_data = NULL;
	return 0;
}

static int hpet_ioctl_ieon(struct hpet_dev *devp)
{
	struct hpet_timer __iomem *timer;
	struct hpet __iomem *hpet;
	struct hpets *hpetp;
	int irq;
	unsigned long g, v, t, m;
	unsigned long flags, isr;

	timer = devp->hd_timer;
	hpet = devp->hd_hpet;
	hpetp = devp->hd_hpets;

	if (!devp->hd_ireqfreq)
		return -EIO;

	spin_lock_irq(&hpet_lock);

	if (devp->hd_flags & HPET_IE) {
		spin_unlock_irq(&hpet_lock);
		return -EBUSY;
	}

	devp->hd_flags |= HPET_IE;

	if (readl(&timer->hpet_config) & Tn_INT_TYPE_CNF_MASK)
		devp->hd_flags |= HPET_SHARED_IRQ;
	spin_unlock_irq(&hpet_lock);

	irq = devp->hd_hdwirq;

	if (irq) {
		unsigned long irq_flags;

		if (devp->hd_flags & HPET_SHARED_IRQ) {
			/*
			 * To prevent the interrupt handler from seeing an
			 * unwanted interrupt status bit, program the timer
			 * so that it will not fire in the near future ...
			 */
			writel(readl(&timer->hpet_config) & ~Tn_TYPE_CNF_MASK,
			       &timer->hpet_config);
			write_counter(read_counter(&hpet->hpet_mc),
				      &timer->hpet_compare);
			/* ... and clear any left-over status. */
			isr = 1 << (devp - devp->hd_hpets->hp_dev);
			writel(isr, &hpet->hpet_isr);
		}

		sprintf(devp->hd_name, "hpet%d", (int)(devp - hpetp->hp_dev));
		irq_flags = devp->hd_flags & HPET_SHARED_IRQ ? IRQF_SHARED : 0;
		if (request_irq(irq, hpet_interrupt, irq_flags,
				devp->hd_name, (void *)devp)) {
			printk(KERN_ERR "hpet: IRQ %d is not free\n", irq);
			irq = 0;
		}
	}

	if (irq == 0) {
		spin_lock_irq(&hpet_lock);
		devp->hd_flags ^= HPET_IE;
		spin_unlock_irq(&hpet_lock);
		return -EIO;
	}

	devp->hd_irq = irq;
	t = devp->hd_ireqfreq;
	v = readq(&timer->hpet_config);

	/* 64-bit comparators are not yet supported through the ioctls,
	 * so force this into 32-bit mode if it supports both modes
	 */
	g = v | Tn_32MODE_CNF_MASK | Tn_INT_ENB_CNF_MASK;

	if (devp->hd_flags & HPET_PERIODIC) {
		g |= Tn_TYPE_CNF_MASK;
		v |= Tn_TYPE_CNF_MASK | Tn_VAL_SET_CNF_MASK;
		writeq(v, &timer->hpet_config);
		local_irq_save(flags);

		/*
		 * NOTE: First we modify the hidden accumulator
		 * register supported by periodic-capable comparators.
		 * We never want to modify the (single) counter; that
		 * would affect all the comparators. The value written
		 * is the counter value when the first interrupt is due.
		 */
		m = read_counter(&hpet->hpet_mc);
		write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
		/*
		 * Then we modify the comparator, indicating the period
		 * for subsequent interrupts.
		 */
		write_counter(t, &timer->hpet_compare);
	} else {
		local_irq_save(flags);
		m = read_counter(&hpet->hpet_mc);
		write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
	}

	if (devp->hd_flags & HPET_SHARED_IRQ) {
		isr = 1 << (devp - devp->hd_hpets->hp_dev);
		writel(isr, &hpet->hpet_isr);
	}
	writeq(g, &timer->hpet_config);
	local_irq_restore(flags);

	return 0;
}

/* converts Hz to number of timer ticks */
static inline unsigned long hpet_time_div(struct hpets *hpets,
					  unsigned long dis)
{
	unsigned long long m;

	m = hpets->hp_tick_freq + (dis >> 1);
	do_div(m, dis);
	return (unsigned long)m;
}
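
/*
 * For example, with an (illustrative) 14.318180 MHz HPET,
 * hpet_time_div(hpetp, 64) returns (14318180 + 32) / 64 = 223722,
 * i.e. the comparator interval in ticks for a 64 Hz interrupt rate.
 * The "+ (dis >> 1)" term rounds to the nearest tick.
 */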

static int
hpet_ioctl_common(struct hpet_dev *devp, unsigned int cmd, unsigned long arg,
		  struct hpet_info *info)
{
	struct hpet_timer __iomem *timer;
	struct hpets *hpetp;
	int err;
	unsigned long v;

	switch (cmd) {
	case HPET_IE_OFF:
	case HPET_INFO:
	case HPET_EPI:
	case HPET_DPI:
	case HPET_IRQFREQ:
		timer = devp->hd_timer;
		hpetp = devp->hd_hpets;
		break;
	case HPET_IE_ON:
		return hpet_ioctl_ieon(devp);
	default:
		return -EINVAL;
	}

	err = 0;

	switch (cmd) {
	case HPET_IE_OFF:
		if ((devp->hd_flags & HPET_IE) == 0)
			break;
		v = readq(&timer->hpet_config);
		v &= ~Tn_INT_ENB_CNF_MASK;
		writeq(v, &timer->hpet_config);
		if (devp->hd_irq) {
			free_irq(devp->hd_irq, devp);
			devp->hd_irq = 0;
		}
		devp->hd_flags ^= HPET_IE;
		break;
	case HPET_INFO:
		{
			memset(info, 0, sizeof(*info));
			if (devp->hd_ireqfreq)
				info->hi_ireqfreq =
					hpet_time_div(hpetp, devp->hd_ireqfreq);
			info->hi_flags =
			    readq(&timer->hpet_config) & Tn_PER_INT_CAP_MASK;
			info->hi_hpet = hpetp->hp_which;
			info->hi_timer = devp - hpetp->hp_dev;
			break;
		}
	case HPET_EPI:
		v = readq(&timer->hpet_config);
		if ((v & Tn_PER_INT_CAP_MASK) == 0) {
			err = -ENXIO;
			break;
		}
		devp->hd_flags |= HPET_PERIODIC;
		break;
	case HPET_DPI:
		v = readq(&timer->hpet_config);
		if ((v & Tn_PER_INT_CAP_MASK) == 0) {
			err = -ENXIO;
			break;
		}
		if (devp->hd_flags & HPET_PERIODIC &&
		    readq(&timer->hpet_config) & Tn_TYPE_CNF_MASK) {
			v = readq(&timer->hpet_config);
			v ^= Tn_TYPE_CNF_MASK;
			writeq(v, &timer->hpet_config);
		}
		devp->hd_flags &= ~HPET_PERIODIC;
		break;
	case HPET_IRQFREQ:
		if ((arg > hpet_max_freq) &&
		    !capable(CAP_SYS_RESOURCE)) {
			err = -EACCES;
			break;
		}

		if (!arg) {
			err = -EINVAL;
			break;
		}

		devp->hd_ireqfreq = hpet_time_div(hpetp, arg);
	}

	return err;
}

static long
hpet_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct hpet_info info;
	int err;

	mutex_lock(&hpet_mutex);
	err = hpet_ioctl_common(file->private_data, cmd, arg, &info);
	mutex_unlock(&hpet_mutex);

	if ((cmd == HPET_INFO) && !err &&
	    (copy_to_user((void __user *)arg, &info, sizeof(info))))
		err = -EFAULT;

	return err;
}

#ifdef CONFIG_COMPAT
struct compat_hpet_info {
	compat_ulong_t hi_ireqfreq;	/* Hz */
	compat_ulong_t hi_flags;	/* information */
	unsigned short hi_hpet;
	unsigned short hi_timer;
};

static long
hpet_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct hpet_info info;
	int err;

	mutex_lock(&hpet_mutex);
	err = hpet_ioctl_common(file->private_data, cmd, arg, &info);
	mutex_unlock(&hpet_mutex);

	if ((cmd == HPET_INFO) && !err) {
		struct compat_hpet_info __user *u = compat_ptr(arg);
		if (put_user(info.hi_ireqfreq, &u->hi_ireqfreq) ||
		    put_user(info.hi_flags, &u->hi_flags) ||
		    put_user(info.hi_hpet, &u->hi_hpet) ||
		    put_user(info.hi_timer, &u->hi_timer))
			err = -EFAULT;
	}

	return err;
}
#endif

static const struct file_operations hpet_fops = {
	.owner = THIS_MODULE,
	.llseek = no_llseek,
	.read = hpet_read,
	.poll = hpet_poll,
	.unlocked_ioctl = hpet_ioctl,
#ifdef CONFIG_COMPAT
	.compat_ioctl = hpet_compat_ioctl,
#endif
	.open = hpet_open,
	.release = hpet_release,
	.fasync = hpet_fasync,
	.mmap = hpet_mmap,
};
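
/*
 * Minimal user-space sketch of the character-device interface above
 * (illustrative only, not part of the driver; it assumes the usual
 * /dev/hpet node and the HPET_* ioctls from <linux/hpet.h>):
 *
 *	int fd = open("/dev/hpet", O_RDONLY);
 *	unsigned long data;
 *	struct hpet_info info;
 *
 *	ioctl(fd, HPET_IRQFREQ, 64);	// request 64 interrupts per second
 *	ioctl(fd, HPET_INFO, &info);	// query timer index and capabilities
 *	ioctl(fd, HPET_IE_ON, 0);	// grab the IRQ and arm the timer
 *	read(fd, &data, sizeof(data));	// blocks, then stores the number of
 *					// interrupts since the previous read
 *	ioctl(fd, HPET_IE_OFF, 0);
 *	close(fd);
 */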

static int hpet_is_known(struct hpet_data *hdp)
{
	struct hpets *hpetp;

	for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
		if (hpetp->hp_hpet_phys == hdp->hd_phys_address)
			return 1;

	return 0;
}

static struct ctl_table hpet_table[] = {
	{
	 .procname = "max-user-freq",
	 .data = &hpet_max_freq,
	 .maxlen = sizeof(int),
	 .mode = 0644,
	 .proc_handler = proc_dointvec,
	 },
	{}
};

static struct ctl_table hpet_root[] = {
	{
	 .procname = "hpet",
	 .maxlen = 0,
	 .mode = 0555,
	 .child = hpet_table,
	 },
	{}
};

static struct ctl_table dev_root[] = {
	{
	 .procname = "dev",
	 .maxlen = 0,
	 .mode = 0555,
	 .child = hpet_root,
	 },
	{}
};

static struct ctl_table_header *sysctl_header;

/*
 * Adjustment for when arming the timer with
 * initial conditions.  That is, main counter
 * ticks expired before interrupts are enabled.
 */
#define	TICK_CALIBRATE	(1000UL)

static unsigned long __hpet_calibrate(struct hpets *hpetp)
{
	struct hpet_timer __iomem *timer = NULL;
	unsigned long t, m, count, i, flags, start;
	struct hpet_dev *devp;
	int j;
	struct hpet __iomem *hpet;

	for (j = 0, devp = hpetp->hp_dev; j < hpetp->hp_ntimer; j++, devp++)
		if ((devp->hd_flags & HPET_OPEN) == 0) {
			timer = devp->hd_timer;
			break;
		}

	if (!timer)
		return 0;

	hpet = hpetp->hp_hpet;
	t = read_counter(&timer->hpet_compare);

	i = 0;
	count = hpet_time_div(hpetp, TICK_CALIBRATE);

	local_irq_save(flags);

	start = read_counter(&hpet->hpet_mc);

	do {
		m = read_counter(&hpet->hpet_mc);
		write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
	} while (i++, (m - start) < count);

	local_irq_restore(flags);

	return (m - start) / i;
}

static unsigned long hpet_calibrate(struct hpets *hpetp)
{
	unsigned long ret = ~0UL;
	unsigned long tmp;

	/*
	 * Try to calibrate until the return value becomes a stable, small
	 * value.  If an SMI interrupts the calibration loop, the return
	 * value for that pass will be large; retrying avoids its impact.
	 */
	for ( ; ; ) {
		tmp = __hpet_calibrate(hpetp);
		if (ret <= tmp)
			break;
		ret = tmp;
	}

	return ret;
}

int hpet_alloc(struct hpet_data *hdp)
{
	u64 cap, mcfg;
	struct hpet_dev *devp;
	u32 i, ntimer;
	struct hpets *hpetp;
	struct hpet __iomem *hpet;
	static struct hpets *last;
	unsigned long period;
	unsigned long long temp;
	u32 remainder;

	/*
	 * hpet_alloc can be called by platform dependent code.
	 * If platform dependent code has allocated the hpet that
	 * ACPI has also reported, then we catch it here.
	 */
	if (hpet_is_known(hdp)) {
		printk(KERN_DEBUG "%s: duplicate HPET ignored\n",
			__func__);
		return 0;
	}

	hpetp = kzalloc(struct_size(hpetp, hp_dev, hdp->hd_nirqs - 1),
			GFP_KERNEL);

	if (!hpetp)
		return -ENOMEM;

	hpetp->hp_which = hpet_nhpet++;
	hpetp->hp_hpet = hdp->hd_address;
	hpetp->hp_hpet_phys = hdp->hd_phys_address;

	hpetp->hp_ntimer = hdp->hd_nirqs;

	for (i = 0; i < hdp->hd_nirqs; i++)
		hpetp->hp_dev[i].hd_hdwirq = hdp->hd_irq[i];

	hpet = hpetp->hp_hpet;

	cap = readq(&hpet->hpet_cap);

	ntimer = ((cap & HPET_NUM_TIM_CAP_MASK) >> HPET_NUM_TIM_CAP_SHIFT) + 1;

	if (hpetp->hp_ntimer != ntimer) {
		printk(KERN_WARNING "hpet: number of irqs doesn't agree"
		       " with number of timers\n");
		kfree(hpetp);
		return -ENODEV;
	}

	if (last)
		last->hp_next = hpetp;
	else
		hpets = hpetp;

	last = hpetp;

	period = (cap & HPET_COUNTER_CLK_PERIOD_MASK) >>
		HPET_COUNTER_CLK_PERIOD_SHIFT; /* fs, 10^-15 */
	temp = 1000000000000000uLL; /* 10^15 femtoseconds per second */
	temp += period >> 1; /* round */
	do_div(temp, period);
	hpetp->hp_tick_freq = temp; /* ticks per second */
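	/*
	 * For example (illustrative): a period of 69841279 fs yields
	 * (10^15 + 69841279/2) / 69841279 ~= 14318180 ticks per second,
	 * i.e. a 14.318180 MHz counter.
	 */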

	printk(KERN_INFO "hpet%d: at MMIO 0x%lx, IRQ%s",
		hpetp->hp_which, hdp->hd_phys_address,
		hpetp->hp_ntimer > 1 ? "s" : "");
	for (i = 0; i < hpetp->hp_ntimer; i++)
		printk(KERN_CONT "%s %d", i > 0 ? "," : "", hdp->hd_irq[i]);
	printk(KERN_CONT "\n");

	temp = hpetp->hp_tick_freq;
	remainder = do_div(temp, 1000000);
	printk(KERN_INFO
		"hpet%u: %u comparators, %d-bit %u.%06u MHz counter\n",
		hpetp->hp_which, hpetp->hp_ntimer,
		cap & HPET_COUNTER_SIZE_MASK ? 64 : 32,
		(unsigned) temp, remainder);

	mcfg = readq(&hpet->hpet_config);
	if ((mcfg & HPET_ENABLE_CNF_MASK) == 0) {
		write_counter(0L, &hpet->hpet_mc);
		mcfg |= HPET_ENABLE_CNF_MASK;
		writeq(mcfg, &hpet->hpet_config);
	}

	for (i = 0, devp = hpetp->hp_dev; i < hpetp->hp_ntimer; i++, devp++) {
		struct hpet_timer __iomem *timer;

		timer = &hpet->hpet_timers[devp - hpetp->hp_dev];

		devp->hd_hpets = hpetp;
		devp->hd_hpet = hpet;
		devp->hd_timer = timer;

		/*
		 * If the timer was reserved by platform code,
		 * then make timer unavailable for opens.
		 */
		if (hdp->hd_state & (1 << i)) {
			devp->hd_flags = HPET_OPEN;
			continue;
		}

		init_waitqueue_head(&devp->hd_waitqueue);
	}

	hpetp->hp_delta = hpet_calibrate(hpetp);

/* This clocksource driver currently only works on ia64 */
#ifdef CONFIG_IA64
	if (!hpet_clocksource) {
		hpet_mctr = (void __iomem *)&hpetp->hp_hpet->hpet_mc;
		clocksource_hpet.archdata.fsys_mmio = hpet_mctr;
		clocksource_register_hz(&clocksource_hpet, hpetp->hp_tick_freq);
		hpetp->hp_clocksource = &clocksource_hpet;
		hpet_clocksource = &clocksource_hpet;
	}
#endif

	return 0;
}
|
|
|
|
|
|
|
|
static acpi_status hpet_resources(struct acpi_resource *res, void *data)
|
|
|
|
{
|
|
|
|
struct hpet_data *hdp;
|
|
|
|
acpi_status status;
|
|
|
|
struct acpi_resource_address64 addr;
|
|
|
|
|
|
|
|
hdp = data;
|
|
|
|
|
|
|
|
status = acpi_resource_to_address64(res, &addr);
|
|
|
|
|
|
|
|
if (ACPI_SUCCESS(status)) {
|
2015-01-26 16:58:56 +08:00
|
|
|
hdp->hd_phys_address = addr.address.minimum;
|
|
|
|
hdp->hd_address = ioremap(addr.address.minimum, addr.address.address_length);
|
2019-03-09 11:50:24 +08:00
|
|
|
if (!hdp->hd_address)
|
|
|
|
return AE_ERROR;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2005-10-31 07:03:44 +08:00
|
|
|
if (hpet_is_known(hdp)) {
|
|
|
|
iounmap(hdp->hd_address);
|
2007-09-21 09:23:13 +08:00
|
|
|
return AE_ALREADY_EXISTS;
|
2005-10-31 07:03:44 +08:00
|
|
|
}
|
[ACPI] ACPICA 20050930
Completed a major overhaul of the Resource Manager code -
specifically, optimizations in the area of the AML/internal
resource conversion code. The code has been optimized to
simplify and eliminate duplicated code, CPU stack use has
been decreased by optimizing function parameters and local
variables, and naming conventions across the manager have
been standardized for clarity and ease of maintenance (this
includes function, parameter, variable, and struct/typedef
names.)
All Resource Manager dispatch and information tables have
been moved to a single location for clarity and ease of
maintenance. One new file was created, named "rsinfo.c".
The ACPI return macros (return_ACPI_STATUS, etc.) have
been modified to guarantee that the argument is
not evaluated twice, making them less prone to macro
side-effects. However, since there exists the possibility
of additional stack use if a particular compiler cannot
optimize them (such as in the debug generation case),
the original macros are optionally available. Note that
some invocations of the return_VALUE macro may now cause
size mismatch warnings; the return_UINT8 and return_UINT32
macros are provided to eliminate these. (From Randy Dunlap)
Implemented a new mechanism to enable debug tracing for
individual control methods. A new external interface,
acpi_debug_trace(), is provided to enable this mechanism. The
intent is to allow the host OS to easily enable and disable
tracing for problematic control methods. This interface
can be easily exposed to a user or debugger interface if
desired. See the file psxface.c for details.
acpi_ut_callocate() will now return a valid pointer if a
length of zero is specified - a length of one is used
and a warning is issued. This matches the behavior of
acpi_ut_allocate().
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-10-01 07:03:00 +08:00
|
|
|
} else if (res->type == ACPI_RESOURCE_TYPE_FIXED_MEMORY32) {
|
|
|
|
struct acpi_resource_fixed_memory32 *fixmem32;
|
2005-10-31 07:03:42 +08:00
|
|
|
|
|
|
|
fixmem32 = &res->data.fixed_memory32;
|
|
|
|
|
		hdp->hd_phys_address = fixmem32->address;
		hdp->hd_address = ioremap(fixmem32->address,
						HPET_RANGE_SIZE);

		if (hpet_is_known(hdp)) {
			iounmap(hdp->hd_address);
			return AE_ALREADY_EXISTS;
		}
	} else if (res->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) {
		struct acpi_resource_extended_irq *irqp;
		int i, irq;

		irqp = &res->data.extended_irq;

		/* Register each listed GSI, up to HPET_MAX_TIMERS of them. */
		for (i = 0; i < irqp->interrupt_count; i++) {
			if (hdp->hd_nirqs >= HPET_MAX_TIMERS)
				break;

			irq = acpi_register_gsi(NULL, irqp->interrupts[i],
				      irqp->triggering, irqp->polarity);
			if (irq < 0)
				return AE_ERROR;

			hdp->hd_irq[hdp->hd_nirqs] = irq;
			hdp->hd_nirqs++;
		}
	}

	return AE_OK;
}

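/*
 * hpet_acpi_add() is the ACPI driver's ->add() callback: it walks the
 * device's _CRS with hpet_resources() above and, if a register block
 * and at least one IRQ were found, hands the result to hpet_alloc().
 */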
static int hpet_acpi_add(struct acpi_device *device)
{
	acpi_status result;
	struct hpet_data data;

	memset(&data, 0, sizeof(data));

	result =
	    acpi_walk_resources(device->handle, METHOD_NAME__CRS,
				hpet_resources, &data);

	if (ACPI_FAILURE(result))
		return -ENODEV;

	if (!data.hd_address || !data.hd_nirqs) {
		/*
		 * Some buggy BIOSes report a memory region but no IRQs in
		 * _CRS, and the region base may even point into normal RAM.
		 * Unmap it before bailing out so the page is not left
		 * ioremapped (uncached); otherwise the allocator later trips
		 * over the stale uncached flag and reports "Bad page state"
		 * (https://bugzilla.novell.com/show_bug.cgi?id=629908).
		 */
		if (data.hd_address)
			iounmap(data.hd_address);
		printk("%s: no address or irqs in _CRS\n", __func__);
		return -ENODEV;
	}

	return hpet_alloc(&data);
}

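/*
 * "PNP0103" is the ACPI hardware ID for an HPET timer block; the empty
 * entry terminates the table.  Matching devices are probed through
 * hpet_acpi_add() above.
 */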
static const struct acpi_device_id hpet_device_ids[] = {
	{"PNP0103", 0},
	{"", 0},
};

static struct acpi_driver hpet_acpi_driver = {
	.name = "hpet",
	.ids = hpet_device_ids,
	.ops = {
		.add = hpet_acpi_add,
		},
};

static struct miscdevice hpet_misc = { HPET_MINOR, "hpet", &hpet_fops };

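/*
 * hpet_init() registers the /dev/hpet misc device, the sysctl table and
 * the ACPI driver, unwinding the first two steps if ACPI driver
 * registration fails.
 */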
static int __init hpet_init(void)
{
	int result;

	result = misc_register(&hpet_misc);
	if (result < 0)
		return -ENODEV;

	sysctl_header = register_sysctl_table(dev_root);

	result = acpi_bus_register_driver(&hpet_acpi_driver);
	if (result < 0) {
		if (sysctl_header)
			unregister_sysctl_table(sysctl_header);
		misc_deregister(&hpet_misc);
		return result;
	}

	return 0;
}
device_initcall(hpet_init);

/*
MODULE_AUTHOR("Bob Picco <Robert.Picco@hp.com>");
MODULE_LICENSE("GPL");
*/
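
/*
 * A minimal user-space sketch (kept in a comment so it is not part of
 * the driver) of the /dev/hpet interface registered above.  It uses the
 * HPET_INFO, HPET_IRQFREQ and HPET_IE_ON ioctls from <linux/hpet.h>;
 * whether periodic interrupts can actually be enabled depends on the
 * timer block having a routable IRQ, which is what hpet_resources()
 * collects from _CRS.  A sketch, not a tested program.
 *
 *	#include <fcntl.h>
 *	#include <stdio.h>
 *	#include <unistd.h>
 *	#include <sys/ioctl.h>
 *	#include <linux/hpet.h>
 *
 *	int main(void)
 *	{
 *		struct hpet_info info;
 *		unsigned long missed;
 *		int fd = open("/dev/hpet", O_RDONLY);
 *
 *		if (fd < 0 || ioctl(fd, HPET_INFO, &info) < 0)
 *			return 1;
 *		printf("timer %hu, flags 0x%lx\n", info.hi_timer, info.hi_flags);
 *
 *		// Ask for 64 Hz periodic interrupts, then block until one arrives;
 *		// read() returns the count of interrupts missed since the last read.
 *		if (ioctl(fd, HPET_IRQFREQ, 64UL) == 0 &&
 *		    ioctl(fd, HPET_IE_ON, 0) == 0)
 *			read(fd, &missed, sizeof(missed));
 *
 *		close(fd);
 *		return 0;
 *	}
 */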