Merge remote branch 'trond/bugfixes' into for-2.6.37

Without some client-side fixes, server testing is currently difficult.
This commit is contained in:
J. Bruce Fields 2010-09-19 23:48:00 -04:00
commit c88739b373
443 changed files with 5153 additions and 2559 deletions

View File

@ -46,7 +46,6 @@
<sect1><title>Atomic and pointer manipulation</title> <sect1><title>Atomic and pointer manipulation</title>
!Iarch/x86/include/asm/atomic.h !Iarch/x86/include/asm/atomic.h
!Iarch/x86/include/asm/unaligned.h
</sect1> </sect1>
<sect1><title>Delaying, scheduling, and timer routines</title> <sect1><title>Delaying, scheduling, and timer routines</title>

View File

@ -57,7 +57,6 @@
</para> </para>
<sect1><title>String Conversions</title> <sect1><title>String Conversions</title>
!Ilib/vsprintf.c
!Elib/vsprintf.c !Elib/vsprintf.c
</sect1> </sect1>
<sect1><title>String Manipulation</title> <sect1><title>String Manipulation</title>

View File

@ -1961,6 +1961,12 @@ machines due to caching.
</sect1> </sect1>
</chapter> </chapter>
<chapter id="apiref">
<title>Mutex API reference</title>
!Iinclude/linux/mutex.h
!Ekernel/mutex.c
</chapter>
<chapter id="references"> <chapter id="references">
<title>Further reading</title> <title>Further reading</title>

View File

@ -104,4 +104,9 @@
<title>Block IO</title> <title>Block IO</title>
!Iinclude/trace/events/block.h !Iinclude/trace/events/block.h
</chapter> </chapter>
<chapter id="workqueue">
<title>Workqueue</title>
!Iinclude/trace/events/workqueue.h
</chapter>
</book> </book>

View File

@ -0,0 +1,45 @@
CFQ ioscheduler tunables
========================
slice_idle
----------
This specifies how long CFQ should idle for the next request on certain cfq
queues (for sequential workloads) and service trees (for random workloads)
before the queue is expired and CFQ selects the next queue to dispatch from.

By default slice_idle is a non-zero value. That means by default we idle on
queues/service trees. This can be very helpful on highly seeky media like
single-spindle SATA/SAS disks, where we can cut down on the overall number of
seeks and see improved throughput.

Setting slice_idle to 0 will remove all idling at the queue/service-tree
level, and one should see overall improved throughput on faster storage
devices like multiple SATA/SAS disks in a hardware RAID configuration. The
downside is that the isolation provided from WRITES also goes down and the
notion of IO priority becomes weaker.

So, depending on storage and workload, it might be useful to set slice_idle=0.
In general, for SATA/SAS disks and software RAID of SATA/SAS disks, keeping
slice_idle enabled should be useful. For any configuration where there are
multiple spindles behind a single LUN (host-based hardware RAID controller or
storage arrays), setting slice_idle=0 might result in better throughput and
acceptable latencies.
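For example, a minimal userspace sketch that disables idling (assuming,
hypothetically, that the disk is sda and that it is using the cfq scheduler;
writing "0" to the file with echo from a shell is equivalent):

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		/* Hypothetical path: assumes sda uses the cfq io scheduler. */
		const char *path = "/sys/block/sda/queue/iosched/slice_idle";
		FILE *f = fopen(path, "w");

		if (!f) {
			perror("fopen");
			return EXIT_FAILURE;
		}
		/* 0 disables queue/service-tree idling, as described above. */
		fprintf(f, "0\n");
		fclose(f);
		return EXIT_SUCCESS;
	}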
CFQ IOPS Mode for group scheduling
==================================
The basic CFQ design is to provide priority-based time slices: a
higher-priority process gets a bigger time slice and a lower-priority process
gets a smaller one. Measuring time becomes harder if storage is fast and
supports NCQ, where it would be better to dispatch multiple requests from
multiple cfq queues into the request queue at a time. In such a scenario, it
is not possible to accurately measure the time consumed by a single queue.

What is possible, though, is to measure the number of requests dispatched
from a single queue while also allowing dispatch from multiple cfq queues at
the same time. This effectively becomes fairness in terms of IOPS (IO
operations per second).

If one sets slice_idle=0 and the storage supports NCQ, CFQ internally
switches to IOPS mode and starts providing fairness in terms of the number of
requests dispatched. Note that this mode switch takes effect only for group
scheduling. For non-cgroup users nothing should change.

View File

@ -217,6 +217,7 @@ Details of cgroup files
CFQ sysfs tunable CFQ sysfs tunable
================= =================
/sys/block/<disk>/queue/iosched/group_isolation /sys/block/<disk>/queue/iosched/group_isolation
-----------------------------------------------
If group_isolation=1, it provides stronger isolation between groups at the If group_isolation=1, it provides stronger isolation between groups at the
expense of throughput. By default group_isolation is 0. In general that expense of throughput. By default group_isolation is 0. In general that
@ -243,6 +244,33 @@ By default one should run with group_isolation=0. If that is not sufficient
and one wants stronger isolation between groups, then set group_isolation=1 and one wants stronger isolation between groups, then set group_isolation=1
but this will come at cost of reduced throughput. but this will come at cost of reduced throughput.
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue, and a single queue might
not drive deep enough request queue depths to keep the storage busy. In such
scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
(IO operations per second) mode on NCQ-supporting hardware.

That means CFQ will not idle between the cfq queues of a cfq group, and hence
will be able to drive higher queue depths and achieve better throughput. It
also means that cfq provides fairness among groups in terms of IOPS rather
than in terms of disk time.
/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in groups that are not driving enough IO to keep
the disk busy. In that case, set group_idle=0; CFQ will then not idle on
individual groups, and throughput should improve.
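As an illustration, a small sketch (again assuming a hypothetical disk sda
with cfq active) that turns group idling off only when queue idling has
already been disabled, per the guidance above:

	#include <stdio.h>

	int main(void)
	{
		const char *slice = "/sys/block/sda/queue/iosched/slice_idle";
		const char *group = "/sys/block/sda/queue/iosched/group_idle";
		unsigned int val = 1;
		FILE *f = fopen(slice, "r");

		if (f) {
			if (fscanf(f, "%u", &val) != 1)
				val = 1;	/* assume idling on read failure */
			fclose(f);
		}
		if (val == 0) {			/* already in IOPS mode */
			f = fopen(group, "w");
			if (f) {
				fprintf(f, "0\n");	/* stop idling on groups */
				fclose(f);
			}
		}
		return 0;
	}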
What works What works
========== ==========
- Currently only sync IO queues are supported. All the buffered writes are - Currently only sync IO queues are supported. All the buffered writes are

View File

@ -109,17 +109,19 @@ use numbers 2000-2063 to identify GPIOs in a bank of I2C GPIO expanders.
If you want to initialize a structure with an invalid GPIO number, use If you want to initialize a structure with an invalid GPIO number, use
some negative number (perhaps "-EINVAL"); that will never be valid. To some negative number (perhaps "-EINVAL"); that will never be valid. To
test if a number could reference a GPIO, you may use this predicate: test if such a number from such a structure could reference a GPIO, you
may use this predicate:
int gpio_is_valid(int number); int gpio_is_valid(int number);
A number that's not valid will be rejected by calls which may request A number that's not valid will be rejected by calls which may request
or free GPIOs (see below). Other numbers may also be rejected; for or free GPIOs (see below). Other numbers may also be rejected; for
example, a number might be valid but unused on a given board. example, a number might be valid but temporarily unused on a given board.
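A short sketch of that pattern, using a hypothetical board structure and
GPIO label:

	#include <linux/errno.h>
	#include <linux/gpio.h>

	/* Hypothetical board data; -EINVAL marks "no GPIO wired up". */
	struct board_data {
		int power_gpio;
	};

	static struct board_data bd = {
		.power_gpio = -EINVAL,	/* invalid by construction */
	};

	static int board_setup(void)
	{
		/* Only request the line if the number could be a real GPIO. */
		if (gpio_is_valid(bd.power_gpio))
			return gpio_request(bd.power_gpio, "power");
		return 0;		/* nothing to request on this board */
	}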
Whether a platform supports multiple GPIO controllers is currently a
platform-specific implementation issue.
Whether a platform supports multiple GPIO controllers is a platform-specific
implementation issue, as are whether that support can leave "holes" in the space
of GPIO numbers, and whether new controllers can be added at runtime. Such issues
can affect things including whether adjacent GPIO numbers are both valid.
Using GPIOs Using GPIOs
----------- -----------
@ -480,12 +482,16 @@ To support this framework, a platform's Kconfig will "select" either
ARCH_REQUIRE_GPIOLIB or ARCH_WANT_OPTIONAL_GPIOLIB ARCH_REQUIRE_GPIOLIB or ARCH_WANT_OPTIONAL_GPIOLIB
and arrange that its <asm/gpio.h> includes <asm-generic/gpio.h> and defines and arrange that its <asm/gpio.h> includes <asm-generic/gpio.h> and defines
three functions: gpio_get_value(), gpio_set_value(), and gpio_cansleep(). three functions: gpio_get_value(), gpio_set_value(), and gpio_cansleep().
They may also want to provide a custom value for ARCH_NR_GPIOS.
It may also provide a custom value for ARCH_NR_GPIOS, so that it better
reflects the number of GPIOs in actual use on that platform, without
wasting static table space. (It should count both built-in/SoC GPIOs and
also ones on GPIO expanders.)
ARCH_REQUIRE_GPIOLIB means that the gpio-lib code will always get compiled ARCH_REQUIRE_GPIOLIB means that the gpiolib code will always get compiled
into the kernel on that architecture. into the kernel on that architecture.
ARCH_WANT_OPTIONAL_GPIOLIB means the gpio-lib code defaults to off and the user ARCH_WANT_OPTIONAL_GPIOLIB means the gpiolib code defaults to off and the user
can enable it and build it into the kernel optionally. can enable it and build it into the kernel optionally.
If neither of these options are selected, the platform does not support If neither of these options are selected, the platform does not support
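To make the three-function requirement concrete, here is a minimal sketch of
such an <asm/gpio.h>, for a hypothetical platform that always uses gpiolib
and can therefore map straight onto the generic implementations:

	/* arch/foo/include/asm/gpio.h -- hypothetical platform */
	#ifndef __ASM_FOO_GPIO_H
	#define __ASM_FOO_GPIO_H

	#include <asm-generic/gpio.h>

	/* gpiolib is always built in, so alias the three required
	 * functions to the generic versions it provides. */
	#define gpio_get_value	__gpio_get_value
	#define gpio_set_value	__gpio_set_value
	#define gpio_cansleep	__gpio_cansleep

	#endif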

View File

@ -345,5 +345,10 @@ documentation, in <filename>, for the functions listed.
section titled <section title> from <filename>. section titled <section title> from <filename>.
Spaces are allowed in <section title>; do not quote the <section title>. Spaces are allowed in <section title>; do not quote the <section title>.
!C<filename> is replaced by nothing, but makes the tools check that
all DOC: sections and documented functions, symbols, etc. are used.
This is useful when you use only !F/!P and want to verify
that all documentation is included.
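For example, a template that documents only two functions with !F might add
a !C line so the tools can flag anything left out (the file and function
names here are hypothetical):

	!Cdrivers/foo/foo.c
	!Fdrivers/foo/foo.c foo_probe foo_remove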
Tim. Tim.
*/ <twaugh@redhat.com> */ <twaugh@redhat.com>

View File

@ -1974,15 +1974,18 @@ and is between 256 and 4096 characters. It is defined in the file
force Enable ASPM even on devices that claim not to support it. force Enable ASPM even on devices that claim not to support it.
WARNING: Forcing ASPM on may cause system lockups. WARNING: Forcing ASPM on may cause system lockups.
pcie_ports= [PCIE] PCIe ports handling:
auto Ask the BIOS whether or not to use native PCIe services
associated with PCIe ports (PME, hot-plug, AER). Use
them only if that is allowed by the BIOS.
native Use native PCIe services associated with PCIe ports
unconditionally.
compat Treat PCIe ports as PCI-to-PCI bridges, disable the PCIe
ports driver.
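For example, booting with "pcie_ports=compat" disables the PCIe ports
driver entirely and falls back to plain PCI-to-PCI bridge handling.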
pcie_pme= [PCIE,PM] Native PCIe PME signaling options: pcie_pme= [PCIE,PM] Native PCIe PME signaling options:
Format: {auto|force}[,nomsi]
auto Use native PCIe PME signaling if the BIOS allows the
kernel to control PCIe config registers of root ports.
force Use native PCIe PME signaling even if the BIOS refuses
to allow the kernel to control the relevant PCIe config
registers.
nomsi Do not use MSI for native PCIe PME signaling (this makes nomsi Do not use MSI for native PCIe PME signaling (this makes
all PCIe root ports use INTx for everything). all PCIe root ports use INTx for all services).
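For example, "pcie_pme=force,nomsi" requests native PME signaling even
against the BIOS's wishes while steering all root-port services to INTx.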
pcmv= [HW,PCMCIA] BadgePAD 4 pcmv= [HW,PCMCIA] BadgePAD 4

View File

@ -9,7 +9,7 @@ firstly, there's nothing wrong with semaphores. But if the simpler
mutex semantics are sufficient for your code, then there are a couple mutex semantics are sufficient for your code, then there are a couple
of advantages of mutexes: of advantages of mutexes:
- 'struct mutex' is smaller on most architectures: .e.g on x86, - 'struct mutex' is smaller on most architectures: E.g. on x86,
'struct semaphore' is 20 bytes, 'struct mutex' is 16 bytes. 'struct semaphore' is 20 bytes, 'struct mutex' is 16 bytes.
A smaller structure size means less RAM footprint, and better A smaller structure size means less RAM footprint, and better
CPU-cache utilization. CPU-cache utilization.
@ -136,3 +136,4 @@ the APIs of 'struct mutex' have been streamlined:
void mutex_lock_nested(struct mutex *lock, unsigned int subclass); void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
int mutex_lock_interruptible_nested(struct mutex *lock, int mutex_lock_interruptible_nested(struct mutex *lock,
unsigned int subclass); unsigned int subclass);
int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
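A sketch of the classic use of that last call, dropping a reference count
and taking the mutex only when the count hits zero (the object and its
fields are hypothetical):

	#include <linux/mutex.h>
	#include <linux/slab.h>

	struct obj {
		atomic_t refcnt;
		struct mutex lock;	/* serializes teardown */
	};

	static void obj_put(struct obj *o)
	{
		/* Returns 1, with o->lock held, only when refcnt hits 0. */
		if (atomic_dec_and_mutex_lock(&o->refcnt, &o->lock)) {
			/* last reference: tear down under the lock */
			mutex_unlock(&o->lock);
			kfree(o);
		}
	}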

View File

@ -296,6 +296,7 @@ Conexant 5051
Conexant 5066 Conexant 5066
============= =============
laptop Basic Laptop config (default) laptop Basic Laptop config (default)
hp-laptop HP laptops, e.g. G60
dell-laptop Dell laptops dell-laptop Dell laptops
dell-vostro Dell Vostro dell-vostro Dell Vostro
olpc-xo-1_5 OLPC XO 1.5 olpc-xo-1_5 OLPC XO 1.5

View File

@ -1445,6 +1445,16 @@ S: Maintained
F: Documentation/video4linux/cafe_ccic F: Documentation/video4linux/cafe_ccic
F: drivers/media/video/cafe_ccic* F: drivers/media/video/cafe_ccic*
CAIF NETWORK LAYER
M: Sjur Braendeland <sjur.brandeland@stericsson.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/caif/
F: drivers/net/caif/
F: include/linux/caif/
F: include/net/caif/
F: net/caif/
CALGARY x86-64 IOMMU CALGARY x86-64 IOMMU
M: Muli Ben-Yehuda <muli@il.ibm.com> M: Muli Ben-Yehuda <muli@il.ibm.com>
M: "Jon D. Mason" <jdmason@kudzu.us> M: "Jon D. Mason" <jdmason@kudzu.us>
@ -2201,6 +2211,12 @@ L: linux-rdma@vger.kernel.org
S: Supported S: Supported
F: drivers/infiniband/hw/ehca/ F: drivers/infiniband/hw/ehca/
EHEA (IBM pSeries eHEA 10Gb ethernet adapter) DRIVER
M: Breno Leitao <leitao@linux.vnet.ibm.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ehea/
EMBEDDED LINUX EMBEDDED LINUX
M: Paul Gortmaker <paul.gortmaker@windriver.com> M: Paul Gortmaker <paul.gortmaker@windriver.com>
M: Matt Mackall <mpm@selenic.com> M: Matt Mackall <mpm@selenic.com>
@ -2781,11 +2797,6 @@ S: Maintained
F: arch/x86/kernel/hpet.c F: arch/x86/kernel/hpet.c
F: arch/x86/include/asm/hpet.h F: arch/x86/include/asm/hpet.h
HPET: ACPI
M: Bob Picco <bob.picco@hp.com>
S: Maintained
F: drivers/char/hpet.c
HPFS FILESYSTEM HPFS FILESYSTEM
M: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz> M: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
W: http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi W: http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi
@ -3398,7 +3409,7 @@ F: drivers/s390/kvm/
KEXEC KEXEC
M: Eric Biederman <ebiederm@xmission.com> M: Eric Biederman <ebiederm@xmission.com>
W: http://ftp.kernel.org/pub/linux/kernel/people/horms/kexec-tools/ W: http://kernel.org/pub/linux/utils/kernel/kexec/
L: kexec@lists.infradead.org L: kexec@lists.infradead.org
S: Maintained S: Maintained
F: include/linux/kexec.h F: include/linux/kexec.h
@ -3923,8 +3934,7 @@ F: Documentation/sound/oss/MultiSound
F: sound/oss/msnd* F: sound/oss/msnd*
MULTITECH MULTIPORT CARD (ISICOM) MULTITECH MULTIPORT CARD (ISICOM)
M: Jiri Slaby <jirislaby@gmail.com> S: Orphan
S: Maintained
F: drivers/char/isicom.c F: drivers/char/isicom.c
F: include/linux/isicom.h F: include/linux/isicom.h
@ -4604,7 +4614,7 @@ F: include/linux/preempt.h
PRISM54 WIRELESS DRIVER PRISM54 WIRELESS DRIVER
M: "Luis R. Rodriguez" <mcgrof@gmail.com> M: "Luis R. Rodriguez" <mcgrof@gmail.com>
L: linux-wireless@vger.kernel.org L: linux-wireless@vger.kernel.org
W: http://prism54.org W: http://wireless.kernel.org/en/users/Drivers/p54
S: Obsolete S: Obsolete
F: drivers/net/wireless/prism54/ F: drivers/net/wireless/prism54/
@ -4805,6 +4815,7 @@ RCUTORTURE MODULE
M: Josh Triplett <josh@freedesktop.org> M: Josh Triplett <josh@freedesktop.org>
M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
S: Supported S: Supported
T: git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-2.6-rcu.git
F: Documentation/RCU/torture.txt F: Documentation/RCU/torture.txt
F: kernel/rcutorture.c F: kernel/rcutorture.c
@ -4829,6 +4840,7 @@ M: Dipankar Sarma <dipankar@in.ibm.com>
M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
W: http://www.rdrop.com/users/paulmck/rclock/ W: http://www.rdrop.com/users/paulmck/rclock/
S: Supported S: Supported
T: git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-2.6-rcu.git
F: Documentation/RCU/ F: Documentation/RCU/
F: include/linux/rcu* F: include/linux/rcu*
F: include/linux/srcu* F: include/linux/srcu*
@ -4836,12 +4848,10 @@ F: kernel/rcu*
F: kernel/srcu* F: kernel/srcu*
X: kernel/rcutorture.c X: kernel/rcutorture.c
REAL TIME CLOCK DRIVER REAL TIME CLOCK DRIVER (LEGACY)
M: Paul Gortmaker <p_gortmaker@yahoo.com> M: Paul Gortmaker <p_gortmaker@yahoo.com>
S: Maintained S: Maintained
F: Documentation/rtc.txt F: drivers/char/rtc.c
F: drivers/rtc/
F: include/linux/rtc.h
REAL TIME CLOCK (RTC) SUBSYSTEM REAL TIME CLOCK (RTC) SUBSYSTEM
M: Alessandro Zummo <a.zummo@towertech.it> M: Alessandro Zummo <a.zummo@towertech.it>

View File

@ -1,7 +1,7 @@
VERSION = 2 VERSION = 2
PATCHLEVEL = 6 PATCHLEVEL = 6
SUBLEVEL = 36 SUBLEVEL = 36
EXTRAVERSION = -rc3 EXTRAVERSION = -rc4
NAME = Sheep on Meth NAME = Sheep on Meth
# *DOCUMENTATION* # *DOCUMENTATION*

View File

@ -17,7 +17,6 @@
# define L1_CACHE_SHIFT 5 # define L1_CACHE_SHIFT 5
#endif #endif
#define L1_CACHE_ALIGN(x) (((x)+(L1_CACHE_BYTES-1))&~(L1_CACHE_BYTES-1))
#define SMP_CACHE_BYTES L1_CACHE_BYTES #define SMP_CACHE_BYTES L1_CACHE_BYTES
#endif #endif

View File

@ -109,7 +109,7 @@ marvel_print_err_cyc(u64 err_cyc)
#define IO7__ERR_CYC__CYCLE__M (0x7) #define IO7__ERR_CYC__CYCLE__M (0x7)
printk("%s Packet In Error: %s\n" printk("%s Packet In Error: %s\n"
"%s Error in %s, cycle %ld%s%s\n", "%s Error in %s, cycle %lld%s%s\n",
err_print_prefix, err_print_prefix,
packet_desc[EXTRACT(err_cyc, IO7__ERR_CYC__PACKET)], packet_desc[EXTRACT(err_cyc, IO7__ERR_CYC__PACKET)],
err_print_prefix, err_print_prefix,
@ -313,7 +313,7 @@ marvel_print_po7_ugbge_sym(u64 ugbge_sym)
} }
printk("%s Up Hose Garbage Symptom:\n" printk("%s Up Hose Garbage Symptom:\n"
"%s Source Port: %ld - Dest PID: %ld - OpCode: %s\n", "%s Source Port: %lld - Dest PID: %lld - OpCode: %s\n",
err_print_prefix, err_print_prefix,
err_print_prefix, err_print_prefix,
EXTRACT(ugbge_sym, IO7__PO7_UGBGE_SYM__UPH_SRC_PORT), EXTRACT(ugbge_sym, IO7__PO7_UGBGE_SYM__UPH_SRC_PORT),
@ -552,7 +552,7 @@ marvel_print_pox_spl_cmplt(u64 spl_cmplt)
#define IO7__POX_SPLCMPLT__REM_BYTE_COUNT__M (0xfff) #define IO7__POX_SPLCMPLT__REM_BYTE_COUNT__M (0xfff)
printk("%s Split Completion Error:\n" printk("%s Split Completion Error:\n"
"%s Source (Bus:Dev:Func): %ld:%ld:%ld\n", "%s Source (Bus:Dev:Func): %lld:%lld:%lld\n",
err_print_prefix, err_print_prefix,
err_print_prefix, err_print_prefix,
EXTRACT(spl_cmplt, IO7__POX_SPLCMPLT__SOURCE_BUS), EXTRACT(spl_cmplt, IO7__POX_SPLCMPLT__SOURCE_BUS),

View File

@ -241,20 +241,20 @@ static inline unsigned long alpha_read_pmc(int idx)
static int alpha_perf_event_set_period(struct perf_event *event, static int alpha_perf_event_set_period(struct perf_event *event,
struct hw_perf_event *hwc, int idx) struct hw_perf_event *hwc, int idx)
{ {
long left = atomic64_read(&hwc->period_left); long left = local64_read(&hwc->period_left);
long period = hwc->sample_period; long period = hwc->sample_period;
int ret = 0; int ret = 0;
if (unlikely(left <= -period)) { if (unlikely(left <= -period)) {
left = period; left = period;
atomic64_set(&hwc->period_left, left); local64_set(&hwc->period_left, left);
hwc->last_period = period; hwc->last_period = period;
ret = 1; ret = 1;
} }
if (unlikely(left <= 0)) { if (unlikely(left <= 0)) {
left += period; left += period;
atomic64_set(&hwc->period_left, left); local64_set(&hwc->period_left, left);
hwc->last_period = period; hwc->last_period = period;
ret = 1; ret = 1;
} }
@ -269,7 +269,7 @@ static int alpha_perf_event_set_period(struct perf_event *event,
if (left > (long)alpha_pmu->pmc_max_period[idx]) if (left > (long)alpha_pmu->pmc_max_period[idx])
left = alpha_pmu->pmc_max_period[idx]; left = alpha_pmu->pmc_max_period[idx];
atomic64_set(&hwc->prev_count, (unsigned long)(-left)); local64_set(&hwc->prev_count, (unsigned long)(-left));
alpha_write_pmc(idx, (unsigned long)(-left)); alpha_write_pmc(idx, (unsigned long)(-left));
@ -300,10 +300,10 @@ static unsigned long alpha_perf_event_update(struct perf_event *event,
long delta; long delta;
again: again:
prev_raw_count = atomic64_read(&hwc->prev_count); prev_raw_count = local64_read(&hwc->prev_count);
new_raw_count = alpha_read_pmc(idx); new_raw_count = alpha_read_pmc(idx);
if (atomic64_cmpxchg(&hwc->prev_count, prev_raw_count, if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
new_raw_count) != prev_raw_count) new_raw_count) != prev_raw_count)
goto again; goto again;
@ -316,8 +316,8 @@ again:
delta += alpha_pmu->pmc_max_period[idx] + 1; delta += alpha_pmu->pmc_max_period[idx] + 1;
} }
atomic64_add(delta, &event->count); local64_add(delta, &event->count);
atomic64_sub(delta, &hwc->period_left); local64_sub(delta, &hwc->period_left);
return new_raw_count; return new_raw_count;
} }
@ -636,7 +636,7 @@ static int __hw_perf_event_init(struct perf_event *event)
if (!hwc->sample_period) { if (!hwc->sample_period) {
hwc->sample_period = alpha_pmu->pmc_max_period[0]; hwc->sample_period = alpha_pmu->pmc_max_period[0];
hwc->last_period = hwc->sample_period; hwc->last_period = hwc->sample_period;
atomic64_set(&hwc->period_left, hwc->sample_period); local64_set(&hwc->period_left, hwc->sample_period);
} }
return 0; return 0;

View File

@ -156,9 +156,6 @@ extern void SMC669_Init(int);
/* es1888.c */ /* es1888.c */
extern void es1888_init(void); extern void es1888_init(void);
/* ns87312.c */
extern void ns87312_enable_ide(long ide_base);
/* ../lib/fpreg.c */ /* ../lib/fpreg.c */
extern void alpha_write_fp_reg (unsigned long reg, unsigned long val); extern void alpha_write_fp_reg (unsigned long reg, unsigned long val);
extern unsigned long alpha_read_fp_reg (unsigned long reg); extern unsigned long alpha_read_fp_reg (unsigned long reg);

View File

@ -33,7 +33,7 @@
#include "irq_impl.h" #include "irq_impl.h"
#include "pci_impl.h" #include "pci_impl.h"
#include "machvec_impl.h" #include "machvec_impl.h"
#include "pc873xx.h"
/* Note mask bit is true for DISABLED irqs. */ /* Note mask bit is true for DISABLED irqs. */
static unsigned long cached_irq_mask = ~0UL; static unsigned long cached_irq_mask = ~0UL;
@ -235,18 +235,31 @@ cabriolet_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
return COMMON_TABLE_LOOKUP; return COMMON_TABLE_LOOKUP;
} }
static inline void __init
cabriolet_enable_ide(void)
{
if (pc873xx_probe() == -1) {
printk(KERN_ERR "Probing for PC873xx Super IO chip failed.\n");
} else {
printk(KERN_INFO "Found %s Super IO chip at 0x%x\n",
pc873xx_get_model(), pc873xx_get_base());
pc873xx_enable_ide();
}
}
static inline void __init static inline void __init
cabriolet_init_pci(void) cabriolet_init_pci(void)
{ {
common_init_pci(); common_init_pci();
ns87312_enable_ide(0x398); cabriolet_enable_ide();
} }
static inline void __init static inline void __init
cia_cab_init_pci(void) cia_cab_init_pci(void)
{ {
cia_init_pci(); cia_init_pci();
ns87312_enable_ide(0x398); cabriolet_enable_ide();
} }
/* /*

View File

@ -29,7 +29,7 @@
#include "irq_impl.h" #include "irq_impl.h"
#include "pci_impl.h" #include "pci_impl.h"
#include "machvec_impl.h" #include "machvec_impl.h"
#include "pc873xx.h"
/* Note mask bit is true for DISABLED irqs. */ /* Note mask bit is true for DISABLED irqs. */
static unsigned long cached_irq_mask[2] = { -1, -1 }; static unsigned long cached_irq_mask[2] = { -1, -1 };
@ -264,7 +264,14 @@ takara_init_pci(void)
alpha_mv.pci_map_irq = takara_map_irq_srm; alpha_mv.pci_map_irq = takara_map_irq_srm;
cia_init_pci(); cia_init_pci();
ns87312_enable_ide(0x26e);
if (pc873xx_probe() == -1) {
printk(KERN_ERR "Probing for PC873xx Super IO chip failed.\n");
} else {
printk(KERN_INFO "Found %s Super IO chip at 0x%x\n",
pc873xx_get_model(), pc873xx_get_base());
pc873xx_enable_ide();
}
} }

View File

@ -1576,96 +1576,6 @@ config AUTO_ZRELADDR
0xf8000000. This assumes the zImage being placed in the first 128MB 0xf8000000. This assumes the zImage being placed in the first 128MB
from start of memory. from start of memory.
config ZRELADDR
hex "Physical address of the decompressed kernel image"
depends on !AUTO_ZRELADDR
default 0x00008000 if ARCH_BCMRING ||\
ARCH_CNS3XXX ||\
ARCH_DOVE ||\
ARCH_EBSA110 ||\
ARCH_FOOTBRIDGE ||\
ARCH_INTEGRATOR ||\
ARCH_IOP13XX ||\
ARCH_IOP33X ||\
ARCH_IXP2000 ||\
ARCH_IXP23XX ||\
ARCH_IXP4XX ||\
ARCH_KIRKWOOD ||\
ARCH_KS8695 ||\
ARCH_LOKI ||\
ARCH_MMP ||\
ARCH_MV78XX0 ||\
ARCH_NOMADIK ||\
ARCH_NUC93X ||\
ARCH_NS9XXX ||\
ARCH_ORION5X ||\
ARCH_SPEAR3XX ||\
ARCH_SPEAR6XX ||\
ARCH_U8500 ||\
ARCH_VERSATILE ||\
ARCH_W90X900
default 0x08008000 if ARCH_MX1 ||\
ARCH_SHARK
default 0x10008000 if ARCH_MSM ||\
ARCH_OMAP1 ||\
ARCH_RPC
default 0x20008000 if ARCH_S5P6440 ||\
ARCH_S5P6442 ||\
ARCH_S5PC100 ||\
ARCH_S5PV210
default 0x30008000 if ARCH_S3C2410 ||\
ARCH_S3C2400 ||\
ARCH_S3C2412 ||\
ARCH_S3C2416 ||\
ARCH_S3C2440 ||\
ARCH_S3C2443
default 0x40008000 if ARCH_STMP378X ||\
ARCH_STMP37XX ||\
ARCH_SH7372 ||\
ARCH_SH7377 ||\
ARCH_S5PV310
default 0x50008000 if ARCH_S3C64XX ||\
ARCH_SH7367
default 0x60008000 if ARCH_VEXPRESS
default 0x80008000 if ARCH_MX25 ||\
ARCH_MX3 ||\
ARCH_NETX ||\
ARCH_OMAP2PLUS ||\
ARCH_PNX4008
default 0x90008000 if ARCH_MX5 ||\
ARCH_MX91231
default 0xa0008000 if ARCH_IOP32X ||\
ARCH_PXA ||\
MACH_MX27
default 0xc0008000 if ARCH_LH7A40X ||\
MACH_MX21
default 0xf0008000 if ARCH_AAEC2000 ||\
ARCH_L7200
default 0xc0028000 if ARCH_CLPS711X
default 0x70008000 if ARCH_AT91 && (ARCH_AT91CAP9 || ARCH_AT91SAM9G45)
default 0x20008000 if ARCH_AT91 && !(ARCH_AT91CAP9 || ARCH_AT91SAM9G45)
default 0xc0008000 if ARCH_DAVINCI && ARCH_DAVINCI_DA8XX
default 0x80008000 if ARCH_DAVINCI && !ARCH_DAVINCI_DA8XX
default 0x00008000 if ARCH_EP93XX && EP93XX_SDCE3_SYNC_PHYS_OFFSET
default 0xc0008000 if ARCH_EP93XX && EP93XX_SDCE0_PHYS_OFFSET
default 0xd0008000 if ARCH_EP93XX && EP93XX_SDCE1_PHYS_OFFSET
default 0xe0008000 if ARCH_EP93XX && EP93XX_SDCE2_PHYS_OFFSET
default 0xf0008000 if ARCH_EP93XX && EP93XX_SDCE3_ASYNC_PHYS_OFFSET
default 0x00008000 if ARCH_GEMINI && GEMINI_MEM_SWAP
default 0x10008000 if ARCH_GEMINI && !GEMINI_MEM_SWAP
default 0x70008000 if ARCH_REALVIEW && REALVIEW_HIGH_PHYS_OFFSET
default 0x00008000 if ARCH_REALVIEW && !REALVIEW_HIGH_PHYS_OFFSET
default 0xc0208000 if ARCH_SA1100 && SA1111
default 0xc0008000 if ARCH_SA1100 && !SA1111
default 0x30108000 if ARCH_S3C2410 && PM_H1940
default 0x28E08000 if ARCH_U300 && MACH_U300_SINGLE_RAM
default 0x48008000 if ARCH_U300 && !MACH_U300_SINGLE_RAM
help
ZRELADDR is the physical address where the decompressed kernel
image will be placed. ZRELADDR has to be specified when the
assumption of AUTO_ZRELADDR is not valid, or when ZBOOT_ROM is
selected.
endmenu endmenu
menu "CPU Power Management" menu "CPU Power Management"

View File

@ -14,16 +14,18 @@
MKIMAGE := $(srctree)/scripts/mkuboot.sh MKIMAGE := $(srctree)/scripts/mkuboot.sh
ifneq ($(MACHINE),) ifneq ($(MACHINE),)
-include $(srctree)/$(MACHINE)/Makefile.boot include $(srctree)/$(MACHINE)/Makefile.boot
endif endif
# Note: the following conditions must always be true: # Note: the following conditions must always be true:
# ZRELADDR == virt_to_phys(PAGE_OFFSET + TEXT_OFFSET)
# PARAMS_PHYS must be within 4MB of ZRELADDR # PARAMS_PHYS must be within 4MB of ZRELADDR
# INITRD_PHYS must be in RAM # INITRD_PHYS must be in RAM
ZRELADDR := $(zreladdr-y)
PARAMS_PHYS := $(params_phys-y) PARAMS_PHYS := $(params_phys-y)
INITRD_PHYS := $(initrd_phys-y) INITRD_PHYS := $(initrd_phys-y)
export INITRD_PHYS PARAMS_PHYS export ZRELADDR INITRD_PHYS PARAMS_PHYS
targets := Image zImage xipImage bootpImage uImage targets := Image zImage xipImage bootpImage uImage
@ -65,7 +67,7 @@ quiet_cmd_uimage = UIMAGE $@
ifeq ($(CONFIG_ZBOOT_ROM),y) ifeq ($(CONFIG_ZBOOT_ROM),y)
$(obj)/uImage: LOADADDR=$(CONFIG_ZBOOT_ROM_TEXT) $(obj)/uImage: LOADADDR=$(CONFIG_ZBOOT_ROM_TEXT)
else else
$(obj)/uImage: LOADADDR=$(CONFIG_ZRELADDR) $(obj)/uImage: LOADADDR=$(ZRELADDR)
endif endif
ifeq ($(CONFIG_THUMB2_KERNEL),y) ifeq ($(CONFIG_THUMB2_KERNEL),y)

View File

@ -79,6 +79,10 @@ endif
EXTRA_CFLAGS := -fpic -fno-builtin EXTRA_CFLAGS := -fpic -fno-builtin
EXTRA_AFLAGS := -Wa,-march=all EXTRA_AFLAGS := -Wa,-march=all
# Supply ZRELADDR to the decompressor via a linker symbol.
ifneq ($(CONFIG_AUTO_ZRELADDR),y)
LDFLAGS_vmlinux := --defsym zreladdr=$(ZRELADDR)
endif
ifeq ($(CONFIG_CPU_ENDIAN_BE8),y) ifeq ($(CONFIG_CPU_ENDIAN_BE8),y)
LDFLAGS_vmlinux += --be8 LDFLAGS_vmlinux += --be8
endif endif

View File

@ -177,7 +177,7 @@ not_angel:
and r4, pc, #0xf8000000 and r4, pc, #0xf8000000
add r4, r4, #TEXT_OFFSET add r4, r4, #TEXT_OFFSET
#else #else
ldr r4, =CONFIG_ZRELADDR ldr r4, =zreladdr
#endif #endif
subs r0, r0, r1 @ calculate the delta offset subs r0, r0, r1 @ calculate the delta offset

View File

@ -263,6 +263,14 @@ static int it8152_pci_platform_notify_remove(struct device *dev)
return 0; return 0;
} }
int dma_needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t size)
{
dev_dbg(dev, "%s: dma_addr %08x, size %08x\n",
__func__, dma_addr, size);
return (dev->bus == &pci_bus_type) &&
((dma_addr + size - PHYS_OFFSET) >= SZ_64M);
}
int __init it8152_pci_setup(int nr, struct pci_sys_data *sys) int __init it8152_pci_setup(int nr, struct pci_sys_data *sys)
{ {
it8152_io.start = IT8152_IO_BASE + 0x12000; it8152_io.start = IT8152_IO_BASE + 0x12000;

View File

@ -288,15 +288,7 @@ extern void dmabounce_unregister_dev(struct device *);
* DMA access and 1 if the buffer needs to be bounced. * DMA access and 1 if the buffer needs to be bounced.
* *
*/ */
#ifdef CONFIG_SA1111
extern int dma_needs_bounce(struct device*, dma_addr_t, size_t); extern int dma_needs_bounce(struct device*, dma_addr_t, size_t);
#else
static inline int dma_needs_bounce(struct device *dev, dma_addr_t addr,
size_t size)
{
return 0;
}
#endif
/* /*
* The DMA API, implemented by dmabounce.c. See below for descriptions. * The DMA API, implemented by dmabounce.c. See below for descriptions.

View File

@ -17,7 +17,7 @@
* counter interrupts are regular interrupts and not an NMI. This * counter interrupts are regular interrupts and not an NMI. This
* means that when we receive the interrupt we can call * means that when we receive the interrupt we can call
* perf_event_do_pending() that handles all of the work with * perf_event_do_pending() that handles all of the work with
* interrupts enabled. * interrupts disabled.
*/ */
static inline void static inline void
set_perf_event_pending(void) set_perf_event_pending(void)

View File

@ -393,6 +393,9 @@
#define __NR_perf_event_open (__NR_SYSCALL_BASE+364) #define __NR_perf_event_open (__NR_SYSCALL_BASE+364)
#define __NR_recvmmsg (__NR_SYSCALL_BASE+365) #define __NR_recvmmsg (__NR_SYSCALL_BASE+365)
#define __NR_accept4 (__NR_SYSCALL_BASE+366) #define __NR_accept4 (__NR_SYSCALL_BASE+366)
#define __NR_fanotify_init (__NR_SYSCALL_BASE+367)
#define __NR_fanotify_mark (__NR_SYSCALL_BASE+368)
#define __NR_prlimit64 (__NR_SYSCALL_BASE+369)
/* /*
* The following SWIs are ARM private. * The following SWIs are ARM private.

View File

@ -376,6 +376,9 @@
CALL(sys_perf_event_open) CALL(sys_perf_event_open)
/* 365 */ CALL(sys_recvmmsg) /* 365 */ CALL(sys_recvmmsg)
CALL(sys_accept4) CALL(sys_accept4)
CALL(sys_fanotify_init)
CALL(sys_fanotify_mark)
CALL(sys_prlimit64)
#ifndef syscalls_counted #ifndef syscalls_counted
.equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls .equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
#define syscalls_counted #define syscalls_counted

View File

@ -319,8 +319,8 @@ validate_event(struct cpu_hw_events *cpuc,
{ {
struct hw_perf_event fake_event = event->hw; struct hw_perf_event fake_event = event->hw;
if (event->pmu && event->pmu != &pmu) if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF)
return 0; return 1;
return armpmu->get_event_idx(cpuc, &fake_event) >= 0; return armpmu->get_event_idx(cpuc, &fake_event) >= 0;
} }
@ -1041,8 +1041,8 @@ armv6pmu_handle_irq(int irq_num,
/* /*
* Handle the pending perf events. * Handle the pending perf events.
* *
* Note: this call *must* be run with interrupts enabled. For * Note: this call *must* be run with interrupts disabled. For
* platforms that can have the PMU interrupts raised as a PMI, this * platforms that can have the PMU interrupts raised as an NMI, this
* will not work. * will not work.
*/ */
perf_event_do_pending(); perf_event_do_pending();
@ -2017,8 +2017,8 @@ static irqreturn_t armv7pmu_handle_irq(int irq_num, void *dev)
/* /*
* Handle the pending perf events. * Handle the pending perf events.
* *
* Note: this call *must* be run with interrupts enabled. For * Note: this call *must* be run with interrupts disabled. For
* platforms that can have the PMU interrupts raised as a PMI, this * platforms that can have the PMU interrupts raised as an NMI, this
* will not work. * will not work.
*/ */
perf_event_do_pending(); perf_event_do_pending();

View File

@ -121,8 +121,8 @@ static struct clk ssc1_clk = {
.pmc_mask = 1 << AT91SAM9G45_ID_SSC1, .pmc_mask = 1 << AT91SAM9G45_ID_SSC1,
.type = CLK_TYPE_PERIPHERAL, .type = CLK_TYPE_PERIPHERAL,
}; };
static struct clk tcb_clk = { static struct clk tcb0_clk = {
.name = "tcb_clk", .name = "tcb0_clk",
.pmc_mask = 1 << AT91SAM9G45_ID_TCB, .pmc_mask = 1 << AT91SAM9G45_ID_TCB,
.type = CLK_TYPE_PERIPHERAL, .type = CLK_TYPE_PERIPHERAL,
}; };
@ -192,6 +192,14 @@ static struct clk ohci_clk = {
.parent = &uhphs_clk, .parent = &uhphs_clk,
}; };
/* One additional fake clock for second TC block */
static struct clk tcb1_clk = {
.name = "tcb1_clk",
.pmc_mask = 0,
.type = CLK_TYPE_PERIPHERAL,
.parent = &tcb0_clk,
};
static struct clk *periph_clocks[] __initdata = { static struct clk *periph_clocks[] __initdata = {
&pioA_clk, &pioA_clk,
&pioB_clk, &pioB_clk,
@ -208,7 +216,7 @@ static struct clk *periph_clocks[] __initdata = {
&spi1_clk, &spi1_clk,
&ssc0_clk, &ssc0_clk,
&ssc1_clk, &ssc1_clk,
&tcb_clk, &tcb0_clk,
&pwm_clk, &pwm_clk,
&tsc_clk, &tsc_clk,
&dma_clk, &dma_clk,
@ -221,6 +229,7 @@ static struct clk *periph_clocks[] __initdata = {
&mmc1_clk, &mmc1_clk,
// irq0 // irq0
&ohci_clk, &ohci_clk,
&tcb1_clk,
}; };
/* /*

View File

@ -46,7 +46,7 @@ static struct resource hdmac_resources[] = {
.end = AT91_BASE_SYS + AT91_DMA + SZ_512 - 1, .end = AT91_BASE_SYS + AT91_DMA + SZ_512 - 1,
.flags = IORESOURCE_MEM, .flags = IORESOURCE_MEM,
}, },
[2] = { [1] = {
.start = AT91SAM9G45_ID_DMA, .start = AT91SAM9G45_ID_DMA,
.end = AT91SAM9G45_ID_DMA, .end = AT91SAM9G45_ID_DMA,
.flags = IORESOURCE_IRQ, .flags = IORESOURCE_IRQ,
@ -835,9 +835,9 @@ static struct platform_device at91sam9g45_tcb1_device = {
static void __init at91_add_device_tc(void) static void __init at91_add_device_tc(void)
{ {
/* this chip has one clock and irq for all six TC channels */ /* this chip has one clock and irq for all six TC channels */
at91_clock_associate("tcb_clk", &at91sam9g45_tcb0_device.dev, "t0_clk"); at91_clock_associate("tcb0_clk", &at91sam9g45_tcb0_device.dev, "t0_clk");
platform_device_register(&at91sam9g45_tcb0_device); platform_device_register(&at91sam9g45_tcb0_device);
at91_clock_associate("tcb_clk", &at91sam9g45_tcb1_device.dev, "t0_clk"); at91_clock_associate("tcb1_clk", &at91sam9g45_tcb1_device.dev, "t0_clk");
platform_device_register(&at91sam9g45_tcb1_device); platform_device_register(&at91sam9g45_tcb1_device);
} }
#else #else

View File

@ -93,11 +93,12 @@ static struct resource dm9000_resource[] = {
.start = AT91_PIN_PC11, .start = AT91_PIN_PC11,
.end = AT91_PIN_PC11, .end = AT91_PIN_PC11,
.flags = IORESOURCE_IRQ .flags = IORESOURCE_IRQ
| IORESOURCE_IRQ_LOWEDGE | IORESOURCE_IRQ_HIGHEDGE,
} }
}; };
static struct dm9000_plat_data dm9000_platdata = { static struct dm9000_plat_data dm9000_platdata = {
.flags = DM9000_PLATF_16BITONLY, .flags = DM9000_PLATF_16BITONLY | DM9000_PLATF_NO_EEPROM,
}; };
static struct platform_device dm9000_device = { static struct platform_device dm9000_device = {
@ -167,17 +168,6 @@ static struct at91_udc_data __initdata ek_udc_data = {
}; };
/*
* MCI (SD/MMC)
*/
static struct at91_mmc_data __initdata ek_mmc_data = {
.wire4 = 1,
// .det_pin = ... not connected
// .wp_pin = ... not connected
// .vcc_pin = ... not connected
};
/* /*
* NAND flash * NAND flash
*/ */
@ -246,6 +236,10 @@ static void __init ek_add_device_nand(void)
at91_add_device_nand(&ek_nand_data); at91_add_device_nand(&ek_nand_data);
} }
/*
* SPI related devices
*/
#if defined(CONFIG_SPI_ATMEL) || defined(CONFIG_SPI_ATMEL_MODULE)
/* /*
* ADS7846 Touchscreen * ADS7846 Touchscreen
@ -356,6 +350,19 @@ static struct spi_board_info ek_spi_devices[] = {
#endif #endif
}; };
#else /* CONFIG_SPI_ATMEL_* */
/* spi0 and mmc/sd share the same PIO pins: cannot be used at the same time */
/*
* MCI (SD/MMC)
* det_pin, wp_pin and vcc_pin are not connected
*/
static struct at91_mmc_data __initdata ek_mmc_data = {
.wire4 = 1,
};
#endif /* CONFIG_SPI_ATMEL_* */
/* /*
* LCD Controller * LCD Controller

View File

@ -501,6 +501,7 @@ postcore_initcall(at91_clk_debugfs_init);
int __init clk_register(struct clk *clk) int __init clk_register(struct clk *clk)
{ {
if (clk_is_peripheral(clk)) { if (clk_is_peripheral(clk)) {
if (!clk->parent)
clk->parent = &mck; clk->parent = &mck;
clk->mode = pmc_periph_mode; clk->mode = pmc_periph_mode;
list_add_tail(&clk->node, &clocks); list_add_tail(&clk->node, &clocks);

View File

@ -560,4 +560,4 @@ static int __init ep93xx_clock_init(void)
clkdev_add_table(clocks, ARRAY_SIZE(clocks)); clkdev_add_table(clocks, ARRAY_SIZE(clocks));
return 0; return 0;
} }
arch_initcall(ep93xx_clock_init); postcore_initcall(ep93xx_clock_init);

View File

@ -215,7 +215,7 @@ struct imx_ssi_platform_data eukrea_mbimxsd_ssi_pdata = {
* Add platform devices present on this baseboard and init * Add platform devices present on this baseboard and init
* them from CPU side as far as required to use them later on * them from CPU side as far as required to use them later on
*/ */
void __init eukrea_mbimxsd_baseboard_init(void) void __init eukrea_mbimxsd25_baseboard_init(void)
{ {
if (mxc_iomux_v3_setup_multiple_pads(eukrea_mbimxsd_pads, if (mxc_iomux_v3_setup_multiple_pads(eukrea_mbimxsd_pads,
ARRAY_SIZE(eukrea_mbimxsd_pads))) ARRAY_SIZE(eukrea_mbimxsd_pads)))

View File

@ -147,8 +147,8 @@ static void __init eukrea_cpuimx25_init(void)
if (!otg_mode_host) if (!otg_mode_host)
mxc_register_device(&otg_udc_device, &otg_device_pdata); mxc_register_device(&otg_udc_device, &otg_device_pdata);
#ifdef CONFIG_MACH_EUKREA_MBIMXSD_BASEBOARD #ifdef CONFIG_MACH_EUKREA_MBIMXSD25_BASEBOARD
eukrea_mbimxsd_baseboard_init(); eukrea_mbimxsd25_baseboard_init();
#endif #endif
} }

View File

@ -155,7 +155,7 @@ static unsigned long get_rate_arm(void)
aad = &clk_consumer[(pdr0 >> 16) & 0xf]; aad = &clk_consumer[(pdr0 >> 16) & 0xf];
if (aad->sel) if (aad->sel)
fref = fref * 2 / 3; fref = fref * 3 / 4;
return fref / aad->arm; return fref / aad->arm;
} }
@ -164,7 +164,7 @@ static unsigned long get_rate_ahb(struct clk *clk)
{ {
unsigned long pdr0 = __raw_readl(CCM_BASE + CCM_PDR0); unsigned long pdr0 = __raw_readl(CCM_BASE + CCM_PDR0);
struct arm_ahb_div *aad; struct arm_ahb_div *aad;
unsigned long fref = get_rate_mpll(); unsigned long fref = get_rate_arm();
aad = &clk_consumer[(pdr0 >> 16) & 0xf]; aad = &clk_consumer[(pdr0 >> 16) & 0xf];
@ -176,16 +176,11 @@ static unsigned long get_rate_ipg(struct clk *clk)
return get_rate_ahb(NULL) >> 1; return get_rate_ahb(NULL) >> 1;
} }
static unsigned long get_3_3_div(unsigned long in)
{
return (((in >> 3) & 0x7) + 1) * ((in & 0x7) + 1);
}
static unsigned long get_rate_uart(struct clk *clk) static unsigned long get_rate_uart(struct clk *clk)
{ {
unsigned long pdr3 = __raw_readl(CCM_BASE + CCM_PDR3); unsigned long pdr3 = __raw_readl(CCM_BASE + CCM_PDR3);
unsigned long pdr4 = __raw_readl(CCM_BASE + CCM_PDR4); unsigned long pdr4 = __raw_readl(CCM_BASE + CCM_PDR4);
unsigned long div = get_3_3_div(pdr4 >> 10); unsigned long div = ((pdr4 >> 10) & 0x3f) + 1;
if (pdr3 & (1 << 14)) if (pdr3 & (1 << 14))
return get_rate_arm() / div; return get_rate_arm() / div;
@ -216,7 +211,7 @@ static unsigned long get_rate_sdhc(struct clk *clk)
break; break;
} }
return rate / get_3_3_div(div); return rate / (div + 1);
} }
static unsigned long get_rate_mshc(struct clk *clk) static unsigned long get_rate_mshc(struct clk *clk)
@ -270,7 +265,7 @@ static unsigned long get_rate_csi(struct clk *clk)
else else
rate = get_rate_ppll(); rate = get_rate_ppll();
return rate / get_3_3_div((pdr2 >> 16) & 0x3f); return rate / (((pdr2 >> 16) & 0x3f) + 1);
} }
static unsigned long get_rate_otg(struct clk *clk) static unsigned long get_rate_otg(struct clk *clk)
@ -283,25 +278,51 @@ static unsigned long get_rate_otg(struct clk *clk)
else else
rate = get_rate_ppll(); rate = get_rate_ppll();
return rate / get_3_3_div((pdr4 >> 22) & 0x3f); return rate / (((pdr4 >> 22) & 0x3f) + 1);
} }
static unsigned long get_rate_ipg_per(struct clk *clk) static unsigned long get_rate_ipg_per(struct clk *clk)
{ {
unsigned long pdr0 = __raw_readl(CCM_BASE + CCM_PDR0); unsigned long pdr0 = __raw_readl(CCM_BASE + CCM_PDR0);
unsigned long pdr4 = __raw_readl(CCM_BASE + CCM_PDR4); unsigned long pdr4 = __raw_readl(CCM_BASE + CCM_PDR4);
unsigned long div1, div2; unsigned long div;
if (pdr0 & (1 << 26)) { if (pdr0 & (1 << 26)) {
div1 = (pdr4 >> 19) & 0x7; div = (pdr4 >> 16) & 0x3f;
div2 = (pdr4 >> 16) & 0x7; return get_rate_arm() / (div + 1);
return get_rate_arm() / ((div1 + 1) * (div2 + 1));
} else { } else {
div1 = (pdr0 >> 12) & 0x7; div = (pdr0 >> 12) & 0x7;
return get_rate_ahb(NULL) / div1; return get_rate_ahb(NULL) / (div + 1);
} }
} }
static unsigned long get_rate_hsp(struct clk *clk)
{
unsigned long hsp_podf = (__raw_readl(CCM_BASE + CCM_PDR0) >> 20) & 0x03;
unsigned long fref = get_rate_mpll();
if (fref > 400 * 1000 * 1000) {
switch (hsp_podf) {
case 0:
return fref >> 2;
case 1:
return fref >> 3;
case 2:
return fref / 3;
}
} else {
switch (hsp_podf) {
case 0:
case 2:
return fref / 3;
case 1:
return fref / 6;
}
}
return 0;
}
static int clk_cgr_enable(struct clk *clk) static int clk_cgr_enable(struct clk *clk)
{ {
u32 reg; u32 reg;
@ -359,7 +380,7 @@ DEFINE_CLOCK(i2c1_clk, 0, CCM_CGR1, 10, get_rate_ipg_per, NULL);
DEFINE_CLOCK(i2c2_clk, 1, CCM_CGR1, 12, get_rate_ipg_per, NULL); DEFINE_CLOCK(i2c2_clk, 1, CCM_CGR1, 12, get_rate_ipg_per, NULL);
DEFINE_CLOCK(i2c3_clk, 2, CCM_CGR1, 14, get_rate_ipg_per, NULL); DEFINE_CLOCK(i2c3_clk, 2, CCM_CGR1, 14, get_rate_ipg_per, NULL);
DEFINE_CLOCK(iomuxc_clk, 0, CCM_CGR1, 16, NULL, NULL); DEFINE_CLOCK(iomuxc_clk, 0, CCM_CGR1, 16, NULL, NULL);
DEFINE_CLOCK(ipu_clk, 0, CCM_CGR1, 18, get_rate_ahb, NULL); DEFINE_CLOCK(ipu_clk, 0, CCM_CGR1, 18, get_rate_hsp, NULL);
DEFINE_CLOCK(kpp_clk, 0, CCM_CGR1, 20, get_rate_ipg, NULL); DEFINE_CLOCK(kpp_clk, 0, CCM_CGR1, 20, get_rate_ipg, NULL);
DEFINE_CLOCK(mlb_clk, 0, CCM_CGR1, 22, get_rate_ahb, NULL); DEFINE_CLOCK(mlb_clk, 0, CCM_CGR1, 22, get_rate_ahb, NULL);
DEFINE_CLOCK(mshc_clk, 0, CCM_CGR1, 24, get_rate_mshc, NULL); DEFINE_CLOCK(mshc_clk, 0, CCM_CGR1, 24, get_rate_mshc, NULL);
@ -485,10 +506,10 @@ static struct clk_lookup lookups[] = {
int __init mx35_clocks_init() int __init mx35_clocks_init()
{ {
unsigned int ll = 0; unsigned int cgr2 = 3 << 26, cgr3 = 0;
#if defined(CONFIG_DEBUG_LL) && !defined(CONFIG_DEBUG_ICEDCC) #if defined(CONFIG_DEBUG_LL) && !defined(CONFIG_DEBUG_ICEDCC)
ll = (3 << 16); cgr2 |= 3 << 16;
#endif #endif
clkdev_add_table(lookups, ARRAY_SIZE(lookups)); clkdev_add_table(lookups, ARRAY_SIZE(lookups));
@ -499,8 +520,20 @@ int __init mx35_clocks_init()
__raw_writel((3 << 18), CCM_BASE + CCM_CGR0); __raw_writel((3 << 18), CCM_BASE + CCM_CGR0);
__raw_writel((3 << 2) | (3 << 4) | (3 << 6) | (3 << 8) | (3 << 16), __raw_writel((3 << 2) | (3 << 4) | (3 << 6) | (3 << 8) | (3 << 16),
CCM_BASE + CCM_CGR1); CCM_BASE + CCM_CGR1);
__raw_writel((3 << 26) | ll, CCM_BASE + CCM_CGR2);
__raw_writel(0, CCM_BASE + CCM_CGR3);
/*
* Check if we came up in internal boot mode. If yes, we need some
* extra clocks turned on, otherwise the MX35 boot ROM code will
* hang after a watchdog reset.
*/
if (!(__raw_readl(CCM_BASE + CCM_RCSR) & (3 << 10))) {
/* Additionally turn on UART1, SCC, and IIM clocks */
cgr2 |= 3 << 16 | 3 << 4;
cgr3 |= 3 << 2;
}
__raw_writel(cgr2, CCM_BASE + CCM_CGR2);
__raw_writel(cgr3, CCM_BASE + CCM_CGR3);
mxc_timer_init(&gpt_clk, mxc_timer_init(&gpt_clk,
MX35_IO_ADDRESS(MX35_GPT1_BASE_ADDR), MX35_INT_GPT); MX35_IO_ADDRESS(MX35_GPT1_BASE_ADDR), MX35_INT_GPT);

View File

@ -216,7 +216,7 @@ struct imx_ssi_platform_data eukrea_mbimxsd_ssi_pdata = {
* Add platform devices present on this baseboard and init * Add platform devices present on this baseboard and init
* them from CPU side as far as required to use them later on * them from CPU side as far as required to use them later on
*/ */
void __init eukrea_mbimxsd_baseboard_init(void) void __init eukrea_mbimxsd35_baseboard_init(void)
{ {
if (mxc_iomux_v3_setup_multiple_pads(eukrea_mbimxsd_pads, if (mxc_iomux_v3_setup_multiple_pads(eukrea_mbimxsd_pads,
ARRAY_SIZE(eukrea_mbimxsd_pads))) ARRAY_SIZE(eukrea_mbimxsd_pads)))

View File

@ -201,8 +201,8 @@ static void __init mxc_board_init(void)
if (!otg_mode_host) if (!otg_mode_host)
mxc_register_device(&mxc_otg_udc_device, &otg_device_pdata); mxc_register_device(&mxc_otg_udc_device, &otg_device_pdata);
#ifdef CONFIG_MACH_EUKREA_MBIMXSD_BASEBOARD #ifdef CONFIG_MACH_EUKREA_MBIMXSD35_BASEBOARD
eukrea_mbimxsd_baseboard_init(); eukrea_mbimxsd35_baseboard_init();
#endif #endif
} }

View File

@ -56,7 +56,7 @@ static void _clk_ccgr_disable(struct clk *clk)
{ {
u32 reg; u32 reg;
reg = __raw_readl(clk->enable_reg); reg = __raw_readl(clk->enable_reg);
reg &= ~(MXC_CCM_CCGRx_MOD_OFF << clk->enable_shift); reg &= ~(MXC_CCM_CCGRx_CG_MASK << clk->enable_shift);
__raw_writel(reg, clk->enable_reg); __raw_writel(reg, clk->enable_reg);
} }

View File

@ -398,7 +398,7 @@ static int pxa_set_target(struct cpufreq_policy *policy,
return 0; return 0;
} }
static __init int pxa_cpufreq_init(struct cpufreq_policy *policy) static int pxa_cpufreq_init(struct cpufreq_policy *policy)
{ {
int i; int i;
unsigned int freq; unsigned int freq;

View File

@ -204,7 +204,7 @@ static int pxa3xx_cpufreq_set(struct cpufreq_policy *policy,
return 0; return 0;
} }
static __init int pxa3xx_cpufreq_init(struct cpufreq_policy *policy) static int pxa3xx_cpufreq_init(struct cpufreq_policy *policy)
{ {
int ret = -EINVAL; int ret = -EINVAL;

View File

@ -71,10 +71,10 @@
#define GPIO46_CI_DD_7 MFP_CFG_DRV(GPIO46, AF0, DS04X) #define GPIO46_CI_DD_7 MFP_CFG_DRV(GPIO46, AF0, DS04X)
#define GPIO47_CI_DD_8 MFP_CFG_DRV(GPIO47, AF1, DS04X) #define GPIO47_CI_DD_8 MFP_CFG_DRV(GPIO47, AF1, DS04X)
#define GPIO48_CI_DD_9 MFP_CFG_DRV(GPIO48, AF1, DS04X) #define GPIO48_CI_DD_9 MFP_CFG_DRV(GPIO48, AF1, DS04X)
#define GPIO52_CI_HSYNC MFP_CFG_DRV(GPIO52, AF0, DS04X)
#define GPIO51_CI_VSYNC MFP_CFG_DRV(GPIO51, AF0, DS04X)
#define GPIO49_CI_MCLK MFP_CFG_DRV(GPIO49, AF0, DS04X) #define GPIO49_CI_MCLK MFP_CFG_DRV(GPIO49, AF0, DS04X)
#define GPIO50_CI_PCLK MFP_CFG_DRV(GPIO50, AF0, DS04X) #define GPIO50_CI_PCLK MFP_CFG_DRV(GPIO50, AF0, DS04X)
#define GPIO51_CI_HSYNC MFP_CFG_DRV(GPIO51, AF0, DS04X)
#define GPIO52_CI_VSYNC MFP_CFG_DRV(GPIO52, AF0, DS04X)
/* KEYPAD */ /* KEYPAD */
#define GPIO3_KP_DKIN_6 MFP_CFG_LPM(GPIO3, AF2, FLOAT) #define GPIO3_KP_DKIN_6 MFP_CFG_LPM(GPIO3, AF2, FLOAT)

View File

@ -3,7 +3,7 @@
# #
# Common objects # Common objects
obj-y := timer.o console.o clock.o obj-y := timer.o console.o clock.o pm_runtime.o
# CPU objects # CPU objects
obj-$(CONFIG_ARCH_SH7367) += setup-sh7367.o clock-sh7367.o intc-sh7367.o obj-$(CONFIG_ARCH_SH7367) += setup-sh7367.o clock-sh7367.o intc-sh7367.o

View File

@ -25,6 +25,7 @@
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/mfd/sh_mobile_sdhi.h> #include <linux/mfd/sh_mobile_sdhi.h>
#include <linux/mfd/tmio.h>
#include <linux/mmc/host.h> #include <linux/mmc/host.h>
#include <linux/mtd/mtd.h> #include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h> #include <linux/mtd/partitions.h>
@ -39,6 +40,7 @@
#include <linux/sh_clk.h> #include <linux/sh_clk.h>
#include <linux/gpio.h> #include <linux/gpio.h>
#include <linux/input.h> #include <linux/input.h>
#include <linux/leds.h>
#include <linux/input/sh_keysc.h> #include <linux/input/sh_keysc.h>
#include <linux/usb/r8a66597.h> #include <linux/usb/r8a66597.h>
@ -307,6 +309,7 @@ static struct sh_mobile_sdhi_info sdhi1_info = {
.dma_slave_tx = SHDMA_SLAVE_SDHI1_TX, .dma_slave_tx = SHDMA_SLAVE_SDHI1_TX,
.dma_slave_rx = SHDMA_SLAVE_SDHI1_RX, .dma_slave_rx = SHDMA_SLAVE_SDHI1_RX,
.tmio_ocr_mask = MMC_VDD_165_195, .tmio_ocr_mask = MMC_VDD_165_195,
.tmio_flags = TMIO_MMC_WRPROTECT_DISABLE,
}; };
static struct resource sdhi1_resources[] = { static struct resource sdhi1_resources[] = {
@ -558,7 +561,7 @@ static struct resource fsi_resources[] = {
static struct platform_device fsi_device = { static struct platform_device fsi_device = {
.name = "sh_fsi2", .name = "sh_fsi2",
.id = 0, .id = -1,
.num_resources = ARRAY_SIZE(fsi_resources), .num_resources = ARRAY_SIZE(fsi_resources),
.resource = fsi_resources, .resource = fsi_resources,
.dev = { .dev = {
@ -650,7 +653,44 @@ static struct platform_device hdmi_device = {
}, },
}; };
static struct gpio_led ap4evb_leds[] = {
{
.name = "led4",
.gpio = GPIO_PORT185,
.default_state = LEDS_GPIO_DEFSTATE_ON,
},
{
.name = "led2",
.gpio = GPIO_PORT186,
.default_state = LEDS_GPIO_DEFSTATE_ON,
},
{
.name = "led3",
.gpio = GPIO_PORT187,
.default_state = LEDS_GPIO_DEFSTATE_ON,
},
{
.name = "led1",
.gpio = GPIO_PORT188,
.default_state = LEDS_GPIO_DEFSTATE_ON,
}
};
static struct gpio_led_platform_data ap4evb_leds_pdata = {
.num_leds = ARRAY_SIZE(ap4evb_leds),
.leds = ap4evb_leds,
};
static struct platform_device leds_device = {
.name = "leds-gpio",
.id = 0,
.dev = {
.platform_data = &ap4evb_leds_pdata,
},
};
static struct platform_device *ap4evb_devices[] __initdata = { static struct platform_device *ap4evb_devices[] __initdata = {
&leds_device,
&nor_flash_device, &nor_flash_device,
&smc911x_device, &smc911x_device,
&sdhi0_device, &sdhi0_device,
@ -840,20 +880,6 @@ static void __init ap4evb_init(void)
gpio_request(GPIO_FN_CS5A, NULL); gpio_request(GPIO_FN_CS5A, NULL);
gpio_request(GPIO_FN_IRQ6_39, NULL); gpio_request(GPIO_FN_IRQ6_39, NULL);
/* enable LED 1 - 4 */
gpio_request(GPIO_PORT185, NULL);
gpio_request(GPIO_PORT186, NULL);
gpio_request(GPIO_PORT187, NULL);
gpio_request(GPIO_PORT188, NULL);
gpio_direction_output(GPIO_PORT185, 1);
gpio_direction_output(GPIO_PORT186, 1);
gpio_direction_output(GPIO_PORT187, 1);
gpio_direction_output(GPIO_PORT188, 1);
gpio_export(GPIO_PORT185, 0);
gpio_export(GPIO_PORT186, 0);
gpio_export(GPIO_PORT187, 0);
gpio_export(GPIO_PORT188, 0);
/* enable Debug switch (S6) */ /* enable Debug switch (S6) */
gpio_request(GPIO_PORT32, NULL); gpio_request(GPIO_PORT32, NULL);
gpio_request(GPIO_PORT33, NULL); gpio_request(GPIO_PORT33, NULL);

View File

@ -286,7 +286,6 @@ static struct clk_ops pllc2_clk_ops = {
struct clk pllc2_clk = { struct clk pllc2_clk = {
.ops = &pllc2_clk_ops, .ops = &pllc2_clk_ops,
.flags = CLK_ENABLE_ON_INIT,
.parent = &extal1_div2_clk, .parent = &extal1_div2_clk,
.freq_table = pllc2_freq_table, .freq_table = pllc2_freq_table,
.parent_table = pllc2_parent, .parent_table = pllc2_parent,
@ -395,7 +394,7 @@ static struct clk div6_reparent_clks[DIV6_REPARENT_NR] = {
enum { MSTP001, enum { MSTP001,
MSTP131, MSTP130, MSTP131, MSTP130,
MSTP129, MSTP128, MSTP129, MSTP128, MSTP127, MSTP126,
MSTP118, MSTP117, MSTP116, MSTP118, MSTP117, MSTP116,
MSTP106, MSTP101, MSTP100, MSTP106, MSTP101, MSTP100,
MSTP223, MSTP223,
@ -413,6 +412,8 @@ static struct clk mstp_clks[MSTP_NR] = {
[MSTP130] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 30, 0), /* VEU2 */ [MSTP130] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 30, 0), /* VEU2 */
[MSTP129] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 29, 0), /* VEU1 */ [MSTP129] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 29, 0), /* VEU1 */
[MSTP128] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 28, 0), /* VEU0 */ [MSTP128] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 28, 0), /* VEU0 */
[MSTP127] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 27, 0), /* CEU */
[MSTP126] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 26, 0), /* CSI2 */
[MSTP118] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 18, 0), /* DSITX */ [MSTP118] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 18, 0), /* DSITX */
[MSTP117] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 17, 0), /* LCDC1 */ [MSTP117] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 17, 0), /* LCDC1 */
[MSTP116] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR1, 16, 0), /* IIC0 */ [MSTP116] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR1, 16, 0), /* IIC0 */
@ -428,7 +429,7 @@ static struct clk mstp_clks[MSTP_NR] = {
[MSTP201] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 1, 0), /* SCIFA3 */ [MSTP201] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 1, 0), /* SCIFA3 */
[MSTP200] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 0, 0), /* SCIFA4 */ [MSTP200] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 0, 0), /* SCIFA4 */
[MSTP329] = MSTP(&r_clk, SMSTPCR3, 29, 0), /* CMT10 */ [MSTP329] = MSTP(&r_clk, SMSTPCR3, 29, 0), /* CMT10 */
[MSTP328] = MSTP(&div6_clks[DIV6_SPU], SMSTPCR3, 28, CLK_ENABLE_ON_INIT), /* FSIA */ [MSTP328] = MSTP(&div6_clks[DIV6_SPU], SMSTPCR3, 28, 0), /* FSIA */
[MSTP323] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR3, 23, 0), /* IIC1 */ [MSTP323] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR3, 23, 0), /* IIC1 */
[MSTP322] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR3, 22, 0), /* USB0 */ [MSTP322] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR3, 22, 0), /* USB0 */
[MSTP314] = MSTP(&div4_clks[DIV4_HP], SMSTPCR3, 14, 0), /* SDHI0 */ [MSTP314] = MSTP(&div4_clks[DIV4_HP], SMSTPCR3, 14, 0), /* SDHI0 */
@ -498,6 +499,8 @@ static struct clk_lookup lookups[] = {
CLKDEV_DEV_ID("uio_pdrv_genirq.3", &mstp_clks[MSTP130]), /* VEU2 */ CLKDEV_DEV_ID("uio_pdrv_genirq.3", &mstp_clks[MSTP130]), /* VEU2 */
CLKDEV_DEV_ID("uio_pdrv_genirq.2", &mstp_clks[MSTP129]), /* VEU1 */ CLKDEV_DEV_ID("uio_pdrv_genirq.2", &mstp_clks[MSTP129]), /* VEU1 */
CLKDEV_DEV_ID("uio_pdrv_genirq.1", &mstp_clks[MSTP128]), /* VEU0 */ CLKDEV_DEV_ID("uio_pdrv_genirq.1", &mstp_clks[MSTP128]), /* VEU0 */
CLKDEV_DEV_ID("sh_mobile_ceu.0", &mstp_clks[MSTP127]), /* CEU */
CLKDEV_DEV_ID("sh-mobile-csi2.0", &mstp_clks[MSTP126]), /* CSI2 */
CLKDEV_DEV_ID("sh-mipi-dsi.0", &mstp_clks[MSTP118]), /* DSITX */ CLKDEV_DEV_ID("sh-mipi-dsi.0", &mstp_clks[MSTP118]), /* DSITX */
CLKDEV_DEV_ID("sh_mobile_lcdc_fb.1", &mstp_clks[MSTP117]), /* LCDC1 */ CLKDEV_DEV_ID("sh_mobile_lcdc_fb.1", &mstp_clks[MSTP117]), /* LCDC1 */
CLKDEV_DEV_ID("i2c-sh_mobile.0", &mstp_clks[MSTP116]), /* IIC0 */ CLKDEV_DEV_ID("i2c-sh_mobile.0", &mstp_clks[MSTP116]), /* IIC0 */

View File

@ -1,8 +1,10 @@
/* /*
* SH-Mobile Timer * SH-Mobile Clock Framework
* *
* Copyright (C) 2010 Magnus Damm * Copyright (C) 2010 Magnus Damm
* *
* Used together with arch/arm/common/clkdev.c and drivers/sh/clk.c.
*
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by * it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License. * the Free Software Foundation; version 2 of the License.

View File


@@ -0,0 +1,169 @@ (new file, shown plain)
/*
 * arch/arm/mach-shmobile/pm_runtime.c
 *
 * Runtime PM support code for SuperH Mobile ARM
 *
 * Copyright (C) 2009-2010 Magnus Damm
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 */

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/io.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/sh_clk.h>
#include <linux/bitmap.h>

#ifdef CONFIG_PM_RUNTIME
#define BIT_ONCE 0
#define BIT_ACTIVE 1
#define BIT_CLK_ENABLED 2

struct pm_runtime_data {
	unsigned long flags;
	struct clk *clk;
};

static void __devres_release(struct device *dev, void *res)
{
	struct pm_runtime_data *prd = res;

	dev_dbg(dev, "__devres_release()\n");

	if (test_bit(BIT_CLK_ENABLED, &prd->flags))
		clk_disable(prd->clk);

	if (test_bit(BIT_ACTIVE, &prd->flags))
		clk_put(prd->clk);
}

static struct pm_runtime_data *__to_prd(struct device *dev)
{
	return devres_find(dev, __devres_release, NULL, NULL);
}

static void platform_pm_runtime_init(struct device *dev,
				     struct pm_runtime_data *prd)
{
	if (prd && !test_and_set_bit(BIT_ONCE, &prd->flags)) {
		prd->clk = clk_get(dev, NULL);
		if (!IS_ERR(prd->clk)) {
			set_bit(BIT_ACTIVE, &prd->flags);
			dev_info(dev, "clocks managed by runtime pm\n");
		}
	}
}

static void platform_pm_runtime_bug(struct device *dev,
				    struct pm_runtime_data *prd)
{
	if (prd && !test_and_set_bit(BIT_ONCE, &prd->flags))
		dev_err(dev, "runtime pm suspend before resume\n");
}

int platform_pm_runtime_suspend(struct device *dev)
{
	struct pm_runtime_data *prd = __to_prd(dev);

	dev_dbg(dev, "platform_pm_runtime_suspend()\n");

	platform_pm_runtime_bug(dev, prd);

	if (prd && test_bit(BIT_ACTIVE, &prd->flags)) {
		clk_disable(prd->clk);
		clear_bit(BIT_CLK_ENABLED, &prd->flags);
	}

	return 0;
}

int platform_pm_runtime_resume(struct device *dev)
{
	struct pm_runtime_data *prd = __to_prd(dev);

	dev_dbg(dev, "platform_pm_runtime_resume()\n");

	platform_pm_runtime_init(dev, prd);

	if (prd && test_bit(BIT_ACTIVE, &prd->flags)) {
		clk_enable(prd->clk);
		set_bit(BIT_CLK_ENABLED, &prd->flags);
	}

	return 0;
}

int platform_pm_runtime_idle(struct device *dev)
{
	/* suspend synchronously to disable clocks immediately */
	return pm_runtime_suspend(dev);
}

static int platform_bus_notify(struct notifier_block *nb,
			       unsigned long action, void *data)
{
	struct device *dev = data;
	struct pm_runtime_data *prd;

	dev_dbg(dev, "platform_bus_notify() %ld !\n", action);

	if (action == BUS_NOTIFY_BIND_DRIVER) {
		prd = devres_alloc(__devres_release, sizeof(*prd), GFP_KERNEL);
		if (prd)
			devres_add(dev, prd);
		else
			dev_err(dev, "unable to alloc memory for runtime pm\n");
	}

	return 0;
}

#else /* CONFIG_PM_RUNTIME */

static int platform_bus_notify(struct notifier_block *nb,
			       unsigned long action, void *data)
{
	struct device *dev = data;
	struct clk *clk;

	dev_dbg(dev, "platform_bus_notify() %ld !\n", action);

	switch (action) {
	case BUS_NOTIFY_BIND_DRIVER:
		clk = clk_get(dev, NULL);
		if (!IS_ERR(clk)) {
			clk_enable(clk);
			clk_put(clk);
			dev_info(dev, "runtime pm disabled, clock forced on\n");
		}
		break;
	case BUS_NOTIFY_UNBOUND_DRIVER:
		clk = clk_get(dev, NULL);
		if (!IS_ERR(clk)) {
			clk_disable(clk);
			clk_put(clk);
			dev_info(dev, "runtime pm disabled, clock forced off\n");
		}
		break;
	}

	return 0;
}

#endif /* CONFIG_PM_RUNTIME */

static struct notifier_block platform_bus_notifier = {
	.notifier_call = platform_bus_notify
};

static int __init sh_pm_runtime_init(void)
{
	bus_register_notifier(&platform_bus_type, &platform_bus_notifier);
	return 0;
}

core_initcall(sh_pm_runtime_init);
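
For orientation, and not part of the merge itself: with the bus notifier above in place, clock handling is attached automatically at BUS_NOTIFY_BIND_DRIVER, so a consumer driver only brackets hardware access with the generic runtime PM calls. A minimal sketch, assuming a hypothetical platform driver (the probe body and names are invented; the pm_runtime_* calls are the standard <linux/pm_runtime.h> API):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Hypothetical consumer: the notifier above manages the clock, the
 * driver only expresses when it needs the hardware powered. */
static int example_probe(struct platform_device *pdev)
{
	pm_runtime_enable(&pdev->dev);

	pm_runtime_get_sync(&pdev->dev);	/* clock enabled via resume above */
	/* ... access hardware registers here ... */
	pm_runtime_put_sync(&pdev->dev);	/* clock disabled via suspend/idle */

	return 0;
}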

@@ -398,7 +398,7 @@ config CPU_V6
 # ARMv6k
 config CPU_32v6K
 	bool "Support ARM V6K processor extensions" if !SMP
-	depends on CPU_V6
+	depends on CPU_V6 || CPU_V7
 	default y if SMP && !(ARCH_MX3 || ARCH_OMAP2)
 	help
 	  Say Y here if your ARMv6 processor supports the 'K' extension.

@@ -229,6 +229,8 @@ __dma_alloc_remap(struct page *page, size_t size, gfp_t gfp, pgprot_t prot)
 			}
 		} while (size -= PAGE_SIZE);
+
+		dsb();
 		return (void *)c->vm_start;
 	}
 	return NULL;

@@ -43,6 +43,7 @@ config ARCH_MXC91231
 config ARCH_MX5
 	bool "MX5-based"
 	select CPU_V7
+	select ARM_L1_CACHE_SHIFT_6
 	help
 	  This enables support for systems based on the Freescale i.MX51 family

@@ -37,9 +37,9 @@
  * mach-mx5/eukrea_mbimx51-baseboard.c for cpuimx51
  */
-extern void eukrea_mbimx25_baseboard_init(void);
+extern void eukrea_mbimxsd25_baseboard_init(void);
 extern void eukrea_mbimx27_baseboard_init(void);
-extern void eukrea_mbimx35_baseboard_init(void);
+extern void eukrea_mbimxsd35_baseboard_init(void);
 extern void eukrea_mbimx51_baseboard_init(void);

 #endif

@@ -164,8 +164,9 @@ int tzic_enable_wake(int is_idle)
 		return -EAGAIN;

 	for (i = 0; i < 4; i++) {
-		v = is_idle ? __raw_readl(TZIC_ENSET0(i)) : wakeup_intr[i];
-		__raw_writel(v, TZIC_WAKEUP0(i));
+		v = is_idle ? __raw_readl(tzic_base + TZIC_ENSET0(i)) :
+			wakeup_intr[i];
+		__raw_writel(v, tzic_base + TZIC_WAKEUP0(i));
 	}

 	return 0;

@@ -176,7 +176,7 @@ static inline void __add_pwm(struct pwm_device *pwm)
 static int __devinit pwm_probe(struct platform_device *pdev)
 {
-	struct platform_device_id *id = platform_get_device_id(pdev);
+	const struct platform_device_id *id = platform_get_device_id(pdev);
 	struct pwm_device *pwm, *secondary = NULL;
 	struct resource *r;
 	int ret = 0;

@@ -12,7 +12,7 @@
 #
 # http://www.arm.linux.org.uk/developer/machines/?action=new
 #
-# Last update: Mon Jul 12 21:10:14 2010
+# Last update: Thu Sep 9 22:43:01 2010
 #
 # machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
 #
@@ -2622,7 +2622,7 @@ kraken MACH_KRAKEN KRAKEN 2634
 gw2388 MACH_GW2388 GW2388 2635
 jadecpu MACH_JADECPU JADECPU 2636
 carlisle MACH_CARLISLE CARLISLE 2637
-lux_sf9 MACH_LUX_SFT9 LUX_SFT9 2638
+lux_sf9 MACH_LUX_SF9 LUX_SF9 2638
 nemid_tb MACH_NEMID_TB NEMID_TB 2639
 terrier MACH_TERRIER TERRIER 2640
 turbot MACH_TURBOT TURBOT 2641
@@ -2950,3 +2950,97 @@ davinci_dm365_dvr MACH_DAVINCI_DM365_DVR DAVINCI_DM365_DVR 2963 (all lines below are added)
netviz MACH_NETVIZ NETVIZ 2964
flexibity MACH_FLEXIBITY FLEXIBITY 2965
wlan_computer MACH_WLAN_COMPUTER WLAN_COMPUTER 2966
lpc24xx MACH_LPC24XX LPC24XX 2967
spica MACH_SPICA SPICA 2968
gpsdisplay MACH_GPSDISPLAY GPSDISPLAY 2969
bipnet MACH_BIPNET BIPNET 2970
overo_ctu_inertial MACH_OVERO_CTU_INERTIAL OVERO_CTU_INERTIAL 2971
davinci_dm355_mmm MACH_DAVINCI_DM355_MMM DAVINCI_DM355_MMM 2972
pc9260_v2 MACH_PC9260_V2 PC9260_V2 2973
ptx7545 MACH_PTX7545 PTX7545 2974
tm_efdc MACH_TM_EFDC TM_EFDC 2975
omap3_waldo1 MACH_OMAP3_WALDO1 OMAP3_WALDO1 2977
flyer MACH_FLYER FLYER 2978
tornado3240 MACH_TORNADO3240 TORNADO3240 2979
soli_01 MACH_SOLI_01 SOLI_01 2980
omapl138_europalc MACH_OMAPL138_EUROPALC OMAPL138_EUROPALC 2981
helios_v1 MACH_HELIOS_V1 HELIOS_V1 2982
netspace_lite_v2 MACH_NETSPACE_LITE_V2 NETSPACE_LITE_V2 2983
ssc MACH_SSC SSC 2984
premierwave_en MACH_PREMIERWAVE_EN PREMIERWAVE_EN 2985
wasabi MACH_WASABI WASABI 2986
vivow MACH_VIVOW VIVOW 2987
mx50_rdp MACH_MX50_RDP MX50_RDP 2988
universal MACH_UNIVERSAL UNIVERSAL 2989
real6410 MACH_REAL6410 REAL6410 2990
spx_sakura MACH_SPX_SAKURA SPX_SAKURA 2991
ij3k_2440 MACH_IJ3K_2440 IJ3K_2440 2992
omap3_bc10 MACH_OMAP3_BC10 OMAP3_BC10 2993
thebe MACH_THEBE THEBE 2994
rv082 MACH_RV082 RV082 2995
armlguest MACH_ARMLGUEST ARMLGUEST 2996
tjinc1000 MACH_TJINC1000 TJINC1000 2997
dockstar MACH_DOCKSTAR DOCKSTAR 2998
ax8008 MACH_AX8008 AX8008 2999
gnet_sgce MACH_GNET_SGCE GNET_SGCE 3000
pxwnas_500_1000 MACH_PXWNAS_500_1000 PXWNAS_500_1000 3001
ea20 MACH_EA20 EA20 3002
awm2 MACH_AWM2 AWM2 3003
ti8148evm MACH_TI8148EVM TI8148EVM 3004
tegra_seaboard MACH_TEGRA_SEABOARD TEGRA_SEABOARD 3005
linkstation_chlv2 MACH_LINKSTATION_CHLV2 LINKSTATION_CHLV2 3006
tera_pro2_rack MACH_TERA_PRO2_RACK TERA_PRO2_RACK 3007
rubys MACH_RUBYS RUBYS 3008
aquarius MACH_AQUARIUS AQUARIUS 3009
mx53_ard MACH_MX53_ARD MX53_ARD 3010
mx53_smd MACH_MX53_SMD MX53_SMD 3011
lswxl MACH_LSWXL LSWXL 3012
dove_avng_v3 MACH_DOVE_AVNG_V3 DOVE_AVNG_V3 3013
sdi_ess_9263 MACH_SDI_ESS_9263 SDI_ESS_9263 3014
jocpu550 MACH_JOCPU550 JOCPU550 3015
msm8x60_rumi3 MACH_MSM8X60_RUMI3 MSM8X60_RUMI3 3016
msm8x60_ffa MACH_MSM8X60_FFA MSM8X60_FFA 3017
yanomami MACH_YANOMAMI YANOMAMI 3018
gta04 MACH_GTA04 GTA04 3019
cm_a510 MACH_CM_A510 CM_A510 3020
omap3_rfs200 MACH_OMAP3_RFS200 OMAP3_RFS200 3021
kx33xx MACH_KX33XX KX33XX 3022
ptx7510 MACH_PTX7510 PTX7510 3023
top9000 MACH_TOP9000 TOP9000 3024
teenote MACH_TEENOTE TEENOTE 3025
ts3 MACH_TS3 TS3 3026
a0 MACH_A0 A0 3027
fsm9xxx_surf MACH_FSM9XXX_SURF FSM9XXX_SURF 3028
fsm9xxx_ffa MACH_FSM9XXX_FFA FSM9XXX_FFA 3029
frrhwcdma60w MACH_FRRHWCDMA60W FRRHWCDMA60W 3030
remus MACH_REMUS REMUS 3031
at91cap7xdk MACH_AT91CAP7XDK AT91CAP7XDK 3032
at91cap7stk MACH_AT91CAP7STK AT91CAP7STK 3033
kt_sbc_sam9_1 MACH_KT_SBC_SAM9_1 KT_SBC_SAM9_1 3034
oratisrouter MACH_ORATISROUTER ORATISROUTER 3035
armada_xp_db MACH_ARMADA_XP_DB ARMADA_XP_DB 3036
spdm MACH_SPDM SPDM 3037
gtib MACH_GTIB GTIB 3038
dgm3240 MACH_DGM3240 DGM3240 3039
atlas_i_lpe MACH_ATLAS_I_LPE ATLAS_I_LPE 3040
htcmega MACH_HTCMEGA HTCMEGA 3041
tricorder MACH_TRICORDER TRICORDER 3042
tx28 MACH_TX28 TX28 3043
bstbrd MACH_BSTBRD BSTBRD 3044
pwb3090 MACH_PWB3090 PWB3090 3045
idea6410 MACH_IDEA6410 IDEA6410 3046
qbc9263 MACH_QBC9263 QBC9263 3047
borabora MACH_BORABORA BORABORA 3048
valdez MACH_VALDEZ VALDEZ 3049
ls9g20 MACH_LS9G20 LS9G20 3050
mios_v1 MACH_MIOS_V1 MIOS_V1 3051
s5pc110_crespo MACH_S5PC110_CRESPO S5PC110_CRESPO 3052
controltek9g20 MACH_CONTROLTEK9G20 CONTROLTEK9G20 3053
tin307 MACH_TIN307 TIN307 3054
tin510 MACH_TIN510 TIN510 3055
bluecheese MACH_BLUECHEESE BLUECHEESE 3057
tem3x30 MACH_TEM3X30 TEM3X30 3058
harvest_desoto MACH_HARVEST_DESOTO HARVEST_DESOTO 3059
msm8x60_qrdc MACH_MSM8X60_QRDC MSM8X60_QRDC 3060
spear900 MACH_SPEAR900 SPEAR900 3061
pcontrol_g20 MACH_PCONTROL_G20 PCONTROL_G20 3062

@@ -18,7 +18,8 @@
 static __inline__ int atomic_add_return(int i, atomic_t *v)
 {
-	int ret,flags;
+	unsigned long flags;
+	int ret;
 	local_irq_save(flags);
 	ret = v->counter += i;
 	local_irq_restore(flags);
@@ -30,7 +31,8 @@ static __inline__ int atomic_add_return(int i, atomic_t *v)
 static __inline__ int atomic_sub_return(int i, atomic_t *v)
 {
-	int ret,flags;
+	unsigned long flags;
+	int ret;
 	local_irq_save(flags);
 	ret = v->counter -= i;
 	local_irq_restore(flags);
@@ -42,7 +44,8 @@ static __inline__ int atomic_sub_return(int i, atomic_t *v)
 static __inline__ int atomic_inc_return(atomic_t *v)
 {
-	int ret,flags;
+	unsigned long flags;
+	int ret;
 	local_irq_save(flags);
 	v->counter++;
 	ret = v->counter;
@@ -64,7 +67,8 @@ static __inline__ int atomic_inc_return(atomic_t *v)
 static __inline__ int atomic_dec_return(atomic_t *v)
 {
-	int ret,flags;
+	unsigned long flags;
+	int ret;
 	local_irq_save(flags);
 	--v->counter;
 	ret = v->counter;
@@ -76,7 +80,8 @@ static __inline__ int atomic_dec_return(atomic_t *v)
 static __inline__ int atomic_dec_and_test(atomic_t *v)
 {
-	int ret,flags;
+	unsigned long flags;
+	int ret;
 	local_irq_save(flags);
 	--v->counter;
 	ret = v->counter;
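
The same one-line bug is being fixed in every helper above: the kernel-wide contract for local_irq_save()/local_irq_restore() is that the flags word is an unsigned long, so folding it into `int ret,flags;` was a type violation. A hedged illustration of the canonical pattern (not from the patch; the helper is invented):

/* Sketch only: save irq state, do the read-modify-write, restore it. */
static inline int atomic_example_op(atomic_t *v, int i)
{
	unsigned long flags;	/* must be unsigned long for local_irq_save() */
	int ret;

	local_irq_save(flags);
	ret = v->counter += i;
	local_irq_restore(flags);
	return ret;
}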

@@ -3,6 +3,8 @@
 #include <linux/linkage.h>

+struct pt_regs;
+
 /*
  * switch_to(n) should switch tasks to task ptr, first checking that
  * ptr isn't the current task, in which case it does nothing.  This
@@ -155,6 +157,6 @@ static inline unsigned long __xchg(unsigned long x, volatile void * ptr, int siz
 #define arch_align_stack(x) (x)

-void die(char *str, struct pt_regs *fp, unsigned long err);
+extern void die(const char *str, struct pt_regs *fp, unsigned long err);

 #endif /* _H8300_SYSTEM_H */

@@ -56,8 +56,8 @@ int kernel_execve(const char *filename,
 		  const char *const envp[])
 {
 	register long res __asm__("er0");
-	register char *const *_c __asm__("er3") = envp;
-	register char *const *_b __asm__("er2") = argv;
+	register const char *const *_c __asm__("er3") = envp;
+	register const char *const *_b __asm__("er2") = argv;
 	register const char * _a __asm__("er1") = filename;
 	__asm__ __volatile__ ("mov.l %1,er0\n\t"
 			"trapa #0\n\t"

@@ -96,7 +96,7 @@ static void dump(struct pt_regs *fp)
 	printk("\n\n");
 }

-void die(char *str, struct pt_regs *fp, unsigned long err)
+void die(const char *str, struct pt_regs *fp, unsigned long err)
 {
 	static int diecount;

@@ -150,6 +150,8 @@ SECTIONS {
 	_sdata = . ;
 		DATA_DATA
 		CACHELINE_ALIGNED_DATA(32)
+		PAGE_ALIGNED_DATA(PAGE_SIZE)
+		*(.data..shared_aligned)
 		INIT_TASK_DATA(THREAD_SIZE)
 	_edata = . ;
 	} > DATA

@@ -11,6 +11,7 @@
 #ifndef __ARCH_POWERPC_ASM_FSLDMA_H__
 #define __ARCH_POWERPC_ASM_FSLDMA_H__

+#include <linux/slab.h>
 #include <linux/dmaengine.h>

 /*

@@ -575,13 +575,19 @@ __secondary_start:
 	/* Initialize the kernel stack.  Just a repeat for iSeries. */
 	LOAD_REG_ADDR(r3, current_set)
 	sldi	r28,r24,3		/* get current_set[cpu#] */
-	ldx	r1,r3,r28
-	addi	r1,r1,THREAD_SIZE-STACK_FRAME_OVERHEAD
-	std	r1,PACAKSAVE(r13)
+	ldx	r14,r3,r28
+	addi	r14,r14,THREAD_SIZE-STACK_FRAME_OVERHEAD
+	std	r14,PACAKSAVE(r13)

 	/* Do early setup for that CPU (stab, slb, hash table pointer) */
 	bl	.early_setup_secondary

+	/*
+	 * setup the new stack pointer, but *don't* use this until
+	 * translation is on.
+	 */
+	mr	r1, r14
+
 	/* Clear backchain so we get nice backtraces */
 	li	r7,0
 	mtlr	r7

@@ -810,6 +810,9 @@ relocate_new_kernel:
 	isync
 	sync

+	mfspr	r3, SPRN_PIR /* current core we are running on */
+	mr	r4, r5 /* load physical address of chunk called */
+
 	/* jump to the entry point, usually the setup routine */
 	mtlr	r5
 	blrl

@@ -577,20 +577,11 @@ void timer_interrupt(struct pt_regs * regs)
 	 * some CPUs will continuue to take decrementer exceptions */
 	set_dec(DECREMENTER_MAX);

-#ifdef CONFIG_PPC32
+#if defined(CONFIG_PPC32) && defined(CONFIG_PMAC)
 	if (atomic_read(&ppc_n_lost_interrupts) != 0)
 		do_IRQ(regs);
 #endif

-	now = get_tb_or_rtc();
-	if (now < decrementer->next_tb) {
-		/* not time for this event yet */
-		now = decrementer->next_tb - now;
-		if (now <= DECREMENTER_MAX)
-			set_dec((int)now);
-		trace_timer_interrupt_exit(regs);
-		return;
-	}
 	old_regs = set_irq_regs(regs);
 	irq_enter();

@@ -606,8 +597,16 @@ void timer_interrupt(struct pt_regs * regs)
 		get_lppaca()->int_dword.fields.decr_int = 0;
 #endif

+	now = get_tb_or_rtc();
+	if (now >= decrementer->next_tb) {
+		decrementer->next_tb = ~(u64)0;
 		if (evt->event_handler)
 			evt->event_handler(evt);
+	} else {
+		now = decrementer->next_tb - now;
+		if (now <= DECREMENTER_MAX)
+			set_dec((int)now);
+	}

 #ifdef CONFIG_PPC_ISERIES
 	if (firmware_has_feature(FW_FEATURE_ISERIES) && hvlpevent_is_pending())

@@ -48,8 +48,10 @@ static int mpc837xmds_usb_cfg(void)
 		return -1;

 	np = of_find_node_by_name(NULL, "usb");
-	if (!np)
-		return -ENODEV;
+	if (!np) {
+		ret = -ENODEV;
+		goto out;
+	}
 	phy_type = of_get_property(np, "phy_type", NULL);
 	if (phy_type && !strcmp(phy_type, "ulpi")) {
 		clrbits8(bcsr_regs + 12, BCSR12_USB_SER_PIN);
@@ -65,8 +67,9 @@ static int mpc837xmds_usb_cfg(void)
 	}

 	of_node_put(np);
+out:
 	iounmap(bcsr_regs);
-	return 0;
+	return ret;
 }

 /* ************************************************************************

@@ -357,6 +357,7 @@ static void __init mpc85xx_mds_setup_arch(void)
 {
 #ifdef CONFIG_PCI
 	struct pci_controller *hose;
+	struct device_node *np;
 #endif
 	dma_addr_t max = 0xffffffff;

@@ -19,7 +19,7 @@
 #include <linux/pci.h>
 #include <linux/of_platform.h>
-#include <linux/lmb.h>
+#include <linux/memblock.h>

 #include <asm/mpic.h>
 #include <asm/swiotlb.h>
@@ -97,7 +97,7 @@ static void __init p1022_ds_setup_arch(void)
 #endif

 #ifdef CONFIG_SWIOTLB
-	if (lmb_end_of_DRAM() > max) {
+	if (memblock_end_of_DRAM() > max) {
 		ppc_swiotlb_enable = 1;
 		set_pci_dma_ops(&swiotlb_dma_ops);
 		ppc_md.pci_dma_dev_setup = pci_dma_dev_setup_swiotlb;

@@ -129,20 +129,35 @@ struct device_node *dlpar_configure_connector(u32 drc_index)
 	struct property *property;
 	struct property *last_property = NULL;
 	struct cc_workarea *ccwa;
+	char *data_buf;
 	int cc_token;
-	int rc;
+	int rc = -1;

 	cc_token = rtas_token("ibm,configure-connector");
 	if (cc_token == RTAS_UNKNOWN_SERVICE)
 		return NULL;

-	spin_lock(&rtas_data_buf_lock);
-	ccwa = (struct cc_workarea *)&rtas_data_buf[0];
+	data_buf = kzalloc(RTAS_DATA_BUF_SIZE, GFP_KERNEL);
+	if (!data_buf)
+		return NULL;
+
+	ccwa = (struct cc_workarea *)&data_buf[0];
 	ccwa->drc_index = drc_index;
 	ccwa->zero = 0;

+	do {
+		/* Since we release the rtas_data_buf lock between configure
+		 * connector calls we want to re-populate the rtas_data_buffer
+		 * with the contents of the previous call.
+		 */
+		spin_lock(&rtas_data_buf_lock);
+
+		memcpy(rtas_data_buf, data_buf, RTAS_DATA_BUF_SIZE);
 		rc = rtas_call(cc_token, 2, 1, NULL, rtas_data_buf, NULL);
-	while (rc) {
+		memcpy(data_buf, rtas_data_buf, RTAS_DATA_BUF_SIZE);
+
+		spin_unlock(&rtas_data_buf_lock);
+
 		switch (rc) {
 		case NEXT_SIBLING:
 			dn = dlpar_parse_cc_node(ccwa);
@@ -197,20 +212,21 @@ struct device_node *dlpar_configure_connector(u32 drc_index)
 			       "returned from configure-connector\n", rc);
 			goto cc_error;
 		}
-
-		rc = rtas_call(cc_token, 2, 1, NULL, rtas_data_buf, NULL);
-	}
-
-	spin_unlock(&rtas_data_buf_lock);
-	return first_dn;
+	} while (rc);

 cc_error:
+	kfree(data_buf);
+
+	if (rc) {
 		if (first_dn)
 			dlpar_free_cc_nodes(first_dn);
-	spin_unlock(&rtas_data_buf_lock);
-	return NULL;
+
+		return NULL;
+	}
+
+	return first_dn;
 }

 static struct device_node *derive_parent(const char *path)
 {
 	struct device_node *parent;

@@ -399,6 +399,8 @@ DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1013E, quirk_fsl_pcie_header);
 DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1013, quirk_fsl_pcie_header);
 DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1020E, quirk_fsl_pcie_header);
 DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1020, quirk_fsl_pcie_header);
+DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1021E, quirk_fsl_pcie_header);
+DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1021, quirk_fsl_pcie_header);
 DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1022E, quirk_fsl_pcie_header);
 DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P1022, quirk_fsl_pcie_header);
 DECLARE_PCI_FIXUP_HEADER(0x1957, PCI_DEVICE_ID_P2010E, quirk_fsl_pcie_header);

@@ -240,12 +240,13 @@ struct rio_priv {

 static void __iomem *rio_regs_win;

+#ifdef CONFIG_E500
 static int (*saved_mcheck_exception)(struct pt_regs *regs);

 static int fsl_rio_mcheck_exception(struct pt_regs *regs)
 {
 	const struct exception_table_entry *entry = NULL;
-	unsigned long reason = (mfspr(SPRN_MCSR) & MCSR_MASK);
+	unsigned long reason = mfspr(SPRN_MCSR);

 	if (reason & MCSR_BUS_RBERR) {
 		reason = in_be32((u32 *)(rio_regs_win + RIO_LTLEDCSR));
@@ -269,6 +270,7 @@ static int fsl_rio_mcheck_exception(struct pt_regs *regs)
 	else
 		return cur_cpu_spec->machine_check(regs);
 }
+#endif

 /**
  * fsl_rio_doorbell_send - Send a MPC85xx doorbell message
@@ -1517,8 +1519,10 @@ int fsl_rio_setup(struct platform_device *dev)
 	fsl_rio_doorbell_init(port);
 	fsl_rio_port_write_init(port);

+#ifdef CONFIG_E500
 	saved_mcheck_exception = ppc_md.machine_check_exception;
 	ppc_md.machine_check_exception = fsl_rio_mcheck_exception;
+#endif
 	/* Ensure that RFXE is set */
 	mtspr(SPRN_HID1, (mfspr(SPRN_HID1) | 0x20000));

@@ -640,6 +640,7 @@ unsigned int qe_get_num_of_snums(void)
 		if ((num_of_snums < 28) || (num_of_snums > QE_NUM_OF_SNUM)) {
 			/* No QE ever has fewer than 28 SNUMs */
 			pr_err("QE: number of snum is invalid\n");
+			of_node_put(qe);
 			return -EINVAL;
 		}
 	}

@@ -166,7 +166,6 @@ sparc_breakpoint (struct pt_regs *regs)
 {
 	siginfo_t info;

-	lock_kernel();
 #ifdef DEBUG_SPARC_BREAKPOINT
 	printk ("TRAP: Entering kernel PC=%x, nPC=%x\n", regs->pc, regs->npc);
 #endif
@@ -180,7 +179,6 @@ sparc_breakpoint (struct pt_regs *regs)
 #ifdef DEBUG_SPARC_BREAKPOINT
 	printk ("TRAP: Returning to space: PC=%x nPC=%x\n", regs->pc, regs->npc);
 #endif
-	unlock_kernel();
 }

 asmlinkage int

@@ -323,7 +323,6 @@ asmlinkage void user_unaligned_trap(struct pt_regs *regs, unsigned int insn)
 {
 	enum direction dir;

-	lock_kernel();
 	if(!(current->thread.flags & SPARC_FLAG_UNALIGNED) ||
 	   (((insn >> 30) & 3) != 3))
 		goto kill_user;
@@ -377,5 +376,5 @@ asmlinkage void user_unaligned_trap(struct pt_regs *regs, unsigned int insn)
 kill_user:
 	user_mna_trap_fault(regs, insn);
 out:
-	unlock_kernel();
+	;
 }

@@ -112,7 +112,6 @@ void try_to_clear_window_buffer(struct pt_regs *regs, int who)
 	struct thread_info *tp = current_thread_info();
 	int window;

-	lock_kernel();
 	flush_user_windows();
 	for(window = 0; window < tp->w_saved; window++) {
 		unsigned long sp = tp->rwbuf_stkptrs[window];
@@ -123,5 +122,4 @@ void try_to_clear_window_buffer(struct pt_regs *regs, int who)
 			do_exit(SIGILL);
 	}
 	tp->w_saved = 0;
-	unlock_kernel();
 }

@@ -26,11 +26,11 @@
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>

-void *
+void __iomem *
 iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot);

 void
-iounmap_atomic(void *kvaddr, enum km_type type);
+iounmap_atomic(void __iomem *kvaddr, enum km_type type);

 int
 iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot);

@@ -152,9 +152,14 @@ struct x86_emulate_ops {
 struct operand {
 	enum { OP_REG, OP_MEM, OP_IMM, OP_NONE } type;
 	unsigned int bytes;
-	unsigned long orig_val, *ptr;
+	union {
+		unsigned long orig_val;
+		u64 orig_val64;
+	};
+	unsigned long *ptr;
 	union {
 		unsigned long val;
+		u64 val64;
 		char valptr[sizeof(unsigned long) + 2];
 	};
 };

@@ -27,6 +27,9 @@ extern struct pci_bus *pci_scan_bus_on_node(int busno, struct pci_ops *ops,
 					    int node);
 extern struct pci_bus *pci_scan_bus_with_sysdata(int busno);

+#ifdef CONFIG_PCI
+
+#ifdef CONFIG_PCI_DOMAINS
 static inline int pci_domain_nr(struct pci_bus *bus)
 {
 	struct pci_sysdata *sd = bus->sysdata;
@@ -37,13 +40,12 @@ static inline int pci_proc_domain(struct pci_bus *bus)
 {
 	return pci_domain_nr(bus);
 }
+#endif

 /* Can be used to override the logic in pci_scan_bus for skipping
    already-configured bus numbers - to be used for buggy BIOSes
    or architectures with incomplete PCI setup by the loader */

-#ifdef CONFIG_PCI
 extern unsigned int pcibios_assign_all_busses(void);
 extern int pci_legacy_init(void);
 # ifdef CONFIG_ACPI

@@ -530,7 +530,7 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
 		err = -ENOMEM;
 		goto out;
 	}
-	if (!alloc_cpumask_var(&b->cpus, GFP_KERNEL)) {
+	if (!zalloc_cpumask_var(&b->cpus, GFP_KERNEL)) {
 		kfree(b);
 		err = -ENOMEM;
 		goto out;
@@ -543,7 +543,7 @@ static __cpuinit int threshold_create_bank(unsigned int cpu, unsigned int bank)
 #ifndef CONFIG_SMP
 	cpumask_setall(b->cpus);
 #else
-	cpumask_copy(b->cpus, c->llc_shared_map);
+	cpumask_set_cpu(cpu, b->cpus);
 #endif

 	per_cpu(threshold_banks, cpu)[bank] = b;

@@ -202,10 +202,11 @@ static int therm_throt_process(bool new_event, int event, int level)

 #ifdef CONFIG_SYSFS
 /* Add/Remove thermal_throttle interface for CPU device: */
-static __cpuinit int thermal_throttle_add_dev(struct sys_device *sys_dev)
+static __cpuinit int thermal_throttle_add_dev(struct sys_device *sys_dev,
+				unsigned int cpu)
 {
 	int err;
-	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
+	struct cpuinfo_x86 *c = &cpu_data(cpu);

 	err = sysfs_create_group(&sys_dev->kobj, &thermal_attr_group);
 	if (err)
@@ -251,7 +252,7 @@ thermal_throttle_cpu_callback(struct notifier_block *nfb,
 	case CPU_UP_PREPARE:
 	case CPU_UP_PREPARE_FROZEN:
 		mutex_lock(&therm_cpu_lock);
-		err = thermal_throttle_add_dev(sys_dev);
+		err = thermal_throttle_add_dev(sys_dev, cpu);
 		mutex_unlock(&therm_cpu_lock);
 		WARN_ON(err);
 		break;
@@ -287,7 +288,7 @@ static __init int thermal_throttle_init_device(void)
 #endif
 	/* connect live CPUs to sysfs */
 	for_each_online_cpu(cpu) {
-		err = thermal_throttle_add_dev(get_cpu_sysdev(cpu));
+		err = thermal_throttle_add_dev(get_cpu_sysdev(cpu), cpu);
 		WARN_ON(err);
 	}
 #ifdef CONFIG_HOTPLUG_CPU

@@ -1154,7 +1154,7 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
 		/*
 		 * event overflow
 		 */
-		handled = 1;
+		handled++;
 		data.period = event->hw.last_period;

 		if (!x86_perf_event_set_period(event))
@@ -1200,12 +1200,20 @@ void perf_events_lapic_init(void)
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
 }

+struct pmu_nmi_state {
+	unsigned int	marked;
+	int		handled;
+};
+
+static DEFINE_PER_CPU(struct pmu_nmi_state, pmu_nmi);
+
 static int __kprobes
 perf_event_nmi_handler(struct notifier_block *self,
 			 unsigned long cmd, void *__args)
 {
 	struct die_args *args = __args;
-	struct pt_regs *regs;
+	unsigned int this_nmi;
+	int handled;

 	if (!atomic_read(&active_events))
 		return NOTIFY_DONE;
@@ -1214,22 +1222,47 @@ perf_event_nmi_handler(struct notifier_block *self,
 	case DIE_NMI:
 	case DIE_NMI_IPI:
 		break;
+	case DIE_NMIUNKNOWN:
+		this_nmi = percpu_read(irq_stat.__nmi_count);
+		if (this_nmi != __get_cpu_var(pmu_nmi).marked)
+			/* let the kernel handle the unknown nmi */
+			return NOTIFY_DONE;
+		/*
+		 * This one is a PMU back-to-back nmi. Two events
+		 * trigger 'simultaneously' raising two back-to-back
+		 * NMIs. If the first NMI handles both, the latter
+		 * will be empty and daze the CPU. So, we drop it to
+		 * avoid false-positive 'unknown nmi' messages.
+		 */
+		return NOTIFY_STOP;
 	default:
 		return NOTIFY_DONE;
 	}

-	regs = args->regs;
-
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
+
+	handled = x86_pmu.handle_irq(args->regs);
+	if (!handled)
+		return NOTIFY_DONE;
+
+	this_nmi = percpu_read(irq_stat.__nmi_count);
+	if ((handled > 1) ||
+		/* the next nmi could be a back-to-back nmi */
+	    ((__get_cpu_var(pmu_nmi).marked == this_nmi) &&
+	     (__get_cpu_var(pmu_nmi).handled > 1))) {
+		/*
+		 * We could have two subsequent back-to-back nmis: The
+		 * first handles more than one counter, the 2nd
+		 * handles only one counter and the 3rd handles no
+		 * counter.
+		 *
+		 * This is the 2nd nmi because the previous was
+		 * handling more than one counter. We will mark the
+		 * next (3rd) and then drop it if unhandled.
+		 */
+		__get_cpu_var(pmu_nmi).marked = this_nmi + 1;
+		__get_cpu_var(pmu_nmi).handled = handled;
+	}

-	/*
-	 * Can't rely on the handled return value to say it was our NMI, two
-	 * events could trigger 'simultaneously' raising two back-to-back NMIs.
-	 *
-	 * If the first NMI handles both, the latter will be empty and daze
-	 * the CPU.
-	 */
-	x86_pmu.handle_irq(regs);
-
 	return NOTIFY_STOP;
 }
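
To make the per-cpu bookkeeping concrete, an illustrative sequence (the NMI numbers are invented; the transitions follow the code above):

/*
 * NMI #7: x86_pmu.handle_irq() reports 2 counters handled
 *         -> pmu_nmi.marked = 8, pmu_nmi.handled = 2
 * NMI #8: nothing left to handle, so the kernel re-enters this
 *         notifier with DIE_NMIUNKNOWN; this_nmi (8) == marked (8),
 *         so the NMI is swallowed (NOTIFY_STOP) instead of dazing
 *         the CPU with an 'unknown nmi' message
 * NMI #9: also unknown, but 9 != marked, so it is handed back to
 *         the kernel (NOTIFY_DONE) as a genuine unknown NMI
 */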

@@ -712,7 +712,8 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	struct perf_sample_data data;
 	struct cpu_hw_events *cpuc;
 	int bit, loops;
-	u64 ack, status;
+	u64 status;
+	int handled = 0;

 	perf_sample_data_init(&data, 0);

@@ -728,6 +729,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)

 	loops = 0;
 again:
+	intel_pmu_ack_status(status);
 	if (++loops > 100) {
 		WARN_ONCE(1, "perfevents: irq loop stuck!\n");
 		perf_event_print_debug();
@@ -736,19 +738,22 @@ again:
 	}

 	inc_irq_stat(apic_perf_irqs);
-	ack = status;

 	intel_pmu_lbr_read();

 	/*
 	 * PEBS overflow sets bit 62 in the global status register
 	 */
-	if (__test_and_clear_bit(62, (unsigned long *)&status))
+	if (__test_and_clear_bit(62, (unsigned long *)&status)) {
+		handled++;
 		x86_pmu.drain_pebs(regs);
+	}

 	for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
 		struct perf_event *event = cpuc->events[bit];

+		handled++;
+
 		if (!test_bit(bit, cpuc->active_mask))
 			continue;

@@ -761,8 +766,6 @@ again:
 		x86_pmu_stop(event);
 	}

-	intel_pmu_ack_status(ack);
-
 	/*
 	 * Repeat if there is more work to be done:
 	 */
@@ -772,7 +775,7 @@ again:
 done:
 	intel_pmu_enable_all(0);

-	return 1;
+	return handled;
 }

@@ -692,7 +692,7 @@ static int p4_pmu_handle_irq(struct pt_regs *regs)
 		inc_irq_stat(apic_perf_irqs);
 	}

-	return handled > 0;
+	return handled;
 }

 /*

@@ -45,8 +45,7 @@ void __init setup_trampoline_page_table(void)
 	/* Copy kernel address range */
 	clone_pgd_range(trampoline_pg_dir + KERNEL_PGD_BOUNDARY,
 			swapper_pg_dir + KERNEL_PGD_BOUNDARY,
-			min_t(unsigned long, KERNEL_PGD_PTRS,
-			      KERNEL_PGD_BOUNDARY));
+			KERNEL_PGD_PTRS);

 	/* Initialize low mappings */
 	clone_pgd_range(trampoline_pg_dir,

@@ -655,7 +655,7 @@ void restore_sched_clock_state(void)

 	local_irq_save(flags);

-	get_cpu_var(cyc2ns_offset) = 0;
+	__get_cpu_var(cyc2ns_offset) = 0;
 	offset = cyc2ns_suspend - sched_clock();

 	for_each_possible_cpu(cpu)

@@ -1870,16 +1870,15 @@ static inline int emulate_grp9(struct x86_emulate_ctxt *ctxt,
 			       struct x86_emulate_ops *ops)
 {
 	struct decode_cache *c = &ctxt->decode;
-	u64 old = c->dst.orig_val;
+	u64 old = c->dst.orig_val64;

 	if (((u32) (old >> 0) != (u32) c->regs[VCPU_REGS_RAX]) ||
 	    ((u32) (old >> 32) != (u32) c->regs[VCPU_REGS_RDX])) {
 		c->regs[VCPU_REGS_RAX] = (u32) (old >> 0);
 		c->regs[VCPU_REGS_RDX] = (u32) (old >> 32);
 		ctxt->eflags &= ~EFLG_ZF;
 	} else {
-		c->dst.val = ((u64)c->regs[VCPU_REGS_RCX] << 32) |
+		c->dst.val64 = ((u64)c->regs[VCPU_REGS_RCX] << 32) |
 		       (u32) c->regs[VCPU_REGS_RBX];

 		ctxt->eflags |= EFLG_ZF;
@@ -2616,7 +2615,7 @@ x86_emulate_insn(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops)
 					c->src.valptr, c->src.bytes);
 		if (rc != X86EMUL_CONTINUE)
 			goto done;
-		c->src.orig_val = c->src.val;
+		c->src.orig_val64 = c->src.val64;
 	}

 	if (c->src2.type == OP_MEM) {

@@ -64,6 +64,9 @@ static void pic_unlock(struct kvm_pic *s)
 		if (!found)
 			found = s->kvm->bsp_vcpu;

+		if (!found)
+			return;
+
 		kvm_vcpu_kick(found);
 	}
 }

@@ -43,7 +43,6 @@ struct kvm_kpic_state {
 	u8 irr;		/* interrupt request register */
 	u8 imr;		/* interrupt mask register */
 	u8 isr;		/* interrupt service register */
-	u8 isr_ack;	/* interrupt ack detection */
 	u8 priority_add;	/* highest irq priority */
 	u8 irq_base;
 	u8 read_reg_select;
@@ -56,6 +55,7 @@ struct kvm_kpic_state {
 	u8 init4;		/* true if 4 byte init */
 	u8 elcr;		/* PIIX edge/trigger selection */
 	u8 elcr_mask;
+	u8 isr_ack;	/* interrupt ack detection */
 	struct kvm_pic *pics_state;
 };

@@ -74,7 +74,7 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot)
 /*
  * Map 'pfn' using fixed map 'type' and protections 'prot'
  */
-void *
+void __iomem *
 iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot)
 {
 	/*
@@ -86,12 +86,12 @@ iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot)
 	if (!pat_enabled && pgprot_val(prot) == pgprot_val(PAGE_KERNEL_WC))
 		prot = PAGE_KERNEL_UC_MINUS;

-	return kmap_atomic_prot_pfn(pfn, type, prot);
+	return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, type, prot);
 }
 EXPORT_SYMBOL_GPL(iomap_atomic_prot_pfn);

 void
-iounmap_atomic(void *kvaddr, enum km_type type)
+iounmap_atomic(void __iomem *kvaddr, enum km_type type)
 {
 	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
 	enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();

@@ -568,8 +568,13 @@ static int __init init_sysfs(void)
 	int error;

 	error = sysdev_class_register(&oprofile_sysclass);
-	if (!error)
-		error = sysdev_register(&device_oprofile);
+	if (error)
+		return error;
+
+	error = sysdev_register(&device_oprofile);
+	if (error)
+		sysdev_class_unregister(&oprofile_sysclass);
+
 	return error;
 }

@@ -580,8 +585,10 @@ static void exit_sysfs(void)
 }

 #else
-#define init_sysfs() do { } while (0)
-#define exit_sysfs() do { } while (0)
+
+static inline int init_sysfs(void) { return 0; }
+static inline void exit_sysfs(void) { }
+
 #endif /* CONFIG_PM */

 static int __init p4_init(char **cpu_type)
@@ -695,6 +702,8 @@ int __init op_nmi_init(struct oprofile_operations *ops)
 	char *cpu_type = NULL;
 	int ret = 0;

+	using_nmi = 0;
+
 	if (!cpu_has_apic)
 		return -ENODEV;

@@ -774,7 +783,10 @@ int __init op_nmi_init(struct oprofile_operations *ops)

 	mux_init(ops);

-	init_sysfs();
+	ret = init_sysfs();
+	if (ret)
+		return ret;
+
 	using_nmi = 1;
 	printk(KERN_INFO "oprofile: using NMI interrupt.\n");
 	return 0;

@@ -966,7 +966,7 @@ blkiocg_create(struct cgroup_subsys *subsys, struct cgroup *cgroup)

 	/* Currently we do not support hierarchy deeper than two level (0,1) */
 	if (parent != cgroup->top_cgroup)
-		return ERR_PTR(-EINVAL);
+		return ERR_PTR(-EPERM);

 	blkcg = kzalloc(sizeof(*blkcg), GFP_KERNEL);
 	if (!blkcg)

@@ -1198,9 +1198,9 @@ static int __make_request(struct request_queue *q, struct bio *bio)
 	int el_ret;
 	unsigned int bytes = bio->bi_size;
 	const unsigned short prio = bio_prio(bio);
-	const bool sync = (bio->bi_rw & REQ_SYNC);
-	const bool unplug = (bio->bi_rw & REQ_UNPLUG);
-	const unsigned int ff = bio->bi_rw & REQ_FAILFAST_MASK;
+	const bool sync = !!(bio->bi_rw & REQ_SYNC);
+	const bool unplug = !!(bio->bi_rw & REQ_UNPLUG);
+	const unsigned long ff = bio->bi_rw & REQ_FAILFAST_MASK;
 	int rw_flags;

 	if ((bio->bi_rw & REQ_HARDBARRIER) &&

@@ -511,6 +511,7 @@ int blk_register_queue(struct gendisk *disk)
 		kobject_uevent(&q->kobj, KOBJ_REMOVE);
 		kobject_del(&q->kobj);
 		blk_trace_remove_sysfs(disk_to_dev(disk));
+		kobject_put(&dev->kobj);
 		return ret;
 	}

@@ -142,14 +142,18 @@ static inline int queue_congestion_off_threshold(struct request_queue *q)

 static inline int blk_cpu_to_group(int cpu)
 {
+	int group = NR_CPUS;
 #ifdef CONFIG_SCHED_MC
 	const struct cpumask *mask = cpu_coregroup_mask(cpu);
-	return cpumask_first(mask);
+	group = cpumask_first(mask);
 #elif defined(CONFIG_SCHED_SMT)
-	return cpumask_first(topology_thread_cpumask(cpu));
+	group = cpumask_first(topology_thread_cpumask(cpu));
 #else
 	return cpu;
 #endif
+	if (likely(group < NR_CPUS))
+		return group;
+	return cpu;
 }

 /*

@@ -30,6 +30,7 @@ static const int cfq_slice_sync = HZ / 10;
 static int cfq_slice_async = HZ / 25;
 static const int cfq_slice_async_rq = 2;
 static int cfq_slice_idle = HZ / 125;
+static int cfq_group_idle = HZ / 125;
 static const int cfq_target_latency = HZ * 3/10; /* 300 ms */
 static const int cfq_hist_divisor = 4;

@@ -147,6 +148,8 @@ struct cfq_queue {
 	struct cfq_queue *new_cfqq;
 	struct cfq_group *cfqg;
 	struct cfq_group *orig_cfqg;
+	/* Number of sectors dispatched from queue in single dispatch round */
+	unsigned long nr_sectors;
 };

 /*
@@ -198,6 +201,8 @@ struct cfq_group {
 	struct hlist_node cfqd_node;
 	atomic_t ref;
 #endif
+	/* number of requests that are on the dispatch list or inside driver */
+	int dispatched;
 };

 /*
@@ -271,6 +276,7 @@ struct cfq_data {
 	unsigned int cfq_slice[2];
 	unsigned int cfq_slice_async_rq;
 	unsigned int cfq_slice_idle;
+	unsigned int cfq_group_idle;
 	unsigned int cfq_latency;
 	unsigned int cfq_group_isolation;

@@ -378,6 +384,21 @@ CFQ_CFQQ_FNS(wait_busy);
 			&cfqg->service_trees[i][j]: NULL) \

+static inline bool iops_mode(struct cfq_data *cfqd)
+{
+	/*
+	 * If we are not idling on queues and it is a NCQ drive, parallel
+	 * execution of requests is on and measuring time is not possible
+	 * in most of the cases until and unless we drive shallower queue
+	 * depths and that becomes a performance bottleneck. In such cases
+	 * switch to start providing fairness in terms of number of IOs.
+	 */
+	if (!cfqd->cfq_slice_idle && cfqd->hw_tag)
+		return true;
+	else
+		return false;
+}
+
 static inline enum wl_prio_t cfqq_prio(struct cfq_queue *cfqq)
 {
 	if (cfq_class_idle(cfqq))
@@ -906,7 +927,6 @@ static inline unsigned int cfq_cfqq_slice_usage(struct cfq_queue *cfqq)
 		slice_used = cfqq->allocated_slice;
 	}

-	cfq_log_cfqq(cfqq->cfqd, cfqq, "sl_used=%u", slice_used);
 	return slice_used;
 }

@@ -914,19 +934,21 @@ static void cfq_group_served(struct cfq_data *cfqd, struct cfq_group *cfqg,
 				struct cfq_queue *cfqq)
 {
 	struct cfq_rb_root *st = &cfqd->grp_service_tree;
-	unsigned int used_sl, charge_sl;
+	unsigned int used_sl, charge;
 	int nr_sync = cfqg->nr_cfqq - cfqg_busy_async_queues(cfqd, cfqg)
 			- cfqg->service_tree_idle.count;

 	BUG_ON(nr_sync < 0);
-	used_sl = charge_sl = cfq_cfqq_slice_usage(cfqq);
+	used_sl = charge = cfq_cfqq_slice_usage(cfqq);

-	if (!cfq_cfqq_sync(cfqq) && !nr_sync)
-		charge_sl = cfqq->allocated_slice;
+	if (iops_mode(cfqd))
+		charge = cfqq->slice_dispatch;
+	else if (!cfq_cfqq_sync(cfqq) && !nr_sync)
+		charge = cfqq->allocated_slice;

 	/* Can't update vdisktime while group is on service tree */
 	cfq_rb_erase(&cfqg->rb_node, st);
-	cfqg->vdisktime += cfq_scale_slice(charge_sl, cfqg);
+	cfqg->vdisktime += cfq_scale_slice(charge, cfqg);
 	__cfq_group_service_tree_add(st, cfqg);

 	/* This group is being expired. Save the context */
@@ -940,6 +962,9 @@ static void cfq_group_served(struct cfq_data *cfqd, struct cfq_group *cfqg,
 	cfq_log_cfqg(cfqd, cfqg, "served: vt=%llu min_vt=%llu", cfqg->vdisktime,
 					st->min_vdisktime);
+	cfq_log_cfqq(cfqq->cfqd, cfqq, "sl_used=%u disp=%u charge=%u iops=%u"
+			" sect=%u", used_sl, cfqq->slice_dispatch, charge,
+			iops_mode(cfqd), cfqq->nr_sectors);
 	cfq_blkiocg_update_timeslice_used(&cfqg->blkg, used_sl);
 	cfq_blkiocg_set_start_empty_time(&cfqg->blkg);
 }
@@ -1587,6 +1612,7 @@ static void __cfq_set_active_queue(struct cfq_data *cfqd,
 		cfqq->allocated_slice = 0;
 		cfqq->slice_end = 0;
 		cfqq->slice_dispatch = 0;
+		cfqq->nr_sectors = 0;

 		cfq_clear_cfqq_wait_request(cfqq);
 		cfq_clear_cfqq_must_dispatch(cfqq);
@@ -1839,6 +1865,9 @@ static bool cfq_should_idle(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 	BUG_ON(!service_tree);
 	BUG_ON(!service_tree->count);

+	if (!cfqd->cfq_slice_idle)
+		return false;
+
 	/* We never do for idle class queues. */
 	if (prio == IDLE_WORKLOAD)
 		return false;
@@ -1863,7 +1892,7 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
 {
 	struct cfq_queue *cfqq = cfqd->active_queue;
 	struct cfq_io_context *cic;
-	unsigned long sl;
+	unsigned long sl, group_idle = 0;

 	/*
 	 * SSD device without seek penalty, disable idling. But only do so
@@ -1879,8 +1908,13 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
 	/*
 	 * idle is disabled, either manually or by past process history
 	 */
-	if (!cfqd->cfq_slice_idle || !cfq_should_idle(cfqd, cfqq))
-		return;
+	if (!cfq_should_idle(cfqd, cfqq)) {
+		/* no queue idling. Check for group idling */
+		if (cfqd->cfq_group_idle)
+			group_idle = cfqd->cfq_group_idle;
+		else
+			return;
+	}

 	/*
 	 * still active requests from this queue, don't idle
@@ -1907,13 +1941,21 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd)
 		return;
 	}

+	/* There are other queues in the group, don't do group idle */
+	if (group_idle && cfqq->cfqg->nr_cfqq > 1)
+		return;
+
 	cfq_mark_cfqq_wait_request(cfqq);

-	sl = cfqd->cfq_slice_idle;
+	if (group_idle)
+		sl = cfqd->cfq_group_idle;
+	else
+		sl = cfqd->cfq_slice_idle;

 	mod_timer(&cfqd->idle_slice_timer, jiffies + sl);
 	cfq_blkiocg_update_set_idle_time_stats(&cfqq->cfqg->blkg);
-	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu", sl);
+	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu group_idle: %d", sl,
+			group_idle ? 1 : 0);
 }

 /*
@@ -1929,9 +1971,11 @@ static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
 	cfqq->next_rq = cfq_find_next_rq(cfqd, cfqq, rq);
 	cfq_remove_request(rq);
 	cfqq->dispatched++;
+	(RQ_CFQG(rq))->dispatched++;
 	elv_dispatch_sort(q, rq);

 	cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]++;
+	cfqq->nr_sectors += blk_rq_sectors(rq);
 	cfq_blkiocg_update_dispatch_stats(&cfqq->cfqg->blkg, blk_rq_bytes(rq),
 					rq_data_dir(rq), rq_is_sync(rq));
 }
@@ -2198,7 +2242,7 @@ static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
 			cfqq = NULL;
 			goto keep_queue;
 		} else
-			goto expire;
+			goto check_group_idle;
 	}

 	/*
@@ -2226,8 +2270,23 @@ static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
 	 * flight or is idling for a new request, allow either of these
 	 * conditions to happen (or time out) before selecting a new queue.
 	 */
-	if (timer_pending(&cfqd->idle_slice_timer) ||
-	    (cfqq->dispatched && cfq_should_idle(cfqd, cfqq))) {
+	if (timer_pending(&cfqd->idle_slice_timer)) {
+		cfqq = NULL;
+		goto keep_queue;
+	}
+
+	if (cfqq->dispatched && cfq_should_idle(cfqd, cfqq)) {
+		cfqq = NULL;
+		goto keep_queue;
+	}
+
+	/*
+	 * If group idle is enabled and there are requests dispatched from
+	 * this group, wait for requests to complete.
+	 */
+check_group_idle:
+	if (cfqd->cfq_group_idle && cfqq->cfqg->nr_cfqq == 1
+	    && cfqq->cfqg->dispatched) {
 		cfqq = NULL;
 		goto keep_queue;
 	}
@@ -3375,6 +3434,7 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
 	WARN_ON(!cfqq->dispatched);
 	cfqd->rq_in_driver--;
 	cfqq->dispatched--;
+	(RQ_CFQG(rq))->dispatched--;
 	cfq_blkiocg_update_completion_stats(&cfqq->cfqg->blkg,
 			rq_start_time_ns(rq), rq_io_start_time_ns(rq),
 			rq_data_dir(rq), rq_is_sync(rq));
@@ -3404,7 +3464,10 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
 		 * the queue.
 		 */
 		if (cfq_should_wait_busy(cfqd, cfqq)) {
-			cfqq->slice_end = jiffies + cfqd->cfq_slice_idle;
+			unsigned long extend_sl = cfqd->cfq_slice_idle;
+			if (!cfqd->cfq_slice_idle)
+				extend_sl = cfqd->cfq_group_idle;
+			cfqq->slice_end = jiffies + extend_sl;
 			cfq_mark_cfqq_wait_busy(cfqq);
 			cfq_log_cfqq(cfqd, cfqq, "will busy wait");
 		}
@@ -3850,6 +3913,7 @@ static void *cfq_init_queue(struct request_queue *q)
 	cfqd->cfq_slice[1] = cfq_slice_sync;
 	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
 	cfqd->cfq_slice_idle = cfq_slice_idle;
+	cfqd->cfq_group_idle = cfq_group_idle;
 	cfqd->cfq_latency = 1;
 	cfqd->cfq_group_isolation = 0;
 	cfqd->hw_tag = -1;
@@ -3922,6 +3986,7 @@ SHOW_FUNCTION(cfq_fifo_expire_async_show, cfqd->cfq_fifo_expire[0], 1);
 SHOW_FUNCTION(cfq_back_seek_max_show, cfqd->cfq_back_max, 0);
 SHOW_FUNCTION(cfq_back_seek_penalty_show, cfqd->cfq_back_penalty, 0);
 SHOW_FUNCTION(cfq_slice_idle_show, cfqd->cfq_slice_idle, 1);
+SHOW_FUNCTION(cfq_group_idle_show, cfqd->cfq_group_idle, 1);
 SHOW_FUNCTION(cfq_slice_sync_show, cfqd->cfq_slice[1], 1);
 SHOW_FUNCTION(cfq_slice_async_show, cfqd->cfq_slice[0], 1);
 SHOW_FUNCTION(cfq_slice_async_rq_show, cfqd->cfq_slice_async_rq, 0);
@@ -3954,6 +4019,7 @@ STORE_FUNCTION(cfq_back_seek_max_store, &cfqd->cfq_back_max, 0, UINT_MAX, 0);
 STORE_FUNCTION(cfq_back_seek_penalty_store, &cfqd->cfq_back_penalty, 1,
 		UINT_MAX, 0);
 STORE_FUNCTION(cfq_slice_idle_store, &cfqd->cfq_slice_idle, 0, UINT_MAX, 1);
+STORE_FUNCTION(cfq_group_idle_store, &cfqd->cfq_group_idle, 0, UINT_MAX, 1);
 STORE_FUNCTION(cfq_slice_sync_store, &cfqd->cfq_slice[1], 1, UINT_MAX, 1);
 STORE_FUNCTION(cfq_slice_async_store, &cfqd->cfq_slice[0], 1, UINT_MAX, 1);
 STORE_FUNCTION(cfq_slice_async_rq_store, &cfqd->cfq_slice_async_rq, 1,
@@ -3975,6 +4041,7 @@ static struct elv_fs_entry cfq_attrs[] = {
 	CFQ_ATTR(slice_async),
 	CFQ_ATTR(slice_async_rq),
 	CFQ_ATTR(slice_idle),
+	CFQ_ATTR(group_idle),
 	CFQ_ATTR(low_latency),
 	CFQ_ATTR(group_isolation),
 	__ATTR_NULL
@@ -4028,6 +4095,12 @@ static int __init cfq_init(void)
 	if (!cfq_slice_idle)
 		cfq_slice_idle = 1;

+#ifdef CONFIG_CFQ_GROUP_IOSCHED
+	if (!cfq_group_idle)
+		cfq_group_idle = 1;
+#else
+	cfq_group_idle = 0;
+#endif
 	if (cfq_slab_setup())
 		return -ENOMEM;
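
Condensing the scheduling change above into one place — a sketch, not a drop-in function (the helper name is invented; the three branches mirror cfq_group_served()):

/* What a queue is charged for when its group is re-positioned: */
static unsigned int example_charge(struct cfq_data *cfqd,
				   struct cfq_queue *cfqq,
				   unsigned int used_sl, int nr_sync)
{
	if (iops_mode(cfqd))
		return cfqq->slice_dispatch;	/* fairness in IOPS */
	if (!cfq_cfqq_sync(cfqq) && !nr_sync)
		return cfqq->allocated_slice;	/* async-only group */
	return used_sl;				/* default: time used */
}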

@@ -1009,18 +1009,19 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 {
 	struct elevator_queue *old_elevator, *e;
 	void *data;
+	int err;

 	/*
 	 * Allocate new elevator
 	 */
 	e = elevator_alloc(q, new_e);
 	if (!e)
-		return 0;
+		return -ENOMEM;

 	data = elevator_init_queue(q, e);
 	if (!data) {
 		kobject_put(&e->kobj);
-		return 0;
+		return -ENOMEM;
 	}

 	/*
@@ -1043,7 +1044,8 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)

 	__elv_unregister_queue(old_elevator);

-	if (elv_register_queue(q))
+	err = elv_register_queue(q);
+	if (err)
 		goto fail_register;

 	/*
@@ -1056,7 +1058,7 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)

 	blk_add_trace_msg(q, "elv switch: %s", e->elevator_type->elevator_name);

-	return 1;
+	return 0;

 fail_register:
 	/*
@@ -1071,17 +1073,19 @@ fail_register:
 	queue_flag_clear(QUEUE_FLAG_ELVSWITCH, q);
 	spin_unlock_irq(q->queue_lock);

-	return 0;
+	return err;
 }

-ssize_t elv_iosched_store(struct request_queue *q, const char *name,
-			  size_t count)
+/*
+ * Switch this queue to the given IO scheduler.
+ */
+int elevator_change(struct request_queue *q, const char *name)
 {
 	char elevator_name[ELV_NAME_MAX];
 	struct elevator_type *e;

 	if (!q->elevator)
-		return count;
+		return -ENXIO;

 	strlcpy(elevator_name, name, sizeof(elevator_name));
 	e = elevator_get(strstrip(elevator_name));
@@ -1092,13 +1096,27 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name,
 	if (!strcmp(elevator_name, q->elevator->elevator_type->elevator_name)) {
 		elevator_put(e);
-		return count;
+		return 0;
 	}

-	if (!elevator_switch(q, e))
-		printk(KERN_ERR "elevator: switch to %s failed\n",
-							elevator_name);
+	return elevator_switch(q, e);
+}
+EXPORT_SYMBOL(elevator_change);
+
+ssize_t elv_iosched_store(struct request_queue *q, const char *name,
+			  size_t count)
+{
+	int ret;
+
+	if (!q->elevator)
+		return count;
+
+	ret = elevator_change(q, name);
+	if (!ret)
+		return count;
+
+	printk(KERN_ERR "elevator: switch to %s failed\n", name);
+	return ret;
 }

 ssize_t elv_iosched_show(struct request_queue *q, char *name)

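elevator_change() is now exported, so kernel code can switch a queue's IO scheduler directly and see a real error code instead of a swallowed printk. A minimal sketch of a caller (the helper below is hypothetical; only elevator_change() and its return convention come from the diff above):

#include <linux/blkdev.h>
#include <linux/elevator.h>

/* Hypothetical driver helper: pin the queue to the noop scheduler. */
static int pin_noop_scheduler(struct request_queue *q)
{
	int err = elevator_change(q, "noop");

	if (err)	/* e.g. -ENXIO, -EINVAL or -ENOMEM, now propagated */
		printk(KERN_WARNING "noop switch failed: %d\n", err);
	return err;
}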
crypto/Kconfig View File

@@ -101,13 +101,13 @@ config CRYPTO_MANAGER2
 	select CRYPTO_BLKCIPHER2
 	select CRYPTO_PCOMP2
 
-config CRYPTO_MANAGER_TESTS
-	bool "Run algolithms' self-tests"
+config CRYPTO_MANAGER_DISABLE_TESTS
+	bool "Disable run-time self tests"
 	default y
 	depends on CRYPTO_MANAGER2
 	help
-	  Run cryptomanager's tests for the new crypto algorithms being
-	  registered.
+	  Disable run-time self tests that normally take place at
+	  algorithm registration.
 
 config CRYPTO_GF128MUL
 	tristate "GF(2^128) multiplication functions (EXPERIMENTAL)"

crypto/ahash.c View File

@@ -47,8 +47,11 @@ static int hash_walk_next(struct crypto_hash_walk *walk)
 	walk->data = crypto_kmap(walk->pg, 0);
 	walk->data += offset;
 
-	if (offset & alignmask)
-		nbytes = alignmask + 1 - (offset & alignmask);
+	if (offset & alignmask) {
+		unsigned int unaligned = alignmask + 1 - (offset & alignmask);
+		if (nbytes > unaligned)
+			nbytes = unaligned;
+	}
 
 	walk->entrylen -= nbytes;
 	return nbytes;

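The old code overwrote nbytes with the distance to the next alignment boundary even when fewer bytes than that were left, so a short buffer at an unaligned offset made the walk claim more data than the request contained. The fix only ever shrinks nbytes. A standalone sketch of the clamp (values invented for illustration):

#include <stdio.h>

static unsigned int clamp_to_alignment(unsigned int nbytes,
				       unsigned int offset,
				       unsigned int alignmask)
{
	if (offset & alignmask) {
		unsigned int unaligned = alignmask + 1 - (offset & alignmask);

		if (nbytes > unaligned)
			nbytes = unaligned;
	}
	return nbytes;
}

int main(void)
{
	/* 3 bytes left, buffer 1 byte past an 8-byte boundary: the
	 * pre-fix logic would have reported 7 bytes. */
	printf("%u\n", clamp_to_alignment(3, 1, 7));	/* prints 3 */
	return 0;
}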
crypto/cryptomgr.c View File

@@ -206,13 +206,16 @@ err:
 	return NOTIFY_OK;
 }
 
-#ifdef CONFIG_CRYPTO_MANAGER_TESTS
 static int cryptomgr_test(void *data)
 {
 	struct crypto_test_param *param = data;
 	u32 type = param->type;
 	int err = 0;
 
+#ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
+	goto skiptest;
+#endif
+
 	if (type & CRYPTO_ALG_TESTED)
 		goto skiptest;
@@ -267,7 +270,6 @@ err_put_module:
 err:
 	return NOTIFY_OK;
 }
-#endif /* CONFIG_CRYPTO_MANAGER_TESTS */
 
 static int cryptomgr_notify(struct notifier_block *this, unsigned long msg,
 			    void *data)
@@ -275,10 +277,8 @@ static int cryptomgr_notify(struct notifier_block *this, unsigned long msg,
 	switch (msg) {
 	case CRYPTO_MSG_ALG_REQUEST:
 		return cryptomgr_schedule_probe(data);
-#ifdef CONFIG_CRYPTO_MANAGER_TESTS
 	case CRYPTO_MSG_ALG_REGISTER:
 		return cryptomgr_schedule_test(data);
-#endif
 	}
 
 	return NOTIFY_DONE;

crypto/testmgr.c View File

@@ -23,7 +23,7 @@
 
 #include "internal.h"
 
-#ifndef CONFIG_CRYPTO_MANAGER_TESTS
+#ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
 
 /* a perfect nop */
 int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
@@ -2542,6 +2542,6 @@ non_fips_alg:
 	return -EINVAL;
 }
 
-#endif /* CONFIG_CRYPTO_MANAGER_TESTS */
+#endif /* CONFIG_CRYPTO_MANAGER_DISABLE_TESTS */
 
 EXPORT_SYMBOL_GPL(alg_test);

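With CONFIG_CRYPTO_MANAGER_DISABLE_TESTS set, the #ifdef above compiles alg_test() down to the advertised "perfect nop". The stub body is elided from the hunk; a sketch of what it plausibly is (returning 0, i.e. "passed", is an assumption consistent with registration proceeding):

#include <linux/types.h>

/* Assumed nop body: report success without running any test vectors. */
int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
{
	return 0;
}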
drivers/acpi/pci_root.c View File

@@ -33,7 +33,6 @@
 #include <linux/pm_runtime.h>
 #include <linux/pci.h>
 #include <linux/pci-acpi.h>
-#include <linux/pci-aspm.h>
 #include <linux/acpi.h>
 #include <linux/slab.h>
 #include <acpi/acpi_bus.h>
@@ -226,22 +225,31 @@ static acpi_status acpi_pci_run_osc(acpi_handle handle,
 	return status;
 }
 
-static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root, u32 flags)
+static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root,
+				      u32 support,
+				      u32 *control)
 {
 	acpi_status status;
-	u32 support_set, result, capbuf[3];
+	u32 result, capbuf[3];
+
+	support &= OSC_PCI_SUPPORT_MASKS;
+	support |= root->osc_support_set;
 
-	/* do _OSC query for all possible controls */
-	support_set = root->osc_support_set | (flags & OSC_PCI_SUPPORT_MASKS);
 	capbuf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE;
-	capbuf[OSC_SUPPORT_TYPE] = support_set;
-	capbuf[OSC_CONTROL_TYPE] = OSC_PCI_CONTROL_MASKS;
+	capbuf[OSC_SUPPORT_TYPE] = support;
+	if (control) {
+		*control &= OSC_PCI_CONTROL_MASKS;
+		capbuf[OSC_CONTROL_TYPE] = *control | root->osc_control_set;
+	} else {
+		/* Run _OSC query for all possible controls. */
+		capbuf[OSC_CONTROL_TYPE] = OSC_PCI_CONTROL_MASKS;
+	}
 
 	status = acpi_pci_run_osc(root->device->handle, capbuf, &result);
 	if (ACPI_SUCCESS(status)) {
-		root->osc_support_set = support_set;
-		root->osc_control_qry = result;
-		root->osc_queried = 1;
+		root->osc_support_set = support;
+		if (control)
+			*control = result;
 	}
 	return status;
 }
@@ -255,7 +263,7 @@ static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags)
 	if (ACPI_FAILURE(status))
 		return status;
 	mutex_lock(&osc_lock);
-	status = acpi_pci_query_osc(root, flags);
+	status = acpi_pci_query_osc(root, flags, NULL);
 	mutex_unlock(&osc_lock);
 	return status;
 }
@@ -365,55 +373,70 @@ out:
 EXPORT_SYMBOL_GPL(acpi_get_pci_dev);
 
 /**
- * acpi_pci_osc_control_set - commit requested control to Firmware
- * @handle: acpi_handle for the target ACPI object
- * @flags: driver's requested control bits
+ * acpi_pci_osc_control_set - Request control of PCI root _OSC features.
+ * @handle: ACPI handle of a PCI root bridge (or PCIe Root Complex).
+ * @mask: Mask of _OSC bits to request control of, place to store control mask.
+ * @req: Mask of _OSC bits the control of is essential to the caller.
  *
- * Attempt to take control from Firmware on requested control bits.
+ * Run _OSC query for @mask and if that is successful, compare the returned
+ * mask of control bits with @req.  If all of the @req bits are set in the
+ * returned mask, run _OSC request for it.
+ *
+ * The variable at the @mask address may be modified regardless of whether or
+ * not the function returns success.  On success it will contain the mask of
+ * _OSC bits the BIOS has granted control of, but its contents are meaningless
+ * on failure.
 **/
-acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 flags)
+acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
 {
-	acpi_status status;
-	u32 control_req, result, capbuf[3];
-	acpi_handle tmp;
 	struct acpi_pci_root *root;
+	acpi_status status;
+	u32 ctrl, capbuf[3];
+	acpi_handle tmp;
 
-	status = acpi_get_handle(handle, "_OSC", &tmp);
-	if (ACPI_FAILURE(status))
-		return status;
+	if (!mask)
+		return AE_BAD_PARAMETER;
 
-	control_req = (flags & OSC_PCI_CONTROL_MASKS);
-	if (!control_req)
+	ctrl = *mask & OSC_PCI_CONTROL_MASKS;
+	if ((ctrl & req) != req)
 		return AE_TYPE;
 
 	root = acpi_pci_find_root(handle);
 	if (!root)
 		return AE_NOT_EXIST;
 
+	status = acpi_get_handle(handle, "_OSC", &tmp);
+	if (ACPI_FAILURE(status))
+		return status;
+
 	mutex_lock(&osc_lock);
+
+	*mask = ctrl | root->osc_control_set;
+
 	/* No need to evaluate _OSC if the control was already granted. */
-	if ((root->osc_control_set & control_req) == control_req)
+	if ((root->osc_control_set & ctrl) == ctrl)
 		goto out;
 
-	/* Need to query controls first before requesting them */
-	if (!root->osc_queried) {
-		status = acpi_pci_query_osc(root, root->osc_support_set);
+	/* Need to check the available controls bits before requesting them. */
+	while (*mask) {
+		status = acpi_pci_query_osc(root, root->osc_support_set, mask);
 		if (ACPI_FAILURE(status))
 			goto out;
+		if (ctrl == *mask)
+			break;
+		ctrl = *mask;
 	}
-	if ((root->osc_control_qry & control_req) != control_req) {
-		printk(KERN_DEBUG
-		       "Firmware did not grant requested _OSC control\n");
+
+	if ((ctrl & req) != req) {
 		status = AE_SUPPORT;
 		goto out;
 	}
 
 	capbuf[OSC_QUERY_TYPE] = 0;
 	capbuf[OSC_SUPPORT_TYPE] = root->osc_support_set;
-	capbuf[OSC_CONTROL_TYPE] = root->osc_control_set | control_req;
-	status = acpi_pci_run_osc(handle, capbuf, &result);
+	capbuf[OSC_CONTROL_TYPE] = ctrl;
+	status = acpi_pci_run_osc(handle, capbuf, mask);
 	if (ACPI_SUCCESS(status))
-		root->osc_control_set = result;
+		root->osc_control_set = *mask;
 out:
 	mutex_unlock(&osc_lock);
 	return status;
@@ -544,14 +567,6 @@ static int __devinit acpi_pci_root_add(struct acpi_device *device)
 	if (flags != base_flags)
 		acpi_pci_osc_support(root, flags);
 
-	status = acpi_pci_osc_control_set(root->device->handle,
-				OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL);
-	if (ACPI_FAILURE(status)) {
-		printk(KERN_INFO "Unable to assume PCIe control: Disabling ASPM\n");
-		pcie_no_aspm();
-	}
-
 	pci_acpi_add_bus_pm_notifier(device, root->bus);
 	if (device->wakeup.flags.run_wake)
 		device_set_run_wake(root->bus->bridge, true);

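Under the reworked contract, a caller passes its full wish list through *mask and the bits it cannot operate without through req; on success *mask comes back holding everything the BIOS granted. A sketch of such a caller (the helper below is hypothetical; the flag names and the acpi_pci_osc_control_set() signature are from this tree):

#include <linux/acpi.h>

static acpi_status request_pcie_controls(acpi_handle handle)
{
	u32 mask = OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL |
		   OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
	acpi_status status;

	/* Wish for both bits, but treat only the capability-structure
	 * control as essential (the req argument). */
	status = acpi_pci_osc_control_set(handle, &mask,
				OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL);
	if (ACPI_SUCCESS(status))
		printk(KERN_INFO "granted _OSC controls: %#x\n", mask);

	return status;	/* on failure the contents of mask are meaningless */
}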
Some files were not shown because too many files have changed in this diff.