Merge branches 'release', 'asus', 'bugzilla-8573', 'bugzilla-9995', 'bugzilla-10272', 'lockdep' and 'thermal' into release

Len Brown 2008-03-18 04:52:57 -04:00
118 changed files with 1053 additions and 746 deletions

View File

@ -225,8 +225,6 @@ kprobes.txt
- documents the kernel probes debugging feature.
kref.txt
- docs on adding reference counters (krefs) to kernel objects.
laptop-mode.txt
- how to conserve battery power using laptop-mode.
laptops/
- directory with laptop related info and laptop driver documentation.
ldm.txt
@ -301,12 +299,8 @@ pcmcia/
- info on the Linux PCMCIA driver.
pi-futex.txt
- documentation on lightweight PI-futexes.
pm.txt
- info on Linux power management support.
pnp.txt
- Linux Plug and Play documentation.
power_supply_class.txt
- Tells userspace about battery, UPS, AC or DC power supply properties
power/
- directory with info on Linux PCI power management.
powerpc/

View File

@ -1,15 +1,7 @@
Linux supports two methods of overriding the BIOS DSDT:
Linux supports a method of overriding the BIOS DSDT:
CONFIG_ACPI_CUSTOM_DSDT builds the image into the kernel.
CONFIG_ACPI_CUSTOM_DSDT_INITRD adds the image to the initrd.
When to use these methods is described in detail on the
When to use this method is described in detail on the
Linux/ACPI home page:
http://www.lesswatts.org/projects/acpi/overridingDSDT.php
Note that if both options are used, the DSDT supplied
by the INITRD method takes precedence.
Documentation/initramfs-add-dsdt.sh is provided for convenience
for use with the CONFIG_ACPI_CUSTOM_DSDT_INITRD method.
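As a rough illustration of the remaining, build-time method, the table can be dumped, fixed and recompiled before pointing the kernel configuration at it; the file names below are assumptions, not part of this commit:
  cat /sys/firmware/acpi/tables/DSDT > dsdt.dat
  iasl -d dsdt.dat      # disassemble to dsdt.dsl and apply the needed fixes
  iasl -tc dsdt.dsl     # emit dsdt.hex containing the AmlCode declaration
  # then set CONFIG_ACPI_CUSTOM_DSDT_FILE="dsdt.hex" and rebuild the kernel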

View File

@ -1,43 +0,0 @@
#!/bin/bash
# Adds a DSDT file to the initrd (if it's an initramfs)
# first argument is the name of archive
# second argument is the name of the file to add
# The file will be copied as /DSDT.aml
# 20060126: fix "Premature end of file" with some old cpio (Roland Robic)
# 20060205: this time it should really work
# check the arguments
if [ $# -ne 2 ]; then
program_name=$(basename $0)
echo "\
$program_name: too few arguments
Usage: $program_name initrd-name.img DSDT-to-add.aml
Adds a DSDT file to an initrd (in initramfs format)
initrd-name.img: filename of the initrd in initramfs format
DSDT-to-add.aml: filename of the DSDT file to add
" 1>&2
exit 1
fi
# we should check it's an initramfs
tempcpio=$(mktemp -d)
# cleanup on exit, hangup, interrupt, quit, termination
trap 'rm -rf $tempcpio' 0 1 2 3 15
# extract the archive
gunzip -c "$1" > "$tempcpio"/initramfs.cpio || exit 1
# copy the DSDT file at the root of the directory so that we can call it "/DSDT.aml"
cp -f "$2" "$tempcpio"/DSDT.aml
# add the file
cd "$tempcpio"
(echo DSDT.aml | cpio --quiet -H newc -o -A -O "$tempcpio"/initramfs.cpio) || exit 1
cd "$OLDPWD"
# re-compress the archive
gzip -c "$tempcpio"/initramfs.cpio > "$1"

View File

@ -1506,13 +1506,13 @@ laptop_mode
-----------
laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/laptop-mode.txt.
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.
block_dump
----------
block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptop-mode.txt.
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.
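A minimal sketch of poking these knobs from userspace (the values are examples only):
  cat /proc/sys/vm/laptop_mode          # 0 when laptop mode is disabled
  echo 1 > /proc/sys/vm/block_dump      # turn block I/O debugging on
  echo 0 > /proc/sys/vm/block_dump      # ... and back off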
swap_token_timeout
------------------

View File

@ -138,7 +138,7 @@ and is between 256 and 4096 characters. It is defined in the file
strict -- Be less tolerant of platforms that are not
strictly ACPI specification compliant.
See also Documentation/pm.txt, pci=noacpi
See also Documentation/power/pm.txt, pci=noacpi
acpi_apic_instance= [ACPI, IOAPIC]
Format: <int>
@ -177,9 +177,6 @@ and is between 256 and 4096 characters. It is defined in the file
acpi_no_auto_ssdt [HW,ACPI] Disable automatic loading of SSDT
acpi_no_initrd_override [KNL,ACPI]
Disable loading custom ACPI tables from the initramfs
acpi_os_name= [HW,ACPI] Tell ACPI BIOS the name of the OS
Format: To spoof as Windows 98: ="Microsoft Windows"

View File

@ -2,6 +2,8 @@
- This file
acer-wmi.txt
- information on the Acer Laptop WMI Extras driver.
laptop-mode.txt
- how to conserve battery power using laptop-mode.
sony-laptop.txt
- Sony Notebook Control Driver (SNC) Readme.
sonypi.txt

View File

@ -48,7 +48,7 @@ DSDT.
To send me the DSDT, as root/sudo:
cat /sys/firmware/acpi/DSDT > dsdt
cat /sys/firmware/acpi/tables/DSDT > dsdt
And send me the resulting 'dsdt' file.
@ -169,7 +169,7 @@ can be added to acer-wmi.
The LED is exposed through the LED subsystem, and can be found in:
/sys/devices/platform/acer-wmi/leds/acer-mail:green/
/sys/devices/platform/acer-wmi/leds/acer-wmi::mail/
The mail LED is autodetected, so if you don't have one, the LED device won't
be registered.
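For illustration, assuming the mail LED registers as a normal LED class device at the path above, it can be driven through the usual brightness attribute:
  echo 1 > /sys/devices/platform/acer-wmi/leds/acer-wmi::mail/brightness
  echo 0 > /sys/devices/platform/acer-wmi/leds/acer-wmi::mail/brightness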

View File

@ -14,6 +14,12 @@ notifiers.txt
- Registering suspend notifiers in device drivers
pci.txt
- How the PCI Subsystem Does Power Management
pm.txt
- info on Linux power management support.
pm_qos_interface.txt
- info on Linux PM Quality of Service interface
power_supply_class.txt
- Tells userspace about battery, UPS, AC or DC power supply properties
s2ram.txt
- How to get suspend to ram working (and debug it when it isn't)
states.txt

View File

@ -108,7 +108,7 @@ void pm_unregister_all(pm_callback cback);
* EINVAL if the request is not supported
* EBUSY if the device is now busy and cannot handle the request
* ENOMEM if the device was unable to handle the request due to memory
*
*
* Details: The device request callback will be called before the
* device/system enters a suspend state (ACPI D1-D3) or
* or after the device/system resumes from suspend (ACPI D0).

View File

@ -143,10 +143,10 @@ type Strings which represent the thermal zone type.
This is given by thermal zone driver as part of registration.
Eg: "ACPI thermal zone" indicates it's a ACPI thermal device
RO
Optional
Required
temp Current temperature as reported by thermal zone (sensor)
Unit: degree Celsius
Unit: millidegree Celsius
RO
Required
@ -163,7 +163,7 @@ mode One of the predefined values in [kernel, user]
charge of the thermal management.
trip_point_[0-*]_temp The temperature above which trip point will be fired
Unit: degree Celsius
Unit: millidegree Celsius
RO
Optional
@ -193,7 +193,7 @@ type String which represents the type of device
eg. For memory controller device on intel_menlow platform:
this should be "Memory controller"
RO
Optional
Required
max_state The maximum permissible cooling state of this cooling device.
RO
@ -219,16 +219,16 @@ the sys I/F structure will be built like this:
|thermal_zone1:
|-----type: ACPI thermal zone
|-----temp: 37
|-----temp: 37000
|-----mode: kernel
|-----trip_point_0_temp: 100
|-----trip_point_0_temp: 100000
|-----trip_point_0_type: critical
|-----trip_point_1_temp: 80
|-----trip_point_1_temp: 80000
|-----trip_point_1_type: passive
|-----trip_point_2_temp: 70
|-----trip_point_2_type: active[0]
|-----trip_point_3_temp: 60
|-----trip_point_3_type: active[1]
|-----trip_point_2_temp: 70000
|-----trip_point_2_type: active0
|-----trip_point_3_temp: 60000
|-----trip_point_3_type: active1
|-----cdev0: --->/sys/class/thermal/cooling_device0
|-----cdev0_trip_point: 1 /* cdev0 can be used for passive */
|-----cdev1: --->/sys/class/thermal/cooling_device3
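A short sketch of reading the interface above from a shell, assuming a zone laid out like the example tree:
  cat /sys/class/thermal/thermal_zone1/temp                # 37000 (millidegree Celsius)
  cat /sys/class/thermal/thermal_zone1/trip_point_0_type   # critical
  awk '{ printf "%.1f C\n", $1 / 1000 }' /sys/class/thermal/thermal_zone1/temp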

View File

@ -266,6 +266,15 @@ L: linux-acpi@vger.kernel.org
W: http://www.lesswatts.org/projects/acpi/
S: Maintained
AD1889 ALSA SOUND DRIVER
P: Kyle McMartin
M: kyle@parisc-linux.org
P: Thibaut Varene
M: T-Bone@parisc-linux.org
W: http://wiki.parisc-linux.org/AD1889
L: linux-parisc@vger.kernel.org
S: Maintained
ADM1025 HARDWARE MONITOR DRIVER
P: Jean Delvare
M: khali@linux-fr.org

View File

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 25
EXTRAVERSION = -rc5
EXTRAVERSION = -rc6
NAME = Funky Weasel is Jiggy wit it
# *DOCUMENTATION*
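For reference, the version string after this bump is what the kbuild helper target reports (a sketch; run from the source tree):
  make -s kernelversion    # -> 2.6.25-rc6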

View File

@ -330,6 +330,9 @@ config PCI_DOMAINS
config PCI_SYSCALL
def_bool PCI
config IOMMU_HELPER
def_bool PCI
config ALPHA_CORE_AGP
bool
depends on ALPHA_GENERIC || ALPHA_TITAN || ALPHA_MARVEL

View File

@ -10,6 +10,7 @@
#include <linux/scatterlist.h>
#include <linux/log2.h>
#include <linux/dma-mapping.h>
#include <linux/iommu-helper.h>
#include <asm/io.h>
#include <asm/hwrpb.h>
@ -125,14 +126,6 @@ iommu_arena_new(struct pci_controller *hose, dma_addr_t base,
return iommu_arena_new_node(0, hose, base, window_size, align);
}
static inline int is_span_boundary(unsigned int index, unsigned int nr,
unsigned long shift,
unsigned long boundary_size)
{
shift = (shift + index) & (boundary_size - 1);
return shift + nr > boundary_size;
}
/* Must be called with the arena lock held */
static long
iommu_arena_find_pages(struct device *dev, struct pci_iommu_arena *arena,
@ -147,7 +140,6 @@ iommu_arena_find_pages(struct device *dev, struct pci_iommu_arena *arena,
base = arena->dma_base >> PAGE_SHIFT;
if (dev) {
boundary_size = dma_get_seg_boundary(dev) + 1;
BUG_ON(!is_power_of_2(boundary_size));
boundary_size >>= PAGE_SHIFT;
} else {
boundary_size = 1UL << (32 - PAGE_SHIFT);
@ -161,7 +153,7 @@ iommu_arena_find_pages(struct device *dev, struct pci_iommu_arena *arena,
again:
while (i < n && p+i < nent) {
if (!i && is_span_boundary(p, n, base, boundary_size)) {
if (!i && iommu_is_span_boundary(p, n, base, boundary_size)) {
p = ALIGN(p + 1, mask + 1);
goto again;
}

View File

@ -16,6 +16,9 @@
# Modified for PA-RISC Linux by Paul Lahaie, Alex deVries,
# Mike Shaver, Helge Deller and Martin K. Petersen
#
KBUILD_DEFCONFIG := default_defconfig
NM = sh $(srctree)/arch/parisc/nm
CHECKFLAGS += -D__hppa__=1
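With KBUILD_DEFCONFIG set above, the generic defconfig target picks up the parisc default configuration; assuming a parisc cross-compiler is available (the hppa-linux- prefix is an example):
  make ARCH=parisc defconfig                           # uses arch/parisc/configs/default_defconfig
  make ARCH=parisc CROSS_COMPILE=hppa-linux- vmlinux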

View File

@ -1080,6 +1080,9 @@ void pdc_io_reset_devices(void)
spin_unlock_irqrestore(&pdc_lock, flags);
}
/* locked by pdc_console_lock */
static int __attribute__((aligned(8))) iodc_retbuf[32];
static char __attribute__((aligned(64))) iodc_dbuf[4096];
/**
* pdc_iodc_print - Console print using IODC.
@ -1091,24 +1094,20 @@ void pdc_io_reset_devices(void)
* Since the HP console requires CR+LF to perform a 'newline', we translate
* "\n" to "\r\n".
*/
int pdc_iodc_print(unsigned char *str, unsigned count)
int pdc_iodc_print(const unsigned char *str, unsigned count)
{
/* XXX Should we spinlock posx usage */
static int posx; /* for simple TAB-Simulation... */
int __attribute__((aligned(8))) iodc_retbuf[32];
char __attribute__((aligned(64))) iodc_dbuf[4096];
unsigned int i;
unsigned long flags;
memset(iodc_dbuf, 0, 4096);
for (i = 0; i < count && i < 2048;) {
for (i = 0; i < count && i < 79;) {
switch(str[i]) {
case '\n':
iodc_dbuf[i+0] = '\r';
iodc_dbuf[i+1] = '\n';
i += 2;
posx = 0;
break;
goto print;
case '\t':
while (posx & 7) {
iodc_dbuf[i] = ' ';
@ -1124,6 +1123,16 @@ int pdc_iodc_print(unsigned char *str, unsigned count)
}
}
/* if we're at the end of line, and not already inserting a newline,
* insert one anyway. iodc console doesn't claim to support >79 char
* lines. don't account for this in the return value.
*/
if (i == 79 && iodc_dbuf[i-1] != '\n') {
iodc_dbuf[i+0] = '\r';
iodc_dbuf[i+1] = '\n';
}
print:
spin_lock_irqsave(&pdc_lock, flags);
real32_call(PAGE0->mem_cons.iodc_io,
(unsigned long)PAGE0->mem_cons.hpa, ENTRY_IO_COUT,
@ -1142,11 +1151,9 @@ int pdc_iodc_print(unsigned char *str, unsigned count)
*/
int pdc_iodc_getc(void)
{
unsigned long flags;
static int __attribute__((aligned(8))) iodc_retbuf[32];
static char __attribute__((aligned(64))) iodc_dbuf[4096];
int ch;
int status;
unsigned long flags;
/* Bail if no console input device. */
if (!PAGE0->mem_kbd.iodc_io)

View File

@ -274,7 +274,18 @@ static struct hp_hardware hp_hardware_list[] __devinitdata = {
{HPHW_NPROC,0x887,0x4,0x91,"Storm Peak Slow"},
{HPHW_NPROC,0x888,0x4,0x91,"Storm Peak Fast DC-"},
{HPHW_NPROC,0x889,0x4,0x91,"Storm Peak Fast"},
{HPHW_NPROC,0x88A,0x4,0x91,"Crestone Peak"},
{HPHW_NPROC,0x88A,0x4,0x91,"Crestone Peak Slow"},
{HPHW_NPROC,0x88C,0x4,0x91,"Orca Mako+"},
{HPHW_NPROC,0x88D,0x4,0x91,"Rainier/Medel Mako+ Slow"},
{HPHW_NPROC,0x88E,0x4,0x91,"Rainier/Medel Mako+ Fast"},
{HPHW_NPROC,0x894,0x4,0x91,"Mt. Hamilton Fast Mako+"},
{HPHW_NPROC,0x895,0x4,0x91,"Storm Peak Slow Mako+"},
{HPHW_NPROC,0x896,0x4,0x91,"Storm Peak Fast Mako+"},
{HPHW_NPROC,0x897,0x4,0x91,"Storm Peak DC- Slow Mako+"},
{HPHW_NPROC,0x898,0x4,0x91,"Storm Peak DC- Fast Mako+"},
{HPHW_NPROC,0x899,0x4,0x91,"Mt. Hamilton Slow Mako+"},
{HPHW_NPROC,0x89B,0x4,0x91,"Crestone Peak Mako+ Slow"},
{HPHW_NPROC,0x89C,0x4,0x91,"Crestone Peak Mako+ Fast"},
{HPHW_A_DIRECT, 0x004, 0x0000D, 0x00, "Arrakis MUX"},
{HPHW_A_DIRECT, 0x005, 0x0000D, 0x00, "Dyun Kiuh MUX"},
{HPHW_A_DIRECT, 0x006, 0x0000D, 0x00, "Baat Kiuh AP/MUX (40299B)"},

View File

@ -20,10 +20,11 @@
#include <asm/pgtable.h>
#include <linux/linkage.h>
#include <linux/init.h>
.level LEVEL
.data
__INITDATA
ENTRY(boot_args)
.word 0 /* arg0 */
.word 0 /* arg1 */
@ -31,7 +32,7 @@ ENTRY(boot_args)
.word 0 /* arg3 */
END(boot_args)
.text
.section .text.head
.align 4
.import init_thread_union,data
.import fault_vector_20,code /* IVA parisc 2.0 32 bit */
@ -343,7 +344,7 @@ smp_slave_stext:
ENDPROC(stext)
#ifndef CONFIG_64BIT
.data
.section .data.read_mostly
.align 4
.export $global$,data

View File

@ -52,28 +52,30 @@
#include <linux/tty.h>
#include <asm/pdc.h> /* for iodc_call() proto and friends */
static spinlock_t pdc_console_lock = SPIN_LOCK_UNLOCKED;
static void pdc_console_write(struct console *co, const char *s, unsigned count)
{
pdc_iodc_print(s, count);
}
int i = 0;
unsigned long flags;
void pdc_printf(const char *fmt, ...)
{
va_list args;
char buf[1024];
int i, len;
va_start(args, fmt);
len = vscnprintf(buf, sizeof(buf), fmt, args);
va_end(args);
pdc_iodc_print(buf, len);
spin_lock_irqsave(&pdc_console_lock, flags);
do {
i += pdc_iodc_print(s + i, count - i);
} while (i < count);
spin_unlock_irqrestore(&pdc_console_lock, flags);
}
int pdc_console_poll_key(struct console *co)
{
return pdc_iodc_getc();
int c;
unsigned long flags;
spin_lock_irqsave(&pdc_console_lock, flags);
c = pdc_iodc_getc();
spin_unlock_irqrestore(&pdc_console_lock, flags);
return c;
}
static int pdc_console_setup(struct console *co, char *options)

View File

@ -401,9 +401,12 @@
ENTRY_COMP(kexec_load) /* 300 */
ENTRY_COMP(utimensat)
ENTRY_COMP(signalfd)
ENTRY_COMP(timerfd)
ENTRY_SAME(ni_syscall) /* was timerfd */
ENTRY_SAME(eventfd)
ENTRY_COMP(fallocate) /* 305 */
ENTRY_SAME(timerfd_create)
ENTRY_COMP(timerfd_settime)
ENTRY_COMP(timerfd_gettime)
/* Nothing yet */

View File

@ -51,6 +51,9 @@
DEFINE_SPINLOCK(pa_dbit_lock);
#endif
void parisc_show_stack(struct task_struct *t, unsigned long *sp,
struct pt_regs *regs);
static int printbinary(char *buf, unsigned long x, int nbits)
{
unsigned long mask = 1UL << (nbits - 1);
@ -148,6 +151,8 @@ void show_regs(struct pt_regs *regs)
print_symbol(" IAOQ[1]: %s\n", regs->iaoq[1]);
printk(level);
print_symbol(" RP(r2): %s\n", regs->gr[2]);
parisc_show_stack(current, NULL, regs);
}
@ -181,11 +186,19 @@ static void do_show_stack(struct unwind_frame_info *info)
printk("\n");
}
void show_stack(struct task_struct *task, unsigned long *s)
void parisc_show_stack(struct task_struct *task, unsigned long *sp,
struct pt_regs *regs)
{
struct unwind_frame_info info;
struct task_struct *t;
if (!task) {
t = task ? task : current;
if (regs) {
unwind_frame_init(&info, t, regs);
goto show_stack;
}
if (t == current) {
unsigned long sp;
HERE:
@ -201,12 +214,18 @@ HERE:
unwind_frame_init(&info, current, &r);
}
} else {
unwind_frame_init_from_blocked_task(&info, task);
unwind_frame_init_from_blocked_task(&info, t);
}
show_stack:
do_show_stack(&info);
}
void show_stack(struct task_struct *t, unsigned long *sp)
{
return parisc_show_stack(t, sp, NULL);
}
int is_valid_bugaddr(unsigned long iaoq)
{
return 1;

View File

@ -1259,7 +1259,7 @@ menuconfig APM
machines with more than one CPU.
In order to use APM, you will need supporting software. For location
and more information, read <file:Documentation/pm.txt> and the
and more information, read <file:Documentation/power/pm.txt> and the
Battery Powered Linux mini-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.

View File

@ -66,11 +66,11 @@ async_memcpy(struct page *dest, struct page *src, unsigned int dest_offset,
}
if (tx) {
pr_debug("%s: (async) len: %zu\n", __FUNCTION__, len);
pr_debug("%s: (async) len: %zu\n", __func__, len);
async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
} else {
void *dest_buf, *src_buf;
pr_debug("%s: (sync) len: %zu\n", __FUNCTION__, len);
pr_debug("%s: (sync) len: %zu\n", __func__, len);
/* wait for any prerequisite operations */
if (depend_tx) {
@ -80,7 +80,7 @@ async_memcpy(struct page *dest, struct page *src, unsigned int dest_offset,
BUG_ON(depend_tx->ack);
if (dma_wait_for_async_tx(depend_tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for depend_tx\n",
__FUNCTION__);
__func__);
}
dest_buf = kmap_atomic(dest, KM_USER0) + dest_offset;

View File

@ -63,11 +63,11 @@ async_memset(struct page *dest, int val, unsigned int offset,
}
if (tx) {
pr_debug("%s: (async) len: %zu\n", __FUNCTION__, len);
pr_debug("%s: (async) len: %zu\n", __func__, len);
async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
} else { /* run the memset synchronously */
void *dest_buf;
pr_debug("%s: (sync) len: %zu\n", __FUNCTION__, len);
pr_debug("%s: (sync) len: %zu\n", __func__, len);
dest_buf = (void *) (((char *) page_address(dest)) + offset);
@ -79,7 +79,7 @@ async_memset(struct page *dest, int val, unsigned int offset,
BUG_ON(depend_tx->ack);
if (dma_wait_for_async_tx(depend_tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for depend_tx\n",
__FUNCTION__);
__func__);
}
memset(dest_buf, val, len);

View File

@ -472,11 +472,11 @@ async_trigger_callback(enum async_tx_flags flags,
tx = NULL;
if (tx) {
pr_debug("%s: (async)\n", __FUNCTION__);
pr_debug("%s: (async)\n", __func__);
async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
} else {
pr_debug("%s: (sync)\n", __FUNCTION__);
pr_debug("%s: (sync)\n", __func__);
/* wait for any prerequisite operations */
if (depend_tx) {
@ -486,7 +486,7 @@ async_trigger_callback(enum async_tx_flags flags,
BUG_ON(depend_tx->ack);
if (dma_wait_for_async_tx(depend_tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for depend_tx\n",
__FUNCTION__);
__func__);
}
async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);

View File

@ -47,7 +47,7 @@ do_async_xor(struct dma_device *device,
int i;
unsigned long dma_prep_flags = cb_fn ? DMA_PREP_INTERRUPT : 0;
pr_debug("%s: len: %zu\n", __FUNCTION__, len);
pr_debug("%s: len: %zu\n", __func__, len);
dma_dest = dma_map_page(device->dev, dest, offset, len,
DMA_FROM_DEVICE);
@ -86,7 +86,7 @@ do_sync_xor(struct page *dest, struct page **src_list, unsigned int offset,
void *_dest;
int i;
pr_debug("%s: len: %zu\n", __FUNCTION__, len);
pr_debug("%s: len: %zu\n", __func__, len);
/* reuse the 'src_list' array to convert to buffer pointers */
for (i = 0; i < src_cnt; i++)
@ -196,7 +196,7 @@ async_xor(struct page *dest, struct page **src_list, unsigned int offset,
DMA_ERROR)
panic("%s: DMA_ERROR waiting for "
"depend_tx\n",
__FUNCTION__);
__func__);
}
do_sync_xor(dest, &src_list[src_off], offset,
@ -276,7 +276,7 @@ async_xor_zero_sum(struct page *dest, struct page **src_list,
unsigned long dma_prep_flags = cb_fn ? DMA_PREP_INTERRUPT : 0;
int i;
pr_debug("%s: (async) len: %zu\n", __FUNCTION__, len);
pr_debug("%s: (async) len: %zu\n", __func__, len);
for (i = 0; i < src_cnt; i++)
dma_src[i] = dma_map_page(device->dev, src_list[i],
@ -299,7 +299,7 @@ async_xor_zero_sum(struct page *dest, struct page **src_list,
} else {
unsigned long xor_flags = flags;
pr_debug("%s: (sync) len: %zu\n", __FUNCTION__, len);
pr_debug("%s: (sync) len: %zu\n", __func__, len);
xor_flags |= ASYNC_TX_XOR_DROP_DST;
xor_flags &= ~ASYNC_TX_ACK;
@ -310,7 +310,7 @@ async_xor_zero_sum(struct page *dest, struct page **src_list,
if (tx) {
if (dma_wait_for_async_tx(tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for tx\n",
__FUNCTION__);
__func__);
async_tx_ack(tx);
}

View File

@ -283,34 +283,22 @@ config ACPI_TOSHIBA
If you have a legacy free Toshiba laptop (such as the Libretto L1
series), say Y.
config ACPI_CUSTOM_DSDT
bool "Include Custom DSDT"
config ACPI_CUSTOM_DSDT_FILE
string "Custom DSDT Table file to include"
default ""
depends on !STANDALONE
default n
help
This option supports a custom DSDT by linking it into the kernel.
See Documentation/acpi/dsdt-override.txt
If unsure, say N.
config ACPI_CUSTOM_DSDT_FILE
string "Custom DSDT Table file to include"
depends on ACPI_CUSTOM_DSDT
default ""
help
Enter the full path name to the file which includes the AmlCode
declaration.
config ACPI_CUSTOM_DSDT_INITRD
bool "Read Custom DSDT from initramfs"
depends on BLK_DEV_INITRD
default n
help
This option supports a custom DSDT by optionally loading it from initrd.
See Documentation/acpi/dsdt-override.txt
If unsure, don't enter a file name.
If you are not using this feature now, but may use it later,
it is safe to say Y here.
config ACPI_CUSTOM_DSDT
bool
default ACPI_CUSTOM_DSDT_FILE != ""
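With this rework, the only user-visible prompt is the file name string; a resulting .config fragment might look like the following (the path is illustrative):
  CONFIG_ACPI_CUSTOM_DSDT_FILE="dsdt.hex"
  # CONFIG_ACPI_CUSTOM_DSDT is derived automatically: y whenever the string is non-empty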
config ACPI_BLACKLIST_YEAR
int "Disable ACPI for systems before Jan 1st this year" if X86_32

View File

@ -293,13 +293,12 @@ static int extract_package(struct acpi_battery *battery,
strncpy(ptr, (u8 *)&element->integer.value,
sizeof(acpi_integer));
ptr[sizeof(acpi_integer)] = 0;
} else return -EFAULT;
} else
*ptr = 0; /* don't have value */
} else {
if (element->type == ACPI_TYPE_INTEGER) {
int *x = (int *)((u8 *)battery +
offsets[i].offset);
*x = element->integer.value;
} else return -EFAULT;
int *x = (int *)((u8 *)battery + offsets[i].offset);
*x = (element->type == ACPI_TYPE_INTEGER) ?
element->integer.value : -1;
}
}
return 0;

View File

@ -776,7 +776,7 @@ static int __init acpi_init(void)
acpi_kobj = kobject_create_and_add("acpi", firmware_kobj);
if (!acpi_kobj) {
printk(KERN_WARNING "%s: kset create error\n", __FUNCTION__);
printk(KERN_WARNING "%s: kset create error\n", __func__);
acpi_kobj = NULL;
}

View File

@ -449,6 +449,7 @@ static int acpi_button_add(struct acpi_device *device)
input->phys = button->phys;
input->id.bustype = BUS_HOST;
input->id.product = button->type;
input->dev.parent = &device->dev;
switch (button->type) {
case ACPI_BUTTON_TYPE_POWER:

View File

@ -129,6 +129,7 @@ static struct acpi_ec {
struct mutex lock;
wait_queue_head_t wait;
struct list_head list;
atomic_t irq_count;
u8 handlers_installed;
} *boot_ec, *first_ec;
@ -181,6 +182,8 @@ static int acpi_ec_wait(struct acpi_ec *ec, enum ec_event event, int force_poll)
{
int ret = 0;
atomic_set(&ec->irq_count, 0);
if (unlikely(event == ACPI_EC_EVENT_OBF_1 &&
test_bit(EC_FLAGS_NO_OBF1_GPE, &ec->flags)))
force_poll = 1;
@ -227,6 +230,7 @@ static int acpi_ec_wait(struct acpi_ec *ec, enum ec_event event, int force_poll)
while (time_before(jiffies, delay)) {
if (acpi_ec_check_status(ec, event))
goto end;
msleep(5);
}
}
pr_err(PREFIX "acpi_ec_wait timeout,"
@ -529,6 +533,13 @@ static u32 acpi_ec_gpe_handler(void *data)
struct acpi_ec *ec = data;
pr_debug(PREFIX "~~~> interrupt\n");
atomic_inc(&ec->irq_count);
if (atomic_read(&ec->irq_count) > 5) {
pr_err(PREFIX "GPE storm detected, disabling EC GPE\n");
acpi_disable_gpe(NULL, ec->gpe, ACPI_ISR);
clear_bit(EC_FLAGS_GPE_MODE, &ec->flags);
return ACPI_INTERRUPT_HANDLED;
}
clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags);
if (test_bit(EC_FLAGS_GPE_MODE, &ec->flags))
wake_up(&ec->wait);
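If the storm detection added above ever trips, the effect is visible in the kernel log; a quick way to check (sketch):
  dmesg | grep "GPE storm detected"
  # the driver then stops relying on the EC GPE (see the clear_bit above)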
@ -943,11 +954,7 @@ int __init acpi_ec_ecdt_probe(void)
boot_ec->command_addr = ecdt_ptr->control.address;
boot_ec->data_addr = ecdt_ptr->data.address;
boot_ec->gpe = ecdt_ptr->gpe;
if (ACPI_FAILURE(acpi_get_handle(NULL, ecdt_ptr->id,
&boot_ec->handle))) {
pr_info("Failed to locate handle for boot EC\n");
boot_ec->handle = ACPI_ROOT_OBJECT;
}
boot_ec->handle = ACPI_ROOT_OBJECT;
} else {
/* This workaround is needed only on some broken machines,
* which require early EC, but fail to provide ECDT */

View File

@ -91,10 +91,6 @@ static DEFINE_SPINLOCK(acpi_res_lock);
#define OSI_STRING_LENGTH_MAX 64 /* arbitrary */
static char osi_additional_string[OSI_STRING_LENGTH_MAX];
#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
static int acpi_no_initrd_override;
#endif
/*
* "Ode to _OSI(Linux)"
*
@ -324,67 +320,6 @@ acpi_os_predefined_override(const struct acpi_predefined_names *init_val,
return AE_OK;
}
#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
static struct acpi_table_header *acpi_find_dsdt_initrd(void)
{
struct file *firmware_file;
mm_segment_t oldfs;
unsigned long len, len2;
struct acpi_table_header *dsdt_buffer, *ret = NULL;
struct kstat stat;
char *ramfs_dsdt_name = "/DSDT.aml";
printk(KERN_INFO PREFIX "Checking initramfs for custom DSDT\n");
/*
* Never do this at home, only the user-space is allowed to open a file.
* The clean way would be to use the firmware loader.
* But this code must be run before there is any userspace available.
* A static/init firmware infrastructure doesn't exist yet...
*/
if (vfs_stat(ramfs_dsdt_name, &stat) < 0)
return ret;
len = stat.size;
/* check especially against empty files */
if (len <= 4) {
printk(KERN_ERR PREFIX "Failed: DSDT only %lu bytes.\n", len);
return ret;
}
firmware_file = filp_open(ramfs_dsdt_name, O_RDONLY, 0);
if (IS_ERR(firmware_file)) {
printk(KERN_ERR PREFIX "Failed to open %s.\n", ramfs_dsdt_name);
return ret;
}
dsdt_buffer = kmalloc(len, GFP_ATOMIC);
if (!dsdt_buffer) {
printk(KERN_ERR PREFIX "Failed to allocate %lu bytes.\n", len);
goto err;
}
oldfs = get_fs();
set_fs(KERNEL_DS);
len2 = vfs_read(firmware_file, (char __user *)dsdt_buffer, len,
&firmware_file->f_pos);
set_fs(oldfs);
if (len2 < len) {
printk(KERN_ERR PREFIX "Failed to read %lu bytes from %s.\n",
len, ramfs_dsdt_name);
ACPI_FREE(dsdt_buffer);
goto err;
}
printk(KERN_INFO PREFIX "Found %lu byte DSDT in %s.\n",
len, ramfs_dsdt_name);
ret = dsdt_buffer;
err:
filp_close(firmware_file, NULL);
return ret;
}
#endif
acpi_status
acpi_os_table_override(struct acpi_table_header * existing_table,
struct acpi_table_header ** new_table)
@ -397,16 +332,6 @@ acpi_os_table_override(struct acpi_table_header * existing_table,
#ifdef CONFIG_ACPI_CUSTOM_DSDT
if (strncmp(existing_table->signature, "DSDT", 4) == 0)
*new_table = (struct acpi_table_header *)AmlCode;
#endif
#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
if ((strncmp(existing_table->signature, "DSDT", 4) == 0) &&
!acpi_no_initrd_override) {
struct acpi_table_header *initrd_table;
initrd_table = acpi_find_dsdt_initrd();
if (initrd_table)
*new_table = initrd_table;
}
#endif
if (*new_table != NULL) {
printk(KERN_WARNING PREFIX "Override [%4.4s-%8.8s], "
@ -418,15 +343,6 @@ acpi_os_table_override(struct acpi_table_header * existing_table,
return AE_OK;
}
#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
static int __init acpi_no_initrd_override_setup(char *s)
{
acpi_no_initrd_override = 1;
return 1;
}
__setup("acpi_no_initrd_override", acpi_no_initrd_override_setup);
#endif
static irqreturn_t acpi_irq(int irq, void *dev_id)
{
u32 handled;
@ -1237,7 +1153,7 @@ int acpi_check_resource_conflict(struct resource *res)
if (clash) {
if (acpi_enforce_resources != ENFORCE_RESOURCES_NO) {
printk(KERN_INFO "%sACPI: %s resource %s [0x%llx-0x%llx]"
printk("%sACPI: %s resource %s [0x%llx-0x%llx]"
" conflicts with ACPI region %s"
" [0x%llx-0x%llx]\n",
acpi_enforce_resources == ENFORCE_RESOURCES_LAX

View File

@ -25,6 +25,7 @@
*/
#include <linux/dmi.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@ -76,6 +77,101 @@ static struct acpi_prt_entry *acpi_pci_irq_find_prt_entry(int segment,
return NULL;
}
/* http://bugzilla.kernel.org/show_bug.cgi?id=4773 */
static struct dmi_system_id medion_md9580[] = {
{
.ident = "Medion MD9580-F laptop",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "MEDIONNB"),
DMI_MATCH(DMI_PRODUCT_NAME, "A555"),
},
},
{ }
};
/* http://bugzilla.kernel.org/show_bug.cgi?id=5044 */
static struct dmi_system_id dell_optiplex[] = {
{
.ident = "Dell Optiplex GX1",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex GX1 600S+"),
},
},
{ }
};
/* http://bugzilla.kernel.org/show_bug.cgi?id=10138 */
static struct dmi_system_id hp_t5710[] = {
{
.ident = "HP t5710",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
DMI_MATCH(DMI_PRODUCT_NAME, "hp t5000 series"),
DMI_MATCH(DMI_BOARD_NAME, "098Ch"),
},
},
{ }
};
struct prt_quirk {
struct dmi_system_id *system;
unsigned int segment;
unsigned int bus;
unsigned int device;
unsigned char pin;
char *source; /* according to BIOS */
char *actual_source;
};
/*
* These systems have incorrect _PRT entries. The BIOS claims the PCI
* interrupt at the listed segment/bus/device/pin is connected to the first
* link device, but it is actually connected to the second.
*/
static struct prt_quirk prt_quirks[] = {
{ medion_md9580, 0, 0, 9, 'A',
"\\_SB_.PCI0.ISA.LNKA",
"\\_SB_.PCI0.ISA.LNKB"},
{ dell_optiplex, 0, 0, 0xd, 'A',
"\\_SB_.LNKB",
"\\_SB_.LNKA"},
{ hp_t5710, 0, 0, 1, 'A',
"\\_SB_.PCI0.LNK1",
"\\_SB_.PCI0.LNK3"},
};
static void
do_prt_fixups(struct acpi_prt_entry *entry, struct acpi_pci_routing_table *prt)
{
int i;
struct prt_quirk *quirk;
for (i = 0; i < ARRAY_SIZE(prt_quirks); i++) {
quirk = &prt_quirks[i];
/* All current quirks involve link devices, not GSIs */
if (!prt->source)
continue;
if (dmi_check_system(quirk->system) &&
entry->id.segment == quirk->segment &&
entry->id.bus == quirk->bus &&
entry->id.device == quirk->device &&
entry->pin + 'A' == quirk->pin &&
!strcmp(prt->source, quirk->source) &&
strlen(prt->source) >= strlen(quirk->actual_source)) {
printk(KERN_WARNING PREFIX "firmware reports "
"%04x:%02x:%02x[%c] connected to %s; "
"changing to %s\n",
entry->id.segment, entry->id.bus,
entry->id.device, 'A' + entry->pin,
prt->source, quirk->actual_source);
strcpy(prt->source, quirk->actual_source);
}
}
}
static int
acpi_pci_irq_add_entry(acpi_handle handle,
int segment, int bus, struct acpi_pci_routing_table *prt)
@ -96,6 +192,8 @@ acpi_pci_irq_add_entry(acpi_handle handle,
entry->id.function = prt->address & 0xFFFF;
entry->pin = prt->pin;
do_prt_fixups(entry, prt);
/*
* Type 1: Dynamic
* ---------------

View File

@ -184,7 +184,7 @@ static void acpi_pci_bridge_scan(struct acpi_device *device)
}
}
static int acpi_pci_root_add(struct acpi_device *device)
static int __devinit acpi_pci_root_add(struct acpi_device *device)
{
int result = 0;
struct acpi_pci_root *root = NULL;

View File

@ -840,17 +840,19 @@ static int is_processor_present(acpi_handle handle)
status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
/*
* if a processor object does not have an _STA object,
* OSPM assumes that the processor is present.
*/
if (status == AE_NOT_FOUND)
return 1;
if (ACPI_SUCCESS(status) && (sta & ACPI_STA_DEVICE_PRESENT))
return 1;
ACPI_EXCEPTION((AE_INFO, status, "Processor Device is not present"));
/*
* _STA is mandatory for a processor that supports hot plug
*/
if (status == AE_NOT_FOUND)
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Processor does not support hot plug\n"));
else
ACPI_EXCEPTION((AE_INFO, status,
"Processor Device is not present"));
return 0;
}
@ -886,8 +888,8 @@ int acpi_processor_device_add(acpi_handle handle, struct acpi_device **device)
return 0;
}
static void
acpi_processor_hotplug_notify(acpi_handle handle, u32 event, void *data)
static void __ref acpi_processor_hotplug_notify(acpi_handle handle,
u32 event, void *data)
{
struct acpi_processor *pr;
struct acpi_device *device = NULL;
@ -897,9 +899,10 @@ acpi_processor_hotplug_notify(acpi_handle handle, u32 event, void *data)
switch (event) {
case ACPI_NOTIFY_BUS_CHECK:
case ACPI_NOTIFY_DEVICE_CHECK:
printk("Processor driver received %s event\n",
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Processor driver received %s event\n",
(event == ACPI_NOTIFY_BUS_CHECK) ?
"ACPI_NOTIFY_BUS_CHECK" : "ACPI_NOTIFY_DEVICE_CHECK");
"ACPI_NOTIFY_BUS_CHECK" : "ACPI_NOTIFY_DEVICE_CHECK"));
if (!is_processor_present(handle))
break;

View File

@ -216,8 +216,10 @@ static void acpi_safe_halt(void)
* test NEED_RESCHED:
*/
smp_mb();
if (!need_resched())
if (!need_resched()) {
safe_halt();
local_irq_disable();
}
current_thread_info()->status |= TS_POLLING;
}
@ -421,7 +423,9 @@ static void acpi_processor_idle(void)
else
acpi_safe_halt();
local_irq_enable();
if (irqs_disabled())
local_irq_enable();
return;
}
@ -530,7 +534,9 @@ static void acpi_processor_idle(void)
* skew otherwise.
*/
sleep_ticks = 0xFFFFFFFF;
local_irq_enable();
if (irqs_disabled())
local_irq_enable();
break;
case ACPI_STATE_C2:

View File

@ -609,7 +609,8 @@ acpi_bus_get_ejd(acpi_handle handle, acpi_handle *ejd)
status = acpi_evaluate_object(handle, "_EJD", NULL, &buffer);
if (ACPI_SUCCESS(status)) {
obj = buffer.pointer;
status = acpi_get_handle(NULL, obj->string.pointer, ejd);
status = acpi_get_handle(ACPI_ROOT_OBJECT, obj->string.pointer,
ejd);
kfree(buffer.pointer);
}
return status;
@ -966,7 +967,7 @@ static void acpi_device_set_id(struct acpi_device *device,
case ACPI_BUS_TYPE_DEVICE:
status = acpi_get_object_info(handle, &buffer);
if (ACPI_FAILURE(status)) {
printk(KERN_ERR PREFIX "%s: Error reading device info\n", __FUNCTION__);
printk(KERN_ERR PREFIX "%s: Error reading device info\n", __func__);
return;
}

View File

@ -504,7 +504,7 @@ static void acpi_power_off_prepare(void)
static void acpi_power_off(void)
{
/* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */
printk("%s called\n", __FUNCTION__);
printk("%s called\n", __func__);
local_irq_disable();
acpi_enable_wakeup_device(ACPI_STATE_S5);
acpi_enter_sleep_state(ACPI_STATE_S5);

View File

@ -319,7 +319,7 @@ void acpi_irq_stats_init(void)
goto fail;
for (i = 0; i < num_counters; ++i) {
char buffer[10];
char buffer[12];
char *name;
if (i < num_gpes)

View File

@ -879,6 +879,8 @@ static void acpi_thermal_check(void *data)
}
/* sys I/F for generic thermal sysfs support */
#define KELVIN_TO_MILLICELSIUS(t) (t * 100 - 273200)
static int thermal_get_temp(struct thermal_zone_device *thermal, char *buf)
{
struct acpi_thermal *tz = thermal->devdata;
@ -886,7 +888,7 @@ static int thermal_get_temp(struct thermal_zone_device *thermal, char *buf)
if (!tz)
return -EINVAL;
return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(tz->temperature));
return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(tz->temperature));
}
static const char enabled[] = "kernel";
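A worked example of the new conversion macro, assuming tz->temperature holds tenths of a kelvin as ACPI reports it:
  # 3102 (i.e. 310.2 K)  ->  3102 * 100 - 273200 = 37000 millidegree Celsius (37.0 C)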
@ -980,21 +982,21 @@ static int thermal_get_trip_temp(struct thermal_zone_device *thermal,
if (tz->trips.critical.flags.valid) {
if (!trip)
return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.critical.temperature));
trip--;
}
if (tz->trips.hot.flags.valid) {
if (!trip)
return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.hot.temperature));
trip--;
}
if (tz->trips.passive.flags.valid) {
if (!trip)
return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.passive.temperature));
trip--;
}
@ -1002,7 +1004,7 @@ static int thermal_get_trip_temp(struct thermal_zone_device *thermal,
for (i = 0; i < ACPI_THERMAL_MAX_ACTIVE &&
tz->trips.active[i].flags.valid; i++) {
if (!trip)
return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.active[i].temperature));
trip--;
}

View File

@ -99,6 +99,13 @@ MODULE_LICENSE("GPL");
#define HCI_VIDEO_OUT_CRT 0x2
#define HCI_VIDEO_OUT_TV 0x4
static const struct acpi_device_id toshiba_device_ids[] = {
{"TOS6200", 0},
{"TOS1900", 0},
{"", 0},
};
MODULE_DEVICE_TABLE(acpi, toshiba_device_ids);
/* utility
*/

View File

@ -109,7 +109,7 @@ void acpi_ut_track_stack_ptr(void)
* RETURN: Updated pointer to the function name
*
* DESCRIPTION: Remove the "Acpi" prefix from the function name, if present.
* This allows compiler macros such as __FUNCTION__ to be used
* This allows compiler macros such as __func__ to be used
* with no change to the debug output.
*
******************************************************************************/

View File

@ -432,7 +432,7 @@ acpi_ut_get_simple_object_size(union acpi_operand_object *internal_object,
* element -- which is legal)
*/
if (!internal_object) {
*obj_length = 0;
*obj_length = sizeof(union acpi_object);
return_ACPI_STATUS(AE_OK);
}

View File

@ -407,6 +407,12 @@ acpi_evaluate_reference(acpi_handle handle,
break;
}
if (!element->reference.handle) {
printk(KERN_WARNING PREFIX "Invalid reference in"
" package %s\n", pathname);
status = AE_NULL_ENTRY;
break;
}
/* Get the acpi_handle. */
list->handles[i] = element->reference.handle;

View File

@ -713,7 +713,7 @@ static void acpi_video_device_find_cap(struct acpi_video_device *device)
kfree(obj);
if (device->cap._BCL && device->cap._BCM && device->cap._BQC && max_level > 0){
if (device->cap._BCL && device->cap._BCM && max_level > 0) {
int result;
static int count = 0;
char *name;
@ -807,40 +807,11 @@ static void acpi_video_bus_find_cap(struct acpi_video_bus *video)
static int acpi_video_bus_check(struct acpi_video_bus *video)
{
acpi_status status = -ENOENT;
long device_id;
struct device *dev;
struct acpi_device *device;
if (!video)
return -EINVAL;
device = video->device;
status =
acpi_evaluate_integer(device->handle, "_ADR", NULL, &device_id);
if (!ACPI_SUCCESS(status))
return -ENODEV;
/* We need to attempt to determine whether the _ADR refers to a
PCI device or not. There's no terribly good way to do this,
so the best we can hope for is to assume that there'll never
be a video device in the host bridge */
if (device_id >= 0x10000) {
/* It looks like a PCI device. Does it exist? */
dev = acpi_get_physical_device(device->handle);
} else {
/* It doesn't look like a PCI device. Does its parent
exist? */
acpi_handle phandle;
if (acpi_get_parent(device->handle, &phandle))
return -ENODEV;
dev = acpi_get_physical_device(phandle);
}
if (!dev)
return -ENODEV;
put_device(dev);
/* Since there is no HID, CID and so on for VGA driver, we have
* to check well known required nodes.
*/
@ -1201,7 +1172,7 @@ static int acpi_video_bus_ROM_seq_show(struct seq_file *seq, void *offset)
if (!video)
goto end;
printk(KERN_INFO PREFIX "Please implement %s\n", __FUNCTION__);
printk(KERN_INFO PREFIX "Please implement %s\n", __func__);
seq_printf(seq, "<TODO>\n");
end:
@ -1366,37 +1337,8 @@ acpi_video_bus_write_DOS(struct file *file,
static int acpi_video_bus_add_fs(struct acpi_device *device)
{
long device_id;
int status;
struct proc_dir_entry *entry = NULL;
struct acpi_video_bus *video;
struct device *dev;
status =
acpi_evaluate_integer(device->handle, "_ADR", NULL, &device_id);
if (!ACPI_SUCCESS(status))
return -ENODEV;
/* We need to attempt to determine whether the _ADR refers to a
PCI device or not. There's no terribly good way to do this,
so the best we can hope for is to assume that there'll never
be a video device in the host bridge */
if (device_id >= 0x10000) {
/* It looks like a PCI device. Does it exist? */
dev = acpi_get_physical_device(device->handle);
} else {
/* It doesn't look like a PCI device. Does its parent
exist? */
acpi_handle phandle;
if (acpi_get_parent(device->handle, &phandle))
return -ENODEV;
dev = acpi_get_physical_device(phandle);
}
if (!dev)
return -ENODEV;
put_device(dev);
video = acpi_driver_data(device);

View File

@ -293,7 +293,7 @@ struct acpi_buffer *out)
{
struct guid_block *block = NULL;
struct wmi_block *wblock = NULL;
acpi_handle handle;
acpi_handle handle, wc_handle;
acpi_status status, wc_status = AE_ERROR;
struct acpi_object_list input, wc_input;
union acpi_object wc_params[1], wq_params[1];
@ -338,8 +338,10 @@ struct acpi_buffer *out)
* expensive, but have no corresponding WCxx method. So we
* should not fail if this happens.
*/
wc_status = acpi_evaluate_object(handle, wc_method,
&wc_input, NULL);
wc_status = acpi_get_handle(handle, wc_method, &wc_handle);
if (ACPI_SUCCESS(wc_status))
wc_status = acpi_evaluate_object(handle, wc_method,
&wc_input, NULL);
}
strcpy(method, "WQ");
@ -351,7 +353,7 @@ struct acpi_buffer *out)
* If ACPI_WMI_EXPENSIVE, call the relevant WCxx method, even if
* the WQxx method failed - we should disable collection anyway.
*/
if ((block->flags & ACPI_WMI_EXPENSIVE) && wc_status) {
if ((block->flags & ACPI_WMI_EXPENSIVE) && ACPI_SUCCESS(wc_status)) {
wc_params[0].integer.value = 0;
status = acpi_evaluate_object(handle,
wc_method, &wc_input, NULL);

View File

@ -30,6 +30,7 @@ config ATA_NONSTANDARD
config ATA_ACPI
bool
depends on ACPI && PCI
select ACPI_DOCK
default y
help
This option adds support for ATA-related ACPI objects.

View File

@ -49,6 +49,10 @@
#define DRV_NAME "ahci"
#define DRV_VERSION "3.0"
static int ahci_skip_host_reset;
module_param_named(skip_host_reset, ahci_skip_host_reset, int, 0444);
MODULE_PARM_DESC(skip_host_reset, "skip global host reset (0=don't skip, 1=skip)");
static int ahci_enable_alpm(struct ata_port *ap,
enum link_pm policy);
static void ahci_disable_alpm(struct ata_port *ap);
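The new module parameter can be exercised at load time; a usage sketch (the value shown is just an example):
  modprobe ahci skip_host_reset=1
  # for a built-in driver, pass ahci.skip_host_reset=1 on the kernel command line
  cat /sys/module/ahci/parameters/skip_host_reset   # readable because of the 0444 mode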
@ -587,6 +591,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
/* Marvell */
{ PCI_VDEVICE(MARVELL, 0x6145), board_ahci_mv }, /* 6145 */
{ PCI_VDEVICE(MARVELL, 0x6121), board_ahci_mv }, /* 6121 */
/* Generic, PCI class code for AHCI */
{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
@ -661,6 +666,7 @@ static void ahci_save_initial_config(struct pci_dev *pdev,
void __iomem *mmio = pcim_iomap_table(pdev)[AHCI_PCI_BAR];
u32 cap, port_map;
int i;
int mv;
/* make sure AHCI mode is enabled before accessing CAP */
ahci_enable_ahci(mmio);
@ -696,12 +702,16 @@ static void ahci_save_initial_config(struct pci_dev *pdev,
* presence register, as bit 4 (counting from 0)
*/
if (hpriv->flags & AHCI_HFLAG_MV_PATA) {
if (pdev->device == 0x6121)
mv = 0x3;
else
mv = 0xf;
dev_printk(KERN_ERR, &pdev->dev,
"MV_AHCI HACK: port_map %x -> %x\n",
hpriv->port_map,
hpriv->port_map & 0xf);
port_map,
port_map & mv);
port_map &= 0xf;
port_map &= mv;
}
/* cross check port_map and cap.n_ports */
@ -1088,29 +1098,35 @@ static int ahci_reset_controller(struct ata_host *host)
ahci_enable_ahci(mmio);
/* global controller reset */
tmp = readl(mmio + HOST_CTL);
if ((tmp & HOST_RESET) == 0) {
writel(tmp | HOST_RESET, mmio + HOST_CTL);
readl(mmio + HOST_CTL); /* flush */
}
if (!ahci_skip_host_reset) {
tmp = readl(mmio + HOST_CTL);
if ((tmp & HOST_RESET) == 0) {
writel(tmp | HOST_RESET, mmio + HOST_CTL);
readl(mmio + HOST_CTL); /* flush */
}
/* reset must complete within 1 second, or
* the hardware should be considered fried.
*/
ssleep(1);
/* reset must complete within 1 second, or
* the hardware should be considered fried.
*/
ssleep(1);
tmp = readl(mmio + HOST_CTL);
if (tmp & HOST_RESET) {
dev_printk(KERN_ERR, host->dev,
"controller reset failed (0x%x)\n", tmp);
return -EIO;
}
tmp = readl(mmio + HOST_CTL);
if (tmp & HOST_RESET) {
dev_printk(KERN_ERR, host->dev,
"controller reset failed (0x%x)\n", tmp);
return -EIO;
}
/* turn on AHCI mode */
ahci_enable_ahci(mmio);
/* turn on AHCI mode */
ahci_enable_ahci(mmio);
/* some registers might be cleared on reset. restore initial values */
ahci_restore_initial_config(host);
/* Some registers might be cleared on reset. Restore
* initial values.
*/
ahci_restore_initial_config(host);
} else
dev_printk(KERN_INFO, host->dev,
"skipping global host reset\n");
if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
u16 tmp16;
@ -1162,9 +1178,14 @@ static void ahci_init_controller(struct ata_host *host)
int i;
void __iomem *port_mmio;
u32 tmp;
int mv;
if (hpriv->flags & AHCI_HFLAG_MV_PATA) {
port_mmio = __ahci_port_base(host, 4);
if (pdev->device == 0x6121)
mv = 2;
else
mv = 4;
port_mmio = __ahci_port_base(host, mv);
writel(0, port_mmio + PORT_IRQ_MASK);
@ -2241,7 +2262,10 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (rc)
return rc;
rc = pcim_iomap_regions(pdev, 1 << AHCI_PCI_BAR, DRV_NAME);
/* AHCI controllers often implement SFF compatible interface.
* Grab all PCI BARs just in case.
*/
rc = pcim_iomap_regions_request_all(pdev, 1 << AHCI_PCI_BAR, DRV_NAME);
if (rc == -EBUSY)
pcim_pin_device(pdev);
if (rc)

View File

@ -118,45 +118,77 @@ static void ata_acpi_associate_ide_port(struct ata_port *ap)
ap->pflags |= ATA_PFLAG_INIT_GTM_VALID;
}
static void ata_acpi_handle_hotplug(struct ata_port *ap, struct kobject *kobj,
static void ata_acpi_handle_hotplug(struct ata_port *ap, struct ata_device *dev,
u32 event)
{
char event_string[12];
char *envp[] = { event_string, NULL };
struct ata_eh_info *ehi = &ap->link.eh_info;
struct ata_eh_info *ehi;
struct kobject *kobj = NULL;
int wait = 0;
unsigned long flags;
if (event == 0 || event == 1) {
unsigned long flags;
spin_lock_irqsave(ap->lock, flags);
ata_ehi_clear_desc(ehi);
ata_ehi_push_desc(ehi, "ACPI event");
ata_ehi_hotplugged(ehi);
ata_port_freeze(ap);
spin_unlock_irqrestore(ap->lock, flags);
if (!ap)
ap = dev->link->ap;
ehi = &ap->link.eh_info;
spin_lock_irqsave(ap->lock, flags);
switch (event) {
case ACPI_NOTIFY_BUS_CHECK:
case ACPI_NOTIFY_DEVICE_CHECK:
ata_ehi_push_desc(ehi, "ACPI event");
ata_ehi_hotplugged(ehi);
ata_port_freeze(ap);
break;
case ACPI_NOTIFY_EJECT_REQUEST:
ata_ehi_push_desc(ehi, "ACPI event");
if (dev)
dev->flags |= ATA_DFLAG_DETACH;
else {
struct ata_link *tlink;
struct ata_device *tdev;
ata_port_for_each_link(tlink, ap)
ata_link_for_each_dev(tdev, tlink)
tdev->flags |= ATA_DFLAG_DETACH;
}
ata_port_schedule_eh(ap);
wait = 1;
break;
}
if (dev) {
if (dev->sdev)
kobj = &dev->sdev->sdev_gendev.kobj;
} else
kobj = &ap->dev->kobj;
if (kobj) {
sprintf(event_string, "BAY_EVENT=%d", event);
kobject_uevent_env(kobj, KOBJ_CHANGE, envp);
}
spin_unlock_irqrestore(ap->lock, flags);
if (wait)
ata_port_wait_eh(ap);
}
static void ata_acpi_dev_notify(acpi_handle handle, u32 event, void *data)
{
struct ata_device *dev = data;
struct kobject *kobj = NULL;
if (dev->sdev)
kobj = &dev->sdev->sdev_gendev.kobj;
ata_acpi_handle_hotplug(dev->link->ap, kobj, event);
ata_acpi_handle_hotplug(NULL, dev, event);
}
static void ata_acpi_ap_notify(acpi_handle handle, u32 event, void *data)
{
struct ata_port *ap = data;
ata_acpi_handle_hotplug(ap, &ap->dev->kobj, event);
ata_acpi_handle_hotplug(ap, NULL, event);
}
/**
@ -191,20 +223,30 @@ void ata_acpi_associate(struct ata_host *host)
else
ata_acpi_associate_ide_port(ap);
if (ap->acpi_handle)
acpi_install_notify_handler (ap->acpi_handle,
ACPI_SYSTEM_NOTIFY,
ata_acpi_ap_notify,
ap);
if (ap->acpi_handle) {
acpi_install_notify_handler(ap->acpi_handle,
ACPI_SYSTEM_NOTIFY,
ata_acpi_ap_notify, ap);
#if defined(CONFIG_ACPI_DOCK) || defined(CONFIG_ACPI_DOCK_MODULE)
/* we might be on a docking station */
register_hotplug_dock_device(ap->acpi_handle,
ata_acpi_ap_notify, ap);
#endif
}
for (j = 0; j < ata_link_max_devices(&ap->link); j++) {
struct ata_device *dev = &ap->link.device[j];
if (dev->acpi_handle)
acpi_install_notify_handler (dev->acpi_handle,
ACPI_SYSTEM_NOTIFY,
ata_acpi_dev_notify,
dev);
if (dev->acpi_handle) {
acpi_install_notify_handler(dev->acpi_handle,
ACPI_SYSTEM_NOTIFY,
ata_acpi_dev_notify, dev);
#if defined(CONFIG_ACPI_DOCK) || defined(CONFIG_ACPI_DOCK_MODULE)
/* we might be on a docking station */
register_hotplug_dock_device(dev->acpi_handle,
ata_acpi_dev_notify, dev);
#endif
}
}
}
}

View File

@ -295,7 +295,7 @@ static void ali_lock_sectors(struct ata_device *adev)
static int ali_check_atapi_dma(struct ata_queued_cmd *qc)
{
/* If its not a media command, its not worth it */
if (qc->nbytes < 2048)
if (atapi_cmd_type(qc->cdb[0]) == ATAPI_MISC)
return -EOPNOTSUPP;
return 0;
}

View File

@ -217,7 +217,6 @@ static int use_virtual_dma;
*/
static DEFINE_SPINLOCK(floppy_lock);
static struct completion device_release;
static unsigned short virtual_dma_port = 0x3f0;
irqreturn_t floppy_interrupt(int irq, void *dev_id);
@ -4144,7 +4143,6 @@ DEVICE_ATTR(cmos,S_IRUGO,floppy_cmos_show,NULL);
static void floppy_device_release(struct device *dev)
{
complete(&device_release);
}
static struct platform_device floppy_device[N_DRIVE];
@ -4539,7 +4537,6 @@ void cleanup_module(void)
{
int drive;
init_completion(&device_release);
blk_unregister_region(MKDEV(FLOPPY_MAJOR, 0), 256);
unregister_blkdev(FLOPPY_MAJOR, "fd");
@ -4564,8 +4561,6 @@ void cleanup_module(void)
/* eject disk, if any */
fd_eject(0);
wait_for_completion(&device_release);
}
module_param(floppy, charp, 0);

View File

@ -238,6 +238,7 @@ static int virtblk_probe(struct virtio_device *vdev)
vblk->disk->first_minor = index_to_minor(index);
vblk->disk->private_data = vblk;
vblk->disk->fops = &virtblk_fops;
vblk->disk->driverfs_dev = &vdev->dev;
index++;
/* If barriers are supported, tell block layer that queue is ordered */

View File

@ -1709,7 +1709,7 @@ static int __init riscom8_init_module (void)
if (iobase || iobase1 || iobase2 || iobase3) {
for(i = 0; i < RC_NBOARD; i++)
rc_board[0].base = 0;
rc_board[i].base = 0;
}
if (iobase)

View File

@ -357,7 +357,7 @@ int dma_async_device_register(struct dma_device *device)
!device->device_prep_dma_zero_sum);
BUG_ON(dma_has_cap(DMA_MEMSET, device->cap_mask) &&
!device->device_prep_dma_memset);
BUG_ON(dma_has_cap(DMA_ZERO_SUM, device->cap_mask) &&
BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) &&
!device->device_prep_dma_interrupt);
BUG_ON(!device->device_alloc_chan_resources);

View File

@ -57,12 +57,12 @@ static void dma_init(struct fsl_dma_chan *fsl_chan)
}
static void set_sr(struct fsl_dma_chan *fsl_chan, dma_addr_t val)
static void set_sr(struct fsl_dma_chan *fsl_chan, u32 val)
{
DMA_OUT(fsl_chan, &fsl_chan->reg_base->sr, val, 32);
}
static dma_addr_t get_sr(struct fsl_dma_chan *fsl_chan)
static u32 get_sr(struct fsl_dma_chan *fsl_chan)
{
return DMA_IN(fsl_chan, &fsl_chan->reg_base->sr, 32);
}
@ -406,6 +406,32 @@ static void fsl_dma_free_chan_resources(struct dma_chan *chan)
dma_pool_destroy(fsl_chan->desc_pool);
}
static struct dma_async_tx_descriptor *
fsl_dma_prep_interrupt(struct dma_chan *chan)
{
struct fsl_dma_chan *fsl_chan;
struct fsl_desc_sw *new;
if (!chan)
return NULL;
fsl_chan = to_fsl_chan(chan);
new = fsl_dma_alloc_descriptor(fsl_chan);
if (!new) {
dev_err(fsl_chan->dev, "No free memory for link descriptor\n");
return NULL;
}
new->async_tx.cookie = -EBUSY;
new->async_tx.ack = 0;
/* Set End-of-link to the last link descriptor of new list*/
set_ld_eol(fsl_chan, new);
return &new->async_tx;
}
static struct dma_async_tx_descriptor *fsl_dma_prep_memcpy(
struct dma_chan *chan, dma_addr_t dma_dest, dma_addr_t dma_src,
size_t len, unsigned long flags)
@ -436,7 +462,7 @@ static struct dma_async_tx_descriptor *fsl_dma_prep_memcpy(
dev_dbg(fsl_chan->dev, "new link desc alloc %p\n", new);
#endif
copy = min(len, FSL_DMA_BCR_MAX_CNT);
copy = min(len, (size_t)FSL_DMA_BCR_MAX_CNT);
set_desc_cnt(fsl_chan, &new->hw, copy);
set_desc_src(fsl_chan, &new->hw, dma_src);
@ -513,7 +539,6 @@ static void fsl_chan_ld_cleanup(struct fsl_dma_chan *fsl_chan)
spin_lock_irqsave(&fsl_chan->desc_lock, flags);
fsl_dma_update_completed_cookie(fsl_chan);
dev_dbg(fsl_chan->dev, "chan completed_cookie = %d\n",
fsl_chan->completed_cookie);
list_for_each_entry_safe(desc, _desc, &fsl_chan->ld_queue, node) {
@ -581,8 +606,8 @@ static void fsl_chan_xfer_ld_queue(struct fsl_dma_chan *fsl_chan)
if (ld_node != &fsl_chan->ld_queue) {
/* Get the ld start address from ld_queue */
next_dest_addr = to_fsl_desc(ld_node)->async_tx.phys;
dev_dbg(fsl_chan->dev, "xfer LDs staring from 0x%016llx\n",
(u64)next_dest_addr);
dev_dbg(fsl_chan->dev, "xfer LDs staring from %p\n",
(void *)next_dest_addr);
set_cdar(fsl_chan, next_dest_addr);
dma_start(fsl_chan);
} else {
@ -662,7 +687,7 @@ static enum dma_status fsl_dma_is_complete(struct dma_chan *chan,
static irqreturn_t fsl_dma_chan_do_interrupt(int irq, void *data)
{
struct fsl_dma_chan *fsl_chan = (struct fsl_dma_chan *)data;
dma_addr_t stat;
u32 stat;
stat = get_sr(fsl_chan);
dev_dbg(fsl_chan->dev, "event: channel %d, stat = 0x%x\n",
@ -681,10 +706,10 @@ static irqreturn_t fsl_dma_chan_do_interrupt(int irq, void *data)
*/
if (stat & FSL_DMA_SR_EOSI) {
dev_dbg(fsl_chan->dev, "event: End-of-segments INT\n");
dev_dbg(fsl_chan->dev, "event: clndar 0x%016llx, "
"nlndar 0x%016llx\n", (u64)get_cdar(fsl_chan),
(u64)get_ndar(fsl_chan));
dev_dbg(fsl_chan->dev, "event: clndar %p, nlndar %p\n",
(void *)get_cdar(fsl_chan), (void *)get_ndar(fsl_chan));
stat &= ~FSL_DMA_SR_EOSI;
fsl_dma_update_completed_cookie(fsl_chan);
}
/* If it current transfer is the end-of-transfer,
@ -726,12 +751,15 @@ static void dma_do_tasklet(unsigned long data)
fsl_chan_ld_cleanup(fsl_chan);
}
#ifdef FSL_DMA_CALLBACKTEST
static void fsl_dma_callback_test(struct fsl_dma_chan *fsl_chan)
{
if (fsl_chan)
dev_info(fsl_chan->dev, "selftest: callback is ok!\n");
}
#endif
#ifdef CONFIG_FSL_DMA_SELFTEST
static int fsl_dma_self_test(struct fsl_dma_chan *fsl_chan)
{
struct dma_chan *chan;
@ -837,9 +865,9 @@ static int fsl_dma_self_test(struct fsl_dma_chan *fsl_chan)
if (err) {
for (i = 0; (*(src + i) == *(dest + i)) && (i < test_size);
i++);
dev_err(fsl_chan->dev, "selftest: Test failed, data %d/%d is "
dev_err(fsl_chan->dev, "selftest: Test failed, data %d/%ld is "
"error! src 0x%x, dest 0x%x\n",
i, test_size, *(src + i), *(dest + i));
i, (long)test_size, *(src + i), *(dest + i));
}
free_resources:
@ -848,6 +876,7 @@ out:
kfree(src);
return err;
}
#endif
static int __devinit of_fsl_dma_chan_probe(struct of_device *dev,
const struct of_device_id *match)
@ -1008,8 +1037,8 @@ static int __devinit of_fsl_dma_probe(struct of_device *dev,
}
dev_info(&dev->dev, "Probe the Freescale DMA driver for %s "
"controller at 0x%08x...\n",
match->compatible, fdev->reg.start);
"controller at %p...\n",
match->compatible, (void *)fdev->reg.start);
fdev->reg_base = ioremap(fdev->reg.start, fdev->reg.end
- fdev->reg.start + 1);
@ -1017,6 +1046,7 @@ static int __devinit of_fsl_dma_probe(struct of_device *dev,
dma_cap_set(DMA_INTERRUPT, fdev->common.cap_mask);
fdev->common.device_alloc_chan_resources = fsl_dma_alloc_chan_resources;
fdev->common.device_free_chan_resources = fsl_dma_free_chan_resources;
fdev->common.device_prep_dma_interrupt = fsl_dma_prep_interrupt;
fdev->common.device_prep_dma_memcpy = fsl_dma_prep_memcpy;
fdev->common.device_is_tx_complete = fsl_dma_is_complete;
fdev->common.device_issue_pending = fsl_dma_memcpy_issue_pending;

View File

@ -140,7 +140,7 @@ static void __iop_adma_slot_cleanup(struct iop_adma_chan *iop_chan)
int busy = iop_chan_is_busy(iop_chan);
int seen_current = 0, slot_cnt = 0, slots_per_op = 0;
dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
/* free completed slots from the chain starting with
* the oldest descriptor
*/
@ -438,7 +438,7 @@ iop_adma_tx_submit(struct dma_async_tx_descriptor *tx)
spin_unlock_bh(&iop_chan->lock);
dev_dbg(iop_chan->device->common.dev, "%s cookie: %d slot: %d\n",
__FUNCTION__, sw_desc->async_tx.cookie, sw_desc->idx);
__func__, sw_desc->async_tx.cookie, sw_desc->idx);
return cookie;
}
@ -520,7 +520,7 @@ iop_adma_prep_dma_interrupt(struct dma_chan *chan)
struct iop_adma_desc_slot *sw_desc, *grp_start;
int slot_cnt, slots_per_op;
dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_interrupt_slot_count(&slots_per_op, iop_chan);
@ -548,7 +548,7 @@ iop_adma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dma_dest,
BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT));
dev_dbg(iop_chan->device->common.dev, "%s len: %u\n",
__FUNCTION__, len);
__func__, len);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_memcpy_slot_count(len, &slots_per_op);
@ -580,7 +580,7 @@ iop_adma_prep_dma_memset(struct dma_chan *chan, dma_addr_t dma_dest,
BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT));
dev_dbg(iop_chan->device->common.dev, "%s len: %u\n",
__FUNCTION__, len);
__func__, len);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_memset_slot_count(len, &slots_per_op);
@ -614,7 +614,7 @@ iop_adma_prep_dma_xor(struct dma_chan *chan, dma_addr_t dma_dest,
dev_dbg(iop_chan->device->common.dev,
"%s src_cnt: %d len: %u flags: %lx\n",
__FUNCTION__, src_cnt, len, flags);
__func__, src_cnt, len, flags);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_xor_slot_count(len, src_cnt, &slots_per_op);
@ -648,7 +648,7 @@ iop_adma_prep_dma_zero_sum(struct dma_chan *chan, dma_addr_t *dma_src,
return NULL;
dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %u\n",
__FUNCTION__, src_cnt, len);
__func__, src_cnt, len);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_zero_sum_slot_count(len, src_cnt, &slots_per_op);
@ -659,7 +659,7 @@ iop_adma_prep_dma_zero_sum(struct dma_chan *chan, dma_addr_t *dma_src,
iop_desc_set_zero_sum_byte_count(grp_start, len);
grp_start->xor_check_result = result;
pr_debug("\t%s: grp_start->xor_check_result: %p\n",
__FUNCTION__, grp_start->xor_check_result);
__func__, grp_start->xor_check_result);
sw_desc->unmap_src_cnt = src_cnt;
sw_desc->unmap_len = len;
while (src_cnt--)
@ -700,7 +700,7 @@ static void iop_adma_free_chan_resources(struct dma_chan *chan)
iop_chan->last_used = NULL;
dev_dbg(iop_chan->device->common.dev, "%s slots_allocated %d\n",
__FUNCTION__, iop_chan->slots_allocated);
__func__, iop_chan->slots_allocated);
spin_unlock_bh(&iop_chan->lock);
/* one is ok since we left it on there on purpose */
@ -753,7 +753,7 @@ static irqreturn_t iop_adma_eot_handler(int irq, void *data)
{
struct iop_adma_chan *chan = data;
dev_dbg(chan->device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(chan->device->common.dev, "%s\n", __func__);
tasklet_schedule(&chan->irq_tasklet);
@ -766,7 +766,7 @@ static irqreturn_t iop_adma_eoc_handler(int irq, void *data)
{
struct iop_adma_chan *chan = data;
dev_dbg(chan->device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(chan->device->common.dev, "%s\n", __func__);
tasklet_schedule(&chan->irq_tasklet);
@ -823,7 +823,7 @@ static int __devinit iop_adma_memcpy_self_test(struct iop_adma_device *device)
int err = 0;
struct iop_adma_chan *iop_chan;
dev_dbg(device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(device->common.dev, "%s\n", __func__);
src = kzalloc(sizeof(u8) * IOP_ADMA_TEST_SIZE, GFP_KERNEL);
if (!src)
@ -906,7 +906,7 @@ iop_adma_xor_zero_sum_self_test(struct iop_adma_device *device)
int err = 0;
struct iop_adma_chan *iop_chan;
dev_dbg(device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(device->common.dev, "%s\n", __func__);
for (src_idx = 0; src_idx < IOP_ADMA_NUM_SRC_TEST; src_idx++) {
xor_srcs[src_idx] = alloc_page(GFP_KERNEL);
@ -1159,7 +1159,7 @@ static int __devinit iop_adma_probe(struct platform_device *pdev)
}
dev_dbg(&pdev->dev, "%s: allocted descriptor pool virt %p phys %p\n",
__FUNCTION__, adev->dma_desc_pool_virt,
__func__, adev->dma_desc_pool_virt,
(void *) adev->dma_desc_pool);
adev->id = plat_data->hw_id;
@ -1289,7 +1289,7 @@ static void iop_chan_start_null_memcpy(struct iop_adma_chan *iop_chan)
dma_cookie_t cookie;
int slot_cnt, slots_per_op;
dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_memcpy_slot_count(0, &slots_per_op);
@ -1346,7 +1346,7 @@ static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan)
dma_cookie_t cookie;
int slot_cnt, slots_per_op;
dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_xor_slot_count(0, 2, &slots_per_op);
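
The iop_adma hunks above swap the GCC-specific __FUNCTION__ for the standard C99 __func__ identifier. A minimal standalone sketch of a debug macro built on __func__; pr_dbg and probe_device are made up for this example, and ##__VA_ARGS__ is a GNU extension (standard fare in kernel code):

#include <stdio.h>

/* __func__ expands to the enclosing function's name, so call sites
 * need no hand-written string literal. */
#define pr_dbg(fmt, ...) \
        fprintf(stderr, "%s: " fmt, __func__, ##__VA_ARGS__)

static void probe_device(int id)
{
        pr_dbg("probing device %d\n", id);
}

int main(void)
{
        probe_device(3);
        return 0;
}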


@ -1,5 +1,3 @@
# -*- shell-script -*-
comment "An alternative FireWire stack is available with EXPERIMENTAL=y"
depends on EXPERIMENTAL=n
@ -21,27 +19,7 @@ config FIREWIRE
NOTE:
You should only build ONE of the stacks, unless you REALLY know what
you are doing. If you install both, you should configure them only as
modules rather than link them statically, and you should blacklist one
of the concurrent low-level drivers in /etc/modprobe.conf. Add either
blacklist firewire-ohci
or
blacklist ohci1394
there depending on which driver you DON'T want to have auto-loaded.
You can optionally do the same with the other IEEE 1394/ FireWire
drivers.
If you have an old modprobe which doesn't implement the blacklist
directive, use either
install firewire-ohci /bin/true
or
install ohci1394 /bin/true
and so on, depending on which modules you DON't want to have
auto-loaded.
you are doing.
config FIREWIRE_OHCI
tristate "Support for OHCI FireWire host controllers"
@ -57,8 +35,24 @@ config FIREWIRE_OHCI
NOTE:
If you also build ohci1394 of the classic stack, blacklist either
ohci1394 or firewire-ohci to let hotplug load only the desired driver.
You should only build ohci1394 or firewire-ohci, but not both.
If you nevertheless want to install both, you should configure them
only as modules and blacklist the driver(s) which you don't want to
have auto-loaded. Add either
blacklist firewire-ohci
or
blacklist ohci1394
blacklist video1394
blacklist dv1394
to /etc/modprobe.conf or /etc/modprobe.d/* and update modprobe.conf
depending on your distribution. The latter two modules should be
blacklisted together with ohci1394 because they depend on ohci1394.
If you have an old modprobe which doesn't implement the blacklist
directive, use "install modulename /bin/true" for the modules to be
blacklisted.
config FIREWIRE_SBP2
tristate "Support for storage devices (SBP-2 protocol driver)"
@ -75,9 +69,3 @@ config FIREWIRE_SBP2
You should also enable support for disks, CD-ROMs, etc. in the SCSI
configuration section.
NOTE:
If you also build sbp2 of the classic stack, blacklist either sbp2
or firewire-sbp2 to let hotplug load only the desired driver.


@ -33,6 +33,10 @@
#include <asm/page.h>
#include <asm/system.h>
#ifdef CONFIG_PPC_PMAC
#include <asm/pmac_feature.h>
#endif
#include "fw-ohci.h"
#include "fw-transaction.h"
@ -175,6 +179,7 @@ struct fw_ohci {
int generation;
int request_generation;
u32 bus_seconds;
bool old_uninorth;
/*
* Spinlock for accessing fw_ohci data. Never call out of
@ -276,19 +281,13 @@ static int ar_context_add_page(struct ar_context *ctx)
{
struct device *dev = ctx->ohci->card.device;
struct ar_buffer *ab;
dma_addr_t ab_bus;
dma_addr_t uninitialized_var(ab_bus);
size_t offset;
ab = (struct ar_buffer *) __get_free_page(GFP_ATOMIC);
ab = dma_alloc_coherent(dev, PAGE_SIZE, &ab_bus, GFP_ATOMIC);
if (ab == NULL)
return -ENOMEM;
ab_bus = dma_map_single(dev, ab, PAGE_SIZE, DMA_BIDIRECTIONAL);
if (dma_mapping_error(ab_bus)) {
free_page((unsigned long) ab);
return -ENOMEM;
}
memset(&ab->descriptor, 0, sizeof(ab->descriptor));
ab->descriptor.control = cpu_to_le16(DESCRIPTOR_INPUT_MORE |
DESCRIPTOR_STATUS |
@ -299,8 +298,6 @@ static int ar_context_add_page(struct ar_context *ctx)
ab->descriptor.res_count = cpu_to_le16(PAGE_SIZE - offset);
ab->descriptor.branch_address = 0;
dma_sync_single_for_device(dev, ab_bus, PAGE_SIZE, DMA_BIDIRECTIONAL);
ctx->last_buffer->descriptor.branch_address = cpu_to_le32(ab_bus | 1);
ctx->last_buffer->next = ab;
ctx->last_buffer = ab;
@ -311,15 +308,22 @@ static int ar_context_add_page(struct ar_context *ctx)
return 0;
}
#if defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)
#define cond_le32_to_cpu(v) \
(ohci->old_uninorth ? (__force __u32)(v) : le32_to_cpu(v))
#else
#define cond_le32_to_cpu(v) le32_to_cpu(v)
#endif
static __le32 *handle_ar_packet(struct ar_context *ctx, __le32 *buffer)
{
struct fw_ohci *ohci = ctx->ohci;
struct fw_packet p;
u32 status, length, tcode;
p.header[0] = le32_to_cpu(buffer[0]);
p.header[1] = le32_to_cpu(buffer[1]);
p.header[2] = le32_to_cpu(buffer[2]);
p.header[0] = cond_le32_to_cpu(buffer[0]);
p.header[1] = cond_le32_to_cpu(buffer[1]);
p.header[2] = cond_le32_to_cpu(buffer[2]);
tcode = (p.header[0] >> 4) & 0x0f;
switch (tcode) {
@ -331,7 +335,7 @@ static __le32 *handle_ar_packet(struct ar_context *ctx, __le32 *buffer)
break;
case TCODE_READ_BLOCK_REQUEST :
p.header[3] = le32_to_cpu(buffer[3]);
p.header[3] = cond_le32_to_cpu(buffer[3]);
p.header_length = 16;
p.payload_length = 0;
break;
@ -340,7 +344,7 @@ static __le32 *handle_ar_packet(struct ar_context *ctx, __le32 *buffer)
case TCODE_READ_BLOCK_RESPONSE:
case TCODE_LOCK_REQUEST:
case TCODE_LOCK_RESPONSE:
p.header[3] = le32_to_cpu(buffer[3]);
p.header[3] = cond_le32_to_cpu(buffer[3]);
p.header_length = 16;
p.payload_length = p.header[3] >> 16;
break;
@ -357,7 +361,7 @@ static __le32 *handle_ar_packet(struct ar_context *ctx, __le32 *buffer)
/* FIXME: What to do about evt_* errors? */
length = (p.header_length + p.payload_length + 3) / 4;
status = le32_to_cpu(buffer[length]);
status = cond_le32_to_cpu(buffer[length]);
p.ack = ((status >> 16) & 0x1f) - 16;
p.speed = (status >> 21) & 0x7;
@ -375,7 +379,7 @@ static __le32 *handle_ar_packet(struct ar_context *ctx, __le32 *buffer)
*/
if (p.ack + 16 == 0x09)
ohci->request_generation = (buffer[2] >> 16) & 0xff;
ohci->request_generation = (p.header[2] >> 16) & 0xff;
else if (ctx == &ohci->ar_request_ctx)
fw_core_handle_request(&ohci->card, &p);
else
@ -397,6 +401,7 @@ static void ar_context_tasklet(unsigned long data)
if (d->res_count == 0) {
size_t size, rest, offset;
dma_addr_t buffer_bus;
/*
* This descriptor is finished and we may have a
@ -405,9 +410,7 @@ static void ar_context_tasklet(unsigned long data)
*/
offset = offsetof(struct ar_buffer, data);
dma_unmap_single(ohci->card.device,
le32_to_cpu(ab->descriptor.data_address) - offset,
PAGE_SIZE, DMA_BIDIRECTIONAL);
buffer_bus = le32_to_cpu(ab->descriptor.data_address) - offset;
buffer = ab;
ab = ab->next;
@ -423,7 +426,8 @@ static void ar_context_tasklet(unsigned long data)
while (buffer < end)
buffer = handle_ar_packet(ctx, buffer);
free_page((unsigned long)buffer);
dma_free_coherent(ohci->card.device, PAGE_SIZE,
buffer, buffer_bus);
ar_context_add_page(ctx);
} else {
buffer = ctx->pointer;
@ -532,7 +536,7 @@ static int
context_add_buffer(struct context *ctx)
{
struct descriptor_buffer *desc;
dma_addr_t bus_addr;
dma_addr_t uninitialized_var(bus_addr);
int offset;
/*
@ -1022,13 +1026,14 @@ static void bus_reset_tasklet(unsigned long data)
*/
self_id_count = (reg_read(ohci, OHCI1394_SelfIDCount) >> 3) & 0x3ff;
generation = (le32_to_cpu(ohci->self_id_cpu[0]) >> 16) & 0xff;
generation = (cond_le32_to_cpu(ohci->self_id_cpu[0]) >> 16) & 0xff;
rmb();
for (i = 1, j = 0; j < self_id_count; i += 2, j++) {
if (ohci->self_id_cpu[i] != ~ohci->self_id_cpu[i + 1])
fw_error("inconsistent self IDs\n");
ohci->self_id_buffer[j] = le32_to_cpu(ohci->self_id_cpu[i]);
ohci->self_id_buffer[j] =
cond_le32_to_cpu(ohci->self_id_cpu[i]);
}
rmb();
@ -1316,7 +1321,7 @@ ohci_set_config_rom(struct fw_card *card, u32 *config_rom, size_t length)
unsigned long flags;
int retval = -EBUSY;
__be32 *next_config_rom;
dma_addr_t next_config_rom_bus;
dma_addr_t uninitialized_var(next_config_rom_bus);
ohci = fw_ohci(card);
@ -1487,7 +1492,7 @@ static int handle_ir_dualbuffer_packet(struct context *context,
void *p, *end;
int i;
if (db->first_res_count > 0 && db->second_res_count > 0) {
if (db->first_res_count != 0 && db->second_res_count != 0) {
if (ctx->excess_bytes <= le16_to_cpu(db->second_req_count)) {
/* This descriptor isn't done yet, stop iteration. */
return 0;
@ -1513,7 +1518,7 @@ static int handle_ir_dualbuffer_packet(struct context *context,
memcpy(ctx->header + i + 4, p + 8, ctx->base.header_size - 4);
i += ctx->base.header_size;
ctx->excess_bytes +=
(le32_to_cpu(*(u32 *)(p + 4)) >> 16) & 0xffff;
(le32_to_cpu(*(__le32 *)(p + 4)) >> 16) & 0xffff;
p += ctx->base.header_size + 4;
}
ctx->header_length = i;
@ -2048,6 +2053,18 @@ pci_probe(struct pci_dev *dev, const struct pci_device_id *ent)
int err;
size_t size;
#ifdef CONFIG_PPC_PMAC
/* Necessary on some machines if fw-ohci was loaded/ unloaded before */
if (machine_is(powermac)) {
struct device_node *ofn = pci_device_to_OF_node(dev);
if (ofn) {
pmac_call_feature(PMAC_FTR_1394_CABLE_POWER, ofn, 0, 1);
pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 1);
}
}
#endif /* CONFIG_PPC_PMAC */
ohci = kzalloc(sizeof(*ohci), GFP_KERNEL);
if (ohci == NULL) {
fw_error("Could not malloc fw_ohci data.\n");
@ -2066,6 +2083,10 @@ pci_probe(struct pci_dev *dev, const struct pci_device_id *ent)
pci_write_config_dword(dev, OHCI1394_PCI_HCI_Control, 0);
pci_set_drvdata(dev, ohci);
#if defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)
ohci->old_uninorth = dev->vendor == PCI_VENDOR_ID_APPLE &&
dev->device == PCI_DEVICE_ID_APPLE_UNI_N_FW;
#endif
spin_lock_init(&ohci->lock);
tasklet_init(&ohci->bus_reset_tasklet,
@ -2182,6 +2203,19 @@ static void pci_remove(struct pci_dev *dev)
pci_disable_device(dev);
fw_card_put(&ohci->card);
#ifdef CONFIG_PPC_PMAC
/* On UniNorth, power down the cable and turn off the chip clock
* to save power on laptops */
if (machine_is(powermac)) {
struct device_node *ofn = pci_device_to_OF_node(dev);
if (ofn) {
pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 0);
pmac_call_feature(PMAC_FTR_1394_CABLE_POWER, ofn, 0, 0);
}
}
#endif /* CONFIG_PPC_PMAC */
fw_notify("Removed fw-ohci device.\n");
}
@ -2202,6 +2236,16 @@ static int pci_suspend(struct pci_dev *pdev, pm_message_t state)
if (err)
fw_error("pci_set_power_state failed with %d\n", err);
/* PowerMac suspend code comes last */
#ifdef CONFIG_PPC_PMAC
if (machine_is(powermac)) {
struct device_node *ofn = pci_device_to_OF_node(pdev);
if (ofn)
pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 0);
}
#endif /* CONFIG_PPC_PMAC */
return 0;
}
@ -2210,6 +2254,16 @@ static int pci_resume(struct pci_dev *pdev)
struct fw_ohci *ohci = pci_get_drvdata(pdev);
int err;
/* PowerMac resume code comes first */
#ifdef CONFIG_PPC_PMAC
if (machine_is(powermac)) {
struct device_node *ofn = pci_device_to_OF_node(pdev);
if (ofn)
pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 1);
}
#endif /* CONFIG_PPC_PMAC */
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
err = pci_enable_device(pdev);
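
The fw-ohci changes above route quadlet reads through cond_le32_to_cpu() so that data from old Apple UniNorth controllers, which these hunks treat as already being in host byte order, is not byte-swapped a second time. A rough userspace sketch of that conditional conversion; host_order_quirk is an assumption of this example, standing in for ohci->old_uninorth:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical quirk flag: set when the controller already delivered
 * host-order data, so the usual little-endian fixup must be skipped. */
static int host_order_quirk;

/* Portable le32-to-host: reassemble the value from its in-memory bytes,
 * treating them as little endian. */
static uint32_t le32_to_host(uint32_t v)
{
        const uint8_t *b = (const uint8_t *)&v;

        return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
               ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

static uint32_t cond_le32_to_host(uint32_t raw)
{
        return host_order_quirk ? raw : le32_to_host(raw);
}

int main(void)
{
        printf("0x%08x\n", (unsigned)cond_le32_to_host(0x12345678U));
        return 0;
}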


@ -173,6 +173,7 @@ struct sbp2_target {
#define SBP2_ORB_TIMEOUT 2000U /* Timeout in ms */
#define SBP2_ORB_NULL 0x80000000
#define SBP2_MAX_SG_ELEMENT_LENGTH 0xf000
#define SBP2_RETRY_LIMIT 0xf /* 15 retries */
#define SBP2_DIRECTION_TO_MEDIA 0x0
#define SBP2_DIRECTION_FROM_MEDIA 0x1
@ -330,6 +331,11 @@ static const struct {
.model = ~0,
.workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
},
/* Datafab MD2-FW2 with Symbios/LSILogic SYM13FW500 bridge */ {
.firmware_revision = 0x002600,
.model = ~0,
.workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
},
/*
* There are iPods (2nd gen, 3rd gen) with model_id == 0, but
@ -812,6 +818,30 @@ static void sbp2_target_put(struct sbp2_target *tgt)
kref_put(&tgt->kref, sbp2_release_target);
}
static void
complete_set_busy_timeout(struct fw_card *card, int rcode,
void *payload, size_t length, void *done)
{
complete(done);
}
static void sbp2_set_busy_timeout(struct sbp2_logical_unit *lu)
{
struct fw_device *device = fw_device(lu->tgt->unit->device.parent);
DECLARE_COMPLETION_ONSTACK(done);
struct fw_transaction t;
static __be32 busy_timeout;
/* FIXME: we should try to set dual-phase cycle_limit too */
busy_timeout = cpu_to_be32(SBP2_RETRY_LIMIT);
fw_send_request(device->card, &t, TCODE_WRITE_QUADLET_REQUEST,
lu->tgt->node_id, lu->generation, device->max_speed,
CSR_REGISTER_BASE + CSR_BUSY_TIMEOUT, &busy_timeout,
sizeof(busy_timeout), complete_set_busy_timeout, &done);
wait_for_completion(&done);
}
static void sbp2_reconnect(struct work_struct *work);
static void sbp2_login(struct work_struct *work)
@ -864,10 +894,8 @@ static void sbp2_login(struct work_struct *work)
fw_notify("%s: logged in to LUN %04x (%d retries)\n",
tgt->bus_id, lu->lun, lu->retries);
#if 0
/* FIXME: The linux1394 sbp2 does this last step. */
sbp2_set_busy_timeout(scsi_id);
#endif
/* set appropriate retry limit(s) in BUSY_TIMEOUT register */
sbp2_set_busy_timeout(lu);
PREPARE_DELAYED_WORK(&lu->work, sbp2_reconnect);
sbp2_agent_reset(lu);


@ -21,6 +21,7 @@
#include <linux/module.h>
#include <linux/wait.h>
#include <linux/errno.h>
#include <asm/bug.h>
#include <asm/system.h>
#include "fw-transaction.h"
#include "fw-topology.h"
@ -424,8 +425,8 @@ update_tree(struct fw_card *card, struct fw_node *root)
node1 = fw_node(list1.next);
while (&node0->link != &list0) {
WARN_ON(node0->port_count != node1->port_count);
/* assert(node0->port_count == node1->port_count); */
if (node0->link_on && !node1->link_on)
event = FW_NODE_LINK_OFF;
else if (!node0->link_on && node1->link_on)


@ -751,7 +751,7 @@ handle_topology_map(struct fw_card *card, struct fw_request *request,
void *payload, size_t length, void *callback_data)
{
int i, start, end;
u32 *map;
__be32 *map;
if (!TCODE_IS_READ_REQUEST(tcode)) {
fw_send_response(card, request, RCODE_TYPE_ERROR);


@ -86,12 +86,12 @@
static inline void
fw_memcpy_from_be32(void *_dst, void *_src, size_t size)
{
u32 *dst = _dst;
u32 *src = _src;
u32 *dst = _dst;
__be32 *src = _src;
int i;
for (i = 0; i < size / 4; i++)
dst[i] = cpu_to_be32(src[i]);
dst[i] = be32_to_cpu(src[i]);
}
static inline void


@ -376,6 +376,11 @@ static const struct {
.model_id = SBP2_ROM_VALUE_WILDCARD,
.workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
},
/* Datafab MD2-FW2 with Symbios/LSILogic SYM13FW500 bridge */ {
.firmware_revision = 0x002600,
.model_id = SBP2_ROM_VALUE_WILDCARD,
.workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
},
/* iPod 4th generation */ {
.firmware_revision = 0x0a2700,
.model_id = 0x000021,


@ -75,7 +75,7 @@
#define IPATH_IB_LINKDOWN 0
#define IPATH_IB_LINKARM 1
#define IPATH_IB_LINKACTIVE 2
#define IPATH_IB_LINKINIT 3
#define IPATH_IB_LINKDOWN_ONLY 3
#define IPATH_IB_LINKDOWN_SLEEP 4
#define IPATH_IB_LINKDOWN_DISABLE 5
#define IPATH_IB_LINK_LOOPBACK 6 /* enable local loopback */


@ -851,8 +851,7 @@ void ipath_disarm_piobufs(struct ipath_devdata *dd, unsigned first,
* -ETIMEDOUT state can have multiple states set, for any of several
* transitions.
*/
static int ipath_wait_linkstate(struct ipath_devdata *dd, u32 state,
int msecs)
int ipath_wait_linkstate(struct ipath_devdata *dd, u32 state, int msecs)
{
dd->ipath_state_wanted = state;
wait_event_interruptible_timeout(ipath_state_wait,
@ -1656,8 +1655,8 @@ void ipath_cancel_sends(struct ipath_devdata *dd, int restore_sendctrl)
static void ipath_set_ib_lstate(struct ipath_devdata *dd, int which)
{
static const char *what[4] = {
[0] = "DOWN",
[INFINIPATH_IBCC_LINKCMD_INIT] = "INIT",
[0] = "NOP",
[INFINIPATH_IBCC_LINKCMD_DOWN] = "DOWN",
[INFINIPATH_IBCC_LINKCMD_ARMED] = "ARMED",
[INFINIPATH_IBCC_LINKCMD_ACTIVE] = "ACTIVE"
};
@ -1672,9 +1671,9 @@ static void ipath_set_ib_lstate(struct ipath_devdata *dd, int which)
(dd, dd->ipath_kregs->kr_ibcstatus) >>
INFINIPATH_IBCS_LINKTRAININGSTATE_SHIFT) &
INFINIPATH_IBCS_LINKTRAININGSTATE_MASK]);
/* flush all queued sends when going to DOWN or INIT, to be sure that
/* flush all queued sends when going to DOWN to be sure that
* they don't block MAD packets */
if (!linkcmd || linkcmd == INFINIPATH_IBCC_LINKCMD_INIT)
if (linkcmd == INFINIPATH_IBCC_LINKCMD_DOWN)
ipath_cancel_sends(dd, 1);
ipath_write_kreg(dd, dd->ipath_kregs->kr_ibcctrl,
@ -1687,6 +1686,13 @@ int ipath_set_linkstate(struct ipath_devdata *dd, u8 newstate)
int ret;
switch (newstate) {
case IPATH_IB_LINKDOWN_ONLY:
ipath_set_ib_lstate(dd, INFINIPATH_IBCC_LINKCMD_DOWN <<
INFINIPATH_IBCC_LINKCMD_SHIFT);
/* don't wait */
ret = 0;
goto bail;
case IPATH_IB_LINKDOWN:
ipath_set_ib_lstate(dd, INFINIPATH_IBCC_LINKINITCMD_POLL <<
INFINIPATH_IBCC_LINKINITCMD_SHIFT);
@ -1709,16 +1715,6 @@ int ipath_set_linkstate(struct ipath_devdata *dd, u8 newstate)
ret = 0;
goto bail;
case IPATH_IB_LINKINIT:
if (dd->ipath_flags & IPATH_LINKINIT) {
ret = 0;
goto bail;
}
ipath_set_ib_lstate(dd, INFINIPATH_IBCC_LINKCMD_INIT <<
INFINIPATH_IBCC_LINKCMD_SHIFT);
lstate = IPATH_LINKINIT;
break;
case IPATH_IB_LINKARM:
if (dd->ipath_flags & IPATH_LINKARMED) {
ret = 0;


@ -767,6 +767,7 @@ void ipath_kreceive(struct ipath_portdata *);
int ipath_setrcvhdrsize(struct ipath_devdata *, unsigned);
int ipath_reset_device(int);
void ipath_get_faststats(unsigned long);
int ipath_wait_linkstate(struct ipath_devdata *, u32, int);
int ipath_set_linkstate(struct ipath_devdata *, u8);
int ipath_set_mtu(struct ipath_devdata *, u16);
int ipath_set_lid(struct ipath_devdata *, u32, u8);


@ -555,10 +555,7 @@ static int recv_subn_set_portinfo(struct ib_smp *smp,
/* FALLTHROUGH */
case IB_PORT_DOWN:
if (lstate == 0)
if (get_linkdowndefaultstate(dd))
lstate = IPATH_IB_LINKDOWN_SLEEP;
else
lstate = IPATH_IB_LINKDOWN;
lstate = IPATH_IB_LINKDOWN_ONLY;
else if (lstate == 1)
lstate = IPATH_IB_LINKDOWN_SLEEP;
else if (lstate == 2)
@ -568,6 +565,8 @@ static int recv_subn_set_portinfo(struct ib_smp *smp,
else
goto err;
ipath_set_linkstate(dd, lstate);
ipath_wait_linkstate(dd, IPATH_LINKINIT | IPATH_LINKARMED |
IPATH_LINKACTIVE, 1000);
break;
case IB_PORT_ARMED:
ipath_set_linkstate(dd, IPATH_IB_LINKARM);


@ -329,8 +329,9 @@ struct ipath_qp *ipath_lookup_qpn(struct ipath_qp_table *qpt, u32 qpn)
/**
* ipath_reset_qp - initialize the QP state to the reset state
* @qp: the QP to reset
* @type: the QP type
*/
static void ipath_reset_qp(struct ipath_qp *qp)
static void ipath_reset_qp(struct ipath_qp *qp, enum ib_qp_type type)
{
qp->remote_qpn = 0;
qp->qkey = 0;
@ -342,7 +343,7 @@ static void ipath_reset_qp(struct ipath_qp *qp)
qp->s_psn = 0;
qp->r_psn = 0;
qp->r_msn = 0;
if (qp->ibqp.qp_type == IB_QPT_RC) {
if (type == IB_QPT_RC) {
qp->s_state = IB_OPCODE_RC_SEND_LAST;
qp->r_state = IB_OPCODE_RC_SEND_LAST;
} else {
@ -414,7 +415,7 @@ int ipath_error_qp(struct ipath_qp *qp, enum ib_wc_status err)
wc.wr_id = qp->r_wr_id;
wc.opcode = IB_WC_RECV;
wc.status = err;
ipath_cq_enter(to_icq(qp->ibqp.send_cq), &wc, 1);
ipath_cq_enter(to_icq(qp->ibqp.recv_cq), &wc, 1);
}
wc.status = IB_WC_WR_FLUSH_ERR;
@ -534,7 +535,7 @@ int ipath_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
switch (new_state) {
case IB_QPS_RESET:
ipath_reset_qp(qp);
ipath_reset_qp(qp, ibqp->qp_type);
break;
case IB_QPS_ERR:
@ -647,7 +648,7 @@ int ipath_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
attr->port_num = 1;
attr->timeout = qp->timeout;
attr->retry_cnt = qp->s_retry_cnt;
attr->rnr_retry = qp->s_rnr_retry;
attr->rnr_retry = qp->s_rnr_retry_cnt;
attr->alt_port_num = 0;
attr->alt_timeout = 0;
@ -839,7 +840,7 @@ struct ib_qp *ipath_create_qp(struct ib_pd *ibpd,
goto bail_qp;
}
qp->ip = NULL;
ipath_reset_qp(qp);
ipath_reset_qp(qp, init_attr->qp_type);
break;
default:


@ -1196,6 +1196,10 @@ static inline void ipath_rc_rcv_resp(struct ipath_ibdev *dev,
list_move_tail(&qp->timerwait,
&dev->pending[dev->pending_index]);
spin_unlock(&dev->pending_lock);
if (opcode == OP(RDMA_READ_RESPONSE_MIDDLE))
qp->s_retry = qp->s_retry_cnt;
/*
* Update the RDMA receive state but do the copy w/o
* holding the locks and blocking interrupts.


@ -185,7 +185,7 @@
#define INFINIPATH_IBCC_LINKINITCMD_SLEEP 3
#define INFINIPATH_IBCC_LINKINITCMD_SHIFT 16
#define INFINIPATH_IBCC_LINKCMD_MASK 0x3ULL
#define INFINIPATH_IBCC_LINKCMD_INIT 1 /* move to 0x11 */
#define INFINIPATH_IBCC_LINKCMD_DOWN 1 /* move to 0x11 */
#define INFINIPATH_IBCC_LINKCMD_ARMED 2 /* move to 0x21 */
#define INFINIPATH_IBCC_LINKCMD_ACTIVE 3 /* move to 0x31 */
#define INFINIPATH_IBCC_LINKCMD_SHIFT 18


@ -38,6 +38,7 @@
#include <net/icmp.h>
#include <linux/icmpv6.h>
#include <linux/delay.h>
#include <linux/vmalloc.h>
#include "ipoib.h"
@ -637,6 +638,7 @@ static inline int post_send(struct ipoib_dev_priv *priv,
priv->tx_sge[0].addr = addr;
priv->tx_sge[0].length = len;
priv->tx_wr.num_sge = 1;
priv->tx_wr.wr_id = wr_id | IPOIB_OP_CM;
return ib_post_send(tx->qp, &priv->tx_wr, &bad_wr);
@ -1030,13 +1032,13 @@ static int ipoib_cm_tx_init(struct ipoib_cm_tx *p, u32 qpn,
struct ipoib_dev_priv *priv = netdev_priv(p->dev);
int ret;
p->tx_ring = kzalloc(ipoib_sendq_size * sizeof *p->tx_ring,
GFP_KERNEL);
p->tx_ring = vmalloc(ipoib_sendq_size * sizeof *p->tx_ring);
if (!p->tx_ring) {
ipoib_warn(priv, "failed to allocate tx ring\n");
ret = -ENOMEM;
goto err_tx;
}
memset(p->tx_ring, 0, ipoib_sendq_size * sizeof *p->tx_ring);
p->qp = ipoib_cm_create_tx_qp(p->dev, p);
if (IS_ERR(p->qp)) {
@ -1077,6 +1079,7 @@ err_id:
ib_destroy_qp(p->qp);
err_qp:
p->qp = NULL;
vfree(p->tx_ring);
err_tx:
return ret;
}
@ -1127,7 +1130,7 @@ timeout:
if (p->qp)
ib_destroy_qp(p->qp);
kfree(p->tx_ring);
vfree(p->tx_ring);
kfree(p);
}


@ -41,6 +41,7 @@
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/vmalloc.h>
#include <linux/if_arp.h> /* For ARPHRD_xxx */
@ -887,13 +888,13 @@ int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
goto out;
}
priv->tx_ring = kzalloc(ipoib_sendq_size * sizeof *priv->tx_ring,
GFP_KERNEL);
priv->tx_ring = vmalloc(ipoib_sendq_size * sizeof *priv->tx_ring);
if (!priv->tx_ring) {
printk(KERN_WARNING "%s: failed to allocate TX ring (%d entries)\n",
ca->name, ipoib_sendq_size);
goto out_rx_ring_cleanup;
}
memset(priv->tx_ring, 0, ipoib_sendq_size * sizeof *priv->tx_ring);
/* priv->tx_head, tx_tail & tx_outstanding are already 0 */
@ -903,7 +904,7 @@ int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
return 0;
out_tx_ring_cleanup:
kfree(priv->tx_ring);
vfree(priv->tx_ring);
out_rx_ring_cleanup:
kfree(priv->rx_ring);
@ -928,7 +929,7 @@ void ipoib_dev_cleanup(struct net_device *dev)
ipoib_ib_dev_cleanup(dev);
kfree(priv->rx_ring);
kfree(priv->tx_ring);
vfree(priv->tx_ring);
priv->rx_ring = NULL;
priv->tx_ring = NULL;


@ -650,7 +650,7 @@ void ipoib_mcast_send(struct net_device *dev, void *mgid, struct sk_buff *skb)
*/
spin_lock(&priv->lock);
if (!test_bit(IPOIB_MCAST_STARTED, &priv->flags) ||
if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags) ||
!priv->broadcast ||
!test_bit(IPOIB_MCAST_FLAG_ATTACHED, &priv->broadcast->flags)) {
++dev->stats.tx_dropped;


@ -108,6 +108,7 @@ config ACER_WMI
depends on ACPI
depends on LEDS_CLASS
depends on BACKLIGHT_CLASS_DEVICE
depends on SERIO_I8042
select ACPI_WMI
---help---
This is a driver for newer Acer (and Wistron) laptops. It adds


@ -217,6 +217,15 @@ static struct dmi_system_id acer_quirks[] = {
},
.driver_data = &quirk_acer_travelmate_2490,
},
{
.callback = dmi_matched,
.ident = "Acer Aspire 3610",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 3610"),
},
.driver_data = &quirk_acer_travelmate_2490,
},
{
.callback = dmi_matched,
.ident = "Acer Aspire 5100",
@ -226,6 +235,15 @@ static struct dmi_system_id acer_quirks[] = {
},
.driver_data = &quirk_acer_travelmate_2490,
},
{
.callback = dmi_matched,
.ident = "Acer Aspire 5610",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5610"),
},
.driver_data = &quirk_acer_travelmate_2490,
},
{
.callback = dmi_matched,
.ident = "Acer Aspire 5630",
@ -761,11 +779,11 @@ enum led_brightness value)
}
static struct led_classdev mail_led = {
.name = "acer-mail:green",
.name = "acer-wmi::mail",
.brightness_set = mail_led_set,
};
static int __init acer_led_init(struct device *dev)
static int __devinit acer_led_init(struct device *dev)
{
return led_classdev_register(dev, &mail_led);
}
@ -798,7 +816,7 @@ static struct backlight_ops acer_bl_ops = {
.update_status = update_bl_status,
};
static int __init acer_backlight_init(struct device *dev)
static int __devinit acer_backlight_init(struct device *dev)
{
struct backlight_device *bd;
@ -817,7 +835,7 @@ static int __init acer_backlight_init(struct device *dev)
return 0;
}
static void __exit acer_backlight_exit(void)
static void acer_backlight_exit(void)
{
backlight_device_unregister(acer_backlight_device);
}
@ -1052,11 +1070,12 @@ static int __init acer_wmi_init(void)
if (wmi_has_guid(WMID_GUID2) && interface) {
if (ACPI_FAILURE(WMID_set_capabilities())) {
printk(ACER_ERR "Unable to detect available devices\n");
printk(ACER_ERR "Unable to detect available WMID "
"devices\n");
return -ENODEV;
}
} else if (!wmi_has_guid(WMID_GUID2) && interface) {
printk(ACER_ERR "Unable to detect available devices\n");
printk(ACER_ERR "No WMID device detection method found\n");
return -ENODEV;
}
@ -1064,21 +1083,20 @@ static int __init acer_wmi_init(void)
interface = &AMW0_interface;
if (ACPI_FAILURE(AMW0_set_capabilities())) {
printk(ACER_ERR "Unable to detect available devices\n");
printk(ACER_ERR "Unable to detect available AMW0 "
"devices\n");
return -ENODEV;
}
}
if (wmi_has_guid(AMW0_GUID1)) {
if (ACPI_FAILURE(AMW0_find_mailled()))
printk(ACER_ERR "Unable to detect mail LED\n");
}
if (wmi_has_guid(AMW0_GUID1))
AMW0_find_mailled();
find_quirks();
if (!interface) {
printk(ACER_ERR "No or unsupported WMI interface, unable to ");
printk(KERN_CONT "load.\n");
printk(ACER_ERR "No or unsupported WMI interface, unable to "
"load\n");
return -ENODEV;
}


@ -315,7 +315,7 @@ static void sony_laptop_report_input_event(u8 event)
break;
default:
if (event > ARRAY_SIZE(sony_laptop_input_index)) {
if (event >= ARRAY_SIZE(sony_laptop_input_index)) {
dprintk("sony_laptop_report_input_event, event not known: %d\n", event);
break;
}
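
The sony_laptop_report_input_event() fix above tightens an index check from '>' to '>=': a lookup table with N entries has valid indices 0 through N-1, so index N itself must also be rejected. A tiny standalone illustration with an invented table:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *names[] = { "zero", "one", "two" };

static const char *lookup(unsigned int idx)
{
        if (idx >= ARRAY_SIZE(names))   /* '>' would let idx == 3 through */
                return "unknown";
        return names[idx];
}

int main(void)
{
        printf("%s %s\n", lookup(2), lookup(3));
        return 0;
}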


@ -180,7 +180,7 @@ static void tifm_sd_transfer_data(struct tifm_sd *host)
host->sg_pos++;
if (host->sg_pos == host->sg_len) {
if ((r_data->flags & MMC_DATA_WRITE)
&& DATA_CARRY)
&& (host->cmd_flags & DATA_CARRY))
writel(host->bounce_buf_data[0],
host->dev->addr
+ SOCK_MMCSD_DATA);


@ -203,8 +203,11 @@ again:
if (received < budget) {
netif_rx_complete(vi->dev, napi);
if (unlikely(!vi->rvq->vq_ops->enable_cb(vi->rvq))
&& netif_rx_reschedule(vi->dev, napi))
&& napi_schedule_prep(napi)) {
vi->rvq->vq_ops->disable_cb(vi->rvq);
__netif_rx_schedule(vi->dev, napi);
goto again;
}
}
return received;
@ -278,10 +281,11 @@ again:
pr_debug("%s: virtio not prepared to send\n", dev->name);
netif_stop_queue(dev);
/* Activate callback for using skbs: if this fails it
/* Activate callback for using skbs: if this returns false it
* means some were used in the meantime. */
if (unlikely(!vi->svq->vq_ops->enable_cb(vi->svq))) {
printk("Unlikely: restart svq failed\n");
printk("Unlikely: restart svq race\n");
vi->svq->vq_ops->disable_cb(vi->svq);
netif_start_queue(dev);
goto again;
}
@ -294,6 +298,15 @@ again:
return 0;
}
#ifdef CONFIG_NET_POLL_CONTROLLER
static void virtnet_netpoll(struct net_device *dev)
{
struct virtnet_info *vi = netdev_priv(dev);
napi_schedule(&vi->napi);
}
#endif
static int virtnet_open(struct net_device *dev)
{
struct virtnet_info *vi = netdev_priv(dev);
@ -336,6 +349,9 @@ static int virtnet_probe(struct virtio_device *vdev)
dev->stop = virtnet_close;
dev->hard_start_xmit = start_xmit;
dev->features = NETIF_F_HIGHDMA;
#ifdef CONFIG_NET_POLL_CONTROLLER
dev->poll_controller = virtnet_netpoll;
#endif
SET_NETDEV_DEV(dev, &vdev->dev);
/* Do we support "hardware" checksums? */


@ -829,7 +829,7 @@ static ssize_t pdcs_autoboot_write(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
return pdcs_auto_write(kset, attr, buf, count, PF_AUTOBOOT);
return pdcs_auto_write(kobj, attr, buf, count, PF_AUTOBOOT);
}
/**
@ -845,7 +845,7 @@ static ssize_t pdcs_autosearch_write(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
return pdcs_auto_write(kset, attr, buf, count, PF_AUTOSEARCH);
return pdcs_auto_write(kobj, attr, buf, count, PF_AUTOSEARCH);
}
/**
@ -1066,7 +1066,7 @@ pdc_stable_init(void)
}
/* Don't forget the root entries */
error = sysfs_create_group(stable_kobj, pdcs_attr_group);
error = sysfs_create_group(stable_kobj, &pdcs_attr_group);
/* register the paths kset as a child of the stable kset */
paths_kset = kset_create_and_add("paths", NULL, stable_kobj);


@ -314,8 +314,8 @@ sba_dump_sg( struct ioc *ioc, struct scatterlist *startsg, int nents)
#define RESMAP_MASK(n) (~0UL << (BITS_PER_LONG - (n)))
#define RESMAP_IDX_MASK (sizeof(unsigned long) - 1)
unsigned long ptr_to_pide(struct ioc *ioc, unsigned long *res_ptr,
unsigned int bitshiftcnt)
static unsigned long ptr_to_pide(struct ioc *ioc, unsigned long *res_ptr,
unsigned int bitshiftcnt)
{
return (((unsigned long)res_ptr - (unsigned long)ioc->res_map) << 3)
+ bitshiftcnt;


@ -143,14 +143,18 @@ void pci_bus_add_devices(struct pci_bus *bus)
/* register the bus with sysfs as the parent is now
* properly registered. */
child_bus = dev->subordinate;
if (child_bus->is_added)
continue;
child_bus->dev.parent = child_bus->bridge;
retval = device_register(&child_bus->dev);
if (retval)
dev_err(&dev->dev, "Error registering pci_bus,"
" continuing...\n");
else
else {
child_bus->is_added = 1;
retval = device_create_file(&child_bus->dev,
&dev_attr_cpuaffinity);
}
if (retval)
dev_err(&dev->dev, "Error creating cpuaffinity"
" file, continuing...\n");


@ -272,21 +272,29 @@ static int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
{
acpi_handle handle = DEVICE_ACPI_HANDLE(&dev->dev);
acpi_handle tmp;
static int state_conv[] = {
[0] = 0,
[1] = 1,
[2] = 2,
[3] = 3,
[4] = 3
static const u8 state_conv[] = {
[PCI_D0] = ACPI_STATE_D0,
[PCI_D1] = ACPI_STATE_D1,
[PCI_D2] = ACPI_STATE_D2,
[PCI_D3hot] = ACPI_STATE_D3,
[PCI_D3cold] = ACPI_STATE_D3
};
int acpi_state = state_conv[(int __force) state];
if (!handle)
return -ENODEV;
/* If the ACPI device has _EJ0, ignore the device */
if (ACPI_SUCCESS(acpi_get_handle(handle, "_EJ0", &tmp)))
return 0;
return acpi_bus_set_power(handle, acpi_state);
switch (state) {
case PCI_D0:
case PCI_D1:
case PCI_D2:
case PCI_D3hot:
case PCI_D3cold:
return acpi_bus_set_power(handle, state_conv[state]);
}
return -EINVAL;
}
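
The acpi_pci_set_power_state() hunk above replaces an open-coded int array with a const lookup table that uses designated initializers keyed by the PCI power-state values, and indexes it only for states the switch explicitly accepts. A minimal userspace sketch of that table-plus-guard pattern, using invented state names rather than the real PCI/ACPI constants:

#include <stdio.h>

enum power_state { ST_D0, ST_D1, ST_D2, ST_D3HOT, ST_D3COLD, ST_BOGUS };

static int set_power(enum power_state state)
{
        /* Designated initializers keep the mapping readable and robust
         * against reordering of the enum. */
        static const unsigned char acpi_state[] = {
                [ST_D0]     = 0,
                [ST_D1]     = 1,
                [ST_D2]     = 2,
                [ST_D3HOT]  = 3,
                [ST_D3COLD] = 3,
        };

        switch (state) {
        case ST_D0:
        case ST_D1:
        case ST_D2:
        case ST_D3HOT:
        case ST_D3COLD:
                printf("requesting ACPI D%d\n", acpi_state[state]);
                return 0;
        default:
                return -1;      /* reject anything outside the table */
        }
}

int main(void)
{
        set_power(ST_D2);
        set_power(ST_BOGUS);
        return 0;
}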


@ -99,7 +99,7 @@ static dbdev_tab_t au1550_spi_mem_dbdev =
static void au1550_spi_bits_handlers_set(struct au1550_spi *hw, int bpw);
/**
/*
* compute BRG and DIV bits to setup spi clock based on main input clock rate
* that was specified in platform data structure
* according to au1550 datasheet:
@ -650,7 +650,7 @@ static int au1550_spi_txrx_bufs(struct spi_device *spi, struct spi_transfer *t)
return hw->txrx_bufs(spi, t);
}
static irqreturn_t au1550_spi_irq(int irq, void *dev, struct pt_regs *regs)
static irqreturn_t au1550_spi_irq(int irq, void *dev)
{
struct au1550_spi *hw = dev;
return hw->irq_callback(hw);


@ -344,12 +344,14 @@ static void bitbang_work(struct work_struct *work)
t->rx_dma = t->tx_dma = 0;
status = bitbang->txrx_bufs(spi, t);
}
if (status > 0)
m->actual_length += status;
if (status != t->len) {
if (status > 0)
status = -EMSGSIZE;
/* always report some kind of error */
if (status >= 0)
status = -EREMOTEIO;
break;
}
m->actual_length += status;
status = 0;
/* protocol tweaks before next transfer */
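
The SPI bitbang fix above credits whatever was actually transferred to actual_length and then converts a short or zero return into an error, so the caller never mistakes a partial transfer for success. A small standalone sketch of that accounting; txrx() and the plain -1 error code are stand-ins, not the real SPI API:

#include <stdio.h>

/* Stand-in transfer routine: returns bytes moved, 0, or a negative error. */
static int txrx(int len, int pretend_result)
{
        (void)len;
        return pretend_result;
}

static int run_transfer(int len, int pretend_result, int *actual_length)
{
        int status = txrx(len, pretend_result);

        if (status > 0)
                *actual_length += status;       /* credit what really moved */
        if (status != len) {
                if (status >= 0)
                        status = -1;            /* always report some error */
                return status;
        }
        return 0;
}

int main(void)
{
        int actual = 0;
        int ret;

        ret = run_transfer(8, 8, &actual);
        printf("full transfer:  ret=%d actual=%d\n", ret, actual);

        ret = run_transfer(8, 5, &actual);
        printf("short transfer: ret=%d actual=%d\n", ret, actual);
        return 0;
}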


@ -4,7 +4,6 @@
menuconfig THERMAL
bool "Generic Thermal sysfs driver"
default y
help
Generic Thermal Sysfs driver offers a generic mechanism for
thermal management. Usually it's made up of one or more thermal


@ -152,7 +152,7 @@ static void virtballoon_changed(struct virtio_device *vdev)
wake_up(&vb->config_change);
}
static inline int towards_target(struct virtio_balloon *vb)
static inline s64 towards_target(struct virtio_balloon *vb)
{
u32 v;
__virtio_config_val(vb->vdev,
@ -176,7 +176,7 @@ static int balloon(void *_vballoon)
set_freezable();
while (!kthread_should_stop()) {
int diff;
s64 diff;
try_to_freeze();
wait_event_interruptible(vb->config_change,


@ -177,6 +177,7 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
struct virtio_pci_device *vp_dev = opaque;
struct virtio_pci_vq_info *info;
irqreturn_t ret = IRQ_NONE;
unsigned long flags;
u8 isr;
/* reading the ISR has the effect of also clearing it so it's very
@ -197,12 +198,12 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
drv->config_changed(&vp_dev->vdev);
}
spin_lock(&vp_dev->lock);
spin_lock_irqsave(&vp_dev->lock, flags);
list_for_each_entry(info, &vp_dev->virtqueues, node) {
if (vring_interrupt(irq, info->vq) == IRQ_HANDLED)
ret = IRQ_HANDLED;
}
spin_unlock(&vp_dev->lock);
spin_unlock_irqrestore(&vp_dev->lock, flags);
return ret;
}
@ -214,6 +215,7 @@ static struct virtqueue *vp_find_vq(struct virtio_device *vdev, unsigned index,
struct virtio_pci_device *vp_dev = to_vp_device(vdev);
struct virtio_pci_vq_info *info;
struct virtqueue *vq;
unsigned long flags;
u16 num;
int err;
@ -255,9 +257,9 @@ static struct virtqueue *vp_find_vq(struct virtio_device *vdev, unsigned index,
vq->priv = info;
info->vq = vq;
spin_lock(&vp_dev->lock);
spin_lock_irqsave(&vp_dev->lock, flags);
list_add(&info->node, &vp_dev->virtqueues);
spin_unlock(&vp_dev->lock);
spin_unlock_irqrestore(&vp_dev->lock, flags);
return vq;
@ -274,10 +276,11 @@ static void vp_del_vq(struct virtqueue *vq)
{
struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
struct virtio_pci_vq_info *info = vq->priv;
unsigned long flags;
spin_lock(&vp_dev->lock);
spin_lock_irqsave(&vp_dev->lock, flags);
list_del(&info->node);
spin_unlock(&vp_dev->lock);
spin_unlock_irqrestore(&vp_dev->lock, flags);
vring_del_virtqueue(vq);


@ -232,7 +232,6 @@ static bool vring_enable_cb(struct virtqueue *_vq)
vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT;
mb();
if (unlikely(more_used(vq))) {
vq->vring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
END_USE(vq);
return false;
}


@ -229,7 +229,7 @@ skip:
static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
{
struct hfs_btree *tree;
struct hfs_bnode *node, *new_node;
struct hfs_bnode *node, *new_node, *next_node;
struct hfs_bnode_desc node_desc;
int num_recs, new_rec_off, new_off, old_rec_off;
int data_start, data_end, size;
@ -248,6 +248,17 @@ static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
new_node->type = node->type;
new_node->height = node->height;
if (node->next)
next_node = hfs_bnode_find(tree, node->next);
else
next_node = NULL;
if (IS_ERR(next_node)) {
hfs_bnode_put(node);
hfs_bnode_put(new_node);
return next_node;
}
size = tree->node_size / 2 - node->num_recs * 2 - 14;
old_rec_off = tree->node_size - 4;
num_recs = 1;
@ -261,6 +272,8 @@ static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
/* panic? */
hfs_bnode_put(node);
hfs_bnode_put(new_node);
if (next_node)
hfs_bnode_put(next_node);
return ERR_PTR(-ENOSPC);
}
@ -315,8 +328,7 @@ static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
hfs_bnode_write(node, &node_desc, 0, sizeof(node_desc));
/* update next bnode header */
if (new_node->next) {
struct hfs_bnode *next_node = hfs_bnode_find(tree, new_node->next);
if (next_node) {
next_node->prev = new_node->this;
hfs_bnode_read(next_node, &node_desc, 0, sizeof(node_desc));
node_desc.prev = cpu_to_be32(next_node->prev);
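
The hfs_bnode_split() change above looks up the next B-tree node before the split begins and propagates a lookup failure using the kernel's ERR_PTR encoding. A rough userspace re-creation of that encoding with a hypothetical table lookup (the kernel's own definitions live in linux/err.h):

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
        /* Error codes occupy the last page of the address space. */
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int table[] = { 10, 20, 30 };

static int *find_entry(unsigned int idx)
{
        if (idx >= sizeof(table) / sizeof(table[0]))
                return ERR_PTR(-ENOENT);
        return &table[idx];
}

int main(void)
{
        int *e = find_entry(5);

        if (IS_ERR(e))
                printf("lookup failed: %ld\n", PTR_ERR(e));
        else
                printf("value: %d\n", *e);
        return 0;
}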


@ -232,6 +232,7 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, int type, int access)
fhp->fh_dentry = dentry;
fhp->fh_export = exp;
nfsd_nr_verified++;
cache_get(&exp->h);
} else {
/*
* just rechecking permissions
@ -241,6 +242,7 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, int type, int access)
dprintk("nfsd: fh_verify - just checking\n");
dentry = fhp->fh_dentry;
exp = fhp->fh_export;
cache_get(&exp->h);
/*
* Set user creds for this exportpoint; necessary even
* in the "just checking" case because this may be a
@ -252,8 +254,6 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, int type, int access)
if (error)
goto out;
}
cache_get(&exp->h);
error = nfsd_mode_check(rqstp, dentry->d_inode->i_mode, type);
if (error)


@ -640,17 +640,17 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
ret = -EACCES;
if (!ptrace_may_attach(task))
goto out;
goto out_task;
ret = -EINVAL;
/* file position must be aligned */
if (*ppos % PM_ENTRY_BYTES)
goto out;
goto out_task;
ret = 0;
mm = get_task_mm(task);
if (!mm)
goto out;
goto out_task;
ret = -ENOMEM;
uaddr = (unsigned long)buf & PAGE_MASK;
@ -658,7 +658,7 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
pagecount = (PAGE_ALIGN(uend) - uaddr) / PAGE_SIZE;
pages = kmalloc(pagecount * sizeof(struct page *), GFP_KERNEL);
if (!pages)
goto out_task;
goto out_mm;
down_read(&current->mm->mmap_sem);
ret = get_user_pages(current, current->mm, uaddr, pagecount,
@ -668,6 +668,12 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
if (ret < 0)
goto out_free;
if (ret != pagecount) {
pagecount = ret;
ret = -EFAULT;
goto out_pages;
}
pm.out = buf;
pm.end = buf + count;
@ -699,15 +705,17 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
ret = pm.out - buf;
}
out_pages:
for (; pagecount; pagecount--) {
page = pages[pagecount-1];
if (!PageReserved(page))
SetPageDirty(page);
page_cache_release(page);
}
mmput(mm);
out_free:
kfree(pages);
out_mm:
mmput(mm);
out_task:
put_task_struct(task);
out:
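
The pagemap_read() hunks above rework the error paths so that each resource is released exactly once, in reverse order of acquisition, on every exit. A compact userspace sketch of that goto-unwind idiom with hypothetical resources:

#include <stdio.h>
#include <stdlib.h>

static int do_work(void)
{
        int ret = -1;
        char *a = NULL, *b = NULL;
        FILE *f = NULL;

        a = malloc(64);
        if (!a)
                goto out;
        b = malloc(64);
        if (!b)
                goto out_a;             /* only 'a' exists yet */
        f = fopen("/dev/null", "w");
        if (!f)
                goto out_b;             /* release 'b', then 'a' */

        fputs("working\n", f);
        ret = 0;
        fclose(f);
        /* fall through the labels so the success path frees everything too */
out_b:
        free(b);
out_a:
        free(a);
out:
        return ret;
}

int main(void)
{
        return do_work() ? EXIT_FAILURE : EXIT_SUCCESS;
}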


@ -91,22 +91,19 @@ extern int __put_user_bad(void);
#define get_user(x, ptr) \
({ \
int __gu_err = 0; \
uint32_t __gu_val = 0; \
typeof(*(ptr)) __gu_val = *ptr; \
switch (sizeof(*(ptr))) { \
case 1: \
case 2: \
case 4: \
__gu_val = *(ptr); \
break; \
case 8: \
memcpy(&__gu_val, ptr, sizeof (*(ptr))); \
case 8: \
break; \
default: \
__gu_val = 0; \
__gu_err = __get_user_bad(); \
__gu_val = 0; \
break; \
} \
(x) = (typeof(*(ptr)))__gu_val; \
(x) = __gu_val; \
__gu_err; \
})
#define __get_user(x, ptr) get_user(x, ptr)
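
The get_user() rewrite above keeps the macro type-generic by declaring the temporary with typeof(*(ptr)) and dispatching on sizeof(*(ptr)). A userspace sketch of the same shape, without any fault handling; fetch_val is an invented name, and statement expressions plus typeof are GNU extensions (standard fare in kernel headers):

#include <stdio.h>
#include <string.h>

#define fetch_val(x, ptr)                                   \
({                                                          \
        int __err = 0;                                      \
        typeof(*(ptr)) __val;                               \
        switch (sizeof(*(ptr))) {                           \
        case 1: case 2: case 4: case 8:                     \
                memcpy(&__val, (ptr), sizeof(*(ptr)));      \
                break;                                      \
        default:                                            \
                memset(&__val, 0, sizeof(__val));           \
                __err = -1;                                 \
                break;                                      \
        }                                                   \
        (x) = __val;                                        \
        __err;                                              \
})

int main(void)
{
        long src = 42, dst = 0;
        int err = fetch_val(dst, &src);

        printf("err=%d dst=%ld\n", err, dst);
        return 0;
}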


@ -204,7 +204,7 @@ typedef struct elf64_fdesc {
/*
* The following definitions are those for 32-bit ELF binaries on a 32-bit
* kernel and for 64-bit binaries on a 64-bit kernel. To run 32-bit binaries
* on a 64-bit kernel, arch/parisc64/kernel/binfmt_elf32.c defines these
* on a 64-bit kernel, arch/parisc/kernel/binfmt_elf32.c defines these
* macros appropriately and then #includes binfmt_elf.c, which then includes
* this file.
*/
@ -216,26 +216,25 @@ typedef struct elf64_fdesc {
* Note that this header file is used by default in fs/binfmt_elf.c. So
* the following macros are for the default case. However, for the 64
* bit kernel we also support 32 bit parisc binaries. To do that
* arch/parisc64/kernel/binfmt_elf32.c defines its own set of these
* arch/parisc/kernel/binfmt_elf32.c defines its own set of these
* macros, and then it includes fs/binfmt_elf.c to provide an alternate
* elf binary handler for 32 bit binaries (on the 64 bit kernel).
*/
#ifdef CONFIG_64BIT
#define ELF_CLASS ELFCLASS64
#define ELF_CLASS ELFCLASS64
#else
#define ELF_CLASS ELFCLASS32
#endif
typedef unsigned long elf_greg_t;
/* This yields a string that ld.so will use to load implementation
specific libraries for optimization. This is more specific in
intent than poking at uname or /proc/cpuinfo.
/*
* This yields a string that ld.so will use to load implementation
* specific libraries for optimization. This is more specific in
* intent than poking at uname or /proc/cpuinfo.
*/
For the moment, we have only optimizations for the Intel generations,
but that could change... */
#define ELF_PLATFORM ("PARISC\0" /*+((boot_cpu_data.x86-3)*5) */)
#define ELF_PLATFORM ("PARISC\0")
#define SET_PERSONALITY(ex, ibcs2) \
current->personality = PER_LINUX; \
@ -310,7 +309,7 @@ struct pt_regs; /* forward declaration... */
#define ELF_OSABI ELFOSABI_LINUX
/* %r23 is set by ld.so to a pointer to a function which might be
registered using atexit. This provides a mean for the dynamic
registered using atexit. This provides a means for the dynamic
linker to call DT_FINI functions for shared libraries that have
been loaded before the code runs.
@ -339,6 +338,5 @@ struct pt_regs; /* forward declaration... */
but it's not easy, and we've already done it here. */
#define ELF_HWCAP 0
/* (boot_cpu_data.x86_capability) */
#endif


@ -20,4 +20,11 @@
#define KERNEL_MAP_START (GATEWAY_PAGE_SIZE)
#define KERNEL_MAP_END (TMPALIAS_MAP_START)
#endif
#ifndef __ASSEMBLY__
extern void *vmalloc_start;
#define PCXL_DMA_MAP_SIZE (8*1024*1024)
#define VMALLOC_START ((unsigned long)vmalloc_start)
#define VMALLOC_END (KERNEL_MAP_END)
#endif /*__ASSEMBLY__*/
#endif /*_ASM_FIXMAP_H*/


@ -56,6 +56,12 @@ futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
int err = 0;
int uval;
/* futex.c wants to do a cmpxchg_inatomic on kernel NULL, which is
* our gateway page, and causes no end of trouble...
*/
if (segment_eq(KERNEL_DS, get_fs()) && !uaddr)
return -EFAULT;
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
@ -67,5 +73,5 @@ futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
return uval;
}
#endif
#endif
#endif /*__KERNEL__*/
#endif /*_ASM_PARISC_FUTEX_H*/


@ -645,8 +645,7 @@ int pdc_soft_power_button(int sw_control);
void pdc_io_reset(void);
void pdc_io_reset_devices(void);
int pdc_iodc_getc(void);
int pdc_iodc_print(unsigned char *str, unsigned count);
void pdc_printf(const char *fmt, ...);
int pdc_iodc_print(const unsigned char *str, unsigned count);
void pdc_emergency_unlock(void);
int pdc_sti_call(unsigned long func, unsigned long flags,

Some files were not shown because too many files have changed in this diff.